Darrell A. Dromgoole, Associate Professor and Extension Specialist, Texas A&M AgriLife Extension Service.
Scott Cummings, Associate Department Head and Program Leader; Professor and Extension Specialist, Texas A&M AgriLife Extension Service.
To make decisions intelligently, Extension educators need good information on the effectiveness of their programs (Fitzpatrick, Sanders, & Worthen, 2004). Program evaluation is the systematic process used to gather that information. Diem (2003) identifies five reasons for documenting the impact of programs.
Richardson (2005) reported that impact should include substantive social, economic, or environmental benefits to both clientele and the general public. Texas A&M AgriLife Extension Service’s evaluation model is an integral component of its program development model. While Texas A&M AgriLife Extension Service utilizes a hybrid of program evaluation models, its focus on clientele change aligns with Kirkpatrick’s evaluation model. Figure 1 illustrates Kirkpatrick’s four levels of evaluation (1959, 1996, 2010).
The Kirkpatrick Model, as shown in Figure 1, consists of four levels: (1) reaction, (2) learning, (3) behavior (or transfer), and (4) results (Kirkpatrick, 1959, 1996, 2010). Reaction is a measure of how participants feel about the program; within Texas A&M AgriLife Extension Service, it measures customer satisfaction. Learning is a measure of the knowledge acquired, skills improved, or attitudes changed as a result of the program. Behavior is a measure of the extent to which participants changed behavior or adopted best practices or new technology as a result of the program. Results, the fourth and final level, is the final outcome or performance change that occurred because of the program, such as economic, social, or environmental impacts. Kirkpatrick (1959, 1996) reports that evaluation becomes more difficult and complicated as the levels increase; however, it also becomes much more meaningful.
Donald Kirkpatrick originally developed and published the model in 1959, and many state Extension services have since used it as an evaluation model. It has also served as the foundation for later models (Bennett, 1975; Hoffman & Grabowski, 2004). The model is significant because it was the first developed to examine tangible measures of impact rather than simply measuring participants’ reactions or feelings, and it has served as the basis for Bennett’s Hierarchy and contemporary logic models.
The Kirkpatrick Model, like other models, is a tool that helps Extension educators conduct program evaluation and collect evidence to document impact. Diem (2003) defined impact in Extension programming as “the positive difference we make in people’s lives as a result of the programs we conduct.” To have an impact, the results of an Extension program must ultimately change people’s attitudes or behavior, or benefit society in other ways (Diem, 1997).
Kirkpatrick’s four levels of evaluation, with Texas A&M AgriLife Extension Service’s equivalent in parentheses, are presented below (Kirkpatrick, 1959, 1996, 2010):

1. Reaction (customer satisfaction)
2. Learning (knowledge acquired, skills improved, or attitudes changed)
3. Behavior (adoption of best practices or new technology)
4. Results (economic, social, or environmental impact)
Texas A&M AgriLife Extension Service programs are evaluated either to demonstrate something (summative evaluation) or to improve something (formative evaluation). Summative evaluation allows Extension educators to document what a program accomplished, while formative evaluation helps Extension faculty improve a program as it is developed and delivered.
In future Next Step to Success blog posts, we will continue to discuss various aspects of program evaluation.
Bennett, C. (1975). Up the hierarchy. Journal of Extension, 13(2), 7-12.
Boleman, C., Cummings, S., & Pope, P. (2005). Keys to education that works: Texas Cooperative Extension’s program development model (Publication #345). College Station, TX: Texas Cooperative Extension.
Diem, K. G. (2003). Program development in a political world—it’s all about impact! Journal of Extension [On-line], 41(1), Article 1FEA6. Available at: http://www.joe.org/joe/2003february/a6.shtm
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston, MA: Pearson.
Hoffman, B., & Grabowski, B. (2004). Smith-Lever 3(d) Extension evaluation and outcome reporting–A scorecard to assist federal program leaders. Journal of Extension [On-line], 42(6), Article 6FEA1. Available at: http://www.joe.org/joe/2004december/a1.shtm
Kirkpatrick, D. L. (1959). Techniques for evaluating training programs. Journal of the American Society of Training Directors, 11, 1-13.
Kirkpatrick, D. (1996, January). Great ideas revisited. Training & Development, 54-59.
Kirkpatrick, D. (2010). 50 years of evaluation. T+D, 64(1), 14.
Richardson, J. G. (2005). Extension, facing current and future realities or else. Proceedings of the 21st Annual Conference of the Association for International Agricultural and Extension Education (pp. 193-204). San Antonio, TX.