Darrell A. Dromgoole, Associate Professor and Extension Specialist, Texas A&M AgriLife Extension Service.
Scott Cummings, Associate Department Head and Program Leader; Professor and Extension Specialist, Texas A&M AgriLife Extension Service.
There is no doubt that program evaluation has become more critical to effective Extension programming during the past 20 years. If you have any doubt about the prominence of this topic in Extension literature, just do a word search in the Journal of Extension and you will find more than 5,000 articles on the subject. Just about the time we think we have the "evaluation tiger by the tail," we realize that our stakeholders have evolved: they no longer expect merely evaluation strategies that produce data and evidence; they expect credible evidence.
Braverman defined the credibility of an evaluation as the likelihood that stakeholders will accept the evaluation results as convincing and the conclusions and recommendations as reasonable and acceptable. Extension educators typically use evaluations to demonstrate to stakeholders that audiences were reached and affected in the ways the program intended (Baughman, Boyd, & Franz, 2012). It is imperative that Extension programs demonstrate the relevance and impact of Extension work in ways that are credible to stakeholders with distinct needs, interests, and perspectives. Stakeholders often have differing criteria for what they consider credible, trustworthy evidence of a program's impact or quality, and these individual criteria and interests can lead different stakeholder groups to regard different kinds of evidence as credible (Hetherington, Eschbach, & Cuthbertson, 2019). Although these criteria can be interrelated, it is critical that Extension programs balance the credibility demands of clientele, internal Extension administrators, the professional and scientific community, and the stakeholders who fund programs (Hetherington, Eschbach, & Cuthbertson, 2019). Examples of criteria for each of these stakeholder groups are shown in Table 1.
Table 1. Criteria for Credibility among Extension Stakeholder Groups (Hetherington, Eschbach, & Cuthbertson, 2019).
Because Extension educators are responsible for bridging the gap between science and practice for Extension clientele, they must balance meeting the needs of communities addressing local challenges with the needs of other program stakeholders, while remaining grounded in research- and evidence-based programming (Olson, Welsh, & Perkins, 2015). The Extension mission is best served when programs bridge the gap between implementing rigorous research models and meeting local community needs (Fetsch, MacPhee, & Boyer, 2012). Figure 1 illustrates Extension's role in bridging the gap between science and practice where evidence-based programs are implemented.
Figure 1. Concept map illustrating Extension's role in bridging the gap between science and practice.
Evaluating Extension programs effectively requires certain competencies for creating and using credible evidence (Hetherington, Eschbach, & Cuthbertson, 2019). Specific evaluation competencies are shown in Table 2.
Table 2. Evaluation Competencies for Generating and Using Credible, Actionable Evidence (Hetherington, Eschbach, & Cuthbertson, 2019).
An essential competency in the program development and delivery process is using data to assess the needs of communities and then delivering programming to meet those needs (Hetherington, Eschbach, & Cuthbertson, 2019). Beyond assessing and meeting the needs of communities traditionally served by Extension, educators should also be prepared to meet the needs of communities that have traditionally been excluded or marginalized from Extension programs (Hetherington, Eschbach, & Cuthbertson, 2019). Extension educational programs cannot take a "one size fits all" approach, assuming that existing programs will meet the needs of, have an impact on, or be credible to all communities; what is effective or credible in the communities we traditionally serve will not necessarily be effective or credible in others (Hetherington, Eschbach, & Cuthbertson, 2019). Understanding the cultural and social contexts of an evaluation (e.g., stakeholders' perspectives on credibility, culturally responsive methodologies) is increasingly recognized as a critical component of the program planning and evaluation process (Centers for Disease Control and Prevention, 2014).
Extension educators' ability to build evaluation capacity is fundamental to generating credible and actionable evidence about the effectiveness of Extension programs (Hetherington, Eschbach, & Cuthbertson, 2019). Enhancing that capacity strengthens educators' comprehension of other principles of program development, implementation, and evaluation, and consequently advances Extension's ability to generate and use credible evidence (Hetherington, Eschbach, & Cuthbertson, 2019). Extension educators should understand how to collect credible evidence about program impacts and should consider varying stakeholder perspectives on what constitutes credible evidence (Hetherington, Eschbach, & Cuthbertson, 2019). When Extension educators enhance their evaluation capacity, they not only build the capacity to collect high-quality data; they also gain the ability to use those data to advocate for and make program improvements, strengthen Extension as a learning organization, and support Extension's positive impact on individuals and the communities where we live (Hetherington, Eschbach, & Cuthbertson, 2019).
Baughman, S., Boyd, H. H., & Franz, N. K. (2012). Non-formal educator use of evaluation results. Evaluation and Program Planning, 35(3), 329–336. doi:10.1016/j.evalprogplan.2011.11.008.
Centers for Disease Control and Prevention. (2014). Practical strategies for culturally competent evaluation. Atlanta, GA: U.S. Department of Health and Human Services. Retrieved from https://www.cdc.gov/dhdsp/docs/cultural_competence_guide.pdf
Hetherington, C., Eschbach, C., & Cuthbertson, C. (2019). Evaluation capacity building as a means to credible evidence in Extension programs. Journal of Human Sciences and Extension, 7, 175–188.