Darrell A. Dromgoole, Associate Professor and Extension Specialist, Texas A&M AgriLife Extension Service.
Scott Cummings, Associate Department Head and Program Leader; Professor and Extension Specialist, Texas A&M AgriLife Extension Service.
There is no doubt that evidence-based programs can yield positive clientele change in research settings, and this has prompted tremendous enthusiasm among Extension educators. However, some researchers consider knowledge about how best to take interventions that have been developed and tested in research settings to scale in local Extension educational settings to be deficient (Fixsen, Naoom, Blase, Friedman & Wallace, 2005; Spoth et al., 2013; Wandersman et al., 2008). Researchers have also identified numerous challenges to attaining high-quality replication of evidence-based programs (Greenberg, 2006; Lewis et al., 2012; Rogers, 1995).
Transitioning programs from the stage of initial design and testing to large-scale, county-level implementation is critical. Implementation is defined as a purposeful set of activities undertaken to incorporate defined teaching points that are linked to program objectives and evaluation. Therefore, a well-defined implementation strategy increases the chances of program success, leading to positive clientele change (Powers et al., 2015).
The topic of program implementation, though addressed by some authors (Duttweiler & Dayton, 2009), has not received the intense analysis that other topics, such as program evaluation, have. Although most organizations, including Texas A&M AgriLife Extension, have developed detailed plans for implementing evidence-based programs, the level of actual adherence, or program fidelity, to these plans is critical.
At the heart of implementation is the concept of program fidelity, defined as the degree to which a program is implemented as originally planned. Program fidelity consists of five main dimensions: adherence, dosage, quality of delivery, participant responsiveness, and program differentiation (Dane & Schneider, 1998).
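To make these dimensions concrete, the sketch below shows one way an observer's rating of a single program delivery might be recorded and summarized. It is illustrative only: the 1–5 scale, the field names, and the equal weighting of dimensions are assumptions made for this example, not part of Dane and Schneider's (1998) framework or any Extension instrument.

```python
from dataclasses import dataclass, fields

@dataclass
class FidelityAssessment:
    """One observer's rating of a single program delivery.

    Each field is one of Dane and Schneider's (1998) five fidelity
    dimensions, scored here on an assumed 1-5 scale
    (5 = fully consistent with the program as designed).
    """
    adherence: int                   # were prescribed components delivered?
    dosage: int                      # amount/duration of programming delivered
    quality_of_delivery: int         # skill with which content was taught
    participant_responsiveness: int  # engagement of participants
    program_differentiation: int     # distinctness from other programming

    def composite_score(self) -> float:
        """Unweighted mean of the five dimensions (an assumption;
        a real instrument might weight dimensions differently)."""
        values = [getattr(self, f.name) for f in fields(self)]
        return sum(values) / len(values)

# Example: a delivery that followed the curriculum closely
# but drew uneven participant engagement.
session = FidelityAssessment(
    adherence=5, dosage=4, quality_of_delivery=4,
    participant_responsiveness=3, program_differentiation=5,
)
print(f"Composite fidelity: {session.composite_score():.1f} / 5")
```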
Understanding whether or not a program was implemented correctly allows Extension educators to more accurately interpret the relationship between the program and observed outcomes (Durlak, 1998; Gresham & Gansle, 1993; Moncher & Prinz, 1991). Implementation research also helps Extension educators more accurately describe program components and their associated degree of program fidelity, fostering more accurate replication of the intervention (Duerden & Witt, 2012). Without a clear understanding of these issues, difficulties can arise when replicating previously successful programs because Extension educators will lack information regarding how best to implement the program and the degree of fidelity needed to produce observed outcomes (Backer, Liberman, & Kuehnel, 1986).
Table 1 below illustrates a program implementation map for an evidence-based program:
Table 1. Program Implementation Map for Evidence-Based Programs.
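Following the definition of implementation above, a minimal sketch of the structure such a map might take is shown below: each row ties a program component's teaching point to the objective it supports and the evaluation item that measures it. The row contents are hypothetical examples, not the published Table 1.

```python
from dataclasses import dataclass

@dataclass
class MapRow:
    """One row of a hypothetical implementation map, linking a defined
    teaching point to a program objective and an evaluation item."""
    component: str        # program component or activity
    teaching_point: str   # defined content to be delivered
    objective: str        # program objective the teaching point supports
    evaluation_item: str  # how attainment of the objective is measured

# Illustrative rows only; real maps come from the program's curriculum.
implementation_map = [
    MapRow("Session 1 lecture",
           "Define evidence-based programming",
           "Participants can state what makes a program evidence-based",
           "Post-session quiz item 1"),
    MapRow("Session 1 group exercise",
           "Sequence activities as the curriculum prescribes",
           "Participants can order program activities correctly",
           "Observer checklist item 3"),
]

for row in implementation_map:
    print(f"{row.component}: {row.teaching_point} -> {row.evaluation_item}")
```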
Effective implementation of evidence-based programs requires Extension educators to clearly understand what a program is supposed to accomplish and how it should be put into practice. When an evidence-based program's educational components are altered, or its educational activities are not sequenced in the recommended manner, it is no longer an evidence-based program.
Implementation theory is one of the most important, and at the same time most neglected, aspects of Extension education. This is unfortunate given the benefits of quality program implementation, such as more accurate interpretation of the relationship between a program and its observed outcomes and more reliable replication of successful interventions.
Backer, T. E., Liberman, R. P., & Kuehnel, T. G. (1986). Dissemination and adoption of innovative psychosocial interventions. Journal of Consulting and Clinical Psychology, 54(1), 111–118.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23–45.
Domitrovich, C. E., & Greenberg, M. T. (2000). The study of implementation: Current findings from effective programs that prevent mental disorders in school-aged children. Journal of Educational and Psychological Consultation, 11(2), 193–221.
Duerden, M., & Witt, P. (2012). Assessing program implementation: What it is, why it's important, and how to do it. Journal of Extension, 50(1), Article 1FEA4. Available at: www.joe.org/joe/2012february/a4p.shtml
Durlak, J. A. (1998). Why program implementation is important. Journal of Prevention and Intervention in the Community, 17(2), 5–18.
Dusenbury, L., Brannigan, R., Falco, M., & Hansen, W. B. (2003). A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research, 18(2), 237–256.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231). Available at: http://nirn.fmhi.usf.edu/resources/publications/Monograph/pdf/monograph_full.pdf
Greenberg, M. R. (2006). The diffusion of public health innovations. American Journal of Public Health, 96(2), 209–210.
Gresham, F. M., & Gansle, K. A. (1993). Treatment integrity of school-based behavioral intervention studies: 1980–1990. School Psychology Review, 22(2), 254.
Lewis, K., Lesesne, C., Zahniser, S., Wilson, M., Desiderio, G., Wandersman, A., & Green, D. (2012). Developing a prevention synthesis and translation system to promote science-based approaches to teen pregnancy, HIV and STI prevention. American Journal of Community Psychology, 50, 553–571.
Moncher, F. J., & Prinz, R. J. (1991). Treatment fidelity in outcome studies. Clinical Psychology Review, 11(3), 247–266.
Powers, J., Maley, M., Purington, A., Schantz, K., & Dotterweich, J. (2015). Implementing evidence-based programs: Lessons learned from the field. Applied Developmental Science, 19(2), 108–116. DOI: 10.1080/10888691.2015.1020155
Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: Free Press.
Spoth, R., Rohrbach, L. A., Greenberg, M., Leaf, P., Brown, C. H., Fagan, A. … Society for Prevention Research Type 2 Translational Task Force. (2013). Addressing core challenges for the next generation of type 2 translation research and systems: The translation science to population impact (TSci impact) framework. Prevention Science, 14(4), 319–351.
Wandersman, A., Duffy, J., Flaspohler, P., Noonan, R., Lubell, K., Stillman, L., Blachman, M., Dunville, R., & Saul, J. (2008). Bridging the gap between prevention research and practice: The Interactive Systems Framework for dissemination and implementation. American Journal of Community Psychology, 41, 171–181.