Darrell A. Dromgoole, Associate Professor and Extension Specialist, Texas A&M AgriLife Extension Service.
Scott Cummings, Associate Department Head and Program Leader; Professor and Extension Specialist, Texas A&M AgriLife Extension Service.
Program implementation, though referenced by some Extension educators and researchers (Duttweiler & Dayton, 2009), has not received the intense analysis that other topics in Extension education, such as program evaluation, have received. Effective implementation of programs requires Extension educators to adhere completely to the recommended program design and delivery in order to achieve specified clientele change. Implementation is defined as a purposeful set of activities undertaken to incorporate defined teaching points that are strategically linked to program objectives and evaluation. Having a well-defined implementation strategy increases the chances of program success, leading to positive clientele change (Powers, Maley, Purington, Schantz, & Dotterweich, 2015). According to Duerden and Witt (2012), the steps and components of outcome evaluations are well documented in Extension literature. While outcome evaluations have become common practice for Extension programs (e.g., McCann, Peterson, & Gold, 2009), improvement can be made in the realm of program implementation evaluations.
Most Extension educators develop plans detailing how programs should be implemented; however, the level of actual adherence to these plans varies greatly (Durlak & Wells, 1997). Without understanding the degree to which a program was implemented as originally planned, often referred to as “program fidelity,” it becomes difficult to discover and interpret relationships between programs and outcomes. Gaining a clear picture of how well a program was implemented allows Extension educators to more confidently link programs to observed outcomes (Dobson & Cook, 1980). Additionally, implementation findings provide Extension educators with insight into how their programs are being conducted and how they can be improved (Rossi et al., 2004).
Implementation evaluations, when combined with outcome evaluations, can enable Extension educators to identify best practices in program implementation (Duerden & Witt, 2012). This information promotes the dissemination of programs and provides insights into how programs should be designed and implemented in order to produce the observed positive results (Duerden & Witt, 2012). Extension educators can significantly benefit from an increased focus on integrated evaluations that address both implementation and outcomes (Duerden & Witt, 2012).
At the heart of implementation is the concept of program fidelity, defined as the degree to which a program is implemented as originally planned. Program fidelity consists of five main dimensions (Dane & Schneider, 1998):

Adherence: the extent to which the program is delivered as its designers intended.
Dosage: the amount of the program delivered to participants, including its frequency and duration.
Quality of delivery: the manner in which the educator delivers the program content.
Participant responsiveness: the degree to which participants engage with and respond to the program.
Program differentiation: the extent to which the features that distinguish the program from other programs are present.
A review of the literature related to implementation fidelity measurement reveals two distinct views on how the five dimensions of implementation fidelity should be measured (Carroll et al., 2007). One approach reported in the literature is that each of these five dimensions represents an alternative way to measure fidelity (Carroll et al., 2007). Under this approach, implementation fidelity is measured by assessing a single dimension, such as adherence, dosage, or quality of delivery (Carroll et al., 2007).
Another approach to implementation measurement is to measure all five dimensions in order to capture a “comprehensive portrayal” of implementation fidelity (Carroll et al., 2007). Under this approach, evaluation requires the measurement of adherence, dosage, quality of delivery, participant responsiveness, and program differentiation. While this process of evaluating implementation fidelity is more comprehensive, it ignores the fact that the relationships among the various dimensions are more complex than such conceptualizations permit (Carroll et al., 2007).
A third conceptual framework for measuring implementation fidelity was advanced by Carroll et al. (2007). It not only proposes measuring all of these dimensions but, unlike previous attempts to make sense of the concept, also clarifies and explains the function of each dimension and their relationships to one another. Two additional dimensions are introduced into this framework: intervention complexity and facilitation strategies (Carroll et al., 2007). The potential influence of intervention complexity on implementation fidelity was suggested to the authors by the broader implementation literature, especially a systematic review that focused on identifying facilitators of and barriers to the diffusion of innovations in organizations (Carroll et al., 2007). That review revealed that the complexity of an idea represents a substantial barrier to its adoption (Carroll et al., 2007). The potential role of facilitation strategies was suggested by research evaluating the implementation fidelity of specific interventions that put strategies in place to optimize the level of fidelity achieved (Carroll et al., 2007). Such strategies included providing manuals, guidelines, training, monitoring and feedback, and capacity building (Carroll et al., 2007).
All of the dimensions to evaluate implementation fidelity are listed in Table 1, and the relationships between them are shown in the framework depicted in Figure 1 (Carroll et al., 2007):
Table 1. Dimensions to evaluate implementation fidelity (Carroll et al., 2007).
Figure 1. Dimensions to evaluate implementation fidelity (Adapted from Carroll et al., 2007).
The framework outlined in Figure 1 depicts the vital dimensions of implementation fidelity and their relationships to one another (Carroll et al., 2007). The measurement of implementation fidelity is, at its core, the measurement of adherence: how closely Extension educators delivering an intervention actually adhere to the intervention as it is outlined by its designers (Carroll et al., 2007). Adherence includes the subcategories of content, frequency, duration, and coverage (Carroll et al., 2007). The degree to which the intended content or frequency of an intervention is implemented is the degree of implementation fidelity achieved for that intervention (Carroll et al., 2007). The level achieved may be influenced or moderated by certain other variables: intervention complexity, facilitation strategies, quality of delivery, and participant responsiveness. For example, the less enthusiastic participants are about an intervention, the less likely the intervention is to be implemented properly and completely (Carroll et al., 2007).
The broken line in Figure 1 indicates that the relationship between an intervention and its outcomes is external to implementation fidelity, but the degree of implementation fidelity achieved can influence this relationship (Carroll et al., 2007). An analysis of outcomes could identify those components that are essential to the intervention and must be implemented if the intervention is to achieve its intended outcomes (Carroll et al., 2007). Such an analysis could enable the Extension educator to determine which elements of the intervention’s content are minimum requirements for high implementation fidelity. An evaluation of implementation fidelity can then focus on whether these essential components were implemented (Carroll et al., 2007). The following discussion describes the function of each dimension in detail.
Adherence is essentially the bottom-line measurement of implementation fidelity (Carroll et al., 2007). If an implemented intervention adheres completely to the content, frequency, duration, and coverage recommended by its designers, then fidelity is considered high (Carroll et al., 2007). Measuring implementation fidelity means evaluating whether the result of the implementation process is an effective realization of the intervention as planned by its designers (Carroll et al., 2007). The subcategories of adherence concern the frequency, duration, and coverage of the intervention being delivered, which the existing literature generally labels “dosage” (Carroll et al., 2007). Adherence to an intervention’s predefined components is quantifiable: an evaluation can assess how much of the intervention’s prescribed content has been delivered, how frequently, and for how long (Carroll et al., 2007).
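To illustrate how such quantifiable adherence data might be recorded and summarized, the sketch below (in Python) computes simple delivered-to-planned ratios for the content, frequency, duration, and coverage subcategories. The field names, example values, and averaging rule are illustrative assumptions, not part of the Carroll et al. (2007) framework; a real instrument would be built from the program’s own design documents.

```python
# Minimal sketch (hypothetical field names and scoring): summarizing adherence
# as delivered-to-planned ratios for content, frequency, duration, and coverage.
# This is an illustration, not a prescribed instrument from Carroll et al. (2007).
from dataclasses import dataclass

@dataclass
class SessionRecord:
    planned_topics: int      # teaching points prescribed by the program design
    delivered_topics: int    # teaching points actually covered
    planned_minutes: int     # intended session length
    delivered_minutes: int   # actual session length

def ratio(delivered: float, planned: float) -> float:
    """Delivered-to-planned ratio, capped at 1.0 so over-delivery is not extra credit."""
    return 0.0 if planned <= 0 else min(delivered / planned, 1.0)

def adherence_summary(sessions_planned: int, sessions_held: int,
                      participants_targeted: int, participants_reached: int,
                      records: list[SessionRecord]) -> dict[str, float]:
    """Return a score between 0 and 1 for each adherence subcategory."""
    return {
        "content":   sum(ratio(r.delivered_topics, r.planned_topics) for r in records) / len(records),
        "duration":  sum(ratio(r.delivered_minutes, r.planned_minutes) for r in records) / len(records),
        "frequency": ratio(sessions_held, sessions_planned),
        "coverage":  ratio(participants_reached, participants_targeted),
    }

# Example: a program designed as 6 sessions for 30 participants, with 2 sessions held so far.
records = [SessionRecord(5, 4, 60, 55), SessionRecord(5, 5, 60, 60)]
print(adherence_summary(sessions_planned=6, sessions_held=2,
                        participants_targeted=30, participants_reached=22,
                        records=records))
```

Reporting each subcategory separately, rather than collapsing them into a single number, keeps the distinct aspects of adherence visible when interpreting outcomes.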
A high level of adherence or fidelity to an intervention, or to its essential components, is not easily achieved (Carroll et al., 2007). Several factors may influence or moderate the degree of fidelity with which an intervention is implemented (Carroll et al., 2007). The potential moderators of this relationship are intervention complexity, facilitation strategies, quality of delivery, and participant responsiveness (Carroll et al., 2007).
These moderators are not necessarily disconnected elements (Carroll et al., 2007). There may be relationships at work between two or more of them (Carroll et al., 2007). For example, training and guidelines on how to deliver an intervention may have a direct impact on the quality with which the intervention is actually delivered (Carroll et al., 2007). If the amount of training provided is small, then the quality of the resulting delivery may be poor. Facilitation strategies may also influence participant responsiveness: the use of incentives could make both Extension educators and participants more responsive to a new intervention. Quality of delivery may function in the same way, with a well-delivered intervention making participants more enthusiastic about and committed to it (Carroll et al., 2007). One moderator might therefore predict another.
This framework suggests that any evaluation of implementation fidelity must also measure the factors that influence the degree of fidelity achieved, such as intervention complexity and the adequacy of facilitation strategies. When measuring implementation using this framework, Extension educators also need to assess participant responsiveness, or receptiveness, to proposed and implemented interventions (Carroll et al., 2007). With the exception of a few studies that measure quality of delivery or participant responsiveness, most implementation fidelity research focuses solely on a fidelity score determined almost exclusively by adherence (Carroll et al., 2007). Furthermore, this research seldom reports high implementation fidelity (Carroll et al., 2007); reported fidelity often falls short of the ideal and is sometimes very poor. It is only by measuring the moderators that potential explanations for low or inadequate implementation can be determined or understood (Carroll et al., 2007), and only by identifying and controlling for the contribution of possible barriers to implementation that such issues can be addressed to achieve higher implementation fidelity (Carroll et al., 2007).
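One way to act on this suggestion is to record simple ratings of the moderators alongside the adherence data, so that low fidelity can be examined against possible explanations. The sketch below is a hypothetical illustration of that bookkeeping; the rating scale, field names, and threshold rule are assumptions, not prescribed by Carroll et al. (2007).

```python
# Hypothetical record pairing an adherence score with ratings of the moderators
# named in the framework (1 = low, 5 = high). Scale and field names are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FidelityAssessment:
    adherence_score: float          # e.g., mean of the adherence subcategory ratios
    intervention_complexity: int    # 1 = simple, 5 = highly complex
    facilitation_strategies: int    # adequacy of manuals, training, feedback
    quality_of_delivery: int        # observer rating of delivery quality
    participant_responsiveness: int # engagement and enthusiasm of participants

    def possible_barriers(self, threshold: int = 3) -> list[str]:
        """List moderators that may explain low fidelity (complexity is a barrier when high)."""
        barriers = []
        if self.intervention_complexity > threshold:
            barriers.append("intervention complexity")
        if self.facilitation_strategies < threshold:
            barriers.append("facilitation strategies")
        if self.quality_of_delivery < threshold:
            barriers.append("quality of delivery")
        if self.participant_responsiveness < threshold:
            barriers.append("participant responsiveness")
        return barriers

assessment = FidelityAssessment(0.62, 4, 2, 3, 2)
print(assessment.possible_barriers())
# ['intervention complexity', 'facilitation strategies', 'participant responsiveness']
```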
Extension educators can use a Program Implementation Fidelity self-check checklist to rapidly assess program implementation fidelity:
Table 2. Program Implementation Fidelity Checklist.
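Because the checklist items in Table 2 are not reproduced here, the brief sketch below illustrates only the general idea of a self-check: each yes/no item is recorded, an overall completion rate is reported, and unmet items are flagged for follow-up. The items shown are hypothetical placeholders, not the actual Table 2 checklist.

```python
# Illustrative self-check sketch; the items below are placeholders, not the
# actual Table 2 checklist.
checklist = {
    "All prescribed teaching points were delivered": True,
    "Sessions were delivered in the recommended sequence": True,
    "Recommended session length was followed": False,
    "Program materials (manuals, guides) were used as provided": True,
}

completed = sum(checklist.values())
print(f"Self-check: {completed}/{len(checklist)} items met "
      f"({100 * completed / len(checklist):.0f}%)")
for item, met in checklist.items():
    if not met:
        print(f"  Follow up: {item}")
```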
Understanding whether or not a program was implemented correctly allows Extension educators to more accurately interpret the relationship between the program and observed outcomes (Durlak, 1998; Gresham & Gansle, 1993; Moncher & Prinz, 1991). Implementation research also helps Extension educators more accurately describe program components and their associated degree of program fidelity, fostering more accurate replication of the intervention (Duerden & Witt, 2012). Without a clear understanding of these issues, difficulties can arise when replicating previously successful programs, because Extension educators will lack information regarding how best to implement the program and the degree of fidelity needed to produce intended outcomes (Backer, Liberman, & Kuehnel, 1986).
Effective implementation of programs requires Extension educators to clearly understand what a program is supposed to accomplish and how it should be put into practice. When a program’s educational components are altered or its educational activities are not sequenced in the recommended manner, the program becomes less effective at yielding clientele change. Understanding whether or not a program was implemented correctly allows Extension educators to more accurately interpret the relationship between the program and observed clientele change (Durlak, 1998; Gresham & Gansle, 1993; Moncher & Prinz, 1991).
Implementation theory is one of the most important, and at the same time most neglected, aspects of Extension education. This is unfortunate given the benefits of quality program implementation, such as the ability to link programs more confidently to observed outcomes, insight into how programs are being conducted and how they can be improved, and more accurate replication of successful programs.
Backer, T. E., Liberman, R. P., & Kuehnel, T. G. (1986). Dissemination and adoption of innovative psychosocial interventions. Journal of Consulting and Clinical Psychology, 54(1), 111–118.
Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J., & Balain, S. (2007). A conceptual framework for implementation fidelity. Implementation Science, 2(1), 40.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23–45.
Domitrovich, C. E., & Greenberg, M. T. (2000). The study of implementation: Current findings from effective programs that prevent mental disorders in school-aged children. Journal of Educational and Psychological Consultation, 11(2), 193-221.
Duerden, M., & Witt, P. (2012). Assessing program implementation: What it is, why it’s important, and how to do it. Journal of Extension, 50(1), Article 1FEA4. Available at: www.joe.org/joe/2012february/a4p.shtml
Durlak, J. A. (1998). Why program implementation is important. Journal of Prevention and Intervention in the Community, 17(2), 5–18.
Dusenbury, L., Brannigan, R., Falco, M., & Hansen, W. B. (2003). A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research, 18(2), 237-256.
Greenberg, M. R. (2006). The diffusion of public health innovations. American Journal of Public Health, 96(2), 209–210.
Gresham, F. M., & Gansle, K. A. (1993). Treatment integrity of school-based behavioral intervention studies: 1980-1990. School Psychology Review, 22(2), 254.
Fixsen, D. L., Naoom, S. F., Blasé, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231). Available at: http://nirn.fmhi.usf.edu/resources/publications/Monograph/pdf/monograph_full.pdf
Lewis, K., Lesesne, C., Zahniser, S., Wilson, M., Desiderio, G., Wandersman, A., & Green, D. (2012). Developing a prevention synthesis and translation system to promote science-based approaches to teen pregnancy, HIV and STI prevention. American Journal of Community Psychology, 50, 553–571.
Moncher, F. J., & Prinz, R. J. (1991). Treatment fidelity in outcome studies. Clinical Psychology Review, 11(3), 247-266.
Powers, J., Maley, M., Purington, A., Schantz, K., & Dotterweich, J. (2015). Implementing evidence-based programs: Lessons learned from the field. Applied Developmental Science, 19(2), 108–116. DOI: 10.1080/10888691.2015.1020155
Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: Free Press.
Spoth, R., Rohrbach, L. A., Greenberg, M., Leaf, P., Brown, C. H., Fagan, A. … Society for Prevention Research Type 2 Translational Task Force. (2013). Addressing core challenges for the next generation of type 2 translation research and systems: The translation science to population impact (TSci impact) framework. Prevention Science, 14(4), 319–351.
Wandersman, A., Duffy, J., Flaspohler, P., Noonan, R., Lubell, K., Stillman, L., Blachman, M., Dunville, R., & Saul, J. (2008). Bridging the gap between prevention research and practice: The Interactive Systems Framework for dissemination and implementation. American Journal of Community Psychology, 41, 171–181.