Darrell A. Dromgoole, Associate Professor and Extension Specialist, Texas A&M AgriLife Extension Service.
Scott Cummings, Associate Department Head and Program Leader; Professor and Extension Specialist, Texas A&M AgriLife Extension Service.
Although the core mission and vision of Extension, translating research into practice, remain the same today as at the inception of Extension work in 1914 with the passage of the Smith-Lever Act, the challenges associated with implementing programs have evolved as communities and clientele have changed. By understanding research related to program implementation, Extension professionals at the national, state, and local levels can advance the scholarship of Extension and deliver evidence-based programs that continue to meet and exceed the needs of clientele (Gagnon, Franz, Garst, & Bumpus, 2015).
There has been a clear shift toward evidence-based practice in the Texas A&M AgriLife Extension Service. The evidence-based movement focuses on outcome assessment and clientele change, ensuring that programmatic outcomes are achieved (Aarons, Sommerfeld, Hecht, Silovsky, & Chaffin, 2009). This emphasis on outcomes often sacrifices another important component of programs: their implementation, or how programs are delivered (Berkel, Mauricio, Schoenfelder, & Sandler, 2011). Considering implementation assessment in the program development process provides a more complete portrayal of the efficacy of programs and a more robust understanding of why a program succeeded or failed.
The majority of the research related to program implementation has occurred in the prevention and health sciences fields (Duerden & Witt, 2012; Sloboda, Dusenbury, & Petras, 2014). These fields have clear-cut parallels with Extension work in both community-based participatory research (Israel, Eng, Schulz, & Parker, 2013) and transformative learning (Franz, Garst, Baughman, Smith, & Peters, 2009). Implementation research is “the scientific inquiry into questions concerning implementation—the act of carrying an intention into effect” (Peters, Adam, Alonge, Agyepong, & Tran, 2013, p. 1). Put simply, investigating implementation means looking at how a program was delivered rather than what outcomes were achieved (Gagnon et al., 2015). The consideration of program implementation is an essential aspect of the program planning, development, and evaluation process (Berkel et al., 2011; Seevers & Graham, 2012). A well-designed program can have differing levels of success depending on the quality and quantity of implementation (Gagnon et al., 2015). If only a portion of a program was delivered as designed, it is reasonable to anticipate that only a portion of the desired outcomes (if any) will be achieved (Duerden & Witt, 2012). Conversely, if a program’s content is present but is not delivered with the high quality intended by program designers, implementation value and corresponding outcomes can be compromised (Mihalic, Fagan, & Argamaso, 2008). The importance of implementation is clear: programs delivered with high-quality implementation tend to produce positive outcomes more consistently than programs delivered with lower-quality implementation (Biglan & Taylor, 2000; Dane & Schneider, 1998; Durlak & DuPre, 2008; Mihalic, 2002). A failure to pay attention to implementation can also affect program outcomes in other ways. Caldwell et al. (2008) reported, “Small effect sizes or findings inconsistent with well-reasoned hypotheses may not be related to the efficacy of the program as it was designed, but rather be related to failure to implement the program as intended” (p. 148).
Another important reason for monitoring program implementation occurs when a program moves from efficacy trials, where researchers typically have a high level of control, to the real world, where the program is delivered to its intended audience with less control by program designers (Mihalic et al., 2008). In this situation, implementation assessment helps determine if research-based programs are practical and transferable in real-world settings (Fixsen, Blase, Naoom, & Wallace, 2009; Johnson, Mellard, Fuchs, & McKnight, 2006). Implementation assessment ensures that programs are delivered consistently across sites and highlights potential explanations for omissions or modifications to a program (Gagnon et al., 2015). Finally, the pairing of implementation assessment with a traditional outcome evaluation provides “the identification of effective programs and practices” (Duerden & Witt, 2012, p. 2), and this pairing provides a benchmark for Extension programs.
The primary factor that makes evidence-based programs unique, setting them apart from other systems for moving science to practice, is the emphasis on statistical analyses of qualified existing studies and on guidelines developed through a rigorous process of analysis and review, all set within a framework that views the science-to-practice continuum as a formal system for the diffusion of research (Rogers, 2003). Figure 1 illustrates the science-to-practice relationship for evidence-based programs:
Figure 1. Science to Practice Relationship for Evidence-Based Programs (Rogers, 2003).
Several factors contribute to effective program implementation, including the characteristics of the community being served, the responsiveness of program participants, the characteristics of the program itself, and the characteristics of the individuals delivering the program (Gagnon et al., 2015).
Figure 2 illustrates how current literature indicates these factors contribute to program implementation and corresponding program outcomes.
Figure 2. Conceptual Model of the Factors Contributing to Quality Program Implementation and Corresponding Outcomes.
An important consideration regarding program implementation relates to the characteristics of the community where a program is delivered (Gagnon et al., 2015). If a program is designed for a more economically stable, English-speaking audience but is delivered to a lower socioeconomic, Spanish-speaking audience, it is no surprise that the quality of implementation may be compromised (Gagnon et al., 2015). This cultural mismatch occurs frequently within social and prevention programs (Castro, Barrera, & Martinez, 2004). Furthermore, when a community is not consulted or not ready for a program, community stakeholders may be disinterested in the program (Gagnon et al., 2015). Needs assessment offers one way to gauge community-level interest in Extension programs (Garst & McCawley, 2015). Another consideration regarding a program’s successful implementation within a community relates to the context for which it was designed versus the context in which it is currently being delivered (e.g., urban versus rural) (Gagnon et al., 2015). Extension educators must consider these factors when choosing and delivering programs within the communities they serve (Castro et al., 2004).
Another important community characteristic for successful program implementation relates to the participants being served and their responsiveness to the program. According to James Bell Associates (2009), participant responsiveness refers to “the manner in which participants react to or engage in a program. Aspects of participant responsiveness can include participants’ level of interest; perceptions about the relevance and usefulness of a program; and their level of engagement” (p. 2). Participant responsiveness can influence both outcomes and the quality of program implementation (Gagnon et al., 2015). For example, “the less enthusiastic participants are about an intervention, the less likely the intervention is to be implemented properly and fully” (Carroll et al., 2007, p. 3). If participants are not responsive to a program or to the Extension educator, or are unable to engage with the program for other reasons, this may influence an Extension educator’s program delivery and compromise the quality of program implementation (Century, Freeman, & Rudnick, 2008).
The characteristics of a program may also influence levels of program implementation (Gagnon et al., 2015). If a program is too complex, too lengthy, or inappropriate for the population being served, the likelihood of the program being delivered as designed may be low (Perepletchikova, Treat, & Kazdin, 2007). Furthermore, Extension programs are inherently designed for the communities they serve by addressing “the problems, issues, concerns of local communities” (Garst & McCawley, 2015, p. 27). Thus, if a program is not tailored to a local group, the quality with which it is implemented may suffer (Arnold, 2015).
Conversely, if a program is too simple, those delivering it may change or modify the program to alleviate boredom or to engage participants more fully (Carroll et al., 2007). Program complexity and organization are associated with successful implementation: programs with clear processes and outcomes are easier to implement and less likely to result in low-quality implementation (Mihalic, Irwin, Elliott, Fagan, & Hansen, 2004).
Individuals providing programs exercise great influence over how programs are implemented (Gagnon et al., 2015). These program professionals and their corresponding characteristics (e.g., program-specific training, program buy-in, level of experience facilitating groups, overall competency) can significantly impact the quality of program delivery (Dusenbury, Brannigan, Falco, & Hansen, 2003; Mihalic et al., 2008; Perepletchikova et al., 2007; Sloboda et al., 2014) by changing the program design, the intended method of delivery, and the structure of a program, and by adapting program materials (e.g., curriculum, program settings, program components, etc.).
The level and quality of training offered to Extension educators has been shown to be positively associated with both positive programmatic outcomes and quality implementation (Cyr, 2008; Dufrene, Noell, Gilbertson, & Duhon, 2005). When training was active and engaging and involved role playing, peer observation, and timely feedback, Extension educator buy-in, motivation, and self-efficacy were enhanced, and so, in turn, was the quality of program delivery (Durlak & DuPre, 2008). In a study of substance abuse prevention programs, Little et al. (2013) found that comprehensive training had a significant positive impact on implementation. On the other hand, inconsistent or poor training negatively impacted an educator’s ability to implement a program as designed (Gottfredson et al., 2000).
An Extension educator’s buy-in can have a profound effect on both program implementation and outcomes. Buy-in encompasses the educator’s motivation to facilitate a program, belief in the program’s goals, attitude toward the program, and level of agreement that the program will be successful (Dusenbury et al., 2003; Dusenbury, Brannigan, Hansen, Walsh, & Falco, 2005; Johnson et al., 2006). Quality implementation and the achievement of positive program outcomes are correlated with Extension educator buy-in (Durlak & DuPre, 2008; Stein et al., 2008).
Experience is another factor that influences how Extension educators implement program goals (Nobel et al., 2006), because prior program implementation experience helps Extension educators feel more comfortable presenting in front of a group (Allen, Hunter, & Donohue, 1989) and may enhance their competence and confidence in delivering programs. However, experience may also lead an Extension educator to overestimate his or her competence, thereby negatively affecting program delivery (Zollo & Gottschalg, 2004). Finally, there also appears to be a relationship between Extension educator competency and quality program implementation (Gagnon et al., 2015). Competency can be defined as the level of skill and understanding a facilitator possesses when delivering a program (Milligan, 1998). In an investigation of Extension program educators, Cyr (2008) found that quality training enhanced Extension educators’ competency and contributed to educators feeling more confident about their efficacy as facilitators. However, that study did not explicitly link this competency with improvements in programmatic implementation or outcomes.
Extension educators recognize the importance of delivering programs according to how they are designed, a core tenet of measuring implementation quality (Dusenbury et al., 2003; Hansen, 2014). Measuring and monitoring program implementation ensures that a program plan is adhered to as designed by program developers (Gagnon et al., 2015). However, implementation assessment is more difficult than a traditional outcomes assessment: investigating a program’s implementation level requires more training for program evaluators, more time, and more resources (Hansen, 2014; Mihalic et al., 2008). This measurement typically takes place through process evaluations that examine the elements of a program and how they can be improved (U.S. Department of Health and Human Services, 2002). Implementation assessment reveals whether the presumed relationship between program delivery and program outcomes actually holds (Moncher & Prinz, 1991). If high-quality program implementation is maintained but desired outcomes are not achieved, this may suggest the need for program modification or cancellation (Gagnon et al., 2015). Monitoring a program for quality implementation can also be used to determine what program components or features were or were not present, or what adaptations and omissions occurred, and to provide confirmation that a program is being delivered as designed (Mowbray et al., 2003).
Program implementation quality is typically measured using three methods: indirect, direct, and hybrid assessments (Gresham, MacMillan, Beebe-Frankenberger, & Bocian, 2000). In a direct assessment, the components and features of a program are clearly specified in operational terms on a checklist based on the major program components (Gagnon et al., 2015). In many programs, direct observation is preferred for monitoring program implementation (Domitrovich & Greenberg, 2000). In a direct assessment, trained staff or faculty observe the program and determine the percentage of the program implemented as designed (Gagnon et al., 2015). Staff or faculty also identify Extension educators needing retraining due to low levels of implementation and/or omission or adaptation of program materials (Gresham, 1989).
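To make the arithmetic of a direct assessment concrete, the brief Python sketch below tallies a hypothetical observation checklist and reports the percentage of components delivered as designed. The component names and the 80% retraining threshold are illustrative assumptions, not values drawn from the sources cited above.

```python
# Illustrative sketch of a direct-assessment checklist for one observed session.
# Component names and the retraining threshold are hypothetical.

# Observer marks each operationally defined component as delivered (True) or not.
observed_checklist = {
    "welcome_and_objectives": True,
    "core_lesson_content": True,
    "guided_practice_activity": False,   # omitted this session
    "participant_discussion": True,
    "wrap_up_and_evaluation": False,     # omitted this session
}

delivered = sum(observed_checklist.values())
percent_implemented = 100 * delivered / len(observed_checklist)
print(f"Components delivered as designed: {percent_implemented:.0f}%")

# Flag the session for follow-up (e.g., educator retraining) below a chosen threshold.
RETRAINING_THRESHOLD = 80
if percent_implemented < RETRAINING_THRESHOLD:
    print("Low implementation level: review omitted components with the educator.")
```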
In an indirect assessment, methods for monitoring implementation include self-reports, interviews, and permanent products (Gresham, 1989). For example, an Extension educator would rate himself or herself on a seven-point Likert scale on the degree to which he or she implemented each section of a program with fidelity. By completing a self-report, Extension educators may become more aware of areas in which to maintain and improve fidelity (Gagnon et al., 2015), and they may devote more attention to enhancing fidelity in those areas in future program implementation. Another useful option for the assessment of program implementation is the use of structured Extension educator journals (Gagnon et al., 2015). In a study of camp staff, Mainieri and Hill (2015) used daily camp counselor journaling of program activities, adherence to the suggested program components, and reasons for deviations from the program plan. The information contained in these journals was useful in determining how programs were being modified and the underlying causes of the modifications or omissions (Gagnon et al., 2015). Finally, a hybrid assessment blends indirect and direct strategies (i.e., observation combined with self-report). This approach is useful for triangulating across methods to obtain a truer estimate of an Extension educator’s implementation quality rather than relying on a single method of implementation assessment (Mainieri & Anderson, 2014).
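As a rough illustration of how a hybrid assessment might triangulate these two data sources, the sketch below pairs a hypothetical educator self-report (seven-point scale) with an observer rating for each program section and flags large discrepancies for follow-up. The section names, the simple averaging, and the two-point discrepancy rule are assumptions for illustration only, not procedures prescribed by the cited studies.

```python
# Illustrative hybrid assessment: combine educator self-report with observer ratings.
# Section names, scales, and the discrepancy rule are hypothetical.

self_report = {"section_1": 7, "section_2": 5, "section_3": 6}   # educator, 1-7 scale
observation = {"section_1": 6, "section_2": 3, "section_3": 6}   # observer, 1-7 scale

for section in self_report:
    combined = (self_report[section] + observation[section]) / 2
    gap = abs(self_report[section] - observation[section])
    note = "  <- large self-report/observation gap; review with educator" if gap >= 2 else ""
    print(f"{section}: combined rating {combined:.1f} of 7{note}")
```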
Extension educators interested in improving program implementation can focus on five dimensions of program delivery: fidelity, exposure, quality of program delivery, Extension educator competence, and program differentiation (Berkel et al., 2011; Dane & Schneider, 1998; Gagnon, 2014; Hansen, 2014; Mihalic, 2009; Milligan, 1998).
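One way to keep these five dimensions organized during an evaluation is to record them together for each program offering. The sketch below defines a hypothetical record for doing so; the 0-100 scaling and field names are illustrative conventions, not measures specified by the cited literature.

```python
# Illustrative record summarizing the five delivery dimensions for one program offering.
from dataclasses import dataclass

@dataclass
class ImplementationProfile:
    fidelity: float          # % of components delivered as designed
    exposure: float          # % of intended sessions or contact hours delivered
    delivery_quality: float  # observer rating of delivery quality, rescaled to 0-100
    competence: float        # observer rating of educator competence, rescaled to 0-100
    differentiation: float   # % of components unique to this program vs. a comparison

profile = ImplementationProfile(
    fidelity=85, exposure=90, delivery_quality=75, competence=80, differentiation=70
)
print(profile)
```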
Considering the emphasis that Extension places on program quality and on meeting the needs of Extension stakeholders (Garst & McCawley, 2015), implementation assessment should be a central goal of Extension as more evidence-based programs are adopted. With a clear movement toward evidence-based programs, driven in part by the demands of funders and legislators, there is a need to ensure that Extension is implementing the very best programs possible (Gagnon et al., 2015). The strength of implementation assessment is that it highlights not only areas that Extension can improve but also current areas of strength. Implementation assessment also highlights the move from research to practice and the challenges of working in the real world versus the laboratory environment. When implementation quality is assessed, practical insights often emerge (e.g., the program is culturally inappropriate, the participants are not engaged, or there is not enough time to deliver all components). Thoughtful consideration of how programs are implemented is necessary to achieve the best possible outcomes for Extension program participants.
Many factors may enhance or undermine quality program implementation. As mentioned earlier, Extension work is only done well when all levels of delivery, from the organization to the participants themselves, are engaged and considered in terms of their contribution to quality implementation (Gagnon et al., 2015). When this complexity (Figure 2) is considered, quality program outcomes generally follow (Dane & Schneider, 1998; Durlak & DuPre, 2008). Given the relative scarcity of implementation science research (Duerden & Witt, 2012), Extension has an opportunity to contribute to implementation science and further not only its own research base but also that of the broader social and prevention sciences.
Implementation work, despite its broad support in the social sciences, is still very much in its early development. Furthermore, by its nature, it requires more resources than a traditional outcomes assessment (Gagnon et al., 2015). However, because a core goal of Extension is the dissemination and replication of evidence-based programs, it is a necessary and valuable endeavor. By measuring programs for their implementation quality, Extension, as a field committed to both education and research, will be better able to make accurate statements about program efficacy and benefits to constituents.
Aarons, G. A., Sommerfeld, D. H., Hecht, D. B., Silovsky, J. F., & Chaffin, M. J. (2009). The impact of evidence-based practice implementation and fidelity monitoring on staff turnover: Evidence for a protective effect. Journal of Consulting and Clinical Psychology, 77(2), 270–280. doi:10.1037/a0013223
Allen, M., Hunter, J. E., & Donohue, W. A. (1989). Meta‐analysis of self‐report data on the effectiveness of public speaking anxiety treatment techniques. Communication Education, 38(1), 54–76. doi:10.1080/03634528909378740
Arnold, M. E. (2015). Connecting the dots: Improving Extension program planning with program umbrella models. Journal of Human Sciences and Extension, 3(2), 48–67.
Berkel, C., Mauricio, A. M., Schoenfelder, E., & Sandler, I. N. (2011). Putting the pieces together: An integrated model of program implementation. Prevention Science, 12(1), 23–33. doi:10.1007/s11121-010-0186-1
Biglan, A., & Taylor, T. K. (2000). Increasing the use of science to improve child-rearing. The Journal of Primary Prevention, 21(2), 207–226. doi:10.1023/A:1007083203280
Caldwell, L. L., Younker, A. S., Wegner, L., Patrick, M. E., Vergnani, T., Smith, E. A., & Flisher, A. J. (2008). Understanding leisure-related program effects by using process data in the HealthWise South Africa project. Journal of Park & Recreation Administration, 26(2), 146–162.
Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J., & Balain, S. (2007). A conceptual framework for implementation fidelity. Implementation Science, 2, Article No. 40. doi:10.1186/1748-5908-2-40
Castro, F. G., Barrera, M., & Martinez, C. R., Jr. (2004). The cultural adaptation of prevention interventions: Resolving tensions between fidelity and fit. Prevention Science, 5(1), 41–45. doi:10.1023/B:PREV.0000013980.12412.cd
Century, J., Freeman, C., & Rudnick, M. (2008, March). A framework for measuring and accumulating knowledge about fidelity of implementation of science instructional materials. Proceedings from National Association for Research in Science Teaching Annual Meeting, Baltimore, MD.
Cyr, L. F. (2008). Facilitation competence: A catalyst for effective Extension work. Journal of Extension, 46(4), Article 4RIB2. Retrieved from http://www.joe.org/joe/2008august/rb2.php.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18(1), 23–45. doi:10.1016/S0272-7358(97)00043-3
Domitrovich, C. E., & Greenberg, M. T. (2000). The study of implementation: Current findings from effective programs that prevent mental disorders in school-aged children. Journal of Educational and Psychological Consultation, 11(2), 193–221. doi:10.1207/S1532768XJEPC1102_04
Duerden, M. D., & Witt, P. A. (2012). Assessing program implementation: What it is, why it’s important, and how to do it. Journal of Extension, 50(1), Article 1FEA4. Retrieved from http://www.joe.org/joe/2012february/a4.php
Dufrene, B. A., Noell, G. H., Gilbertson, D. N., & Duhon, G. J. (2005). Monitoring implementation of reciprocal peer tutoring: Identifying and intervening with students who do not maintain accurate implementation. School Psychology Review, 34(1), 74–86.
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41(3-4), 327–350. doi:10.1007/s10464-008-9165-0
Dusenbury, L., Brannigan, R., Falco, M., & Hansen, W. B. (2003). A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research, 18(2), 237–256. doi:10.1093/her/18.2.237
Dusenbury, L., Brannigan, R., Hansen, W. B., Walsh, J., & Falco, M. (2005). Quality of implementation: Developing measures crucial to understanding the diffusion of preventive interventions. Health Education Research, 20(3), 308–313. doi:10.1093/her/cyg134
Fixsen, D. L., Blase, K. A., Naoom, S. F., & Wallace, F. (2009). Core implementation components. Research on Social Work Practice, 19(5), 531–540. doi:10.1177/1049731509335549
Franz, N., Garst, B. A., Baughman, S., Smith, C., & Peters, B. (2009). Catalyzing transformation: Conditions in Extension educational environments that promote change. Journal of Extension, 47(4), Article 4RIB1. Retrieved from http://www.joe.org/joe/2009august/rb1.php.
Gagnon, R., Franz, N., Garst, B., & Bumpus, M. (2015). Factors impacting program delivery: The importance of implementation research in Extension. Journal of Human Sciences and Extension, 3(2), 68–82.
Gagnon, R. J. (2014). Exploring the relationship between the facilitator and fidelity. Journal of Outdoor Recreation, Education, and Leadership, 6(2), 183–186. doi:10.7768/1948-5123.1264
Garst, B. A., & McCawley, P. F. (2015). Solving problems, ensuring relevance, and facilitating change: The evolution of needs assessment within Cooperative Extension. Journal of Human Sciences and Extension, 3(2), 26–47.
Gottfredson, G. D., Gottfredson, D. C., Czeh, E. R., Cantor, P., Crosse, S. B., & Hantman, I. (2000). National study of delinquency prevention in schools: Summary (96-MU-MU0008; 98-JN-FX-0004). Retrieved from https://www.ncjrs.gov/pdffiles1/nij/grants/194116.pdf
Gresham, F. M. (1989). Assessment of treatment integrity in school consultation and pre-referral intervention. School Psychology Review, 18(1), 37–50.
Gresham, F. M., MacMillan, D. L., Beebe-Frankenberger, M. E., & Bocian, K. M. (2000). Treatment integrity in learning disabilities intervention research: Do we really know how treatments are implemented? Learning Disabilities Research & Practice, 15, 198–205. doi:10.1207/SLDRP1504_4
Hansen, W. B. (2014). Measuring fidelity. In Z. Sloboda & H. Petras (Eds.), Defining prevention science (pp. 335–359). New York, NY: Springer.
Israel, B. A., Eng, E., Schulz, A. J., & Parker, E. A. (Eds.). (2013). Methods for community-based participatory research for health (2nd ed.). San Francisco, CA: Jossey-Bass.
James Bell Associates. (2009, October). Evaluation brief: Measuring implementation fidelity. Arlington, VA: James Bell Associates.
Johnson, E., Mellard, D. F., Fuchs, D., & McKnight, M. A. (2006). Responsiveness to intervention (RTI): How to do it. Lawrence, KS: National Research Center on Learning Disabilities.
Little, M. A., Sussman, S., Sun, P., & Rohrbach, L. A. (2013). The effects of implementation fidelity in the Towards No Drug Abuse dissemination trial. Health Education, 113(4), 281–296. doi:10.1108/09654281311329231
Mainieri, T. L., & Anderson, D. M. (2014). Exploring the “black box” of programming: Applying systematic implementation evaluation to a structured camp curriculum. Journal of Experiential Education, 1–18. doi:10.1177/1053825914524056
Mainieri, T. L., & Hill, B. (2015, February). Exploring the use of structured counselor journaling as camp implementation evaluation tool. Paper presented at the annual meeting of the American Camp Association, New Orleans, LA.
Mihalic, S. (2002). The importance of implementation fidelity. Unpublished manuscript.
Mihalic, S. (2009). Implementation fidelity. Unpublished manuscript.
Mihalic, S., Fagan, A., & Argamaso, S. (2008). Implementing the Life Skills Training drug prevention program: Factors related to implementation fidelity. Implementation Science, 3, Article 5.
Mihalic, S., Irwin, K., Elliott, D., Fagan, A., & Hansen, D. (2004). Blueprints for violence prevention (NCJ 204274). Washington, DC: Office of Juvenile Justice and Delinquency Prevention.
Milligan, F. (1998). Defining and assessing competence: The distraction of outcomes and the importance of educational process. Nurse Education Today, 18(4), 273–280. doi:10.1016/S0260-6917(98)80044-0
Moncher, F. J., & Prinz, R. J. (1991). Treatment fidelity in outcome studies. Clinical Psychology Review, 11(3), 247–266. doi:10.1016/0272-7358(91)90103-2
Mowbray, C. T., Holter, M. C., Teague, G. B., & Bybee, D. (2003). Fidelity criteria: Development, measurement, and validation. American Journal of Evaluation, 24(3), 315–340. doi:10.1177/109821400302400303
Nobel, O. B., Zbylut, M. L., Fuchs, D., Campbell, K., Brazil, D., & Morrison, E. (2006). Leader experience and the identification of challenges in a stability and support operation (Technical Report 1186). United States Army Research Institute for the Behavioral and Social Sciences, 1–38.
Peters, D. H., Adam, T., Alonge, O., Agyepong, I. A., & Tran, N. (2013). Implementation research: What it is and how to do it. British Medical Journal, 347, 1–7. doi:10.1136/bmj.f6753
Perepletchikova, F., Treat, T. A., & Kazdin, A. E. (2007). Treatment integrity in psychotherapy research: Analysis of the studies and examination of the associated factors. Journal of Consulting and Clinical Psychology, 75(6), 829–841. doi:10.1037/0022-006X.75.6.829
Rogers, E. M. (2003). Diffusion of Innovations (5th ed.). New York, NY: Free Press.
Seevers, B., & Graham, D. (2012). Education through Cooperative Extension (3rd ed.). Fayetteville, AR: University of Arkansas.
Sloboda, Z., Dusenbury, L., & Petras, H. (2014). Implementation science and the effective delivery of evidence-based prevention. In Z. Sloboda & H. Petras (Eds.), Defining prevention science (pp. 293–314). New York, NY: Springer. doi:10.1007/978-1-4899-7424-2_13
Stein, M. L., Berends, M., Fuchs, D., McMaster, K., Sáenz, L., Yen, L., Fuchs, L. S., & Compton, D. L. (2008). Scaling up an early reading program: Relationships among teacher support, fidelity of implementation, and student performance across different sites and years. Educational Evaluation and Policy Analysis, 30(4), 368–388. doi:10.3102/0162373708322738
U.S. Department of Health and Human Services. (2002). Finding the balance: Program fidelity and adaptation in substance abuse prevention (ED 469 354). Rockville, MD: Center for Substance Abuse Prevention.
Wandersman, A., Duffy, J., Flaspohler, P., Noonan, R., Lubell, K., Stillman, L., Blachman, M., Dunville, R., & Saul, J. (2008). Bridging the gap between prevention research and practice: The Interactive Systems Framework for dissemination and implementation. American Journal of Community Psychology, 41(3-4), 171–181. doi:10.1007/s10464-008-9174-z
Zollo, M., & Gottschalg, O. (2004). When does experience hurt? The confidence-competence paradox. Fontainebleau, France: INSEAD. Retrieved from http://www.insead.edu/facultyresearch/research/doc.cfm?did=1438