Conceptualizing outcomes for use with the Consolidated Framework for Implementation Research (CFIR): the CFIR Outcomes Addendum

Abstract

Background

The challenges of implementing evidence-based innovations (EBIs) are widely recognized among practitioners and researchers. Context, broadly defined as everything outside the EBI, includes the dynamic and diverse array of forces working for or against implementation efforts. The Consolidated Framework for Implementation Research (CFIR) is one of the most widely used frameworks to guide assessment of contextual determinants of implementation. The original 2009 article invited critique in recognition of the need for the framework to evolve. As implementation science has matured, gaps in the CFIR have been identified and updates are needed. Our team is developing the CFIR 2.0 based on a literature review and follow-up survey with authors. We propose an Outcomes Addendum to the CFIR in response to recommendations from these sources to include outcomes in the framework.

Main text

We conducted a literature review and surveyed corresponding authors of included articles to identify recommendations for the CFIR. Both sources included recommendations to add implementation and innovation outcomes. Based on these recommendations, we make conceptual distinctions between (1) anticipated implementation outcomes and actual implementation outcomes, (2) implementation outcomes and innovation outcomes, and (3) CFIR-based implementation determinants and innovation determinants.

Conclusion

An Outcomes Addendum to the CFIR is proposed. Our goal is to offer clear conceptual distinctions between types of outcomes for use with the CFIR, and perhaps other determinant implementation frameworks as well. These distinctions can help bring clarity as researchers consider which outcomes are most appropriate to evaluate in their research. We hope that sharing this in advance will generate feedback and debate about the merits of our proposed addendum.

Background

The challenges of implementing evidence-based innovations (EBIs) are widely recognized among practitioners and researchers. Context, broadly defined as everything outside the EBI [1], includes the dynamic and diverse array of forces working for or against implementation efforts [2]. As a result, implementation scientists have prioritized developing methods to understand and measure facets of context, which is necessary for all projects that involve planning, executing, or evaluating implementation efforts [3].

Theories that guide conceptualization of context abound and are often encapsulated within determinant frameworks [4, 5]; these frameworks delineate determinants (i.e., barriers or facilitators) that influence the outcome of implementation efforts. Knowledge of contextual barriers and facilitators is used to adapt EBIs [6], select and tailor implementation strategies [3, 7], and predict and/or explain implementation outcomes [8, 9]. Ultimately, the goal of this work is to increase knowledge about what works where and why to accelerate sustained integration of EBIs into routine practice.

The Consolidated Framework for Implementation Research (CFIR) is one of the most widely used frameworks within and outside implementation science [8, 10]. The original 2009 article invited critique in recognition of the need for the framework to evolve [2]. As implementation science has matured, gaps in the CFIR have been identified and updates are needed. Our team is developing the CFIR 2.0 based on a literature review and follow-up survey with authors. We encountered many recommendations in both the literature review and survey responses to add outcomes.

Although the CFIR is a determinant framework, users must develop, explore, and test theories of change that link determinants to implementation outcomes [8]. For example, using a mixed methods approach, Damschroder et al. identified seven CFIR determinants that were correlated with implementation outcomes [9]; other regression- or Boolean-based analyses can be used to identify subsets of determinants that drive implementation outcomes [11]. In our trainings and consultations with new CFIR users, we have found that additional clarification and guidance are needed about which outcomes CFIR determinants influence and how to delineate determinants versus outcomes during coding and analysis. Currently published frameworks that define outcomes can be complicated to apply. For example, the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework defines Maintenance outcomes at both the setting and individual levels. As a result, users must be careful to delineate these levels because the determinants influencing Maintenance differ depending on how it is defined [12]. Furthermore, definitions are inconsistent across sources. For example, the RE-AIM framework defines Adoption as “the absolute number, proportion, and representativeness of: a) settings; and b) intervention agents (people who deliver the program) who are willing to initiate a program” [12]. Proctor et al.’s Implementation Outcomes Framework (IOF) defines Adoption as “the intention, initial decision, or action to try or employ an innovation” [13]. CFIR users will benefit from more clarity about (1) types of implementation outcomes, (2) implementation vs. innovation outcomes, and (3) determinants of implementation outcomes versus determinants of innovation outcomes.
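
To make the determinant-to-outcome linkage concrete, the minimal sketch below (not part of the published CFIR or of the cited analyses) shows one simple way a team might relate site-level CFIR construct ratings to a binary implementation outcome. All sites, construct names, and ratings are hypothetical, and regression- or Boolean-based methods such as coincidence analysis would typically replace the simple correlations shown here.

```python
# Illustrative sketch only: hypothetical site-level data relating CFIR
# construct ratings (-2..+2 valence scores) to a binary implementation
# outcome. Construct names, sites, and values are invented for illustration.
import pandas as pd

sites = pd.DataFrame({
    "site": ["A", "B", "C", "D", "E", "F"],
    "planning": [2, 1, -1, 0, 2, -2],                # hypothetical ratings
    "leadership_engagement": [1, 2, -2, -1, 2, -1],  # hypothetical ratings
    "implemented": [1, 1, 0, 0, 1, 0],               # 1 = implemented, 0 = not
})

# Point-biserial correlation between each construct rating and the outcome;
# stronger positive values flag constructs that distinguish implementing
# from non-implementing sites and warrant closer qualitative examination.
correlations = sites[["planning", "leadership_engagement"]].corrwith(
    sites["implemented"]
)
print(correlations.sort_values(ascending=False))
```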

We propose an Outcomes Addendum to the CFIR to address these issues. Our goal is not to create a new framework, but to help implementation researchers articulate which outcomes their studies are proposing to address, carefully consider the determinants that can affect those outcomes, and in turn, design studies that can collect the best data for measuring both outcomes and their determinants. The aim of this debate article is to describe the rationale for and conceptualization of the CFIR Outcomes Addendum, which draws on findings from the literature review and survey we conducted as part of our work on the CFIR 2.0 as well as the RE-AIM framework and the IOF [12, 13].

Methods

We conducted a literature review to identify recommendations in the published literature; more details will be provided in the forthcoming CFIR 2.0 manuscript. Briefly, we searched SCOPUS and Web of Science from 2009 (the year the CFIR was published) to July 6, 2020; we included all articles that mentioned the CFIR in the title and/or abstract. We identified 376 articles in total; 16 articles included recommendations related to adding outcomes.

In addition to completing the literature review, we surveyed corresponding authors of included articles; there were 337 unique corresponding authors, but only 334 with contact information. Of the 334 contacted authors, 157 (47%) responded. The survey asked for recommendations to improve the CFIR including adding, removing, or modifying constructs and/or domains. Thirteen respondents recommended adding outcomes and three additional recommendations were related to outcomes, though they were not explicitly identified as such. The VA Ann Arbor Healthcare System IRB declared this study exempt from the requirements of 38 CFR 16 based on category 2.

Proposed CFIR Outcomes Addendum

Overview

Both the literature review and the survey yielded recommendations to add implementation and innovation outcomes to the CFIR. Hung et al. recommended inclusion of both types of outcomes because it would focus “the researcher’s attention squarely on the way that context shapes intermediate results and conditions, such as user acceptance, which in turn influence classic measures of an intervention's ultimate aims or outcomes” [14]. Some authors addressed this gap by linking the CFIR with another framework: nineteen used the RE-AIM framework and eight used the IOF [12, 13]. Other authors addressed this issue by adapting the CFIR to incorporate outcomes from both the RE-AIM framework and the IOF, including the CFIR-Process Redesign [14, 15] and the Care Transitions Framework [16].

The RE-AIM framework and the IOF were used to help inform broad categories of implementation outcomes and innovation outcomes included in the CFIR Outcomes Addendum. In addition, within implementation outcomes, we draw a distinction between anticipated implementation outcomes and actual implementation outcomes. Finally, we highlight contextual determinants (as described by CFIR constructs) as potential moderators of implementation outcomes versus innovation determinants (outside the scope of the CFIR) as potential moderators of innovation outcomes. Our goal is not to develop a new framework, but rather to clarify relationships between determinants and outcomes and to provide broad definitions that users can apply while using other frameworks.

Implementation outcomes

The CFIR Outcomes Addendum broadly conceptualizes implementation outcomes as the success or failure of implementation. Anticipated implementation outcomes are based on perceptions or measures of the likelihood of future implementation success or failure, i.e., implementation outcomes that have not yet occurred. These outcomes are forward-looking; constellations of CFIR determinants across domains predict these outcomes. The concept of anticipated outcomes is well-established within the Sociology of Science and Technology field. Borup et al. assert that “[…] expectations can be seen to be fundamentally ‘generative,’ they guide activities, provide structure and legitimation, attract interest and foster investment. They give definition to role, clarify duties, offer some shape of what to expect and how to prepare for opportunities and risks” [17]. The terms anticipated and expected are used interchangeably and we chose the former term.

Actual implementation outcomes are based on perceptions or measures of current (or past) implementation success or failure, i.e., implementation outcomes that have occurred. These outcomes are backward-looking; constellations of CFIR determinants across domains explain these outcomes. Both anticipated and actual implementation outcomes can be assessed quantitatively or qualitatively.

Anticipated and Actual Implementation Outcomes include three broadly conceptualized outcomes based on our own work and the RE-AIM framework. While implementation research has tended to focus on initial implementation success, the importance of shifting from near-term implementation goals to long-term sustainment is increasingly clear. Several recommendations from both our literature review and survey responses highlighted the importance of capturing concepts of implementability and implementation, while even more discussed the importance of assessing Sustainability and Sustainment [14, 15, 18,19,20,21,22,23,24,25,26]. One survey respondent explained, “We added sustainability [sustainment] to the framework in our study. Planning for sustainability [sustainment] should begin at the earliest stages of the implementation process.” In their critique, Ilott et al. recognized that when EBIs are not sustained, the result is a “waste of time, financial resources and leadership effort at a time of economic austerity” [18]. As a result, anticipated outcomes include adoptability, implementability, and sustainability, while actual outcomes include adoption, implementation, and sustainment. These major categories of implementation outcomes focus on the ultimate goals of implementation efforts: first, whether the decision is made to deliver the innovation (adoption); second, whether delivery of the innovation occurs (implementation); and third, whether delivery of the innovation continues over the long term (sustainment). Table 1 lists definitions for each implementation outcome.

Table 1 Implementation outcomes definitions

As a result, although the IOF lists acceptability, appropriateness, and feasibility as implementation outcomes, these are not included as implementation outcomes in the CFIR Outcomes Addendum. These measures can be used to predict any anticipated or actual implementation outcome; for example, Weiner et al. developed measures for acceptability, appropriateness, and feasibility of an innovation and highlighted their role as potential predictors of adoption or implementation [27]. Thus, like Reilly et al., we classify these measures as “Antecedent Assessments” [28]. Additionally, the CFIR lists implementation climate and implementation readiness as higher-order constructs within the framework—each comprising multiple determinants. Since publication of the CFIR, there has been continued conceptual and measurement development for these concepts as potential predictors of implementation outcomes, but there is little consensus on their role within implementation theories [29,30,31]. Thus, we also place implementation readiness and implementation climate into the antecedent assessment category, which lies between CFIR determinants and implementation outcomes in Fig. 1. See Table 2 for a full mapping of RE-AIM Framework and IOF outcomes to the CFIR Outcomes Addendum. Given our goal to provide broad conceptualization of outcomes, many of the specific outcomes in existing frameworks map to broader concepts in the CFIR Outcomes Addendum.
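
For teams that tag study measures or qualitative codes against these categories, the short sketch below illustrates one way the addendum's broad categories, as described above, could be encoded in software. The data structure, helper function, and fallback label are our own illustrative assumptions, not part of the published addendum.

```python
# Illustrative sketch only: encoding the Outcomes Addendum categories described
# in the text so that measures or qualitative codes can be tagged consistently.
# The structure and helper function are assumptions made for illustration.
from dataclasses import dataclass

ADDENDUM_CATEGORIES = {
    "antecedent_assessment": [
        "acceptability", "appropriateness", "feasibility",
        "implementation climate", "implementation readiness",
    ],
    "anticipated_implementation_outcome": [
        "adoptability", "implementability", "sustainability",
    ],
    "actual_implementation_outcome": [
        "adoption", "implementation", "sustainment",
    ],
}

@dataclass
class TaggedMeasure:
    name: str
    category: str  # a key of ADDENDUM_CATEGORIES, or a fallback label

def categorize(measure_name: str) -> TaggedMeasure:
    """Return the addendum category for a measure name, if it is listed above."""
    for category, measures in ADDENDUM_CATEGORIES.items():
        if measure_name.lower() in measures:
            return TaggedMeasure(measure_name, category)
    return TaggedMeasure(measure_name, "innovation_outcome_or_other")

print(categorize("Sustainability"))  # -> anticipated_implementation_outcome
print(categorize("Acceptability"))   # -> antecedent_assessment
```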

Fig. 1 CFIR Outcomes Addendum diagram

Table 2 RE-AIM Framework and IOF outcomes mapped to the CFIR Outcomes Addendum

A note on terminology: the terms Sustainability and Sustainment are commonly used colloquially and there is an entire “science of sustainability” that has much to offer to the “science of implementation” [17]. As a result, we chose these terms over Maintenance from the RE-AIM Framework.

Innovation outcomes

The CFIR Outcomes Addendum broadly conceptualizes innovation outcomes as the success or failure of the innovation, based on the impact of the innovation on three important constituents: innovation recipients, innovation deliverers, and key decision-makers.

  • Recipients are the human beings whom the innovation is designed to benefit, e.g., patients receiving treatment, students receiving a learning activity, or citizens receiving a city service.

  • Deliverers are the human beings who are directly or indirectly involved with delivering the innovation to recipients, e.g., clinicians delivering treatment to patients, teachers delivering a learning activity to students, or city employees delivering a city service to citizens.

  • Key decision-makers are the human beings who have authority within the implementing setting, whether it is a formal system or broader community, e.g., a hospital director deciding what treatment to deliver, a school superintendent deciding what learning activity to deliver, or a city mayor deciding what city service to deliver.

It is important to note that types of recipients and types of deliverers may overlap, e.g., when implementing a vaccination program for hospital employees, all employees are potential recipients while the specific employees delivering the vaccine (e.g., nurses who work within Employee Health) are also deliverers. These broad constituencies are based on feedback from CFIR users, who use the CFIR to plan and evaluate implementation of diverse innovations, both within and outside of healthcare.

While the outcomes important to innovation recipients (e.g., patients) and key decision-makers (e.g., hospital directors) are frequently prioritized in other frameworks (e.g., as reflected by the list of Client and Service Outcomes within the IOF), outcomes important to innovation deliverers (e.g., clinicians) are often not prioritized. Consideration of clinicians (and other employees) motivated evolution of the “Triple Aim” (enhancing patient experience, improving population health, and reducing costs) [35] to the “Quadruple Aim,” which added an aim of improving the work-life and well-being of clinicians and staff [36]. Ideally, implementation of innovations will produce benefit for not only innovation recipients and key decision-makers, but also innovation deliverers, e.g., reducing burnout, improving work experience.

Sustainment of outcomes may be strengthened when goals are aligned across these three key constituencies, each of which is likely to have different priorities and interests [37,38,39,40]. For example, an innovation that improves patient function (an important outcome to patient recipients) is unlikely to be sustained if it increases burnout for clinicians (an important outcome to clinician deliverers) and/or increases system costs (an important outcome to key decision-makers). Because CFIR users are focused on achieving and sustaining implementation, it is important to consider which outcomes are most important to which people. We believe that by highlighting the human beings impacted by Innovation Outcomes, the CFIR Outcomes Addendum will help researchers and organizations orient to values of humanism and equity. Figure 1 illustrates the components of the CFIR Outcomes Addendum.

In Fig. 1, right-facing arrows at the top of the figure illustrate the temporal nature of (1) anticipated and actual implementation outcomes and (2) implementation outcomes and innovation outcomes. The right-facing arrow between anticipated and actual implementation outcomes illustrates the generative nature of anticipated outcomes (see the “Implementation outcomes” section above). The right-facing arrow between implementation and innovation outcomes illustrates the foundational premise within implementation science that successful implementation is a necessary precondition to achieving maximum innovation benefits [41]. For example, an effective innovation will fail to produce expected outcomes if it is poorly implemented; this may result in a “Type III” error, when evaluators conclude that the innovation is ineffective, when in fact that same innovation may have met or exceeded expectations if it had been properly implemented [42]. In addition, left-facing arrows across the bottom of the figure illustrate the reinforcing loop that can emerge when the positive impact of an innovation inspires continued commitment to implementation and sustainment [43].

A note on terminology: We have opted to use the term Innovation to be broadly inclusive of other terms. Rogers’ classic Diffusion of Innovations theory defines innovation as an idea, practice, or object that is perceived as new by an individual or other unit of adoption. If an idea seems new within a setting or for an individual, it is an innovation [44]. This is a broad definition and includes any “thing” that is being implemented [45]: innovations can include, e.g., medications, medical devices, behavior change interventions, technology, and more—or any combination. An innovation is ideally supported by a “strong evidence-base” before it is implemented. However, we also recognize there is a lack of agreement on what types of evidence warrant implementation [46,47,48] and there is a compelling need to dismantle knowledge-building silos (e.g., clinical trialists versus implementation scientists) to translate innovations more quickly into practice [49]. Thus, we chose the term “innovation” to acknowledge that implementation can occur with innovations that are supported by diverse sources and types of evidence.

CFIR implementation determinants vs. innovation determinants

When collecting data, researchers must be clear about the goal of data collection: (1) to predict and/or explain implementation outcomes based on implementation determinants (this is within the scope of the CFIR) or (2) to predict and/or explain innovation outcomes based on innovation determinants (this is outside the scope of the CFIR). The following section explores the roles of implementation versus innovation determinants.

Implementation determinants

CFIR implementation determinants capture setting-level barriers and facilitators that predict and/or explain antecedent assessments and/or anticipated or actual implementation outcomes. These determinants are denoted by the gray arrow in Fig. 1 labeled CFIR implementation determinants. Data (qualitative and/or quantitative) on these determinants is best collected from individuals who have influence and/or authority related to implementation (usually within the implementing setting); these typically include the key decision-makers and individuals implementing and/or delivering the innovation.

Although over 20 users recommended adding a domain and/or constructs to collect data directly from recipients, the CFIR is not the appropriate framework to use for this purpose unless recipients are also helping to implement and/or deliver the innovation. As reflected by Orlando et al., it is disappointing to note that “… while patients are part of the health-care organization and are essential to assessing intervention [innovation] effectiveness, they are a less influential component of implementation success in health-care settings than administrators and physicians” (emphases added) [50]. Although hospital systems are increasingly prioritizing patient-centered care, convening patient advisory boards, and involving patients in co-design of initiatives [51, 52], these efforts have not yet resulted in true power-sharing between innovation recipients and key decision-makers [53].

As a result, direct data collection from recipients does not usually inform implementation outcomes. Instead, data collection from key decision-makers and individuals implementing and/or delivering the innovation about their perceptions of recipients (e.g., recipient characteristics and needs), and how those perceptions encourage (or discourage) completing implementation, informs Implementation Outcomes. Although the CFIR is often not appropriate for use with recipients (because they rarely hold roles as key decision-makers or innovation implementers/deliverers), we hope that will change. Recipients should have greater influence, authority, and power in healthcare systems; the CFIR 2.0 will highlight the importance of implementation teams including innovation recipients (and innovation deliverers) as members. When recipients serve in that role, we strongly encourage using the CFIR to collect data about implementation determinants from them—because they are also implementation team members. Ultimately, equitable population impact is only possible when recipients are integrally involved in implementation and all key constituencies share power and make decisions together.

Innovation determinants

Innovation determinants capture recipient-level characteristics and/or experiences with the innovation that predict and/or explain innovation outcomes. These determinants are denoted by the gray arrow in Fig. 1 labeled Innovation determinants. Data (qualitative and/or quantitative) on these determinants is best collected from recipients. Innovation determinants include constructs or measures that are based on the theoretical framework underlying the innovation. For example, in a “small change” weight loss intervention designed for patients, innovation determinants included patient-level demographics, motivation and intention, and self-efficacy because the intervention was guided by social-psychological and goal-conflict theories [54]. This innovation was tested within a randomized clinical trial [55] and a subset of patient characteristics (innovation determinants) were explored in secondary analyses to help explain Innovation Outcomes [56,57,58,59]. The CFIR is not designed to capture these theory-derived determinants of Innovation Outcomes.

Conclusion

As implementation science matures as a discipline, frameworks must mature too [60, 61]. In this debate article, we propose the inclusion of an Outcomes Addendum to the CFIR. Our goal is to offer clear conceptual distinctions between the types of outcomes for use with the CFIR, and perhaps other determinant implementation frameworks as well. These distinctions can bring clarity as researchers consider which outcomes are most appropriate to evaluate in their research, and can help center those outcomes on the multiple key constituencies whose alignment supports sustainment. We hope that sharing this in advance will generate feedback and debate about the merits of our proposed addendum.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CFIR: Consolidated Framework for Implementation Research

EBI: Evidence-based innovation

IOF: Implementation Outcomes Framework

RE-AIM Framework: Reach, Effectiveness, Adoption, Implementation, and Maintenance Framework

References

  1. McDonald KM. Considering context in quality improvement interventions and implementation: concepts, frameworks, and application. Acad Pediatr. 2013;13:S45–53. https://doi.org/10.1016/j.acap.2013.04.013.

  2. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:1–15.

  3. Fernandez ME, Ten Hoor GA, van Lieshout S, Rodriguez SA, Beidas RS, Parcel G, et al. Implementation mapping: using intervention mapping to develop implementation strategies. Front Public Health. 2019;7:158.

  4. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.

  5. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice. Am J Prev Med. 2012;43:337–50. https://doi.org/10.1016/j.amepre.2012.05.024.

  6. Stirman SW, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14:1–10.

  7. Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14:1–15.

  8. Kirk MA, Kelley C, Yankey N, Birken SA, Abadie B, Damschroder L. A systematic review of the use of the consolidated framework for implementation research. Implement Sci. 2015;11:72. https://doi.org/10.1186/s13012-016-0437-z.

  9. Damschroder LJ, Reardon CM, Sperber N, Robinson CH, Fickel JJ, Oddone EZ. Implementation evaluation of the Telephone Lifestyle Coaching (TLC) program: organizational factors associated with successful implementation. Behav Med Pract Policy Res. 2017;7:233–41. https://doi.org/10.1007/s13142-016-0424-6.

  10. Skolarus TA, Lehmann T, Tabak RG, Harris J, Lecy J, Sales AE. Assessing citation networks for dissemination and implementation research frameworks. Implement Sci. 2017;12:97. https://doi.org/10.1186/s13012-017-0628-2.

  11. Whitaker RG, Sperber N, Baumgartner M, Thiem A, Cragun D, Damschroder L, et al. Coincidence analysis: a new method for causal inference in implementation science. Implement Sci. 2020;15:108. https://doi.org/10.1186/s13012-020-01070-3.

  12. Glasgow RE, Harden SM, Gaglio B, Rabin B, Smith ML, Porter GC, et al. RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Front Public Health. 2019;7:64. https://doi.org/10.3389/fpubh.2019.00064.

  13. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38:65–76. https://doi.org/10.1007/s10488-010-0319-7.

  14. Hung D, Gray C, Martinez M, Schmittdiel J, Harrison MI. Acceptance of lean redesigns in primary care: a contextual analysis. Health Care Manage Rev. 2017;42:203–12. https://doi.org/10.1097/HMR.0000000000000106.

  15. Ashok M, Hung D, Rojas-Smith L, Halpern MT, Harrison M. Framework for research on implementation of process redesigns. Qual Manag Health Care. 2018;27:17–23. https://doi.org/10.1097/QMH.0000000000000158.

  16. Dy SM, Ashok M, Wines RC, Rojas Smith L. A framework to guide implementation research for care transitions interventions. J Healthc Qual. 2015;37:41–54. https://doi.org/10.1097/01.JHQ.0000460121.06309.f9.

  17. Borup M, Brown N, Konrad K, Van Lente H. The sociology of expectations in science and technology. Technol Anal Strateg Manag. 2006;18:285–98. https://doi.org/10.1080/09537320600777002.

  18. Ilott I, Gerrish K, Booth A, Field B. Testing the consolidated framework for implementation research on health care innovations from South Yorkshire: testing the CFIR on health care innovations. J Eval Clin Pract. 2012. https://doi.org/10.1111/j.1365-2753.2012.01876.x.

  19. Tinc PJ, Gadomski A, Sorensen JA, Weinehall L, Jenkins P, Lindvall K. Applying the Consolidated Framework for implementation research to agricultural safety and health: barriers, facilitators, and evaluation opportunities. Saf Sci. 2018;107:99–108. https://doi.org/10.1016/j.ssci.2018.04.008.

  20. Serhal E, Arena A, Sockalingam S, Mohri L, Crawford A. Adapting the Consolidated Framework for Implementation Research to create organizational readiness and implementation tools for project ECHO. J Contin Educ Health Prof. 2018;38:145–51. https://doi.org/10.1097/CEH.0000000000000195.

  21. Ament SMC, Gillissen F, Moser A, Maessen JMC, Dirksen CD, von Meyenfeldt MF, et al. Factors associated with sustainability of 2 quality improvement programs after achieving early implementation success. A qualitative case study. J Eval Clin Pract. 2017;23:1135–43. https://doi.org/10.1111/jep.12735.

  22. Callaghan-Koru JA, Islam M, Khan M, Sowe A, Islam J, Mannan II, et al. Factors that influence the scale up of new interventions in low-income settings: a qualitative case study of the introduction of chlorhexidine cleansing of the umbilical cord in Bangladesh. Health Policy Plann. 2020;35:440–51. https://doi.org/10.1093/heapol/czz156.

  23. Hill JN, Locatelli SM, Bokhour BG, Fix GM, Solomon J, Mueller N, et al. Evaluating broad-scale system change using the Consolidated Framework for Implementation Research: challenges and strategies to overcome them. BMC Res Notes. 2018;11:560. https://doi.org/10.1186/s13104-018-3650-9.

  24. Morgan D, Kosteniuk J, O’Connell ME, Kirk A, Stewart NJ, Seitz D, et al. Barriers and facilitators to development and implementation of a rural primary health care intervention for dementia: a process evaluation. BMC Health Serv Res. 2019;19:709. https://doi.org/10.1186/s12913-019-4548-5.

  25. Vidgen HA, Love PV, Wutzke SE, Daniels LA, Rissel CE, Innes-Hughes C, et al. A description of health care system factors in the implementation of universal weight management services for children with overweight or obesity: case studies from Queensland and New South Wales, Australia. Implement Sci. 2018;13:109. https://doi.org/10.1186/s13012-018-0801-2.

  26. Breimaier HE, Heckemann B, Halfens RJG, Lohrmann C. The Consolidated Framework for Implementation Research (CFIR): a useful theoretical framework for guiding and evaluating a guideline implementation process in a hospital-based nursing practice. BMC Nurs. 2015;14:43. https://doi.org/10.1186/s12912-015-0088-4.

  27. Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12:108. https://doi.org/10.1186/s13012-017-0635-3.

  28. Reilly KL, Kennedy S, Porter G, Estabrooks P. Comparing, contrasting, and integrating dissemination and implementation outcomes included in the RE-AIM and implementation outcomes frameworks. Front Public Health. 2020;8:430. https://doi.org/10.3389/fpubh.2020.00430.

  29. Weiner BJ, Mettert KD, Dorsey CN, Nolen EA, Stanick C, Powell BJ, et al. Measuring readiness for implementation: a systematic review of measures’ psychometric and pragmatic properties. Implement Res Pract. 2020;1:263348952093389. https://doi.org/10.1177/2633489520933896.

  30. Miake-Lye IM, Delevan DM, Ganz DA, Mittman BS, Finley EP. Unpacking organizational readiness for change: an updated systematic review and content analysis of assessments. BMC Health Serv Res. 2020;20:106. https://doi.org/10.1186/s12913-020-4926-z.

  31. Weiner BJ, Belden CM, Bergmire DM, Johnston M. The meaning and measurement of implementation climate. Implement Sci. 2011;6:78. https://doi.org/10.1186/1748-5908-6-78.

  32. Glasgow RE. Evaluating the impact of health promotion programs: using the RE-AIM framework to form summary measures for decision making involving complex issues. Health Educ Res. 2006;21:688–94. https://doi.org/10.1093/her/cyl081.

  33. Abildso CG, Zizzi SJ, Reger-Nash B. Evaluating an insurance-sponsored weight management program with the RE-AIM Model, West Virginia, 2004-2008. Prev Chronic Dis. 2010;7:A46.

  34. Feldstein AC, Glasgow RE. A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. Jt Comm J Qual Patient Saf. 2008;34:228–43. https://doi.org/10.1016/s1553-7250(08)34030-6.

  35. Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff. 2008;27:759–69. https://doi.org/10.1377/hlthaff.27.3.759.

  36. Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med. 2014;12:573–6. https://doi.org/10.1370/afm.1713.

  37. Jackson GL, Damschroder LJ, White BS, Henderson B, Vega RJ, Kilbourne AM, et al. Balancing reality in embedded research and evaluation: low vs high embeddedness. Learn Health Sys. 2021. https://doi.org/10.1002/lrh2.10294.

  38. Damschroder LJ, Knighton AJ, Griese E, Greene SM, Lozano P, Kilbourne AM, et al. Recommendations for strengthening the role of embedded researchers to accelerate implementation in health systems: findings from a state-of-the-art (SOTA) conference workgroup. Healthcare. 2021;8:100455. https://doi.org/10.1016/j.hjdsi.2020.100455.

  39. Lennox L, Maher L, Reed J. Navigating the sustainability landscape: a systematic review of sustainability approaches in healthcare. Implement Sci. 2018;13:27. https://doi.org/10.1186/s13012-017-0707-4.

  40. Scheirer MA, Dearing JW. An agenda for research on the sustainability of public health programs. Am J Public Health. 2011;101:2059–67. https://doi.org/10.2105/AJPH.2011.300193.

  41. Damschroder LJ. Clarity out of chaos: use of theory in implementation research. Psychiatry Res. 2020;283:112461.

  42. Dobson D, Cook TJ. Avoiding type III error in program evaluation. Eval Program Plann. 1980;3:269–76. https://doi.org/10.1016/0149-7189(80)90042-7.

  43. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.

  44. Rogers E. Diffusion of innovations. 5th ed. New York: Free Press; 2003.

  45. Curran GM. Implementation science made too simple: a teaching tool. Implement Sci Commun. 2020;1:27. https://doi.org/10.1186/s43058-020-00001-z.

  46. Petit-McClure SH, Stinson C. Disrupting dis/abilization: a critical exploration of research methods to combat white supremacy and ableism in education. Intersect Cri Issues Educ. 2019;3:4.

  47. Hall BL, Tandon R. Decolonization of knowledge, epistemicide, participatory research and higher education. Res All. 2017;1:6–19. https://doi.org/10.18546/RFA.01.1.02.

  48. Althaus C. Different paradigms of evidence and knowledge: recognising, honouring, and celebrating Indigenous ways of knowing and being. Aust J Public Adm. 2020;79:187–207. https://doi.org/10.1111/1467-8500.12400.

  49. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50:217–26. https://doi.org/10.1097/MLR.0b013e3182408812.

  50. Orlando LA, Sperber NR, Voils C, Nichols M, Myers RA, Wu RR, et al. Developing a common framework for evaluating the implementation of genomic medicine interventions in clinical care: the IGNITE Network’s Common Measures Working Group. Genet Med. 2018;20:655–63. https://doi.org/10.1038/gim.2017.144.

  51. Lyon AR, Whitaker K, Locke J, Cook CR, King KM, Duong M, et al. The impact of inter-organizational alignment (IOA) on implementation outcomes: evaluating unique and shared organizational influences in education sector mental health. Implement Sci. 2018;13:24.

  52. Dopp AR, Parisi KE, Munson SA, Lyon AR. Integrating implementation and user-centred design strategies to enhance the impact of health services: protocol from a concept mapping study. Health Res Policy Sys. 2019;17:1. https://doi.org/10.1186/s12961-018-0403-0.

  53. Trofino J. Power sharing. A transformational strategy for nurse retention, effectiveness, and extra effort. Nurs Leadersh Forum. 2003;8:64–71.

  54. Lutes LD, DiNatale E, Goodrich DE, Ronis DL, Gillon L, Kirsh S, et al. A randomized trial of a small changes approach for weight loss in veterans: design, rationale, and baseline characteristics of the ASPIRE-VA trial. Contemp Clin Trials. 2013;34:161–72. https://doi.org/10.1016/j.cct.2012.09.007.

  55. Damschroder LJ, Lutes LD, Kirsh S, Kim HM, Gillon L, Holleman RG, et al. Small-changes obesity treatment among veterans. Am J Prev Med. 2014;47:541–53. https://doi.org/10.1016/j.amepre.2014.06.016.

  56. Masheb RM, Lutes LD, Kim HM, Holleman RG, Goodrich DE, Janney CA, et al. Weight loss outcomes in patients with pain: weight loss and pain. Obesity. 2015;23:1778–84. https://doi.org/10.1002/oby.21160.

  57. Masheb RM, Lutes LD, Myra Kim H, Holleman RG, Goodrich DE, Janney CA, et al. High-frequency binge eating predicts weight gain among veterans receiving behavioral weight loss treatments: high-frequency binge eating and weight gain. Obesity. 2015;23:54–61. https://doi.org/10.1002/oby.20931.

  58. Vimalananda V, Damschroder L, Janney CA, Goodrich D, Kim HM, Holleman R, et al. Weight loss among women and men in the ASPIRE-VA behavioral weight loss intervention trial: sex-specific weight loss results in ASPIRE-VA. Obesity. 2016;24:1884–91. https://doi.org/10.1002/oby.21574.

  59. Janney CA, Masheb RM, Lutes LD, Holleman RG, Kim HM, Gillon LR, et al. Mental health and behavioral weight loss: 24-month outcomes in Veterans. J Affect Disord. 2017;215:197–204. https://doi.org/10.1016/j.jad.2017.03.003.

  60. Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implement Sci. 2019;14:103. https://doi.org/10.1186/s13012-019-0957-4.

  61. Glasgow RE, Estabrooks PA, Ory MG. Characterizing evolving frameworks: issues from Esmail et al. (2020) review. Implement Sci. 2020;15:53. https://doi.org/10.1186/s13012-020-01009-8.

Acknowledgements

We want to express our sincere gratitude to the authors who completed our survey and made this work possible.

Funding

This work was funded by the Veterans Affairs (VA) Quality Enhancement Research Initiative (QUE 15-286) and VA Health Services Research and Development (LIP 20-116).

Author information

Authors and Affiliations

Authors

Contributions

MW, CR, and LD developed the literature review search criteria and created the survey. MW conducted the literature review and fielded the survey. CR and MW analyzed the survey data. JL, LD, and CR drafted the manuscript; MW provided survey data in relevant sections. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Laura J. Damschroder.

Ethics declarations

Ethics approval and consent to participate

The VA Ann Arbor Healthcare System IRB approved this study, declaring it exempt from the requirements of 38 CFR 16 based on category 2.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Damschroder, L.J., Reardon, C.M., Opra Widerquist, M.A. et al. Conceptualizing outcomes for use with the Consolidated Framework for Implementation Research (CFIR): the CFIR Outcomes Addendum. Implementation Sci 17, 7 (2022). https://doi.org/10.1186/s13012-021-01181-5
