The updated Consolidated Framework for Implementation Research based on user feedback

Abstract

Background

Many implementation efforts fail, even with highly developed plans for execution, because contextual factors can be powerful forces working against implementation in the real world. The Consolidated Framework for Implementation Research (CFIR) is one of the most commonly used determinant frameworks to assess these contextual factors; however, it has been over 10 years since publication and there is a need for updates. The purpose of this project was to elicit feedback from experienced CFIR users to inform updates to the framework.

Methods

User feedback was obtained from two sources: (1) a literature review with a systematic search; and (2) a survey of authors who used the CFIR in a published study. Data were combined across both sources and reviewed to identify themes; a consensus approach was used to finalize all CFIR updates.

Results

The systematic search yielded 376 articles that contained the CFIR in the title and/or abstract and 334 unique authors with contact information; 59 articles included feedback on the CFIR. Forty percent (n = 134/334) of authors completed the survey. The CFIR received positive ratings on most framework sensibility items (e.g., applicability, usability), but respondents also provided recommendations for changes. Overall, updates to the CFIR include revisions to existing domains and constructs as well as the addition, removal, or relocation of constructs. These changes address important critiques of the CFIR, including better centering innovation recipients and adding determinants related to equity in implementation.

Conclusion

The updates to the CFIR reflect feedback from a growing community of CFIR users. Although there are many updates, constructs can be mapped back to the original CFIR to ensure longitudinal consistency. We encourage users to continue critiquing the CFIR, facilitating the evolution of the framework as implementation science advances.

Background

Far too many efforts to implement evidence-based innovations (EBIs) fail [1, 2], even with highly developed plans for execution [3]. In randomized controlled trials, innovations are tested in an environment where many contextual factors are controlled. However, implementation science embraces the reality that contextual factors are active and dynamic forces working for and against implementation efforts in the real world [4,5,6,7].

Theories that guide conceptualization of contextual factors are often encapsulated within determinant frameworks [8, 9]; these frameworks delineate determinants (i.e., barriers or facilitators) that influence the outcome of implementation efforts. Determinant frameworks provide a base set of concepts, terms, and definitions by which to articulate dynamic complex contexts and develop much needed measures of context [10]. As a discipline, implementation science spans both generalized theory-building and development of practical approaches for successful implementation; both researchers and practitioners use and benefit from determinant frameworks [11].

The Consolidated Framework for Implementation Research (CFIR) is among the most highly cited [12] frameworks in implementation science and has been listed in the top five most accessed articles within Implementation Science since its publication in 2009. Kirk et al.’s 2015 review of 26 articles with meaningful use of the CFIR found that most users employed mixed (n = 13) or qualitative (n = 10) methods and used the CFIR in the post-implementation phase (n = 15) [13]. As a determinant framework, the overarching aim of the CFIR is to predict or explain barriers and facilitators (determinants, independent variables) to implementation effectiveness (the outcome, dependent variable) [8]. Determinant frameworks can thus be used to inform choice of implementation strategies that may best address contextual determinants [14], generate hypotheses to prospectively guide predictions of implementation outcomes, or retrospectively explain implementation outcomes by assessing differences in determinants across implementation settings [11, 13, 15].

Implementation scientists have been called to engage in “theory-building” science where theory is improved with every application and theorizing becomes “an iterative and recursive process” [16]. This means that theory should not be seen as immutable, but as something that should be refined in light of empirical findings. The original CFIR article invited critique in recognition of the need for the framework to evolve [17]; the aim of this project was to elicit feedback from experienced CFIR users to inform updates to the framework.

Methods

The VA Ann Arbor Healthcare System IRB declared this study exempt from the requirements of 38 CFR 16 based on category 2.

Data collection

Feedback was obtained from two sources: (1) articles identified in a literature review with a systematic search; and (2) a survey of authors who used the CFIR in a published study.

Literature review

We completed a literature review to identify articles with feedback on the CFIR. The most efficient search criterion for this purpose was inclusion of the CFIR in the title and/or abstract (see Additional file 1). We searched SCOPUS and Web of Science from 2009 (the year the CFIR was published) to July 6, 2020. This search yielded 376 articles, including (1) original research; (2) systematic reviews; and (3) evaluation of the CFIR as a framework. Two reviewers (MOW, CMR) read the full text of approximately 10% (n = 40/376) of the included articles to independently abstract feedback on the CFIR; discrepancies with abstraction were discussed until consensus was reached. One reviewer (MOW) then read the remaining articles and abstracted relevant passages. Only 59 of the articles contained feedback on the CFIR.

Author survey

We surveyed unique corresponding authors of the articles included in the literature review (n = 334) in August 2020 to elicit in-depth feedback about their experience using the CFIR. First, the survey elicited information about the author’s use of the CFIR (e.g., the total number of projects completed using the CFIR) (see Table 1). Second, respondents were asked to rate the CFIR based on Flottorp et al.’s “sensibility” criteria for determinant frameworks (e.g., Applicability, Simplicity) [18] (see Table 2). Third, respondents were asked for open-ended feedback about the framework overall as well as existing domains and constructs. Finally, respondents were asked for recommendations to add or remove domains and constructs (see Additional file 2 for the full survey). Survey invitations were sent via email with an embedded link to the survey.

Table 1 Use of the original CFIR
Table 2 User ratings of Flottorp’s Criteria for the original CFIR

Data analysis

Responses to closed-ended survey questions were analyzed using descriptive statistics. Responses to open-ended survey questions were combined with passages abstracted from the published literature in Microsoft Excel; feedback was organized in individual matrices at the framework, domain (i.e., one matrix for each domain), and construct (i.e., one matrix for each construct) levels. Matrices contained a row for each individual feedback item (including the source of the feedback, i.e., survey or literature) with a column for each analyst (CMR, LJD, JCL) to add notes and provide a recommendation on how to address the feedback. Additional literature was reviewed (1) when a user recommended a specific citation or (2) when a user identified a high-level issue (e.g., a construct was too broad), but did not provide a solution (e.g., did not suggest specific subconstructs). The team independently reviewed all feedback items to add their notes and recommendations, and then met approximately 3 h a week from September 2020 to February 2022 to discuss and reach consensus on CFIR updates.
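As a purely illustrative sketch (the team used Microsoft Excel, and the feedback items and recommendations below are hypothetical), the matrix structure described above can be represented as rows of feedback items with one recommendation column per analyst, making it easy to flag items that still need consensus discussion:

```python
# Hypothetical sketch of one construct-level feedback matrix. Each row is a
# feedback item with its source (survey or literature) and one recommendation
# per analyst (CMR, LJD, JCL). Items and recommendations are invented examples.
ANALYSTS = ["CMR", "LJD", "JCL"]

construct_matrix = [
    {"feedback": "Construct label is unintuitive", "source": "survey",
     "CMR": "rename", "LJD": "rename", "JCL": "rename"},
    {"feedback": "Construct is too broad", "source": "literature",
     "CMR": "split", "LJD": "split", "JCL": "revise definition"},
]

def needs_discussion(rows):
    """Return items where analyst recommendations do not yet agree,
    so they can be queued for the weekly consensus meetings."""
    return [row for row in rows
            if len({row[analyst] for analyst in ANALYSTS}) > 1]

disagreements = needs_discussion(construct_matrix)
```

Here only the second item would be queued for discussion, since the three analysts' recommendations diverge.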

Positionality

We are all white, cisgender women. We are researchers embedded within and employed by the United States (US) Veterans Health Administration (VHA), the largest integrated healthcare system in the USA. The VHA has over 1000 medical centers, community-based outpatient clinics, and other entities, and serves 9.6 million enrolled US military Veterans. LJD and JCL were developers of the original CFIR. JCL has worked in implementation science in the VHA’s Quality Enhancement Research Initiative (QUERI) program since 2006 and has a Health Services Organization and Policy doctoral degree. LJD worked in management consulting for 20 years prior to joining the VHA and has two master’s degrees (biometrics and public health); she joined the VHA QUERI program in 2007. CMR is a qualitative analyst with 10 years of experience using the CFIR to collect, analyze, and interpret qualitative data from implementation evaluations. MOW is a Limited License Master’s Social Worker (LLMSW) and a research associate. Although LJD and CMR have consulted on dozens of projects outside the VHA and trained hundreds of CFIR users, most have been in US healthcare settings.

Results

Overview

The systematic search yielded 376 articles and 334 unique authors with contact information; 59 articles included feedback on the CFIR. Most of the projects discussed in the 59 articles were conducted in US healthcare settings; 27% (n = 16) were conducted in non-healthcare settings (e.g., educational, agricultural, or community settings), and 8% (n = 5) were conducted in low- and/or middle-income countries (LMICs) (see Additional file 3).

While 47% (n = 157/334) of authors responded to the survey, only 40% (n = 134/334) of authors completed the survey. Nearly 20% of authors reported use of the CFIR on five or more projects, and over 65% reported use in at least two projects. Over 80% of authors reported use of the CFIR in healthcare settings and to guide data collection, analysis, and/or interpretation (see Table 1).

While 50% of respondents felt the CFIR was easy to use for implementation science researchers, only 16% felt it was easy to use for non-researchers. In addition, 58% felt the CFIR was more complicated than necessary. One respondent stated: the “CFIR is far too complicated and difficult to use. I have been learning about and trying to use CFIR for more than 5 years and the more I use it the more difficult and uninterpretable I find it to be” (survey response). However, another observed that, “Implementation research is challenging in itself, and I see that the complexity of CFIR gets blamed for the broader challenges” (survey response). In addition, while the number of constructs was often cited as the reason the CFIR was too complicated, many users identified missing themes in the framework; nearly all respondents provided qualitative feedback about revising existing domain(s)/construct(s) or adding domain(s)/construct(s).

The other sensibility criteria from Flottorp et al. received positive ratings from over half of the survey respondents; most respondents felt the CFIR was applicable across settings (67%) and innovations (81%), useful for reporting determinants (77%) and designing implementation strategies (65%), and that the domains and constructs were labeled in a way that was easy to understand (77%) (see Table 2).

CFIR updates

Table 3 details the updated CFIR domain and construct names and definitions; it is also included in Additional file 6 for user convenience (see below). Word limits prevent us from describing the updated CFIR in detail, but more detail is available in the Additional files:

  • Additional file 4 contains a mapping of the original CFIR constructs to the updated CFIR constructs;

  • Additional file 5 contains the mapping in Additional file 4 with the rationale for each update based on user feedback; and

  • Additional file 6 contains both the short and detailed descriptions of updated CFIR constructs, drawing on the descriptions from the original CFIR, feedback from our literature review, and support from other published literature.
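For teams that need longitudinal consistency across studies, the old-to-new mapping in Additional file 4 could be represented programmatically. The entries below are a small illustrative subset drawn from updates described in this article (e.g., prefixing Innovation Domain construct names with "Innovation", and merging Formally Appointed Internal Implementation Leaders and Champions into Implementation Leads); see Additional file 4 for the authoritative, complete mapping:

```python
# Illustrative (partial) mapping from original CFIR constructs to their
# updated counterparts. This is NOT the complete mapping; Additional file 4
# is authoritative. Values are lists because some constructs were split.
ORIGINAL_TO_UPDATED = {
    "Complexity": ["Innovation Complexity"],
    "Relative Advantage": ["Innovation Relative Advantage"],
    # Two original roles were combined into a single Implementation Leads role:
    "Formally Appointed Internal Implementation Leaders": ["Implementation Leads"],
    "Champions": ["Implementation Leads"],
}

def updated_names(original):
    """Return updated construct name(s) for an original CFIR construct,
    falling back to the original name when no mapping entry exists."""
    mapped = ORIGINAL_TO_UPDATED.get(original, original)
    return mapped if isinstance(mapped, list) else [mapped]
```

A lookup table like this lets analysts recode data collected under the original CFIR so results remain comparable across framework versions.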

Table 3 Updated CFIR domain and construct definitions

In the sections below, we summarize key changes in the updated CFIR and refer readers to the additional files and the CFIR Outcomes Addendum [19] for details.

Overall framework

Construct names and definitions were updated in response to recommendations to make the framework more applicable across a range of innovations and settings [30,31,32,33,34,35]. This includes (1) using innovation (following Rogers that any “idea, practice, or object perceived as new” is an innovation) [36] rather than intervention; (2) using recipients (individuals intended to benefit from the innovation) rather than patients; and (3) using deliverers (individuals involved in delivering the innovation) rather than providers. In addition, we have removed all references to stakeholders and instead refer to people who “have influence and/or power over the outcome of implementation efforts” when discussing how to identify a sample for data collection. Overall, every domain and construct had at least minor revisions.

Some survey respondents were unclear whether the CFIR seeks to elicit perceptions or reality: “A difficult distinction here is whether these are PERCEPTIONS [sic] of the implementer, or actual features of the program; both seem important” (survey response). Underlying assessment theories are needed to fully explicate a response to this concern. However, we acknowledge that responses to questions related to CFIR constructs will likely reflect a blend of objective reality and subjective perceptions that arise out of experiences within the setting (see “Discussion”).

Constructs and subconstructs were added to address missing themes and further develop domains; the number of constructs and subconstructs increased in all domains except the Innovation Domain; the updated CFIR contains 48 constructs and 19 subconstructs across 5 domains (with one domain including two subdomains). Domain-specific changes are summarized in the sections below and reflect our consensus decisions based on published feedback (noted by citations) and survey responses.

Innovation domain

Domain level

Survey respondents questioned whether the CFIR was intended to evaluate the innovation and/or the strategy being used to implement the innovation, and they found it difficult to differentiate between them. The literature has recognized that the lack of a clean boundary between the innovation and implementation strategies is a contributor to implementation complexity [22]; however, distinguishing between the innovation and implementation strategy is necessary for accurate attribution to implementation outcomes [28] and to identify appropriate areas for future intervention. As a result, the updated CFIR guides users to define the innovation (aka “the thing” [20, 25] being implemented), including the boundary between the innovation and implementation strategies. We encourage use of a reporting guideline to define the innovation (see Table 3).

Constructs and subconstructs

The word Innovation was added to the name of each construct in the Innovation Domain to orient users to the focus of this domain: the Innovation itself, independent of the implementation strategy. Major revisions were made to the definition of Innovation Complexity: the text “difficulty of implementation” was replaced with “the innovation is complicated” to focus attention on the innovation, not implementation.

Outer Setting domain

Domain level

While some users recommended dividing the Outer Setting into multiple levels, others wanted to combine the Outer and Inner Settings, describing difficulty understanding boundaries between the two settings. In the original CFIR article's Additional file 1, the boundary between the Inner and Outer Settings was visually depicted using “overlapping, irregular, and thick grayed lines” to highlight that the line between them is not always clear [17]. Lengnick-Hall et al. expand on this reality and call for researchers to take an “open-systems” perspective “to highlight interdependence between outer and inner contexts and [to] view organizations as part of a broader interdependent system that may range from simple to complex, rigid to flexible, and loosely to tightly coupled” [37]. Although embracing an open-systems perspective may be challenging, conceptually differentiating internal and external influences on the performance of organizations has been a central tenet of organization science [38] and highlights the level at which to focus interventions. As a result, the updated CFIR retains the two domains and guides users to objectively define their Outer vs. Inner Settings, including defining multiple levels of the Outer Setting if appropriate.

Constructs and subconstructs

A few constructs were renamed because users felt the labels were unintuitive (e.g., Cosmopolitanism) or confusing (e.g., Peer Pressure). Patient Needs and Resources was separated into three constructs and relocated to the Inner Setting and Individuals Domains in response to comments that it captured two distinct themes: awareness of patient needs versus prioritization of patient needs [39].

Users remarked that the Outer Setting domain was underdeveloped [40, 41]. The updated CFIR adds constructs to capture the potential influence of Local Attitudes, i.e., sociocultural values and beliefs, and Local Conditions, i.e., economic, environmental, political, and/or technological conditions, on the willingness and ability of entities within the Outer Setting to support implementation and delivery of the innovation [42,43,44,45,46,47], which may influence equity in implementation. These constructs are especially important for innovations that require support from community entities, such as Housing First models of care [48], and for capturing common resource constraints in LMICs [42].

The original CFIR’s broad construct, External Policies and Incentives, was separated into several new constructs, including, for example, the key role of Financing [46, 49,50,51]. The updated CFIR also better captures diverse sources of External Pressures [46], including Societal Pressure (e.g., pressure from social movements and protests) [45], Market Pressure (e.g., pressure to compete with and/or imitate peer entities), and Performance Measurement Pressure (e.g., pressure to meet publicly reported goals).

Inner Setting domain

Domain level

Some users recommended dividing the Inner Setting into multiple levels [52] to account for teams or units [53, 54]. We added guidance for users to objectively define their Inner Setting and to add additional levels as needed. For example, Safaeinili et al. adapted the CFIR to accommodate three embedded levels: (1) pilot clinics, (2) peer clinics, and (3) the larger health system [54].

Constructs and subconstructs

New constructs and subconstructs were added to the Inner Setting to address several critiques. For example, Culture was felt to be too broad, with one survey respondent stating, it “ends up becoming my ‘I don’t know where else this fits’ bucket” (survey response). Additionally, users noted the absence of equity considerations [40, 42], including “more specifically racism, patriarchy and misogyny, that [are] so much a part of the care that we provide” (survey response). As a result, four subconstructs were added to Culture, including Human Equality-Centeredness, Recipient-Centeredness, Deliverer-Centeredness, and Learning-Centeredness, which serve to orient users to determinants that may influence equity in implementation.

In addition, as described in our companion article, The CFIR Outcomes Addendum [19], Implementation Climate and Readiness for Implementation were removed from the updated CFIR. Though few users commented on these constructs, some questioned their meaning and the “nesting” of subconstructs within each (e.g., Leadership Engagement, Available Resources, and Access to Knowledge and Information were all nested within Readiness for Implementation). Though there is broad recognition that implementation climate and readiness are a function of multiple implementation determinants, there is no consensus on precisely which determinants. Therefore, we have reclassified these constructs to more appropriately position them as antecedent assessments [55], on the pathway between implementation determinants and outcomes in the CFIR Outcomes Addendum [19].

Individuals domain

Domain level

Many users felt the CFIR did not provide “sufficient individual-level constructs” [45] and were unclear about which individuals were included [45,46,47, 56,57,58,59]. Furthermore, they felt that constructs in this domain overlapped with constructs in other domains and failed to capture more important individual-level characteristics. One user summarized this feedback well: “[The CFIR needs to focus] more on who the individuals are and their underlying characteristics” (survey response). As a result, the Individuals Domain was significantly reorganized and now includes two subdomains: Roles and Characteristics.

Roles subdomain

In the original CFIR, roles were spread across three different domains: Patient Needs and Resources was listed in the Outer Setting, Leadership Engagement was listed in the Inner Setting, and multiple implementation-specific roles were listed in the Process Domain (e.g., Formally Appointed Internal Implementation Leaders). All roles have been relocated to this new subdomain, and additional roles were added, including Implementation Team Members [60]. In addition, the Formally Appointed Internal Implementation Leader and Champion constructs were combined into the Implementation Leads role because of the inability of users to distinguish between the two roles [61], and as affirmed in a review of champions [62].

Characteristics subdomain

Users felt that the existing Characteristics constructs overlapped with constructs in other domains, e.g., Knowledge and Beliefs overlapped with all constructs in the Innovation Domain. In addition, they thought the domain failed to capture more relevant characteristics related to professional roles and identities, skills and capabilities, autonomy, and level of involvement [46, 47, 59]. Some CFIR users combine this domain with the Theoretical Domains Framework (TDF), which was developed with the intent “to simplify and integrate a plethora of behavior change theories and make theory more accessible to, and usable by, other disciplines” [63]. The COM-B system was developed as an even more simplified system by which to acknowledge key domains related to behavior change based on a US consensus of behavioral theorists and a principle of criminal law defining specific prerequisites for volitional behavior [29]. As a result, the original CFIR Characteristics constructs were replaced with constructs based on Michie et al.’s COM-B system [29]. The COM-B posits that broad categories of Capability (e.g., skills), Opportunity (e.g., autonomy), and Motivation (e.g., commitment) shape behavior.

The COM-B constructs are each mapped to 14 domains in the TDF, which provides CFIR users a wide portal into a repository of 84 behavior-change theoretical constructs. In addition, we encourage users to add additional constructs and map them to the COM-B as appropriate. For example, theories, models, and frameworks related to:

  • Behavior change, e.g., the TDF [63, 64], the Theory of Planned Behavior [65], or the Social Ecological Theory [66] may provide constructs relevant for Innovation Recipients and Innovation Deliverers.

  • Facilitation [67, 68] and project management [69, 70] may provide constructs relevant for Implementation Facilitators and Implementation Leads.

  • Leadership [67, 68] may provide constructs relevant for High- and Mid-Level Leaders.

We also added the Need construct, based on feedback about its importance for all constituents [56], and to capture facets of the original CFIR Patient Needs and Resources construct.

Implementation Process domain

Domain level

We added guidance to encourage users to describe their overall approach or implementation process framework to guide implementation, e.g., the Interactive Systems Framework [71]. Doing so helps distinguish the Innovation from the Implementation Process and accompanying implementation strategies.

Some users questioned the inclusion of the Implementation Process Domain in the CFIR because it appears to include strategies, not contextual factors. We clarify that the goal of this domain is to capture “the degree to which” each of these processes occurs during implementation and influences implementation outcomes. Additional constructs were added to the updated CFIR to reflect scientific advances since 2009 that are common across many process frameworks [8] and collective-level change theories [72]. Depending on the process framework and implementation strategies used for a particular project [26, 27], there may be other components of the implementation process that users should add.

Constructs and subconstructs

The updated CFIR has expanded the number of constructs within the Implementation Process Domain in response to critiques that key processes and strategies were missing. Though it is outside the scope of the CFIR to include all 73 implementation strategies from the Expert Recommendations for Implementing Change (ERIC) [26, 27], a few best practices have been added: Teaming [42, 46, 73], Assessing Needs [46, 47], Assessing Context, Tailoring Strategies [14], and Adapting [45, 74, 75]. Published guidance highlights the importance of Adapting the innovation [76], and the updated CFIR notes the importance of adapting the setting as well [77]. The addition of Assessing Needs: Innovation Recipients and Engaging: Innovation Recipients serves to better center Innovation Recipients in the updated CFIR and orient users to these as important determinants to equity in implementation.

Discussion

In the original CFIR article, we called for users to publish their reflections on three key questions: (1) Are the framework’s terminology and language coherent? (2) Does the framework promote comparison of results across contexts and studies over time? (3) Does the framework stimulate new theoretical development? This feedback was used to evolve the CFIR. However, only 59 of 376 (15.7%) articles in the literature review contained feedback to improve the CFIR. The survey expanded this rate 2.5-fold: 40% of 334 authors provided feedback in our follow-on survey. We echo Kislov et al.’s call to move into theorizing as “an iterative and recursive process [where] theory is no longer seen as ‘fixed and immutable’” but rather as a living, evolving, improving set of propositions, principles, and hypotheses [16], a process to which every application of theory in every study contributes.

The addition of constructs better aligns the updated CFIR with other published frameworks. For example, Nilsen and Bernhardsson evaluated 17 determinant frameworks with clearly distinguishable dimensions. They concluded that the original CFIR only addressed 10 of 12 identified dimensions; the updated CFIR now addresses all 12 dimensions with the addition of the Characteristics: Opportunity construct in the Individuals Domain, to capture dedicated time, and the Structural Characteristics: Physical Infrastructure subconstruct in the Inner Setting Domain, to capture the physical environment [41]. Expansion of the Outer Setting also brings the updated CFIR into closer alignment with other implementation and policy frameworks [18, 78,79,80].

Framework scope and purpose

As detailed in our companion article, the CFIR Outcomes Addendum, several users suggested that the CFIR be expanded to include (1) implementation and innovation outcomes and (2) determinants to innovation outcomes collected from recipients [19]. However, these are both outside the scope of the CFIR, which is an implementation determinant framework [8] designed to describe barriers and facilitators to implementation outcomes [17]. The CFIR Outcomes Addendum provides high-level guidance for identifying implementation outcomes by drawing on other frameworks [81, 82]. It also clarifies that data on CFIR constructs should be collected from those who have power and/or influence over implementation outcomes; data collected from recipients not involved in implementation should be a source of information for innovation determinants and outcomes.

Construct operationalization

The CFIR is a generalized framework, but adaptations have been developed for diverse innovations and settings, e.g., educational, agricultural, community, and low- and middle-income contexts [42, 46, 47, 54] (see Additional file 3). Though the CFIR provides relatively detailed definitions for each construct [9], it is essential for users to fully operationalize constructs by adapting and using language that is meaningful for the context and individuals involved in implementing and delivering the innovation.

Construct selection

The updated CFIR significantly expands the number of constructs. It is often not feasible to assess every construct in the framework; nor will every construct apply within every project. In order to purposefully select a subset of constructs for assessment, users can rely on (1) consensus discussions and/or surveys with experts, (2) empirical studies or prior work, or (3) policy or change theories, including theories of organizational- [40] and individual-level [63, 83] change. For example, an implementation model developed by Klein et al. [84], comprising only seven constructs, was used to focus the evaluation in one study [85]. In all cases, it is important to elicit and analyze data from open-ended questions to explore the possibility of themes not captured by existing constructs. In addition, even if only a subset of constructs is used to guide data collection, data should be coded to additional CFIR constructs during the analysis, interpretation, or reporting phases as appropriate.

Construct measurement

The majority of CFIR users employ qualitative methods to assess constructs [13]. However, more users are employing quantitative assessment approaches, including established methods to quantify qualitative data by applying ratings of valence (positive to negative manifestation) and strength (weak to strong manifestation) to qualitative data for each construct [86, 87]. A key measurement challenge is clarity about what is being measured. Some CFIR users wanted more explanation about whether constructs were intended to capture perceptions or objective reality. At the construct level, we explicitly guide CFIR users to elicit “the degree to which” each construct manifests as defined. Perceptions and shared meanings, arising through social interactions among individuals in the workplace [88], are an important influence on how people respond to this question, along with objective consideration of presence or absence of specific factors related to each construct. Thus, assessing the “degree to which” each construct manifests will likely elicit responses based on a blend of subjective judgments combined with objective fact; for example, Structural Characteristics: IT Infrastructure may capture the factual presence of an electronic health record system (EHR) as well as subjective perceptions about the degree to which that EHR supports functional performance in the Inner Setting.
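As a simple illustration of the valence-and-strength rating approach, ratings assigned to qualitative data for each construct can be aggregated across sites to distinguish shared barriers from shared facilitators. The construct names, site labels, ratings, and the -2 to +2 scale below are hypothetical examples for this sketch, not prescribed values:

```python
# Hypothetical valence/strength ratings assigned to qualitative data for each
# construct at each site: -2 = strong barrier ... +2 = strong facilitator.
# Constructs, sites, and ratings are invented for illustration only.
ratings = {
    "Site A": {"Tension for Change": +2, "IT Infrastructure": -1},
    "Site B": {"Tension for Change": +1, "IT Infrastructure": -2},
    "Site C": {"Tension for Change": 0,  "IT Infrastructure": -2},
}

def mean_rating(construct):
    """Average a construct's rating across sites; strongly negative means
    a shared barrier, strongly positive a shared facilitator."""
    values = [site_ratings[construct] for site_ratings in ratings.values()]
    return sum(values) / len(values)

tension = mean_rating("Tension for Change")        # (2 + 1 + 0) / 3 = 1.0
infrastructure = mean_rating("IT Infrastructure")  # (-1 - 2 - 2) / 3 ≈ -1.67
```

In this sketch, Tension for Change would read as a modest facilitator overall, while IT Infrastructure would surface as a consistent barrier warranting a targeted implementation strategy.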

Systematic reviews of implementation context measures have been conducted, all of which have found significant gaps and signal the continued need for measure validation and development [89,90,91,92,93,94,95]. Development of quantitative measures must be intimately linked to underlying theory; validation of measures relies on establishing empirical validity of underlying theories encapsulated in constructs including use of appropriate response scales [96]. Lewis et al.’s measurement reviews on the original CFIR constructs [10] focused specifically on assessing questionnaires administered to healthcare professionals or leaders within behavioral health settings [97] using multi-dimensional criteria for validity and pragmatism [98]. The most highly rated measures typically elicit self-ratings using Likert-type scales. In the updated CFIR, the addition of the definition stem “the degree to which” to constructs that have clear labels and definitions provides a strong starting point to assess construct and content validity for quantitative measures.

Equity in implementation

Although the updated CFIR includes new constructs to better assess determinants related to equity in implementation, we urge users to collaborate with equity experts [99] to combine use of equity, justice, or non-discrimination theories with the CFIR as a lens through which to view all facets of implementation [100]. For example, Allen et al. adapted the CFIR using the Public Health Critical Race Praxis to understand the ways structural racism influences implementation of an equity program across all constructs [101]. Researchers have produced decades of findings focused on the role of individual (e.g., race) and structural (e.g., access to care) determinants of health in highlighting inequities in services and outcomes [102]. We must move upstream, past spurious individual-level determinants [99], to recognize the role of racism and other systems of oppression as the source of these outcomes [103,104,105]. Lett et al. challenge us to center equity by asking ourselves: Who is represented in the study? How can this work cause harm [99]? This requires understanding our own positionality, i.e., who we are, relative to who should have influence and/or power over implementation, being deliberate in collaborating with communities and deeply knowledgeable equity researchers, and prioritizing sustainability over urgency in research [99]. Our own team’s lack of equity expertise and our narrow positionality prevent us from adequately addressing the urgent need to center equity within the CFIR.

Notwithstanding our personal limitations, implementation researchers are uniquely positioned to address oppression by seeking to understand how it manifests across all domains as a determinant to equitable implementation. Approaches to build competency and ingrain collaborative critique and reflexive methods into professional practice do exist and could help teams center equity and make a meaningful impact [106]. We must seek opportunities to subvert established systems of oppression by including and sharing power with members of historically excluded groups in implementation and evaluation. When first planning implementation of an innovation, researchers should use a multilevel approach to identify implementation strategies that will address equity [100], e.g., including recipients and other community members in choosing and adapting the innovation and implementation strategies. When evaluating implementation, researchers should combine use of an equity-focused framework (e.g., the HEIF [107]) and a broader theoretical lens (e.g., critical race theory [101, 108]) with the CFIR to identify potential determinants and implementation outcomes [109], and deliberately include historically excluded recipients and deliverers in this process [99].

Limitations

Our survey was only sent to authors identified via our literature search, and our literature search was limited to articles that included the CFIR in the title and/or abstract published before July 2020. As a result, we may have missed valuable feedback from (1) implementation scientists and practitioners who used other determinant frameworks, (2) authors who used the CFIR but did not include it in the title and/or abstract of their article, and (3) authors who included the CFIR in the title and/or abstract but published after July 2020. Feedback from non-users or less experienced users could have further broadened the tenets and design of the CFIR. However, we purposefully focused on feedback from individuals with experience using the CFIR; these individuals have applied the framework, providing them with firsthand knowledge of issues with the CFIR. While it was not feasible to update our search after July 2020, we added the Outer Setting: Critical Incidents construct to capture the influence of large-scale events (e.g., the COVID-19 pandemic), which may disrupt implementation and/or delivery of an innovation. In addition, many survey respondents asked for more clarification about how to apply the CFIR and differentiate between specific dyads or clusters of constructs. While we were not able to address this in the current manuscript, we plan to publish a practical application guide for users in the future. Despite limitations, gaps, and the need for further evolution, the updated CFIR offers much-needed improvements for the field.

Future research

The updated CFIR represents an incremental change from the original based on feedback from CFIR users, approximately two-thirds of whom have applied the CFIR in more than one study. The CFIR (and technical assistance website [110]) is a public resource and common good—free and open to all—and it must continue to evolve. We call on implementation scientists to collaborate with researchers in other disciplines (e.g., equity and justice, business, organizational science) to continue developing the CFIR, building on feedback from an ever-larger community of users. Our call for critique and reflection is echoed by others [16, 111].

Our team can help support these advances, but we do not own the framework, and we represent a narrow slice of the world. We extend an open invitation for others within alternative spheres to move the CFIR into the next generation. Further development is needed to: operationalize the framework to address equity; adapt the framework for a series of specific scenarios such as LMIC settings; map the framework to other determinant frameworks to identify gaps and resolve synonymy and polysemy issues (i.e., construct fallacies) [112]; develop qualitative, mixed, and quantitative methods for application; continue development of validated measures; establish foundations for iterative evolution and strengthen the theories encapsulated in the updated CFIR, including understanding relationships between constructs and with outcomes; and further explore and establish semantic identity for each construct [112]. Systematic reviews of empirical findings are needed to further inform or refine theoretical concepts encapsulated within and across constructs, and middle-range theories need to be developed to understand interrelationships between constructs [16]. Our team and others have explored the use of causation [113] and relationship coding to highlight how determinants may interact within an implementation project [114]. For example, Kerins et al. developed an adapted version of the CFIR that included construct relationships based on a systematic review of menu labeling implementation projects [45]. Coincidence analysis and other novel analysis methods can be applied to explore clusters of constructs that lead to desired outcomes [115].

Conclusion

The updates in the CFIR reflect feedback from a growing community of CFIR users. Although there are many updates, constructs can be mapped back to the original CFIR to ensure longitudinal consistency. We have provided resources for users to apply the updated CFIR via several additional files, and the technical assistance website will be updated to support the CFIR [110]. We are deeply grateful for past users who provided feedback and encourage future users to continue the critique and development of the CFIR as a common good.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

EBI: Evidence-based innovation

LMIC: Low- and/or middle-income country

QUERI: Quality Enhancement Research Initiative

US: United States

VHA: Veterans Health Administration

References

  1. Meaney M, Pung C. McKinsey global results: creating organizational transformations. McKinsey Q. 2008;7(3):1–7.

    Google Scholar 

  2. Rafferty AE, Jimmieson NL, Armenakis AA. Change readiness: a multilevel review. J Manag. 2013;39(1):110–35.

    Google Scholar 

  3. Peden CJ, Stephens T, Martin G, Kahan BC, Thomson A, Rivett K, et al. Effectiveness of a national quality improvement programme to improve survival after emergency abdominal surgery (EPOCH): a stepped-wedge cluster-randomised trial. Lancet. 2019;393(10187):2213–21 (2019/04/30 ed).

    Article  PubMed  Google Scholar 

  4. Nilsen P, Birken SA. Epilogue. In: Handbook on implementation science. Cheltenham: Edward Elgar Publishing; 2020. p. 527–8.

  5. Hawe P, Shiell A, Riley T. Complex interventions: how “out of control” can a randomised controlled trial be? BMJ. 2004;328(7455):1561–3.

    Article  PubMed  PubMed Central  Google Scholar 

  6. Dopson S, FitzGerald L, Ferlie E, Gabbay J, Locock L. No magic targets! Changing clinical practice to become more evidence based. Health Care Manage Rev. 2010;35(1):2–12.

    Article  PubMed  Google Scholar 

  7. Shojania KG, Grimshaw JM. Still no magic bullets: pursuing more rigorous research in quality improvement. Am J Med. 2004;116(11):778–80.

    Article  PubMed  Google Scholar 

  8. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10(1):53.

    Article  PubMed  PubMed Central  Google Scholar 

  9. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice. Am J Prev Med. 2012;43(3):337–50.

    Article  PubMed  PubMed Central  Google Scholar 

  10. Lewis CC, Mettert KD, Dorsey CN, Martinez RG, Weiner BJ, Nolen E, et al. An updated protocol for a systematic review of implementation-related measures. Syst Rev. 2018;7(1):66.

    Article  PubMed  PubMed Central  Google Scholar 

  11. Damschroder LJ. Clarity out of chaos: use of theory in implementation research. Psychiatry Res. 2020;283:112461.

    Article  PubMed  Google Scholar 

  12. Skolarus TA, Lehmann T, Tabak RG, Harris J, Lecy J, Sales AE. Assessing citation networks for dissemination and implementation research frameworks. Implementation Sci. 2017;12(1):97.

    Article  Google Scholar 

  13. Kirk MA, Kelley C, Yankey N, Birken SA, Abadie B, Damschroder L. A systematic review of the use of the Consolidated Framework for Implementation Research. Implementation Sci. 2015;11(1):72.

    Article  Google Scholar 

  14. Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implementation Sci. 2019;14(1):42.

    Article  Google Scholar 

  15. Damschroder LJ, Hagedorn HJ. A guiding framework and approach for implementation research in substance use disorders treatment. Psychol Addict Behav. 2011;25(2):194–205.

    Article  PubMed  Google Scholar 

  16. Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implementation Sci. 2019;14(1):103 (s13012-019-0957–4).

    Article  Google Scholar 

  17. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):1–5.

    Article  Google Scholar 

  18. Flottorp SA, Oxman AD, Krause J, Musila NR, Wensing M, Godycki-Cwirko M, et al. A checklist for identifying determinants of practice: a systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice. Implementation Sci. 2013;8(1):35.

    Article  Google Scholar 

  19. Damschroder LJ, Reardon CM, Opra Widerquist MA, Lowery J. Conceptualizing outcomes for use with the Consolidated Framework for Implementation Research (CFIR): the CFIR Outcomes Addendum. Implementation Sci. 2022;17(1):1–10.

    Article  Google Scholar 

  20. Curran GM. Implementation science made too simple: a teaching tool. Implement Sci Commun. 2020;1:27.

    Article  PubMed  PubMed Central  Google Scholar 

  21. Albrecht L, Archibald M, Arseneau D, Scott SD. Development of a checklist to assess the quality of reporting of knowledge translation interventions using the Workgroup for Intervention Development and Evaluation Research (WIDER) recommendations. Implementation Sci. 2013;8(1):52.

    Article  Google Scholar 

  22. Butler M, Epstein RA, Totten A, Whitlock EP, Ansari MT, Damschroder LJ, et al. AHRQ series on complex intervention systematic reviews—paper 3: adapting frameworks to develop protocols. J Clin Epidemiol. 2017;90:19–27.

    Article  PubMed  Google Scholar 

  23. The AIMD Writing/Working Group, Bragge P, Grimshaw JM, Lokker C, Colquhoun H. AIMD - a validated, simplified framework of interventions to promote and integrate evidence into health practices, systems, and policies. BMC Med Res Methodol. 2017;17(1):38.

    Article  Google Scholar 

  24. Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;7(348):g1687.

    Article  Google Scholar 

  25. Lengnick-Hall R, Gerke DR, Proctor EK, Bunger AC, Phillips RJ, Martin JK, et al. Six practical recommendations for improved implementation outcomes reporting. Implementation Sci. 2022;17(1):16.

    Article  Google Scholar 

  26. Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69(2):123–57 (2011/12/29 ed).

    Article  PubMed  Google Scholar 

  27. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10(1):21.

    Article  PubMed  PubMed Central  Google Scholar 

  28. Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017;6:i6795.

    Article  Google Scholar 

  29. Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;23(6):42.

    Article  Google Scholar 

  30. Pithara C, Farr M, Sullivan SA, Edwards HB, Hall W, Gadd C, et al. Implementing a digital tool to support shared care planning in community-based mental health services: qualitative evaluation. J Med Internet Res. 2020;22(3):e14868.

    Article  PubMed  PubMed Central  Google Scholar 

  31. Ware P, Ross HJ, Cafazzo JA, Laporte A, Gordon K, Seto E. Evaluating the implementation of a mobile phone–based telemonitoring program: longitudinal study guided by the consolidated framework for implementation research. JMIR Mhealth Uhealth. 2018;6(7):e10768.

    Article  PubMed  PubMed Central  Google Scholar 

  32. Ruble L, McGrew JH, Snell-Rood C, Adams M, Kleinert H. Adapting COMPASS for youth with ASD to improve transition outcomes using implementation science. School Psychology. 2019;34(2):187–200.

    Article  PubMed  Google Scholar 

  33. Williams EC, Johnson ML, Lapham GT, Caldeiro RM, Chew L, Fletcher GS, et al. Strategies to implement alcohol screening and brief intervention in primary care settings: a structured literature review. Psychol Addict Behav. 2011;25(2):206–14.

    Article  PubMed  Google Scholar 

  34. Sorensen JL, Kosten T. Developing the tools of implementation science in substance use disorders treatment: applications of the consolidated framework for implementation research. Psychol Addict Behav. 2011;25(2):262–8.

    Article  PubMed  PubMed Central  Google Scholar 

  35. Cole CB, Pacca J, Mehl A, Tomasulo A, van der Veken L, Viola A, et al. Toward communities as systems: a sequential mixed methods study to understand factors enabling implementation of a skilled birth attendance intervention in Nampula Province, Mozambique. Reprod Health. 2018;15(1):132.

    Article  PubMed  PubMed Central  Google Scholar 

  36. Rogers E. Diffusion of Innovations. 5th ed. New York, NY: Free Press; 2003.

    Google Scholar 

  37. Lengnick-Hall R, Willging C, Hurlburt M, Fenwick K, Aarons GA. Contracting as a bridging factor linking outer and inner contexts during EBP implementation and sustainment: a prospective study across multiple U.S. public sector service systems. Implementation Sci. 2020;15(1):43.

    Article  Google Scholar 

  38. Katz D, Kahn RL. The social psychology of organizations. New York: Wiley; 1966.

  39. Godbee K, Gunn J, Lautenschlager NT, Palmer VJ. Refined conceptual model for implementing dementia risk reduction: incorporating perspectives from Australian general practice. Aust J Prim Health. 2020;26(3):247.

    Article  PubMed  Google Scholar 

  40. Leeman J, Baquero B, Bender M, Choy-Brown M, Ko LK, Nilsen P, et al. Advancing the use of organization theory in implementation science. Prev Med. 2019;129:105832.

    Article  Google Scholar 

  41. Nilsen P, Bernhardsson S. Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes. BMC Health Serv Res. 2019;19(1):189.

    Article  PubMed  PubMed Central  Google Scholar 

  42. Means AR, Kemp CG, Gwayi-Chore MC, Gimbel S, Soi C, Sherr K, et al. Evaluating and optimizing the consolidated framework for implementation research (CFIR) for use in low-and middle-income countries: a systematic review. Implement Sci. 2020;15(1):1–19.

    Article  Google Scholar 

  43. Merlo G, Page K, Zardo P, Graves N. Applying an implementation framework to the use of evidence from economic evaluations in making healthcare decisions. Appl Health Econ Health Policy. 2019;17(4):533–43.

    Article  PubMed  Google Scholar 

  44. Tiderington E, Ikeda J, Lovell A. Stakeholder perspectives on implementation challenges and strategies for moving on initiatives in permanent supportive housing. J Behav Health Serv Res. 2020;47(3):346–64.

    Article  PubMed  Google Scholar 

  45. Kerins C, McHugh S, McSharry J, Reardon CM, Hayes C, Perry IJ, et al. Barriers and facilitators to implementation of menu labelling interventions from a food service industry perspective: a mixed methods systematic review. Int J Behav Nutr Phys Act. 2020;17(1):48.

    Article  PubMed  PubMed Central  Google Scholar 

  46. Dy SM, Ashok M, Wines RC, Rojas SL. A framework to guide implementation research for care transitions interventions. J Healthcare Qual. 2015;37(1):41–54.

    Article  Google Scholar 

  47. Ashok M, Hung D, Rojas-Smith L, Halpern MT, Harrison M. Framework for research on implementation of process redesigns. Qual Manag Health Care. 2018;27(1):17–23.

    Article  PubMed  Google Scholar 

  48. Baxter AJ, Tweed EJ, Katikireddi SV, Thomson H. Effects of Housing First approaches on health and well-being of adults who are homeless or at risk of homelessness: systematic review and meta-analysis of randomised controlled trials. J Epidemiol Community Health. 2019;73(5):379–87.

    Article  PubMed  Google Scholar 

  49. Hohmeier KC, Wheeler JS, Turner K, Vick JS, Marchetti ML, Crain J, et al. Targeting adaptability to improve Medication Therapy Management (MTM) implementation in community pharmacy. Implementation Sci. 2019;14(1):99.

    Article  Google Scholar 

  50. Moullin JC, Sabater-Hernández D, Benrimoj SI. Qualitative study on the implementation of professional pharmacy services in Australian community pharmacies using framework analysis. BMC Health Serv Res. 2016;16(1):439.

    Article  PubMed  PubMed Central  Google Scholar 

  51. Dopp AR, Narcisse MR, Mundey P, Silovsky JF, Smith AB, Mandell D, et al. A scoping review of strategies for financing the implementation of evidence-based practices in behavioral health systems: state of the literature and future directions. Implementation Res Pract. 2020;1:263348952093998.

    Article  Google Scholar 

  52. McEachern BM, Jackson J, Yungblut S, Tomasone JR. Barriers and facilitators to implementing exercise is medicine Canada on Campus Groups. Health Promot Pract. 2019;20(5):751–9.

    Article  PubMed  Google Scholar 

  53. Miake-Lye IM, Delevan DM, Ganz DA, Mittman BS, Finley EP. Unpacking organizational readiness for change: an updated systematic review and content analysis of assessments. BMC Health Serv Res. 2020;20(1):106.

    Article  PubMed  PubMed Central  Google Scholar 

  54. Safaeinili N, Brown‐Johnson C, Shaw JG, Mahoney M, Winget M. CFIR simplified: Pragmatic application of and adaptations to the Consolidated Framework for Implementation Research (CFIR) for evaluation of a patient‐centered care transformation within a learning health system. Learn Health Sys. 2020;4(1). Available from: https://onlinelibrary.wiley.com/doi/abs/https://doi.org/10.1002/lrh2.10201[Cited 2020 Nov 5]

  55. Reilly KL, Kennedy S, Porter G, Estabrooks P. Comparing, contrasting, and integrating dissemination and implementation outcomes included in the RE-AIM and Implementation Outcomes Frameworks. Front Public Health. 2020;2(8):430.

    Article  Google Scholar 

  56. Breimaier HE, Heckemann B, Halfens RJG, Lohrmann C. The Consolidated Framework for Implementation Research (CFIR): a useful theoretical framework for guiding and evaluating a guideline implementation process in a hospital-based nursing practice. BMC Nurs. 2015;14(1):43.

    Article  PubMed  PubMed Central  Google Scholar 

  57. Varsi C, Ekstedt M, Gammon D, Ruland CM. Using the Consolidated Framework for Implementation Research to identify barriers and facilitators for the implementation of an internet-based patient-provider communication service in five settings: a qualitative study. J Med Internet Res. 2015;17(11):e262.

    Article  PubMed  PubMed Central  Google Scholar 

  58. Barwick M, Barac R, Kimber M, Akrong L, Johnson SN, Cunningham CE, et al. Advancing implementation frameworks with a mixed methods case study in child behavioral health. Transl Behav Med. 2020;10(3):685–704.

    Article  PubMed  Google Scholar 

  59. Moretto N, Comans TA, Chang AT, O’Leary SP, Osborne S, Carter HE, et al. Implementation of simulation modelling to improve service planning in specialist orthopaedic and neurosurgical outpatient services. Implementation Sci. 2019;14(1):78.

    Article  Google Scholar 

  60. Rogers L, De Brún A, McAuliffe E. Development of an integrative coding framework for evaluating context within implementation science. BMC Med Res Methodol. 2020;20(1):158.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  61. Ilott I, Gerrish K, Booth A, Field B. Testing the Consolidated Framework for Implementation Research on health care innovations from South Yorkshire: testing the CFIR on health care innovations. J Eval Clin Pract. 2012 Aug;n/a-n/a.

  62. Miech EJ, Rattray NA, Flanagan ME, Damschroder L, Schmid AA, Damush TM. Inside help: an integrative review of champions in healthcare-related implementation. SAGE Open Med. 2018;1(6):205031211877326.

    Article  Google Scholar 

  63. Cane J, O’Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implementation Sci. 2012;7(1):37.

    Article  Google Scholar 

  64. Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A, et al. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005;14(1):26–33.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  65. Ajzen I. The theory of planned behaviour: Reactions and reflections. Psychol Health. 2011;26(9):1113–27.

    Article  PubMed  Google Scholar 

  66. Stokols D. Translating social ecological theory into guidelines for community health promotion. Am J Health Promot. 1996;10(4):282–98.

    Article  CAS  PubMed  Google Scholar 

  67. Metz A, Louison L, Burke K, Ward C. Implementation Support Practitioner Profile. National Implementation Research Network; 2020. Available from: https://nirn.fpg.unc.edu/resources/implementation-support-practitioner-profile [Cited 2021 Dec 22]

  68. Albers B, Metz A, Burke K. Implementation support practitioners – a proposal for consolidating a diverse evidence base. BMC Health Serv Res. 2020;20(1):368.

    Article  PubMed  PubMed Central  Google Scholar 

  69. Barron M, Barron A. Project Management Areas of Expertise. In: Project Management. Available from: https://cnx.org/contents/XpF315mY@11.6:_nDfs3nk@2/Project-Management-Areas-of-Expertise

  70. Müller R, Turner R. Leadership competency profiles of successful project managers. Int J Project Manage. 2010;28(5):437–48.

    Article  Google Scholar 

  71. Wandersman A, Duffy J, Flaspohler P, Noonan R, Lubell K, Stillman L, et al. Bridging the gap between prevention research and practice: the interactive systems framework for dissemination and implementation. Am J Community Psychol. 2008;41(3–4):171–81.

    Article  PubMed  Google Scholar 

  72. Perla RJ, Provost LP, Parry GJ. Seven propositions of the science of improvement: exploring foundations. Qual Manag Health Care. 2013;22(3):170–86.

    Article  PubMed  Google Scholar 

  73. Edmondson AC. Teaming: How organizations learn, innovate, and compete in the knowledge economy. San Francisico: Jossey-Bass; 2012. 241 p.

  74. Hill JN, Locatelli SM, Bokhour BG, Fix GM, Solomon J, Mueller N, et al. Evaluating broad-scale system change using the Consolidated Framework for Implementation Research: challenges and strategies to overcome them. BMC Res Notes. 2018;11(1):560.

    Article  PubMed  PubMed Central  Google Scholar 

  75. Wells R, Breckenridge ED, Linder SH. Wellness project implementation within Houston’s Faith and Diabetes initiative: a mixed methods study. BMC Public Health. 2020;20(1):1050.

    Article  PubMed  PubMed Central  Google Scholar 

  76. Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14(1):58 (2019/06/07 ed.).

    Article  PubMed  PubMed Central  Google Scholar 

  77. von Thiele SU, Aarons GA, Hasson H. The Value Equation: three complementary propositions for reconciling fidelity and adaptation in evidence-based practice implementation. BMC Health Serv Res. 2019;19(1):868.

    Article  Google Scholar 

  78. Moullin JC, Sabater-Hernández D, Fernandez-Llimos F, Benrimoj SI. A systematic review of implementation frameworks of innovations in healthcare and resulting generic implementation framework. Health Res Policy Sys. 2015;13(1):16.

    Article  Google Scholar 

  79. Raghavan R, Bright CL, Shadoin AL. Toward a policy ecology of implementation of evidence-based practices in public mental health settings. Implementation Sci. 2008;3(1):26.

    Article  Google Scholar 

  80. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38(1):4–23.

    Article  PubMed  Google Scholar 

  81. Glasgow RE, Harden SM, Gaglio B, Rabin B, Smith ML, Porter GC, et al. RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Front Public Health. 2019;29(7):64.

    Article  Google Scholar 

  82. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76.

    Article  PubMed  Google Scholar 

  83. Kwasnicka D, Dombrowski SU, White M, Sniehotta F. Theoretical explanations for maintenance of behaviour change: a systematic review of behaviour theories. Health Psychol Rev. 2016;10(3):277–96.

    Article  PubMed  PubMed Central  Google Scholar 

  84. Klein KJ, Conn AB, Sorra JS. Implementing computerized technology: an organizational analysis. J Appl Psychol. 2001;86(5):811–24.

    Article  CAS  PubMed  Google Scholar 

  85. Damschroder LJ, Goodrich DE, Robinson CH, Fletcher CE, Lowery JC. A systematic exploration of differences in contextual factors related to implementing the MOVE! weight management program in VA: a mixed methods study. BMC Health Serv Res. 2011;11(1):248.

    Article  PubMed  Google Scholar 

  86. Damschroder LJ, Lowery JC. Evaluation of a large-scale weight management program using the consolidated framework for implementation research (CFIR). Implementation Sci. 2013;8(1):51.

    Article  Google Scholar 

  87. Damschroder LJ, Reardon CM, Sperber N, Robinson CH, Fickel JJ, Oddone EZ. Implementation evaluation of the Telephone Lifestyle Coaching (TLC) program: organizational factors associated with successful implementation. Behav Med Pract Policy Res. 2017;7(2):233–41.

    Article  Google Scholar 

  88. Scroggins WA. Managing meaning for strategic change: the role of perception and meaning congruence. J Health Hum Serv Adm. 2006;29(1):83–102.

    PubMed  Google Scholar 

  89. Chaudoir SR, Dugan AG, Barr CH. Measuring factors affecting implementation of health innovations: a systematic review of structural, organizational, provider, patient, and innovation level measures. Implementation Sci. 2013;8(1):22.

    Article  Google Scholar 

  90. Chor KHB, Wisdom JP, Olin SCS, Hoagwood KE, Horwitz SM. Measures for Predictors of Innovation Adoption. Adm Policy Ment Health. 2015;42(5):545–73.

    Article  PubMed  PubMed Central  Google Scholar 

  91. Lewis CC, Mettert K, Lyon AR. Determining the influence of intervention characteristics on implementation success requires reliable and valid measures: results from a systematic review. Implementation Res Pract. 2021;2:263348952199419.

    Article  Google Scholar 

  92. McHugh S, Dorsey CN, Mettert K, Purtle J, Bruns E, Lewis CC. Measures of outer setting constructs for implementation research: a systematic review and analysis of psychometric quality. Implementation Res Pract. 2020;1:263348952094002.

    Article  Google Scholar 

  93. Powell BJ, Mettert KD, Dorsey CN, Weiner BJ, Stanick CF, Lengnick-Hall R, et al. Measures of organizational culture, organizational climate, and implementation climate in behavioral health: a systematic review. Implementation Res Pract. 2021;2:263348952110188.

    Article  Google Scholar 

  94. Stanick CF, Halko H, Mettert K, Dorsey C, Moullin J, Weiner B, et al. Measuring characteristics of individuals: an updated systematic review of instruments’ psychometric properties. Implementation Res Pract. 2021;2:263348952110004.

    Article  Google Scholar 

  95. Dorsey CN, Mettert KD, Puspitasari AJ, Damschroder LJ, Lewis CC. A systematic review of measures of implementation players and processes: summarizing the dearth of psychometric evidence. Implementation Res Pract. 2021;2:263348952110024.

    Article  Google Scholar 

  96. Anderson NH. Functional measurement and psychophysical judgment. Psychol Rev. 1970;77(3):153–70.

    Article  CAS  PubMed  Google Scholar 

  97. Wensing M. Reflections on the measurement of implementation constructs. Implementation Res Pract. 2021;2:263348952110201.

    Article  Google Scholar 

  98. Stanick CF, Halko HM, Dorsey CN, Weiner BJ, Powell BJ, Palinkas LA, et al. Operationalizing the ‘pragmatic’ measures construct using a stakeholder feedback and a multi-method approach. BMC Health Serv Res. 2018;18(1):882.

    Article  PubMed  PubMed Central  Google Scholar 

  99. Lett E, Adekunle D, McMurray P, Asabor EN, Irie W, Simon MA, et al. Health equity tourism: ravaging the justice landscape. J Med Syst. 2022;46(3):17.

    Article  PubMed  PubMed Central  Google Scholar 

  100. Shelton RC, Adsul P, Oh A. Recommendations for addressing structural racism in implementation science: a call to the field. Ethn Dis. 2021;31(Suppl):357–64.

    Article  PubMed  PubMed Central  Google Scholar 

  101. Allen M, Wilhelm A, Ortega LE, Pergament S, Bates N, Cunningham B. Applying a Race(ism)-Conscious Adaptation of the CFIR Framework to Understand Implementation of a School-Based Equity-Oriented Intervention. Ethn Dis. 2021;31(Suppl):375–88.

  102. Boyd RW, Lindo EG, Weeks LD, McLemore MR. On racism: a new standard for publishing on racial health inequities. Health Affairs Blog; 2020.

  103. Malawa Z, Gaarde J, Spellen S. Racism as a root cause approach: a new framework. Pediatrics. 2021;147(1):e2020015602.

  104. Braveman PA, Arkin E, Proctor D, Kauh T, Holm N. Systemic and structural racism: definitions, examples, health damages, and approaches to dismantling. Health Aff. 2022;41(2):171–8.

  105. Zambrana RE, Williams DR. The intellectual roots of current knowledge on racism and health: relevance to policy and the national equity discourse. Health Aff. 2022;41(2):163–70.

  106. Guichard A, Tardieu É, Nour K, Lafontaine G, Ridde V. Adapting a health equity tool to meet professional needs (Québec, Canada). Health Promot Int. 2019;34(6):e71-83.

  107. Woodward EN, Matthieu MM, Uchendu US, Rogal S, Kirchner JE. The health equity implementation framework: proposal and preliminary study of hepatitis C virus treatment. Implement Sci. 2019;14(1):26.

  108. Delgado R, Stefancic J. Critical race theory: past, present, and future. Curr Leg Probl. 1998;51(1):467–91.

  109. Baumann AA, Cabassa LJ. Reframing implementation science to address inequities in healthcare delivery. BMC Health Serv Res. 2020;20(1):190.

  110. VA Center for Clinical Management Research (CCMR). The Consolidated Framework for Implementation Research – Technical Assistance for users of the CFIR framework [Internet]. 2022. Available from: https://cfirguide.org/ [Cited 2022 Mar 14].

  111. Hailemariam M, Bustos T, Montgomery B, Barajas R, Evans LB, Drahota A. Evidence-based intervention sustainability strategies: a systematic review. Implement Sci. 2019;14(1):57.

  112. Larsen KR, Bong CH. A tool for addressing construct identity in literature reviews and meta-analyses. MIS Q. 2016;40(3):529–52.

  113. Saldana J. The coding manual for qualitative researchers. 2nd ed. London: SAGE; 2015.

  114. Nevedal AL, Reardon CM, Jackson GL, Cutrona SL, White B, Gifford AL, et al. Implementation and sustainment of diverse practices in a large integrated health system: a mixed methods study. Implement Sci Commun. 2020;1(1):61.

  115. Whitaker RG, Sperber N, Baumgartner M, Thiem A, Cragun D, Damschroder L, et al. Coincidence analysis: a new method for causal inference in implementation science. Implement Sci. 2020;15(1):108.


Acknowledgements

We want to express our sincere gratitude to the authors who completed our survey and made this work possible.

Funding

This work was funded by the Veterans Affairs (VA) Quality Enhancement Research Initiative (QUE 15–286) and VA Health Services Research and Development (LIP 20–116).

Author information

Authors and Affiliations

Authors

Contributions

MW, CR, and LD developed the literature review search criteria and created the survey. MW conducted the literature review and fielded the survey. CR and MW analyzed the survey data. JL, LD, and CR drafted the manuscript; MW provided survey data in relevant sections. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Caitlin M. Reardon.

Ethics declarations

Ethics approval and consent to participate

The VA Ann Arbor Healthcare System IRB approved this study, declaring it exempt from the requirements of 38 CFR 16 based on category 2.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Literature Review Methods.

Additional file 2.

Consolidated Framework for Implementation Research User Survey.

Additional file 3.

Literature Review Articles.

Additional file 4.

Original CFIR (2009) to Updated CFIR (2022): Construct Mapping.

Additional file 5.

User Feedback & CFIR Updates.

Additional file 6.

Updated CFIR Domains and Constructs: Short Definitions and Detailed Descriptions.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Damschroder, L.J., Reardon, C.M., Widerquist, M.A.O. et al. The updated Consolidated Framework for Implementation Research based on user feedback. Implementation Sci 17, 75 (2022). https://doi.org/10.1186/s13012-022-01245-0
