Specifying and comparing implementation strategies across seven large implementation interventions: a practical application of theory
Implementation Science volume 14, Article number: 32 (2019)
The use of implementation strategies is an active and purposive approach to translating research findings into routine clinical care. The Expert Recommendations for Implementing Change (ERIC) project identified and defined discrete implementation strategies, and Proctor and colleagues have made recommendations for specifying the operationalization of each strategy. We use empirical data to test how the ERIC taxonomy applies to a large dissemination and implementation initiative aimed at taking cardiac prevention to scale in primary care practice.
EvidenceNOW is an Agency for Healthcare Research and Quality initiative that funded seven cooperatives across seven regions in the USA. Cooperatives implemented multi-component interventions to improve heart health and build quality improvement capacity, and used a range of implementation strategies to foster practice change. We used ERIC to identify cooperatives’ implementation strategies and specified the actor, action, target, dose, temporality, justification, and expected outcome for each. We mapped and compiled a matrix of the specified ERIC strategies across the cooperatives, and used consensus to resolve mapping differences. We then grouped implementation strategies by outcomes and justifications, which led to insights regarding the use of and linkages between ERIC strategies in real-world scale-up efforts.
Thirty-three ERIC strategies were used by the cooperatives. We identified a range of revisions to the ERIC taxonomy that would improve the practical application of these strategies. These proposed changes include revisions to four strategy names and 12 definitions. We suggest adding three new strategies because they encapsulate distinct actions that were not described in the existing ERIC taxonomy. In addition, we organized ERIC implementation strategies into four functional groupings based on the way we observed them being applied in practice. These groupings show how ERIC strategies are, out of necessity, interconnected to achieve the work involved in rapidly taking evidence to scale.
The findings of our work suggest revisions to the ERIC implementation strategies to reflect their utilization in real-world dissemination and implementation efforts. The functional groupings of the ERIC implementation strategies that emerged from on-the-ground implementers will help guide others in choosing among and linking multiple implementation strategies when planning small- and large-scale implementation efforts.
Registered as Observational Study at www.clinicaltrials.gov (NCT02560428).
It is well recognized that an active and purposive approach is required to translate research findings into routine clinical care. Methods used to disseminate and implement research findings into clinical practice are termed implementation strategies, and interventions designed to facilitate the application of evidence into clinical practice typically use a combination of implementation strategies tailored to a particular clinical context. Multiple implementation frameworks [2, 3] provide guidance on how to define factors that may explain or predict implementation outcomes and how to assess implementation interventions (e.g., Consolidated Framework for Implementation Research (CFIR); Promoting Action on Research in Health Services (PARIHS); Normalization Process Theory (NPT)). These frameworks provide theoretical structures and are an indispensable foundation for developing testable hypothesized models, establishing an understanding of the factors that impact implementation, and building a base of implementation knowledge [7, 8]. However, these frameworks may not necessarily provide guidance about how to organize detailed implementation strategies to achieve specific outcomes, may use inconsistent names for implementation strategies, and may lack adequate strategy descriptions—making understanding and replication difficult.
Powell and colleagues sought to address the problem of multiple names and definitions for implementation strategies in 2012. Through a comprehensive review of the literature, they identified 68 discrete strategies and organized them into six key implementation domains. These domains were organized by type, with similar implementation strategies grouped into the same domain. They then updated and expanded this list by engaging a broad group of implementation scientists in a three-phase, modified-Delphi process that refined and expanded the list to 73 discrete implementation strategies, each with a name and definition. These 73 strategies were grouped into nine clusters based upon strategy type, and ratings of importance (high/low) and feasibility (high/low) were elicited for each strategy. This updated compilation is known as the Expert Recommendations for Implementing Change (ERIC).
ERIC has made an important theoretical contribution to identifying a broad range of implementation strategies and developing a common nomenclature with definitions. An important next step toward advancing this work is to test and refine the taxonomy based on the use of these implementation strategies in on-the-ground implementation efforts; Proctor and colleagues recognized this need and have started this work. They have recommended that implementation strategies be further specified to include such details as the actor (who is enacting the implementation strategy), the actions (specific activities to support implementation), the targets (entity impacted), temporality, dose, expected outcome, and justification. Researchers have used the ERIC and Proctor specifications to understand the operationalization of implementation strategies and have found this useful [12,13,14,15,16]. The level of specification has been helpful in identifying how strategies can be tailored to different settings, fostering comparison across clinical sites in a multisite study, and helping to inform the development of activity logs to track implementation strategies.
Researchers who applied the ERIC and Proctor frameworks to real-world implementation efforts recommended additional implementation strategies for the ERIC taxonomy. Others have used ERIC to guide detailed mapping of implementation strategies during implementation study planning; one of these studies used ERIC to develop a blueprint to guide implementation efforts. As all implementers know, however, there is a big difference between what is proposed or planned during implementation and what really happens. To date, we could identify no studies that use data from large-scale, multi-site implementation efforts to apply and refine the ERIC taxonomy.
This paper reports a study we conducted to apply the ERIC taxonomy and the Proctor reporting recommendations to multiple large-scale studies. We use empirical data collected during active implementation with an intent of further refining ERIC, if changes were identified as necessary.
The setting for this study is the EvidenceNOW initiative funded by the Agency for Healthcare Research and Quality (AHRQ). EvidenceNOW focused on rapid dissemination and implementation of cardiovascular preventive care (appropriate aspirin use, blood pressure and cholesterol management, and smoking cessation—the ABCS of heart health) among smaller primary care practices, as well as capacity building (e.g., quality improvement experience, data access, practice leadership). In the request for applications, AHRQ “strongly encouraged” the use of specific implementation strategies, including practice facilitation, data review and feedback with benchmarking, peer-to-peer learning, and expert consultation, and each cooperative was required to produce clinical quality measures and practice capacity measurements to determine change over time in these domains. In addition to funding seven cooperatives, AHRQ also funded a national evaluation of EvidenceNOW, called Evaluating System Change to Advance Learning and Take Evidence to Scale (ESCALATES), to harmonize and coordinate the collection of quantitative data, to collect additional qualitative data, and to bring together the cross-cooperative comparative findings. For more details on the ESCALATES evaluation, see Cohen et al.
EvidenceNOW funded seven regional cooperatives that spanned 12 states (Oregon, Washington, Idaho, Colorado, New Mexico, Oklahoma, Wisconsin, Illinois, Indiana, New York City, North Carolina, Virginia). For details on the regions and states, see the EvidenceNOW website and Ono et al. Each cooperative engaged over 200 primary care practices in this effort (N = 1721). Practices were smaller (< 10 clinicians) and varied with respect to ownership and geographic location. Accomplishing the goal of large-scale rapid dissemination and implementation in less than 3 years required a large workforce and the use of a range of implementation strategies.
We reviewed and triangulated four sources of qualitative data from years one and two of the EvidenceNOW initiative to identify and define the cooperatives’ implementation strategies. First, we reviewed the cooperatives’ study proposals, which provided an outline of broad, overarching implementation strategies. Second, we examined cooperative-level documents, including training tools and conceptual models, to further specify the content of these overarching strategies. We also analyzed facilitator training documents, which provided insights into specific activities and implementation strategies that facilitators would use to support practices. For example, templates for facilitators to document and track their work provided detailed descriptions of activities that facilitators might perform. The third data source comprised entries written by cooperative team members on the ESCALATES interactive online diary, a web-based platform that allowed our evaluation team to communicate with each cooperative team about their implementation experiences and clarify details about the on-the-ground implementation support they were providing to practices. These entries were made during active implementation of the interventions across cooperatives. The fourth data source comprised field notes from site visits to cooperatives and transcripts of interviews with cooperative leadership and key team members during active implementation. Analyzing interview transcripts and field notes provided important insight into the pragmatic, on-the-ground use of implementation strategies to support practice change.
All field notes were prepared soon after the site visit. Interviews followed a semi-structured guide and were professionally transcribed, checked for accuracy, and de-identified. Field notes, interviews, and other documents were entered into Atlas.ti for data management and analysis.
We used an iterative and inductive analytic approach to identify and describe cooperatives’ implementation strategies. Guided by Proctor and colleagues’ recommendations for specifying and reporting details on implementation strategies, we constructed an intervention table for each cooperative describing the broad implementation strategies and more detailed activities that cooperatives included in their interventions, and specified the actors, actions, targets, dose, temporality, expected outcomes, and justification for each. We member-checked these tables with the cooperative leadership and refined these documents as needed (see Additional file 1 for an example of one such table). From these seven intervention tables, we identified 266 detailed actions that were documented by cooperative members to help practices implement changes to improve capacity or the ABCS.
Next, four of the authors (LJD, CKP, JRH, TTW) independently mapped each action to a strategy in the ERIC compilation based on the alignment of each action with the definition of each ERIC strategy. We met weekly over 5 months to map independently and discuss mappings until we reached consensus. Throughout this process, we operationalized ERIC definitions to help guide decisions about the alignment of ERIC implementation strategies with the EvidenceNOW actions and identified areas where expanding, revising, and/or reorganizing ERIC definitions was needed to better reflect the actions cooperatives used and to make the ERIC strategies more pragmatic and easily applied by researchers and practitioners.
Next, we created a consolidated cross-cooperative matrix that organized implementation strategies based upon their practical application. We pile-sorted each implementation strategy into functionally similar groups based on its justification and expected outcomes. We reviewed and labeled each grouping and considered the cooperatives’ practical applications of strategies within and across the groupings, recognizing interdependencies and differences, such as in “actors” (who enacts the strategies within a grouping; e.g., facilitators with health information technology (HIT) expertise versus cooperative-employed data experts).
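The pile-sorting step described above can be illustrated with a minimal sketch: specified strategies are grouped by their shared justification and expected outcome. The strategy records and keys below are hypothetical stand-ins for illustration, not the actual EvidenceNOW data.

```python
from collections import defaultdict

# Illustrative stand-ins: each specified strategy carries the Proctor-style
# details (justification, expected outcome) used for sorting.
strategies = [
    {"name": "audit and provide feedback",
     "justification": "data-informed QI",
     "expected_outcome": "improved ABCS measures"},
    {"name": "use data warehousing techniques",
     "justification": "data-informed QI",
     "expected_outcome": "improved ABCS measures"},
    {"name": "distribute educational materials",
     "justification": "knowledge building",
     "expected_outcome": "guideline awareness"},
]

# Pile-sort into functionally similar groups keyed by shared
# justification and expected outcome.
groups = defaultdict(list)
for s in strategies:
    groups[(s["justification"], s["expected_outcome"])].append(s["name"])

for key, names in sorted(groups.items()):
    print(key, "->", names)
```

With these stand-in records, the two data-related strategies fall into one pile and the education strategy into another, mirroring how functionally linked strategies surfaced together in the cross-cooperative matrix.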
The 266 actions identified across the seven cooperatives mapped to 33 of the 73 ERIC implementation strategies. We propose refinements for 13 strategies. Table 1 lists the ERIC strategies, our proposed changes, and our rationale for these changes. Recommended refinements include changes in strategy names for four strategies and changes in definitions for 12 strategies. Additionally, we propose adding three new distinct strategies that we identified being used among the cooperatives: assess and redesign workflow, create online learning communities, and engage community resources. Definitions for these strategies are provided in Table 1. The ERIC taxonomy included “Ancillary Material,” published by its authors as an additional file. We recommend changes in ancillary material for 15 strategies and provide ancillary material for the three new strategies; details are provided in our Additional file 2. We will refer to ERIC strategies using our newly proposed labels when applicable (e.g., implementation facilitation instead of ERIC’s original name of facilitation).
We grouped the 33 ERIC strategies used by the cooperatives into four functional groupings: (1) build health information technology to support data-informed quality improvement (QI), (2) build QI capacity and improve outcomes, (3) enhance clinician and practice member knowledge, and (4) build community connections and patient involvement. Tables 2, 3, 4, and 5 list implementation strategies and identify the ERIC cluster to which each strategy belongs along with the actor, specific actions, and targets; each table covers one of the four groupings listed above. The following sections describe the four overarching functions and the implementation strategies within each grouping.
Build health information technology to support data-informed QI
Multiple implementation strategies are needed to achieve data-informed QI. A critical aspect of the cooperatives’ efforts to rapidly disseminate and implement evidence into practice was helping practices gain access to data to support data-informed QI functions (see Table 2). All cooperatives included an audit and provide feedback strategy to support data-informed QI. This strategy relied on region- and practice-level HIT infrastructure to deliver trusted data to practices at regular intervals throughout the intervention. Audit functions (e.g., the ability to produce ABCS performance reports) were necessary for feedback functions (e.g., communication of the audit to clinicians and staff), which, in turn, were necessary to inform QI efforts. For instance, generated data were used to identify quality gaps and opportunities on which to focus improvement efforts. These data were also used to monitor the impact that changes had on ABCS outcomes. Audit and provide feedback, thus, was a key strategy to improve ABCS outcomes.
On the ground, up to seven additional ERIC implementation strategies were necessary to build the infrastructure and capacity needed to implement and sustain audit and provide feedback functions, as shown in Table 2. All but one cooperative sought to deliver audit and provide feedback functions in a way that could be sustained beyond the funded project period. The degree to which cooperatives used these seven additional strategies depended on the robustness of existing HIT infrastructure in their region. For example, one cooperative used all seven strategies listed in Table 2, which required significant time and investment. This cooperative hired HIT experts to connect practices’ electronic health record (EHR) systems with external registries (use data warehousing techniques) and to develop dashboards—available through centralized portals (develop and organize quality monitoring systems)—to provide practices with the quality reports needed to support audit and provide feedback functions. At times, HIT experts (or an HIT-trained facilitator) worked with practices to change how practice members documented information from appointments and other sources within their EHR to ensure valid and complete data were extracted (change records systems). This step was necessary to deliver good quality data to a centralized data warehouse, which could then supply the application and cleaned data needed to generate quality reports for practices. This was necessary because many practices did not have the capability to generate robust and useable reports within their own setting. The local technical assistance that HIT experts provided extended to helping practices translate information from performance reports into actionable opportunities for improvement and to monitoring progress over time.
Build QI capacity and improve outcomes
Practice facilitators used a wide range of implementation strategies to build QI capacity and improve outcomes (see Table 3). Facilitation was the central implementation strategy used by all cooperatives. Practice facilitators, who were actors in this process, used many different implementation strategies identified by ERIC to promote practice change and to improve capacity for conducting regular QI to improve ABCS outcomes. For example, facilitators assisted practices in engaging clinicians, practice members, and leaders in practice change efforts (identify and prepare champions); organized and facilitated meetings (organize implementation team and team meetings); assessed and offered suggestions for revising clinical workflows (assess and redesign workflow); assessed HIT needs and data set-up (assess for readiness and identify barriers and facilitators); discussed and identified an improvement plan (develop an implementation blueprint); assisted with the change process and supported the change efforts including pre-visit planning and outreach (implementation facilitation); and helped revise plans based upon results of change cycles. As reported above, practice facilitators were also involved in helping set up HIT infrastructure.
Enhance clinician and practice member knowledge
Several implementation strategies were employed to enhance clinician and practice member knowledge of evidence-based guidelines and interventions designed to improve ABCS outcomes, measures to assess outcomes, and QI methods (see Table 4). All seven cooperatives included education implementation strategies (e.g., develop educational materials, distribute educational materials), and individual cooperatives often used more than one. These also included implementation strategies aimed at strengthening peer exchange of best practices to build stronger, sustained, supportive professional learning communities. In addition, we identified one new implementation strategy used by EvidenceNOW cooperatives—create online learning communities. Given the great distances some cooperatives covered to reach practices, online learning techniques were tested to improve connections between practices for distance learning (e.g., online portals with informational resources, discussion boards, webinars).
Build community connections and patient involvement
Multiple implementation strategies were used to build community connections and increase patient involvement in improving ABCS outcomes (see Table 5). Some cooperatives used implementation strategies to create or expand community connections with the goal of better connecting patients, practices, and community resources to improve the ABCS. For example, one cooperative developed resource sharing agreements to forge links between practices and state and county health departments. Five cooperatives worked to engage community resources to build connections between practices and health-related organizations in the communities, including participation in collaborative projects; this approach is not listed among the ERIC strategies and is newly proposed. Two cooperatives employed additional strategies to reach out directly to patients and/or families to promote acceptance of ABCS evidence-based treatments among their patients.
We used the naming and definitions provided by the ERIC taxonomy of implementation strategies to harmonize the language across the EvidenceNOW cooperatives. We expanded on this information by using the descriptive domains recommended by Proctor and colleagues to specify details about how each implementation strategy was actually used. The combination of standardized language from ERIC and specification of what happened when, by whom, under what conditions, and for what reason provided a means by which to harmonize, understand, and compare implementation strategies across cooperatives. This is a necessary foundation for cross-cooperative comparisons in a national evaluation such as ESCALATES. Collectively, cooperatives employed 33 of the ERIC implementation strategies; we identified a number of areas where improvements could be made to the ERIC taxonomy to better align with how implementation happened on the ground within the large-scale EvidenceNOW initiative. In addition, we recommend an alternative way of grouping some strategies to better align them with how they are linked to accomplish real-world goals—for example, the need for several HIT-related strategies (e.g., use data experts) to implement and sustain use of audit and provide feedback.
Cooperatives documented 266 actions mapped to 33 of 73 total ERIC strategies. This number of strategies is consistent with other published studies that have reported a range of 16–59 strategies used in implementation studies [13, 15, 16]. Likely, the range and number of strategies employed in an implementation intervention depend upon the goals and scale of the project. We propose adding three strategies to the ERIC list: engage community resources, create online learning communities, and assess and redesign workflow. Each of these strategies encapsulates distinct actions that were not identified in the existing ERIC taxonomy and is likely to be used in a broad range of other settings. Other researchers have also recommended new strategies: obtain worker feedback about implementation plan and plan for outcome evaluation. Thus, as ERIC is applied in real-world settings, additional strategies may continue to be identified, expanding the ERIC taxonomy.
Audit and provide feedback was a critical implementation strategy used across cooperatives. Audit and provide feedback consists of two distinct yet interrelated concepts: audit, the assessment of performance, often compared to a specific target or benchmark; and feedback, communication of that performance. Data used to measure and assess performance must be accurate and credible [23,24,25]. Many of the individual practices within the EvidenceNOW cooperatives had challenges producing the audit needed for feedback. Published trials typically describe only the feedback function within audit and provide feedback when reporting findings. In a commentary on an updated Cochrane review of audit and provide feedback, the authors list eight characteristics that contribute to variability in effectiveness across studies; all eight characteristics pertain to feedback, with no mention of the role and quality of the audit component. Other publications have mentioned audit functions but do not articulate the importance of reliable, sustainable processes to obtain the audits or provide guidance on the infrastructure needed to obtain valid and meaningful audits. Without valid and reliable audits, it is challenging, if not impossible, to provide feedback, ultimately impacting the capability to reliably identify and monitor the progress of QI efforts. Furthermore, pay-for-performance initiatives, such as the Centers for Medicare and Medicaid Services Quality Payment Program, necessitate accurate reporting of clinical quality measures. Thus, there is increasing pressure on primary care practices to generate reliable data in a sustainable manner.
The EvidenceNOW cooperatives described seven strategies that were needed in differing combinations, depending on the local context, to build the HIT infrastructure necessary to support audit and provide feedback and then to engage in data-informed QI to improve ABCS outcomes. In the ERIC taxonomy, these seven strategies are spread across four different clusters. We group these HIT strategies based on their shared justification and expected outcomes, as this better reflects how these strategies may need to be sequenced to accomplish the intended outcome. Five of the seven strategies to build HIT infrastructure were rated as low-feasibility within the ERIC taxonomy, largely because of the difficulty and cost of building HIT infrastructure. Because all but one of the cooperatives planned to sustain audit and provide feedback functionality beyond this initiative, and because of the initiative’s focus and substantial funding, EvidenceNOW makes a unique contribution to the ERIC taxonomy by providing insight into what is needed to build the infrastructure to support audit and provide feedback functions.
A second clear grouping of strategies comprises those aimed at helping practices build QI capacity and improve ABCS outcomes, supported foundationally by implementation facilitation, the core strategy described by all seven cooperatives. Implementation facilitation was used to support practices in accomplishing their QI goals on the path to improving ABCS outcomes; it involves performing interrelated and complex roles and skillfully applying diverse strategies in a flexible and dynamic manner to meet the local needs and priorities of each primary care practice [27,28,29,30,31]. Thus, it is both a role (facilitator) and a strategy (implementation facilitation). In EvidenceNOW, practice facilitators provided nearly all of the external support to practices. The facilitators, including facilitators with HIT skills, were responsible for performing, either solely or in conjunction with other actors, 27 of the 33 strategies, spanning all four overarching functions within EvidenceNOW. Using this broad array of strategies requires diverse and sophisticated skills: interpersonal (including emotional intelligence), technical (including adept use of EHR platforms), organizational, communication, leadership, and pedagogical [29, 32, 33]. The broad scope of strategies used by facilitators highlights the challenge and complexity of facilitation and the critical role facilitators play in supporting practices in implementing evidence into practice and building capacity for QI.
Regrouping the ERIC strategies from the current nine clusters to a clustering that is more pragmatic may make the ERIC taxonomy easier for practitioners to apply and might be helpful in implementation planning. We intentionally developed these groupings to better reflect how strategies were employed across the diverse contexts of EvidenceNOW. Leeman and colleagues have also proposed an alternative organizing structure for ERIC, proposing five groups based upon the actor and target for each strategy. They suggest that this classification approach, grounded in relevant theory, may aid in understanding the mechanisms by which strategies bring about change. We suggest that the pragmatic grouping we developed may be useful to implementers and recognize the value that analysis of the practical applications of ERIC can have for informing the refinement of this taxonomy for different audiences and users.
In our analysis, we further specified each cooperative’s implementation strategies following Proctor and colleagues’ recommendations. Specifying the actor highlighted the broad array of strategies performed by the practice facilitators. A priori specification of dose and temporality, however, was less useful for EvidenceNOW. Most strategies were designed to be used “as needed” in response to the local needs of practices; this responsiveness was a unifying theme across the cooperatives, and it made specifying these dimensions for each individual strategy unnecessary. The cooperatives did track the activities of their facilitators at varying levels of detail; analysis of these data is outside the scope of the current work. Other researchers have highlighted the challenges with specifying dose: it is often resource-intensive to specify and track because of its ambiguity [12, 15]. However, a priori specification of temporality can be useful in guiding the timing of specific implementation strategies within a more structured multi-site implementation, as highlighted by Huynh and colleagues.
Data were collected early in the cooperatives’ implementation phase; they reflect strategies that were included, or had just started to be employed, during the implementation phase of each cooperative’s large-scale multi-component intervention. Some cooperatives may have under-reported their strategies because some strategies were tacit or intuitive and thus not explicitly reported. This limitation was mitigated through online diary postings by cooperative team members, including practice facilitators, and through early site visits where we observed practice facilitators and others delivering implementation strategies, rather than relying exclusively on cooperatives’ self-reported data. The cooperatives did track the dose and temporality of strategies used (i.e., when use of a strategy started and ended), but these data were nearly impossible to harmonize across the cooperatives because of the varying levels of detail and inconsistencies in how dose and temporality were tracked. It is important to note that this work includes strategies that cooperatives specified to provide external support to primary care practices; it does not include micro-level strategies used within individual practices.
Based on empirical experiences across the seven cooperatives participating in the AHRQ-funded large-scale EvidenceNOW initiative, we recommend refinements to the ERIC list of strategies that, if adopted, would better align ERIC with real-world implementation needs and improve its utility for implementers and researchers. Combined use of the refined ERIC taxonomy and the specifications suggested by Proctor et al. adds transparency and thus promotes replicability by identifying, creating common nomenclature for, and harmonizing the potentially diverse array of implementation strategies used in large multi-site dissemination and implementation initiatives such as EvidenceNOW.
Abbreviations
ABCS: Aspirin use for high-risk patients, Blood pressure control, Cholesterol management, Smoking cessation counseling
AHRQ: Agency for Healthcare Research and Quality
EHR: Electronic health record
ERIC: Expert Recommendations for Implementing Change
ESCALATES: Evaluating System Change to Advance Learning and Take Evidence to Scale, a national evaluation of EvidenceNOW
HIT: Health information technology
References
Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.
Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.
Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43(3):337–50.
Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
Harvey G, Kitson A. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implement Sci. 2016;11:33.
May CR, Mair F, Finch T, MacFarlane A, Dowrick C, Treweek S, et al. Development of a theory of implementation and integration: normalization process theory. Implement Sci. 2009;4:29.
Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf. 2015;24(3):228–38.
Foy R, Ovretveit J, Shekelle PG, Pronovost PJ, Taylor SL, Dy S, et al. The role of theory in research to develop and evaluate the implementation of patient safety practices. BMJ Qual Saf. 2011;20(5):453–9.
Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69(2):123–57.
Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.
Waltz TJ, Powell BJ, Matthieu MM, Damschroder LJ, Chinman MJ, Smith JL, et al. Use of concept mapping to characterize relationships among implementation strategies and assess their feasibility and importance: results from the Expert Recommendations for Implementing Change (ERIC) study. Implement Sci. 2015;10:109.
Boyd M, Powell B, Endicott D, Lewis C. A method for tracking implementation strategies: an exemplar implementing measurement-based care in community behavioral health clinics. Behav Ther. 2018;49(4):525–37.
Bunger AC, Powell BJ, Robertson HA, MacDowell H, Birken SA, Shea C. Tracking implementation strategies: a description of a practical approach and early findings. Health Res Policy Syst. 2017;15(1):15.
Gold R, Bunce AE, Cohen DJ, Hollombe C, Nelson CA, Proctor EK, et al. Reporting on the strategies needed to implement proven interventions: an example from a “real-world” cross-setting implementation study. Mayo Clin Proc. 2016;91(8):1074–83.
Huynh AK, Hamilton AB, Farmer MM, Bean-Mayberry B, Stirman SW, Moin T, et al. A pragmatic approach to guide implementation evaluation research: strategy mapping for complex interventions. Front Public Health. 2018;6:134.
Rogal SS, Yakovchenko V, Waltz TJ, Powell BJ, Kirchner JE, Proctor EK, et al. The association between implementation strategy use and the uptake of hepatitis C treatment in a national sample. Implement Sci. 2017;12(1):60.
Agency for Healthcare Research and Quality: EvidenceNOW: Advancing Heart Health in Primary Care. https://www.ahrq.gov/evidencenow. Accessed 4 Jan 2019.
Accelerating the Dissemination and Implementation of PCOR Findings into Primary Care Practice (R18). Rockville: Agency for Healthcare Research and Quality. https://grants.nih.gov/grants/guide/rfa-files/RFA-HS-14-008.html. Updated December 2014. Accessed 4 Jan 2019.
Cohen DJ, Balasubramanian BA, Gordon L, Marino M, Ono S, Solberg LI, et al. A national evaluation of a dissemination and implementation initiative to enhance primary care practice capacity and improve cardiovascular disease care: the ESCALATES study protocol. Implement Sci. 2016;11(1):86.
Ono SS, Crabtree BF, Hemler JR, Balasubramanian BA, Edwards ST, Green LA, et al. Taking innovation to scale in primary care practices: the functions of health care extension. Health Aff (Millwood). 2018;37(2):222–30.
Cohen D, Leviton L, Isaacson N, Tallia A, Crabtree B. Online diaries for qualitative evaluation: gaining real-time insights. Am J Eval. 2006;27:163–84.
Miller WL, Crabtree BF. The dance of interpretation. In: Crabtree BF, Miller WL, editors. Doing qualitative research. 2nd ed. Thousand Oaks: Sage Publications; 1999. p. 127–43.
Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;13(6):CD000259.
Ivers NM, Grimshaw JM, Jamtvedt G, Flottorp S, O'Brien MA, French SD, et al. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med. 2014;29(11):1534–41.
Ivers NM, Sales A, Colquhoun H, Michie S, Foy R, Francis JJ, et al. No more ‘business as usual’ with audit and feedback interventions: towards an agenda for a reinvigorated intervention. Implement Sci. 2014;9:14.
Brehaut JC, Colquhoun HL, Eva KW, Carroll K, Sales A, Michie S, et al. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med. 2016;164(6):435–41.
Berta W, Cranley L, Dearing JW, Dogherty EJ, Squires JE, Estabrooks CA. Why (we think) facilitation works: insights from organizational learning theory. Implement Sci. 2015;10:141.
Dogherty EJ, Harrison MB, Graham ID. Facilitation as a role and process in achieving evidence-based practice in nursing: a focused review of concept and meaning. Worldviews Evid-Based Nurs. 2010;7(2):76–89.
Lessard S, Bareil C, Lalonde L, Duhamel F, Hudon E, Goudreau J, et al. External facilitators and interprofessional facilitation teams: a qualitative study of their roles in supporting practice change. Implement Sci. 2016;11:97.
Nagykaldi Z, Mold JW, Aspy CB. Practice facilitators: a review of the literature. Fam Med. 2005;37(8):581–8.
Taylor EF, Machta RM, Meyers DS, Genevro J, Peikes DN. Enhancing the primary care team to provide redesigned care: the roles of practice facilitators and care managers. Ann Fam Med. 2013;11(1):80–3.
Grumbach K, Bainbridge E, Bodenheimer T. Facilitating improvement in primary care: the promise of practice coaching. Issue Brief Commonwealth Fund. 2012;15:1-14.
Harvey G, Loftus-Hills A, Rycroft-Malone J, Titchen A, Kitson A, McCormack B, et al. Getting evidence into practice: the role and function of facilitation. J Adv Nurs. 2002;37(6):577–88.
Leeman J, Birken SA, Powell BJ, Rohweder C, Shea CM. Beyond “implementation strategies”: classifying the full range of strategies used in implementation science and practice. Implement Sci. 2017;12(1):125.
Acknowledgements
We are extremely grateful to all of the EvidenceNOW cooperatives who made this work possible. In addition, the entire ESCALATES study team willingly contributed their thoughtful feedback to this manuscript. We are particularly thankful for the extra support we received from Benjamin Crabtree, PhD; Leif Solberg, MD; William Miller, MD, MA; Donna Shelley, MD, MPH; and Claire Diener, BA, as well as for editorial assistance from Amanda Delzer Hill, BA.
Funding
This research was supported by a grant from the Agency for Healthcare Research and Quality, 1R01HS023940-01 (PI: Cohen).
Availability of data and materials
The datasets generated and/or analysed during the current study are not publicly available because the study is still in progress; data will not be publicly available until after the national evaluation is complete, but may be available from the corresponding author upon reasonable request.
Ethics approval and consent to participate
The study was approved by the Oregon Health & Science University Institutional Review Board. Participants provided informed consent prior to participation.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Perry, C.K., Damschroder, L.J., Hemler, J.R. et al. Specifying and comparing implementation strategies across seven large implementation interventions: a practical application of theory. Implementation Sci 14, 32 (2019). https://doi.org/10.1186/s13012-019-0876-4