Spread tools: a systematic review of components, uptake, and effectiveness of quality improvement toolkits

Abstract

Background

The objective was to conduct a systematic review of toolkit evaluations intended to spread interventions to improve healthcare quality. We aimed to determine the components, uptake, and effectiveness of publicly available toolkits.

Methods

We searched PubMed, CINAHL, and the Web of Science from 2005 to May 2018 for evaluations of publicly available toolkits, used a forward search of known toolkits, screened references, and contacted topic experts. Two independent reviewers screened publications for inclusion. One reviewer abstracted data and appraised the studies, checked by a second reviewer; reviewers resolved disagreements through discussion. Findings, summarized in comprehensive evidence tables and a narrative synthesis, addressed uptake and utility, procedural and organizational outcomes, provider outcomes, and patient outcomes.

Results

In total, 77 studies evaluating 72 toolkits met inclusion criteria. Toolkits addressed a variety of quality improvement approaches and focused on clinical topics such as weight management, fall prevention, vaccination, hospital-acquired infections, pain management, and patient safety. Most toolkits included introductory and implementation material (e.g., research summaries) and healthcare provider tools (e.g., care plans), and two-thirds included material for patients (e.g., information leaflets). Pre-post studies were most common (55%); 10% were single hospital evaluations and the number of participating staff ranged from 17 to 704. Uptake data were limited and toolkit uptake was highly variable. Studies generally indicated high satisfaction with toolkits, but the perceived usefulness of individual tools varied. Across studies, 57% reported on adherence to clinical procedures and toolkit effects were positive. Provider data were reported in 40% of studies but were primarily self-reported changes. Only 29% reported patient data and, overall, results from robust study designs are missing from the evidence base.

Conclusions

The review documents publicly available toolkits and their components. Available uptake data are limited but indicate variability. High satisfaction with toolkits can be achieved but the usefulness of individual tools may vary. The existing evidence base on the effectiveness of toolkits remains limited. While emerging evidence indicates positive effects on clinical processes, more research on toolkit value and what affects it is needed, including linking toolkits to objective provider behavior measures and patient outcomes.

Trial Registration

PROSPERO registration number: PROSPERO 2014:CRD42014013930.

Background

Diffusion of innovations is a complex process. While research studies continue to demonstrate successful interventions to improve healthcare, their dissemination is slow [1,2,3]. Translation of proof-of-concept studies and adoption of interventions shown to be effective in research into routine clinical practice are delayed or not achieved at all.

In recent years, a number of organizations have developed “toolkits” for healthcare quality improvement [4]. Toolkits are resource and tool collections designed to facilitate spread across settings and organizations and to ease the uptake and implementation of interventions or intervention bundles and practices. They are a resource for documentation of interventions, for implementation of successful interventions, and for scaling up initiatives developed in pilot or demonstration sites into large-scale rollouts. Toolkits may include a variety of materials useful to organizations to help introduce an intervention, practical tools to help incorporate best practices into routine care such as pocket cards for healthcare providers, or patient education materials. There is currently no definition of nor standard approach to toolkit contents or formats.

A variety of healthcare research agencies publish toolkits. The US Agency for Healthcare Research and Quality (AHRQ) alone has published a large number, on topics ranging from allergy and immunologic care to urologic care. The AHRQ Healthcare Innovations Exchange website has tracked the development of tools or toolkits to improve quality and reduce disparities (website maintenance ended in 2017). Users may browse the resources online or download them free of charge. Little is known, however, about uptake of published toolkits. While exact copying of the intervention is possible, a process of re-invention in the new context is also likely to occur. Re-invention may change the intervention to some extent during the diffusion process as it transitions from the developer to the adopter, with or without the help of a toolkit [5], potentially resulting in decreased but still significant effort for toolkit adaptation [6]. To date, we know very little about successful components that may be useful across toolkits, about the toolkit adoption process, or about what makes toolkits easier or harder to adopt.

Furthermore, little is known about the effectiveness of published toolkits. A scoping review describing toolkits assembled for individual research projects concluded that the toolkits often did not specify the evidence base from which they drew, and their effectiveness as a knowledge translation strategy was rarely assessed [1, 7]. The effectiveness of a toolkit is likely to depend on its quality, the effectiveness of the intervention, and the setting characteristics. However, for published toolkits, an additional consideration is apparent. Toolkits applied in new settings may not be as effective as seen in the original implementation of the intervention bundle that led to the development of the toolkit. Potential reasons include diminished healthcare provider motivation, reduced staff buy-in, or other aspects of low readiness (e.g., healthcare providers were not instrumental in initiating and shaping the interventions).

Our objective was to conduct a systematic review on the spread of interventions intended to improve healthcare quality through toolkits. This systematic review addresses the following key questions:

  • Key question 1: What are the components of published quality improvement toolkits?

  • Key question 2: What is the uptake and utility of published quality improvement toolkits?

  • Key question 3: What is the effectiveness of published quality improvement toolkits?

The review explores the types of tools included in toolkits, measures and results that describe the uptake and utility, and the effectiveness of published toolkits to inform users and developers of toolkits.

Methods

We registered the review in PROSPERO, registration number PROSPERO 2014:CRD42014013930. The reporting follows the PRISMA guidelines (see Additional file 1).

Searches

We searched the databases PubMed, CINAHL, and Web of Science for evaluations of toolkits in May 2018. The PubMed search strategy is given in full in Additional file 2. The strategy searched for the term “toolkit” in the title, abstract, keywords, or full text of the publication (Web of Science only). We did not limit the search to publications using the MeSH term “diffusion of innovation” because the pilot search strategy showed that known toolkit evaluations were not systematically tagged with this term. We limited the search to English-language citations published since 2005 to identify current toolkits readily applicable to US settings.

In addition, we searched resources from nine organizations dedicated to healthcare improvement to find published toolkits: AHRQ, World Health Organization (WHO), Institute for Healthcare Improvement (IHI), Robert Wood Johnson Foundation (RWJF), Association of periOperative Registered Nurses (AORN), Emergency Care Research Institute (ECRI), Centers for Disease Control and Prevention (CDC), Centers for Medicare and Medicaid Services (CMS), and Department of Veterans Affairs (VA). We also screened the category “QualityTool” in AHRQ’s database of innovations. A “forward search” identified any publication that had cited the titles of the toolkits we located. We screened included studies and relevant reviews and contacted content experts to identify additional relevant publications.

Study inclusion and exclusion criteria

Two independent reviewers screened titles and abstracts to avoid errors and bias. We obtained publications deemed potentially relevant by at least one reviewer as full text. Full text publications had to meet the outlined criteria to be eligible for inclusion in the review. Discrepancies were resolved through discussion in the review team. In the absence of a universally agreed definition of a toolkit, the project team developed the working definition outlined below.

  • Participants and condition being studied: Publications evaluating toolkits in healthcare delivery organizations were eligible. The review was not limited to toolkits targeting specific clinical conditions, but toolkits had to be aimed at healthcare. Toolkits aimed primarily at professions other than healthcare providers (e.g., policy makers in non-healthcare delivery settings), or aimed at students not yet involved in healthcare delivery (e.g., nursing students), were excluded. Toolkits aimed only at patients, such as patient education material or patient self-management programs, were excluded.

  • Intervention and toolkit definition: Studies evaluating the use of toolkits designed to aid healthcare delivery organizations were eligible. A “toolkit” was defined as an intervention package, or set of tools. Toolkits had to be aimed at quality improvement (an effort to change/improve the clinical structure, process, and/or outcomes of care by means of an organizational or structural change) [8] of healthcare; toolkits addressing research capacity or workforce issues were excluded. Test batteries, image processing protocols, or computer software termed “toolkit” were not eligible. Toolkits had to be either publicly or commercially available.

  • Comparator/study design: Studies evaluating the use of existing toolkits were eligible. Studies supporting the development of toolkits and reporting on earlier versions rather than the currently available toolkits were excluded. Controlled and uncontrolled studies with historic (e.g., pre-post studies) or concurrent comparators (e.g., randomized controlled trials, RCTs) were eligible. Comparators could include active controls (a different intervention) or passive controls (e.g., status before the introduction of the toolkit).

  • Outcome: Publications reporting on patient, provider, or organizational findings were eligible. Studies had to report on structured evaluations (e.g., surveys); informal or anecdotal evaluation statements were not sufficient.

  • Timing: To capture current and relevant toolkits developed in accordance with current standards and applicable material, evaluated toolkits must have been published in 2005 or more recently, or be still available.

  • Setting: Implementations of toolkits were included regardless of the setting, but the original toolkits had to be aimed at quality improvement in health care. Toolkits developed for settings other than healthcare delivery organizations, such as schools or laboratories, as well as toolkits primarily focusing on health system improvements in conflict zones or disrupted healthcare systems, were excluded.

We consolidated publications reporting on the same sample of participants. Evaluations published in academic journals as well as gray literature (conference abstracts, dissertations) were eligible. The literature flow diagram is shown in Fig. 1.

Fig. 1
figure1

Literature flow diagram

Potential effect modifiers and reasons for heterogeneity

The review included a large number of study designs and study outcomes to allow a comprehensive overview of the available evidence on toolkits. In particular, the study design (e.g., comparative studies, post-only study) and the study outcomes (e.g., feasibility, patient health outcome) were sources of heterogeneity across studies.

Data extraction strategy

One reviewer abstracted and a second experienced systematic reviewer checked the data; disagreements were resolved by discussion. We determined categories based on the initial review of publications and used a pilot-tested data extraction form to ensure standardized data abstraction.

We extracted the toolkit name, the developing organization, the general area of application, the toolkit components, and the type of availability (public or commercial). In addition, we extracted information on the evaluation, including study design, participants, setting, and additional non-toolkit components.

We documented the uptake and adherence to toolkit components (e.g., number of downloaded toolkits); utility and feasibility; healthcare provider measures including knowledge, attitudes, and barriers; procedural, structural, and organizational changes (e.g., number of ordered tests); and patient outcomes including patient health outcomes and patient-reported satisfaction. We added effectiveness results from the development phase of the toolkit where available.

Study quality assessment

We used the Quality Improvement Minimum Quality Criteria Set (QI-MQCS) to assess studies [9]. The QI-MQCS is a 16-item scale designed for critical appraisal of quality improvement intervention publications; the domains are described in Additional file 2. The synthesis for the primary outcome integrates the appraisal findings; results for all included studies are documented in Additional file 2.

Data synthesis and presentation

We documented the included studies in an evidence table (with supporting tables in the appendix) and summarized evaluation results in a narrative synthesis. Given the diversity of the identified studies, the quality of evidence assessment was limited to assessing inconsistency in results across studies and study limitations of identified studies. The synthesis followed the key questions. Key question 1 was organized by the developed framework of components. Key question 2 was organized by outcome category: uptake and utility. Key question 3 was organized by provider outcomes, procedure/organizational results, and patient outcomes. The primary outcome of the review was patient health outcomes. The synthesis differentiated evidence from studies with concurrent versus historic comparators. For each toolkit, the evaluation of the intervention spread (i.e., using an available toolkit to disseminate practices and tools included in the toolkit) was also contrasted with initial results obtained in the organization where the toolkit had been first developed (where information was available).

Results

Review statistics

The electronic search for “toolkit” publications and a forward search for 156 specific toolkits (see Additional file 2) published by AHRQ, CMS, WHO, IHI, RWJF, AORN, ECRI, CDC, VA, or on the AHRQ Innovation Exchange identified 5209 citations. We obtained 661 citations as full text articles; of these, 77 studies were identified that met inclusion criteria (Fig. 1).

Study characteristics

Four included studies evaluated groups randomized to an intervention or a control condition. Six studies provided a comparison to concurrent (non-randomized) control groups that did not participate in toolkit implementation. Forty-two studies presented pre- and post-intervention data for at least one outcome but did not include a concurrent comparator to account for secular trends independent of the intervention. Twenty-five studies reported only post-intervention data and provided no comparison to the status before or without the toolkit. Assessment methods and reported details varied widely and included online and written staff surveys, administrative data, medical chart review data, and web statistics.

The range of healthcare organizations involved in the evaluation varied widely from single hospital evaluations (10%) to studies with data on 325 institutions; and 22% of studies, often those that reported on web download statistics, did not report on the number of institutions. The number of participating staff members, often healthcare providers asked to use tools contained in the toolkit in clinical practice, ranged from 17 to 704, but the number of participants was only reported in 47% of studies. Of those studies reporting patient data, 59% reported the number of patients the data were based on; the number varied and ranged from 43 to 337,630.

Sixty-nine percent of included evaluations described elements in addition to the toolkit such as workshops and presentations to introduce the toolkit or the intervention promoted in the toolkit. The developer of the toolkit was part of the evaluation of the toolkit in more than half of the included studies (59%); toolkits were evaluated by independent study groups in 27% of studies (14% unclear).

Most evaluations were conducted in the USA (75%); other countries contributing to the study pool were Canada, the UK, Australia, and Mongolia, plus one international evaluation spanning multiple countries. In 34% of studies, the evaluation setting was a hospital; in 32%, toolkits were evaluated in primary care facilities; other organizations included community health centers, ambulatory care clinics, long-term care facilities, specialty clinics (e.g., a multiple sclerosis clinic), and a hospice, and in some cases the setting characteristics were not reported.

The details of the included studies are shown in the evidence table (Table 1).

Table 1 Evidence table

Quality assessment

As a critical appraisal tool, the QI-MQCS targets the informational quality of QI studies and informs decisions about applicability of results to other settings. The number of criteria met per study ranged from 3 to 14 (mean 9.78, SD 3.04). Since the objective of this systematic review was to assess the spread of QI interventions through the use of toolkits, 100% of included publications/studies addressed Spread and described the ability of the intervention to be replicated in other settings.

In addition, for ten of the 16 domains, more than 50% of the included publications met the minimum QI-MQCS criteria. The top five described aspects related to study initiation and included Organization motivation (description of the organization reason, problem, or motivation for the intervention, 93%); Intervention rationale (description of the rationale linking the intervention to the effects, 88%); Intervention (description of the processes, strategies, content, and means of achieving the effects associated with the intervention and considered to be permanent as opposed to activities considered to be temporary for the purpose of introducing the intervention, 70%); Implementation (description of the approach to designing and/or introducing the intervention, 81%); and Data sources (documentation of how data were obtained and whether the primary outcome was defined, 82%). The other five domains, for which more than 50% of studies met minimum QI-MQCS criteria, included Organizational characteristics (description of setting demographics and basic characteristics, 68%); Timing (clear outline of the timeline for intervention implementation and evaluation so that follow-up time can be assessed, 60%); Adherence/fidelity (level of compliance with the intervention, 57%); Organizational readiness (description of QI culture and resources available for the intervention, 64%); and Limitations (outline of limitations and the quality of the interpretation of findings, 68%).

The five domains, for which less than 50% of studies met minimum QI-MQCS criteria, addressed evaluation of results and included Study design (documentation of the evaluation approach with respect to study design, 36%); Comparator (description of the control condition against which the intervention was evaluated, 26%); Health outcomes (inclusion of patient health outcomes in the evaluation, 17%); Penetration/reach (reporting of the proportion of eligible units that participated in the intervention, 29%); and Sustainability (information on the potential for maintaining or sustaining the intervention with or without additional resources, 40%).

Key question 1: what are common elements of quality improvement toolkits?

The evaluated toolkits addressed a variety of quality improvement approaches. Most focused on a specific clinical topic rather than general healthcare provider behaviors. Seven toolkits addressed weight management; four toolkits, evaluated in five studies, addressed fall prevention; three addressed emergency preparedness; three, patient safety; three, perinatal care; and two (evaluated in three studies) were aimed at vaccination. We identified two toolkits each addressing the topics asthma management, cancer screening, elective delivery, health literacy, hospital-acquired infections, hospital readmission, medical errors, mental health, pain management, screening, smoking cessation, and substance use. The other toolkits addressed antimicrobial stewardship, autism communication, brain injury symptom management, cancer care, cardiac care, care quality, clinical decision making for critical care, depression care, diabetes care, end of life care, geriatric care, heart failure, hepatitis C care, kidney disease care, medication management, multiple sclerosis symptom management, newborn screening, nursing best practices, obstetric care, parental education, pediatric preventive care, psychotherapy decision support, staff trauma support, and wrong site surgery.

The toolkits varied in length and complexity and included a large variety of elements. Most toolkits were downloadable online and free of charge. The toolkit format was often a consolidated text document with written material. Some toolkits used a website with downloadable individual tools and links to additional online resources. Some toolkits included other material such as alcohol hand rubs or peak flow meters, in branded packages, and eight toolkits included a software program. Table 1 includes the toolkit components; further details, including the link to a downloadable copy of the toolkit, can be found in Additional file 2.

Implementation toolkit elements

As the summary Table 2 documents, the majority of the 72 toolkits evaluated in 77 studies included material designed to help with the introduction and implementation of the specific intervention promoted in the toolkit. This typically included educational material such as research summaries, supporting evidence for healthcare interventions, and further reading lists. Some toolkits included downloadable slide decks for presentations to staff, links to online videos to introduce the clinical issue or the intervention, information on achieving change in organizations such as action plan templates, institutional self-assessment tools, templates to collect performance data to facilitate audits and research, templates or actual material to raise awareness such as posters, and many included practical “implementation tips.” As the evidence table shows, many toolkits included unique additional practical tools such as letters to management staff to raise awareness; briefing notes; detailed material for training courses (e.g., daily timetable or teach-back technique) to facilitate staff education; and other tools useful for staff such as a list of frequently asked questions, cost calculators, worksheets, or example forms.

Table 2 Toolkit element and data summary

Provider toolkit elements

Tools that targeted healthcare providers specifically were also included in most toolkits. Tools encompassed care plans, treatment and management algorithms, decision support, or clinical practice guidelines. In addition, many toolkits included assessment scales that providers could apply in clinical practice. Some toolkits also included pocket cards for clinicians, checklists to be used in clinical consultations, written scripts for healthcare providers, practice demonstration videos for providers to perform the intervention, and ready-to-use forms for patient care. A few toolkits included additional tools such as body mass index (BMI) calculators, spirometers, alcohol hand rubs, or prescription pads (see Table 1).

Patient toolkit elements

As the evidence and summary tables show, about two-thirds of toolkits included material for direct dissemination to patients. In the large majority, these were informational handouts or more comprehensive educational materials such as treatment brochures. Some toolkits included bilingual material and several contained posters and ward notices directed at patients. Other, less common resources directly targeting patients or caregivers included patient self-assessment tools, checklists (such as for appointments), activity journals and diaries, links to online resources for patients, educational videos, or peak flow meters for patients.

Key question 2: what is the uptake and utility of published quality improvement toolkits?

A majority of included studies reported on the uptake and/or utility of the evaluated toolkit.

Uptake

Fifty-five percent of studies reported information on uptake of, use in practice of, and adherence to the toolkit or its components, but the type and informational value of the reported data varied widely.

Several studies reported download statistics for online tools or requests for the toolkit [11, 15, 29,30,31, 67, 88, 90], but most reported only the total number of downloads at the time of publication, with no denominator or further detail. Three studies that reported a point of reference stated that 2000 toolkit copies were downloaded in 7 months [11], that 725 copies had been downloaded in 1 year [15], or that the toolkit had been accessed by 8163 practitioners over 255 days [67]. Some studies tracked which or how many individual tools included in the toolkit had been adopted by the end users [21, 24, 25, 29, 34, 35, 40, 46, 51, 56, 61, 64, 69, 75, 76, 78, 81, 88]. The evidence table shows variable uptake, with no studies reporting full uptake of the toolkit. Uptake of components ranged from 10% (fitness prescription pads) [21] to 87% (recall/reminder system installed) [24].

Five studies documented staff awareness of the toolkit and whether the distributed toolkit had been reviewed by eligible users; the studies with numerical results reported high, but not perfect, review rates (81–86%) [13, 29, 56, 62, 68]. Two studies reported on the proportion of eligible participating sites that adopted the toolkit; results ranged from 53 to 98% [14, 19]. Several studies reported on adoption of the intervention promoted in the toolkit: 98.7% of VA facilities had MOVE! programs in place [37]; 10 to 15% of teams were unable to get beyond the planning stage and 50 to 65% implemented the medical error prevention practices partially or fully [27]; 67% of provinces and 53% of hospitals implemented an emergency preparedness program [14]; 7/10 sites successfully implemented a discharge program as planned [78]; one study indicated that all components of a protocol to prevent hospital-acquired infections had been implemented (though some had already been in place before the project) [40]; one reported that 54% of hospitals completed 14 of 17 intervention bundle elements [77]; in one, all teams had implemented best practices in all toolkit categories [25]; one reported varying results across intervention components (e.g., 80% identification of children with special health care needs) [24]; in another, all sites reported using at least 5/14 strategies to increase vaccination rates [51]; and one study indicated that each participating clinic implemented a specific weight management program strategy in all child office encounters, not just wellness visits [64]. Individual studies reported the proportion of adopting hospitals out of those approached [19, 27, 30, 76], tracked the number of sites completing the toolkit evaluation project [38, 76, 85], surveyed how clinicians used the tools [22], or recorded which sites continued to use the toolkit after the implementation period, with or without substantial changes [10, 50].

Utility

Half of the included studies reported on the utility, feasibility, or acceptability of the toolkit, satisfaction with it, or barriers to using the toolkit, its components, or the intervention promoted in the toolkit.

Reported satisfaction with the toolkit was generally high. One study reported that 50% of respondents found the toolkit information “some or very much helpful” [32], another reported 75% of respondents found the toolkit “extremely or very helpful” [15], one study reported ratings of “being helpful to staff” that ranged between 73 and 92% [33], one study documented that clinicians were “extremely satisfied or satisfied” in 11/11 discussions [70], in one study 86% of respondents agreed that the toolkit was helpful in clinical decision-making [62], and another study reported that 85% of staff who had read the toolkit found it helpful [29]. One study reported that most staff at three out of four sites believed the toolkit improved efficiency for adult vaccinations [51], one study found that all participants were “very satisfied or satisfied” with the overall usefulness of the toolkit [17], and one highlighted that the toolkit enabled comprehensive disease management and improved overall patient care [43]. In another study, most staff and stakeholders had described the toolkit as a useful resource [69], and three studies indicated that feedback was “positive” [22, 23, 63]. Two studies reported mixed feedback [67, 79]: while most providers found the toolkit moderately or very useful, several noted that they already were doing what was recommended [79]. One study found that the perceived helpfulness of the toolkit decreased over time after implementation of the intervention [89].

For feasibility, ten studies indicated feasibility problems with the interventions or best practices included in the toolkit [13, 21, 25, 27, 34, 59, 73, 84,85,86]. For example, a quarter of participants in one study reported that systematic screening for obesity was not feasible in clinical practice [21]. Up to 91% of teams found implementing the recommended practices difficult in another study [27], and one study highlighted that 54% of users reported that incorporating health literacy techniques added time to the patient’s visit, although all thought the time was worthwhile [34].

Several studies ranked or rated individual toolkit components and found variation in the utility of different components [17, 26, 31, 35, 49, 63, 65, 85, 89]. For example, one study reported that 29% of respondents found ICD codes and reference articles the most useful tools in a pediatric obesity toolkit [35]. One study reported a wide range of perceived usefulness across components (cost calculator 10%, patient health questionnaire 68%) [31], one study reported that all participants were satisfied with the algorithms while only 83% were satisfied with the included office strategies to improve screening [17], one indicated that the provided frameworks for implementation were helpful and that the major success element was alcohol hand rubs [26], and one study reported on videos as the most positively rated component among individual tools [49]. Four studies assessed how to improve the toolkit or which components were missing [31, 39, 62, 67].

Seventeen studies reported on barriers to staff implementing the toolkit [13, 24, 27, 43, 49, 59, 62, 74,75,76, 78, 79, 81, 84,85,86, 88]. Commonly cited barriers included time constraints [21, 24, 43, 59, 62, 65, 85, 86, 88], lack of pertinent personnel [13, 27, 43, 74, 88], cultural or institutional factors [27, 74, 75], limited resources or costs [13, 24, 27, 49, 74, 85, 86], competing demands [65, 75, 86], and dissatisfaction with the toolkit’s content [24, 62]. Some studies explored facilitators as well as barriers, such as support from leadership [59] or whether a component could be implemented quickly and/or easily, especially when the tool or template was immediately available [24].

Individual studies reported ratings across dimensions such as ease of use [41], estimated time spent using the toolkit [48], or which intervention components (e.g., patient partnering) were most difficult to implement [25].

Key question 3: what is the effectiveness of published quality improvement toolkits?

We systematically extracted any information reported on process, provider, and patient effects.

Process effects

More than half of the included studies (57%) reported specific effects on clinical practice such as procedural changes [12,13,14, 16, 19,20,21, 24,25,26, 34,35,36,37,38,39,40, 42, 43, 45,46,47, 51, 52, 55, 56, 58, 59, 61,62,63,64,65,66,67,68,69,70,71, 74, 76, 78, 81,82,83, 85, 87]. In most cases, studies reported on the adherence to procedures suggested in the toolkit such as appointing a pediatric physician coordinator [13], counseling children and their families on weight and healthy lifestyles [86], and documenting symptom assessment for mobility impairment or falls [43]. The evidence table shows the range of findings reported in individual studies.

The randomized controlled trials (RCTs) reported positive results for process outcomes. A Fall TIPS toolkit study reported that patients on the intervention units were more likely to have their fall risk documented (p < .0001) [16]. An evaluation of the America-on-the-Move toolkit reported that control providers provided nutrition counseling to overweight patients in 40 to 49% of visits compared to 30 to 39% of visits for intervention providers, but the statistical significance of the difference was not reported [39]. Intervention practices increased vaccination rates more than controls (p = 0.34) in a study that used the 4-Pillars Toolkit for Increasing Childhood Influenza Immunization [46]. One RCT and five controlled trials did not report procedure outcomes [18, 28, 32, 44, 77, 87]. One controlled trial indicated that the control group missed or only weakly addressed, on average, 3.3 of nine key elements of intensive care unit care, but no significance test was reported [81].

Pre-post studies that compared baseline and follow-up performance and reported a statistical significance test for the difference were generally positive, but there was variation across procedures. The median percentages of patients with asthma using inhaled corticosteroids, patients with an action plan, and patients using spirometry increased statistically significantly after introduction of the Colorado Asthma Toolkit [19]. In another study, performance on quality measures for antenatal steroid administration increased from 77 to 100% (p < .01) [36]. The Fall TIPS toolkit was associated with an increase from 1.7 to 2.0 in the mean number of fall risk assessments completed per day 1 month after implementation (p < .003) [23, 61]. An evaluation of an Acute Postoperative Pain Management Toolkit reported statistically significant improvement in two pain management indicators (patients who had a pain score used to assess pain at rest and movement, and patients with a documented pain management plan) [12]. Compared to baseline, nurses were almost twice as likely to advise smokers to quit (p < .005), and more likely to assess willingness to quit, assist with a quit plan, and recommend the smoking helpline (p < .0001) 6 months after the implementation of a smoking cessation toolkit [83]. One study showed a significant increase (p = .03) in the number of patients reporting a dialogue about weight management [82].

Five pre-post studies with numerical data reported mixed results. The Bright Futures Training Intervention Project toolkit was associated with statistically significant increases in the use of a preventive service prompting system and the proportion of families asked about special health care needs, but not the proportion of children who received a structured developmental assessment [24]. A toolkit to support multiple sclerosis management was associated with some improvements in documented assessments and care plan documentation [43]. A pre-post study evaluating the 4 Pillars Toolkit found different results for the different vaccines and different sites [51]. Medication list but not allergy list accuracy improved after introducing the Ambulatory Patient Safety Toolkit [25]. Another study showed improvements in documentation of BMI percentile (p < .05), education and counseling (p < .05), accurate diagnosis of overweight or obesity (p < .05) but a decrease in documentation of blood pressure readings (p < .05) [64].

Provider effects

Forty percent of the included studies reported data from healthcare providers. Studies did not separate the effects of toolkits from those of other intervention elements when these were present. With some exceptions [18, 21, 26, 32, 34, 43, 48, 50, 60, 63, 65, 66, 68, 70, 73, 76, 78, 88], provider effects were studied using post-only designs, such as asking providers to describe the effects of the toolkit.

The majority of these studies included self-reported provider behavior changes or intentions [15, 17, 18, 22, 26, 30, 31, 33, 48, 50, 65, 68, 69, 72, 73, 75, 76]. Among studies reporting numerical findings, results ranged from 60% of respondents indicating they had somewhat changed their practice after viewing study resources [31] to 95% of providers stating that they would increase use of fecal immunochemical tests for patients ineligible for or refusing colonoscopy [17].

Studies also reported on healthcare provider attitudes [21, 26, 32, 43, 44, 49, 52, 60, 62, 63, 68, 69, 76, 78, 86]. For example, one study reported 76 to 84% of providers indicated that posters made staff think about their hand hygiene [26], one indicated that positive perceptions of the importance and usefulness of body mass index increased [21], one reported increased awareness of multiple sclerosis symptoms [43], one indicated that the impact on patients varied by site [52], and one found no difference in safety perception, culture of safety awareness, sensitivity, and competence behaviors between the toolkit exposed and control groups [32].

Some studies reported on self-reported provider knowledge, confidence, and perceived competence, and results were positive throughout [30, 34, 44, 60, 62, 65, 67–70, 76]. Examples include: 77% of users agreed that their knowledge of health literacy had improved [34]; participants’ ratings of knowledge gain and confidence in geriatric competencies improved [30]; and provider confidence in the ability to provide physical activity and exercise counseling, as well as knowledge about physical activity, improved [44].

Three studies tested provider knowledge. One found no difference in general concussion knowledge between intervention and control groups, but intervention physicians were less likely to recommend next-day return to play after concussion [18]. A congenital heart disease toolkit improved knowledge (pretest average score of 71% improved to 93%, p < .0001) [66], and one study documented that only three of ten knowledge-based questions were answered correctly by more than 85% of participants on the pre-test, whereas all ten questions were answered correctly by at least 95% of participants on the post-test after implementation of a patient safety toolkit [88]. One study reported that adherence to targeted provider behaviors increased significantly for 62% of behaviors, but counselor competence did not improve significantly [50].

Patient effects

We identified 22 studies (29% of all included studies) that reported on patient outcomes, the primary outcome of the review. While some studies reported on patient health [10, 28, 33, 40, 44, 45, 52, 55, 57, 61, 77, 78, 84, 87], others reported on patient satisfaction with the toolkit or individual tools [26, 48, 64, 79, 85], or other patient outcomes such as satisfaction with care processes [60, 66, 86].

None of the RCTs reported on patient outcomes. The studies with concurrent control groups reported mixed results within and across studies. A controlled trial (12/16 QI-MQCS domain criteria met) evaluating the impact of shared decision making supported by a toolkit reported higher asthma quality of life (MD 0.9; CI 0.4, 1.4) and fewer asthma control problems (MD − 0.9; CI − 1.6, − 0.2) in the intervention group [87]. Another controlled trial (13/16 QI-MQCS) found a single counseling appointment using the Diabetes Physical Activity and Exercise Toolkit was not associated with significant changes in physical activity or clinical outcomes compared to standard care [44]. The Guidelines Applied in Practice–Heart Failure Tool Kit was associated with a reduction in the baseline-adjusted 30-day readmission rate but not 30-day mortality comparing the toolkit and a control cohort (7/16 QI-MQCS) [28]. A state perinatal quality collaborative reported that women in hospitals engaged in the initiative experienced a 21% reduction in severe maternal morbidity among hemorrhage patients compared to baseline while the non-participating California hospitals showed no changes (1.2% reduction, n.s.); the collaborative used a toolkit to disseminate the intervention bundle (13/16 QI-MQCS) [77].

Two pre-post studies reported a statistically significant reduction in the incidence rate of hospital-acquired infections. One study (14/16 QI-MQCS) reported a reduction in carbapenemase-producing Enterobacteriaceae outbreaks and no further occurrence of extensively drug-resistant Acinetobacter baumannii after introducing a CDC toolkit and additional safety procedures such as limiting access to rooms and common areas [40]. A study (13/16 QI-MQCS) evaluating the AORN toolkit accompanying the Universal Protocol for Correct Site Surgery reported that after the introduction of the protocol, the rate of wrong site surgery increased initially [33]. A study (3/16 QI-MQCS) evaluating a toolkit on elimination of non-medically indicated (elective) deliveries before 39 weeks gestational age indicated that there were no transfers to the neonatal intensive care unit compared to five transfers pre-intervention (p < .022) for non-medically indicated deliveries between 37/0 and 38/6 pregnancy weeks [55]. A study (13/16 QI-MQCS) evaluating a toolkit-based intervention to reduce central line associated bloodstream infections reported that the rate of infections decreased by 24% (p = .001) [84]. The remaining pre-post studies reported improved patient outcomes for some or all outcomes but the statistical significance was not reported (QI-MQCS assessments ranged from four to 14 domain criteria met) [10, 45, 52, 57, 61, 78].

Comparison of original intervention and toolkit supported effects

For six toolkits, results of the initial intervention that led to the development of the toolkit had been published. However, no definitive comparison between initial intervention and success of spreading the intervention via the toolkit could be achieved due to the paucity of data and differences in study designs and metrics.

A toolkit intervention to reduce central line associated bloodstream infections referred to a published RCT that had established the effectiveness of the interventions for intensive care unit patients. The toolkit intervention established a 24% infection rate reduction, and the authors highlighted that this routine-practice evaluation achieved results comparable to the original trial results (modeled hazard ratio 0.63, 2.1 vs 3.4 isolates per 1000 days, p = .01) [84, 91]. A toolkit for postoperative pain management was based on an initiative that had achieved a 13% increase in preoperative patient education and a 19% increase in patients with at least one documented postoperative pain score [92]. Corresponding results associated with toolkit-based spread showed a 28% increase in patients with pain assessments [12]. An electronic fall prevention toolkit was tested in two studies [16, 23] and results were also available from the development of the toolkit. The intervention was associated with a reduced rate of falls [93], but the RCT evaluating the toolkit-assisted spread did not report on patient outcomes, and it is unclear whether the toolkit can replicate the results in different organizations. An antenatal corticosteroid therapy toolkit was developed as part of a quality care collaborative that reported that the antenatal steroid administration rate increased from 76 to 86% [94]. The results associated with implementation of the later-developed toolkit were 100% performance on state quality measures for antenatal steroid administration compared to 77% at baseline [36]. The Project Re-Engineered Discharge toolkit was associated with a readmission rate reduction of 32% compared to baseline, but the 30-day readmission rate was not reported [45]. The original hospital discharge program had reported reduced hospital utilization within 30 days of discharge in an RCT compared to usual care (30-day readmission rate 0.149 vs 0.207) [95].
The 4 Pillars toolkit for influenza and pneumococcal vaccinations has been evaluated in multiple publications [46, 51]. The development phase of the toolkit has also been documented, but the reported information was limited to areas of improvement that resulted in the final tool [96]. A relapse prevention group counseling toolkit was associated with counselor adherence to toolkit content in 13 of 21 targeted behaviors [50]. Data from the development phase of the toolkit were available but not directly comparable; one study reported significant improvements in content adherence after 3 h of training [97], and the other reported on the acceptability and sustainability of toolkit use [98].

Discussion

There are few methods other than toolkits to document complex healthcare interventions or to support their use outside of initial intervention sites, yet little theoretical or empirical literature addresses toolkit use. We reviewed over a decade of published evaluations of toolkits used as a method for spreading quality improvement interventions for healthcare delivery organizations. This review documents the frequency of key toolkit elements and the effects of using publicly available toolkits. We hope this review will stimulate further thought on use of toolkits, on toolkit evaluation, and on toolkit reporting.

The toolkits and their evaluations included highly variable sets of information. Among toolkit elements, the toolkits we identified most commonly included introductory and implementation information (e.g., educational material for staff) and healthcare provider tools for clinical practice (e.g., care plans); two-thirds included material for patients (e.g., information leaflets). Among evaluation elements, studies most often reported satisfaction with the toolkit and/or ratings of the utility of individual tools; while satisfaction was usually high, usefulness ratings varied. Rates of toolkit uptake across eligible users could provide invaluable information on issues such as ease of adoption, needed toolkit improvements, or equity in terms of making toolkit benefits accessible to all eligible subjects. However, only half of the studies reported on toolkit uptake; these studies typically showed varied uptake between providers and/or settings. The reported information on toolkit uptake also often lacked a denominator or point of reference, such as the time period of tracked downloads, how many providers or sites were eligible, or how the uptake compared to other toolkits. A qualitative study of clinic and community members’ perspectives on intervention toolkits highlighted that information on the use of the toolkit is critical; simply disseminating toolkits does not guarantee their use [99].

We found the existing evidence base on toolkit effectiveness to be very limited despite the substantial number of publications on toolkits. We looked for effectiveness information not only in the searched toolkit publication, but in any related studies of the toolkit. While more than half of the included studies reported on adherence to clinical procedures, only some assessed effects on healthcare providers. In addition, the existing evidence base for healthcare provider effects associated with toolkits focuses on self-reported behavior changes or intentions. While reported results were positive and often indicated substantial improvement, objective tests for behavior changes are largely absent from the literature.

Quality improvement theory emphasizes the importance of completing the intervention and evaluation cycle through an assessment of impacts on patient care and outcomes, but we found few such assessments. Few studies reported on patient outcomes and there is a lack of evaluations showing improved health outcomes to be associated with toolkits. Toolkits are commonly aimed at intervention spread; however, the evidence base for their effectiveness for this purpose is limited. Identified RCTs reported positive results for spread sites; however, the number of high-level evidence studies that allow strong effectiveness conclusions is small. While pre-post assessments tended to be positive, studies with concurrent control groups reported mixed results within and across studies. More evaluations of toolkit effects on patient care and outcomes are needed to determine whether the use of toolkits translates into improvements for patients.

Throughout, study results were often insufficiently reported and the assessed outcomes were very diverse. Furthermore, the identified studies were often not designed to assess the effect of the toolkit per se because the intervention included other components in addition to the toolkit. Stronger study designs for assessing toolkit effectiveness as a method of spread, such as comparisons to the status prior to implementation or to a concurrent control group, would increase the value of toolkit spread studies.

An optimistic review interpretation is that studies of toolkit effectiveness showed no deterioration when the toolkit was applied in new settings. Very few published studies are available that directly address this comparison, however. While some studies described the development of the toolkit as following a successful intervention implementation, very few studies reported numerical results that allowed a direct comparison between the original intervention and the results of facilitating the spread of the intervention through a toolkit.

The reported detail in the included studies varied widely and no study met all of the QI-MQCS criteria, a critical appraisal tool for quality improvement evaluation publications [9]. Because we included studies reported in abbreviated form, such as conference abstracts, some information important to practitioners was sometimes not available; nevertheless, a large majority of studies reported a rationale for implementing the toolkit in their organization and described the intended change in organizational or provider behavior that they aimed to achieve with the toolkit. We anticipate that future evaluations of toolkits can increase their impact by focusing on the information most likely to be useful to potential users or to fellow developers of toolkits, for example, uptake rates, resources required for toolkit adoption, and resources required for toolkit maintenance. Information on toolkit adaptations required for adoption in different organizational contexts would also be helpful. Furthermore, while satisfaction with the toolkits was generally reported to be positive, there were often large variations in ratings of the utility of specific components or tools. Further evaluations should consider the merits of assessing individual toolkit components in addition to evaluating the toolkit as a whole.

There is no standard definition of a toolkit and guidance for toolkit developers and users is only beginning to emerge [100]. A strength of this review is our focus on quality improvement interventions in healthcare, using a definition based on our prior experience with quality improvement and implementation research [8, 9, 101,102,103,104,105,106]. A limitation is that we used a self-applied definition of what constitutes a toolkit and we only searched for studies using the term “toolkit.” A broader review of tools and of similar resources not referenced as “toolkits” would be an important addition to the literature.

The included studies and evaluated toolkits were very heterogeneous, limiting the generalizable conclusions that can be drawn across studies; this diversity is reflected in the evidence and summary tables. In addition, the review was limited to publications and toolkits that used the term “toolkit,” and we included only toolkits reported in the published literature. Our review included gray literature in that we purposefully included conference abstracts and dissertations; we acknowledge, however, that we missed information on unpublished use of toolkits, especially in large organizations. Furthermore, the number of studies contributing to the effectiveness key question was limited, in particular studies reporting on the primary outcome, patient health. Limitations in the quality of evidence hindered more detailed analyses and conclusions, including the question of whether toolkits developed in one context can achieve the same results in a new context.

Finally, our review concentrated on the large number of toolkits that are currently publicly available, free of charge or for purchase. Toolkits not explicitly designed for ongoing spread (e.g., toolkit distributions for one-time interventions) were beyond the scope of the review. A prior systematic review on toolkits reported limited evidence for toolkits as a general intervention component or implementation strategy. Of eight methodologically acceptable evaluations identified by the review, six showed at least partial effectiveness in changing clinical outcomes; however, the review concluded that more rigorous study designs were needed to explain the factors underlying toolkit effectiveness and successful implementation [107].

Conclusions

This review documents over a decade of evaluations of publicly available quality improvement toolkits and provides insight into the components, the uptake, and the current evidence base of the effectiveness of this tool for spread. Available uptake data are limited but indicate variability. High satisfaction with toolkits can be achieved but the usefulness of individual tools may vary. The existing evidence base on the effectiveness of toolkits remains limited. While emerging evidence indicates positive effects on clinical processes, more research on toolkit value and what affects it is needed, including linking toolkits to objective provider behavior measures and patient outcomes. Considering the potential importance of toolkits as a method for maximizing the impacts of healthcare improvement interventions, a stronger research focus on the conduct and reporting of toolkit intervention and evaluation components is critical.

Availability of data and materials

The data are displayed in the in-text tables and the online-only appendix. We can convert the data to a spreadsheet upon request.

Abbreviations

AHRQ:

Agency for Healthcare Research and Quality

AORN:

Association of Perioperative Registered Nurses

CDC:

Centers for Disease Control and Prevention

CMS:

Centers for Medicare and Medicaid Services

ECRI:

Emergency Care Research Institute

IHI:

Institute for Healthcare Improvement

QI-MQCS:

Quality Improvement Minimum Quality Criteria Set

RCT:

Randomized controlled trial

RWJF:

Robert Wood Johnson Foundation

VA:

Department of Veterans Affairs

WHO:

World Health Organization

References

  1. 1.

    Berwick DM. Disseminating innovations in health care. JAMA. 2003;289:1969–75.

  2. 2.

    Parston G, McQueen J, Patel H, Keown OP, Fontana G, et al. The science and art of delivery: accelerating the diffusion of health care innovation. Health Aff (Millwood). 2015;34:2160–6.

  3. 3.

    Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findin7777gs. Implement Sci. 2012;7:50.

  4. 4.

    Howlett M, Mukherjee I, Woo JJ. From tools to toolkits in policy design studies: the new design orientation towards policy formulation research. Policy and Politics. 2015;43:291–311.

  5. 5.

    Rogers EM (2003) Diffusion of Innovations, Fifth Edition. New York, NY: Free Press.

  6. 6.

    Liu CF, Rubenstein LV, Kirchner JE, Fortney JC, Perkins MW, et al. Organizational cost of quality improvement for depression care. Health Serv Res. 2009;44:225–44.

  7. 7.

    Barac R, Stein S, Bruce B, Barwick M. Scoping review of toolkits as a knowledge translation strategy in health. BMC Med Inform Decis Mak. 2014;14:121.

  8. 8.

    Danz MS, Rubenstein LV, Hempel S, Foy R, Suttorp M, et al. Identifying quality improvement intervention evaluations: is consensus achievable? Qual Saf Health Care. 2010;19:279–83.

  9. 9.

    Hempel S, Shekelle PG, Liu JL, Sherwood Danz M, Foy R, et al. Development of the Quality Improvement Minimum Quality Criteria Set (QI-MQCS): a tool for critical appraisal of quality improvement intervention publications. BMJ Qual Saf. 2015;24:796–804.

  10. 10.

    Fisher S. The development of a falls prevention and management toolkit for hospices. Int J Palliat Nurs. 2013;19:244–9.

  11. 11.

    Callard L, Williams A. The 15 steps challenge: a toolkit for good care. Nurs Manag (Harrow). 2012;19:14–8.

  12. 12.

    Pulver LK, Oliver K, Tett SE. Innovation in hospital quality improvement activities—acute postoperative pain management (APOP) self-help toolkit audits as an example. J Healthc Qual. 2012;34:45–59.

  13. 13.

    Clancy KA, Kacica MA. Ready for our children? Results from a survey of upstate New York hospitals' utilization of Pediatric Emergency Preparedness Toolkit guidance. Disaster Med Public Health Prep. 2012;6:138–45.

  14. 14.

    Henry JA, Orgoi S, Govind S, Price RR, Lundeg G, et al. Strengthening surgical services at the soum (first-referral) hospital: the WHO emergency and essential surgical care (EESC) program in Mongolia. World J Surg. 2012;36:2359–70.

  15. 15.

    Pratt S, Kenney L, Scott SD, Wu AW. How to develop a second victim support program: a toolkit for health care organizations. Jt Comm J Qual Patient Saf. 2012;38(235-240):193.

  16. 16.

    Carroll DL, Dykes PC, Hurley AC. An electronic fall prevention toolkit: effect on documentation quality. Nurs Res. 2012;61:309–13.

  17. 17.

    Spruce LR, Sanford JT. An intervention to change the approach to colorectal cancer screening in primary care. J Am Acad Nurse Pract. 2012;24:167–74.

  18. 18.

    Chrisman SP, Schiff MA, Rivara FP. Physician concussion knowledge and the effect of mailing the CDC's "Heads Up" toolkit. Clin Pediatr (Phila). 2011;50:1031–9.

  19. 19.

    Bender BG, Dickinson P, Rankin A, Wamboldt FS, Zittleman L, et al. The Colorado Asthma Toolkit Program: a practice coaching intervention from the High Plains Research Network. J Am Board Fam Med. 2011;24:240–8.

  20. 20.

    Nace DA, Perera S, Handler SM, Muder R, Hoffman EL. Increasing influenza and pneumococcal immunization rates in a nursing home network. J Am Med Dir Assoc. 2011;12:678–84.

  21. 21.

    Smith PD, O'Halloran P, Hahn DL, Grasmick M, Radant L. Screening for obesity: clinical tools in evolution, a WREN study. WMJ. 2010;109:274–8.

  22. 22.

    Shershneva MB, Harper PL, Elsinger LM, Olson CA. Facilitating multiorganizational smoking cessation knowledge translation through on-line toolkit for educators and clinicians. J Contin Educ Health Prof. 2010;30:149–50.

  23. 23.

    Dykes PC, Carroll DL, Hurley A, Gersh-Zaremski R, Kennedy A, et al. Fall TIPS: strategies to promote adoption and use of a fall prevention toolkit. AMIA Annu Symp Proc. 2009;2009:153–7.

  24. 24.

    Lannon CM, Flower K, Duncan P, Moore KS, Stuart J, et al. The bright futures training intervention project: implementing systems to support preventive and developmental services in practice. Pediatrics. 2008;122:e163–71.

  25. 25.

    Schauberger CW, Larson P. Implementing patient safety practices in small ambulatory care settings. Jt Comm J Qual Patient Saf. 2006;32:419–25.

  26. 26.

    Randle J, Clarke M, Storr J. Hand hygiene compliance in healthcare workers. J Hosp Infect. 2006;64:205–9.

  27. 27.

    Leape LL, Rogers G, Hanna D, Griswold P, Federico F, et al. Developing and implementing new safe practices: voluntary adoption through statewide collaboratives. Qual Saf Health Care. 2006;15:289–95.

  28. 28.

    Koelling T. Study: fewer heart deaths when QI efforts are made. Healthcare Benchmarks Qual Improv. 2006;13:21–2.

  29. 29.

    Dobbins M, Davies B, Danseco E, Edwards N, Virani T. Changing nursing practice: evaluating the usefulness of a best-practice guideline implementation toolkit. Nurs Leadersh (Tor Ont). 2005;18:34–45.

  30. 30.

    Ryan D, Barnett R, Cott C, Dalziel W, Gutmanis I, et al. Geriatrics, interprofessional practice, and interorganizational collaboration: a knowledge-to-practice intervention for primary care teams. J Contin Educ Health Prof. 2013;33:180–9.

  31. 31.

    Han C, Voils C, Williams J. Uptake of web-based clinical resources from the MacArthur initiative on depression and primary care. Community Ment Health J. 2013;49:166–71.

  32. 32.

    Parkman CA (2013) Evaluation of an educational intervention on perceptions of a patient safety culture among staff in acute care nursing units: University of Nevada, Las Vegas. 154 p p.

  33. 33.

    Mulloy DF (2008) Evaluation of implementation of the AORN correct site surgery tool kit and the universal protocol for wrong site surgery: University of Massachusetts Boston. 166 p p.

  34. 34.

    Dore A, Dye J, Hourani L, Hackney B, Criscione-Schreiber LG, et al. Incorporating The Health Literacy Universal Precautions Toolkit Quick Start In Academic Rheumatology Practices: Carolina Fellows Collaborative. Arthritis Rheum. 2013;65:S414.

  35. 35.

    Sample DA, Carroll HL, Barksdale DJ, Jessup A. The pediatric obesity initiative: development, implementation, and evaluation. J Am Acad Nurse Pract. 2013;25:481–7.

  36. 36.

    Byrne J, Govindaswami B, Jegatheesan P, Jelks A, Kunz L, et al. Perinatal core measure: antenatal steroid performance improvement following a preterm birth risk assessment decision model and perinatal QI toolkit. Am J Obstet Gynecol. 2011;204:S193.

  37. 37.

    Kinsinger LS, Jones KR, Kahwati L, Harvey R, Burdick M, et al. Design and dissemination of the MOVE! Weight-management program for veterans. Prev Chronic Dis. 2009;6.

  38. 38.

    McHugo GJ, Drake RE, Whitley R, Bond GR, Campbell K, et al. Fidelity outcomes in the national implementing evidence-based practices project. Psychiatr Serv. 2007;58:1279–84.

  39. 39.

    Abraham A, Stuht J, Emsermann CB, Kutner JS. Impact of distribution of the America-on-the-move toolkit on primary care providers' self-reported exercise and dietary counseling with overweight patients. J Gen Intern Med. 2007;22:105.

  40. 40.

    Enfield KB, Huq NN, Gosseling MF, Low DJ, Hazen KC, et al. Control of simultaneous outbreaks of carbapenemase-producing Enterobacteriaceae and extensively drug-resistant Acinetobacter baumannii infection in an intensive care unit using interventions promoted in the Centers for Disease Control and Prevention 2012 Carbapenemase-Resistant Enterobacteriaceae Toolkit. Infect Control Hosp Epidemiol. 2014;35:810–7.

  41. 41.

    Adsett JA, Mullins R, Page K, Hickey A. Heart education assessment and rehabilitation toolkit: HEART Online. Translating research into practice. European Journal of Heart Failure. 2014;16:62–3.

  42. 42.

    Fine PG, Bradshaw DH, Cohen MJ, Connor SR, Donaldson G, et al. Evaluation of the Performance Improvement CME Paradigm for Pain Management in the Long-Term Care Setting. Pain Med. 2014;15:403–9.

  43. 43.

    Miller AE, Cohen BA, Krieger SC, Markowitz CE, Mattson DH, et al. Constructing an adaptive care model for the management of disease-related symptoms throughout the course of multiple sclerosis-performance improvement CME. Mult Scler J. 2014;20:18–23.

  44.

    Fowles JR, Shields C, Barron B, McQuaid S, Dunbar P. Implementation of resources to support patient physical activity through diabetes centres in Atlantic Canada: the effectiveness of toolkit-based physical activity counselling. Can J Diabetes. 2014.

  45.

    Adams CJ, Stephens K, Whiteman K, Kersteen H, Katruska J. Implementation of the re-engineered discharge (RED) toolkit to decrease all-cause readmission rates at a rural community hospital. Qual Manag Health Care. 2014;23:169–77.

  46.

    Zimmerman RK, Nowalk MP, Lin CJ, Hannibal K, Moehling KK, et al. Cluster randomized trial of a toolkit and early vaccine delivery to improve childhood influenza vaccination rates in primary care. Vaccine. 2014;32:3656–63.

  47.

    Kuhlmann ZC, Ahlers-Schmidt CR, Kuhlmann S, Schunn C, Rosell J. To improve safe sleep, more emphasis should be placed on removing inappropriate items from cribs. Obstet Gynecol. 2014;123(Suppl 1):115S.

  48.

    Stiff L, Vogel L, Remington PL. Evaluating the implementation of a primary care weight management toolkit. WMJ. 2014;113:28–31.

  49.

    Mueller SK, Kripalani S, Stein J, Kaboli P, Wetterneck TB, et al. A toolkit to disseminate best practices in inpatient medication reconciliation: multi-center medication reconciliation quality improvement study (MARQUIS). Jt Comm J Qual Patient Saf. 2013;39:371–82.

  50.

    Brooks AC, Carpenedo CM, Fairfax-Columbo J, Clements NT, Benishek LA, et al. The RoadMAP relapse prevention group counseling toolkit: counselor adherence and competence outcomes. J Subst Abuse Treat. 2013;45:356–62.

  51.

    Nowalk MP, Nolan BA, Nutini J, Ahmed F, Albert SM, et al. Success of the 4 pillars toolkit for influenza and pneumococcal vaccination in adults. J Healthc Qual. 2014;36:5–15.

  52.

    Stalhandske E, Mills P, Quigley P, Neily J, Bagian JP. VHA's national falls collaborative and prevention programs. In: Henriksen K, Battles JB, Keyes MA, Grady ML, editors. Advances in patient safety: new directions and alternative approaches (Vol. 2: Culture and redesign). Rockville (MD): Agency for Healthcare Research and Quality (US); 2008.

  53.

    Nowalk MP, Zimmerman RK, Lin CJ, Reis EC, Huang HH, et al. Maintenance of Increased Childhood Influenza Vaccination Rates 1 Year After an Intervention in Primary Care Practices. Acad Pediatr. 2016;16:57–63.

  54.

    Nowalk MP, Lin CJ, Hannibal K, Reis EC, Gallik G, et al. Increasing childhood influenza vaccination: a cluster randomized trial. Am J Prev Med. 2014;47:435–43.

  55.

    Alidina JR, Prieto J, Cole C, Ramer K, Chez-Flood B. Decreasing early term elective deliveries: one hospital's implementation of the 39 Week Toolkit and Quality Improvement Measurements. Anim Reprod Sci. 2015;22:261A.

  56.

    Ashiru-Oredope D, Budd EL, Bhattacharya A, Din N, McNulty CAM, et al. Implementation of antimicrobial stewardship interventions recommended by national toolkits in primary and secondary healthcare sectors in England: TARGET and Start Smart Then Focus. J Antimicrob Chemother. 2016;71:1408–14.

  57.

    Brown GS, Simon A, Cameron J, Minami T. A collaborative outcome resource network (ACORN): Tools for increasing the value of psychotherapy. Psychotherapy (Chic). 2015;52:412–21.

  58.

    Chesis N. A quality improvement project to reduce the incidence of nonmedically indicated elective deliveries before 39 weeks. Proceedings of the 2015 AWHONN Convention. J Obstet Gynecol Neonatal Nurs. 2015;44:S49–50.

  59.

    Coe LJ, St John JA, Hariprasad S, Shankar KN, MacCulloch PA, et al. An integrated approach to falls prevention: a model for linking clinical and community interventions through the Massachusetts Prevention and Wellness Trust Fund. Front Public Health. 2017;5:38.

  60.

    Cox A, Arber A, Bailey F, Dargan S, Gannon C, et al. Developing, implementing and evaluating an end of life care intervention. Nurs Older People. 2017;29:27–35.

  61.

    Dykes PC, Duckworth M, Cunningham S, Dubois S, Driscoll M, et al. Pilot testing FALL TIPS (tailoring interventions for patient safety): a patient-centered fall prevention toolkit. Jt Comm J Qual Patient Saf. 2017;43:403–13.

  62.

    Ezzat AM, Schneeberg A, Huisman ES, White LD, Kennedy C, et al. A cross-sectional evaluation examining the use of the Achilles tendinopathy toolkit by physiotherapists in British Columbia, Canada. Disabil Rehabil. 2017;39:671–6.

  63.

    Fernald D, Hamer M, James K, Tutt B, West D. Launching a laboratory testing process quality improvement toolkit: from the shared networks of Colorado Ambulatory Practices and Partners (SNOCAP). J Am Board Fam Med. 2015;28:576–83.

  64.

    Gibson SJ. Translation of clinical practice guidelines for childhood obesity prevention in primary care mobilizes a rural Midwest community. J Am Assoc Nurse Pract. 2016;28:130–7.

  65.

    Gray E, Shields C, Fowles JR. Building competency and capacity for promotion of effective physical activity in diabetes care in Canada. Can J Diabetes. 2017;41:491–8.

  66.

    Guillory C, Gong A, Livingston J, Creel L, Ocampo E, et al. Texas pulse oximetry project: a multicenter educational and quality improvement project for implementation of critical congenital heart disease screening using pulse oximetry. Am J Perinatol. 2017;34:856–60.

  67.

    Gulati A, Harwood CA, Rolph J, Pottinger E, McGregor JM, et al. Is an online skin cancer toolkit an effective way to educate primary care physicians about skin cancer diagnosis and referral? J Eur Acad Dermatol Venereol. 2015;29:2152–9.

  68.

    Haley WE, Beckrich AL, Sayre J, McNeil R, Fumo P, et al. Improving care coordination between nephrology and primary care: a quality improvement initiative using the renal physicians association toolkit. Am J Kidney Dis. 2015;65:67–79.

  69.

    Jones LF, Hawking MKD, Owens R, Lecky D, Francis NA, et al. An evaluation of the TARGET (Treat Antibiotics Responsibly; Guidance, Education, Tools) Antibiotics Toolkit to improve antimicrobial stewardship in primary care-is it fit for purpose? Fam Pract. 2017.

  70.

    Kemertzis MA, Ranjithakumaran H, Hand M, Peate M, Gillam L, et al. Fertility preservation toolkit: a clinician resource to assist clinical discussion and decision making in pediatric and adolescent oncology. J Pediatr Hematol Oncol. 2018.

  71.

    Kohler C, Beck D, Villarreal CL, Trial JL. Interprofessional participation in a statewide collaborative to recognize and treat hypertension in pregnancy...Proceedings of the 2015 AWHONN Convention. J Obstet Gynecol Neonatal Nurs. 2015;44:S50.

  72.

    Latsko J, Dennison B, Houk A, Chisolm S, Gerds A, et al. Use of the Aplastic Anemia and MDS International Foundation's Treating MDS Toolkit can increase the frequency of MDS education and side effects by oncology nurses. Oncol Nurs Forum. 2015;42:E216.

  73.

    Levy S, Ziemnik RE, Harris SK, Rabinow L, Breen L, et al. Screening Adolescents for Alcohol Use: Tracking Practice Trends of Massachusetts Pediatricians. J Addict Med. 2017.

  74.

    Lyndon A, Cape V. Maternal Hemorrhage Quality Improvement Collaborative Lessons. MCN Am J Matern Child Nurs. 2016;41:363–71.

  75.

    Mabachi NM, Cifuentes M, Barnard J, Brega AG, Albright K, et al. Demonstration of the health literacy universal precautions toolkit: lessons for quality improvement. J Ambul Care Manage. 2016;39:199–208.

  76.

    MacDonald-Wilson KL, Hutchison SL, Karpov I, Wittman P, Deegan PE. A successful implementation strategy to support adoption of decision making in mental health services. Community Ment Health J. 2017;53:251–6.

  77.

    Main EK, Cape V, Abreo A, Vasher J, Woods A, et al. Reduction of severe maternal morbidity from hemorrhage using a state perinatal quality collaborative. Am J Obstet Gynecol. 2017;216:298.e1–298.e11.

  78.

    Mitchell SE, Martin J, Holmes S, van Deusen LC, Cancino R, et al. How hospitals reengineer their discharge processes to reduce readmissions. J Healthc Qual. 2016;38:116–26.

  79.

    Nicolaidis C, Raymaker D, McDonald K, Kapp S, Weiner M, et al. The development and evaluation of an online healthcare toolkit for autistic adults and their primary care providers. J Gen Intern Med. 2016;31:1180–9.

  80.

    Perumalswami PV, Vu T, Wyatt B, Parrella K, Rogers J, et al. Implementing HepCure: an innovative web-based toolkit for hepatitis C to train primary care providers and increase patient engagement. Hepatology. 2016;64:379A.

  81.

    Pierce C, McGinn K, Mulherin DW, Gonzales J. A multicenter study comparing protocol implementation with and without the SCCM protocol toolkit. Crit Care Med. 2016;44:1.

  82.

    Rueda-Clausen CF, Benterud E, Bond T, Olszowka R, Vallis MT, et al. Effect of implementing the 5As of obesity management framework on provider-patient interactions in primary care. Clin Obes. 2014;4:39–44.

  83.

    Sarna L, Bialous SA, Wells M, Brook J. Impact of a webcast on nurses' delivery of tobacco dependence treatment. J Clin Nurs. 2017.

  84.

    Septimus E, Hickok J, Moody J, Kleinman K, Avery TR, et al. Closing the translation gap: toolkit-based implementation of universal decolonization in adult intensive care units reduces central line-associated bloodstream infections in 95 community hospitals. Clin Infect Dis. 2016;63:172–7.

  85.

    Shellhaas C, Conrey E, Crane D, Lorenz A, Wapner A, et al. The Ohio gestational diabetes postpartum care learning collaborative: development of a quality improvement initiative to improve systems of care for women. Matern Child Health J. 2016.

  86.

    Sopcak N, Aguilar C, O'Brien MA, Nykiforuk C, Aubrey-Bassler K, et al. Implementation of the BETTER 2 program: a qualitative study exploring barriers and facilitators of a novel way to improve chronic disease prevention and screening in primary care. Implement Sci. 2016;11:158.

  87.

    Taylor YJ, Tapp H, Shade LE, Liu TL, Mowrer JL, et al. Impact of shared decision making on asthma quality of life and asthma control among children. J Asthma. 2017.

  88.

    Thomason SS, Powell-Cope G, Peterson MJ, Guihan M, Wallen ES, et al. A multisite quality improvement project to standardize the assessment of pressure ulcer healing in veterans with spinal cord injuries/disorders. Adv Skin Wound Care. 2016;29:269–76.

  89.

    Wyte-Lake T, Claver M, Der-Martirosian C, Davis D, Dobalian A. Developing a Home-Based Primary Care Disaster Preparedness Toolkit. Disaster Med Public Health Prep. 2017;11:56–63.

  90.

    Luck J, Bowman C, York L, Midboe A, Taylor T, et al. Multimethod evaluation of the VA's peer-to-peer toolkit for patient-centered medical home implementation. J Gen Intern Med. 2014;29:572–8.

  91.

    Huang SS, Septimus E, Kleinman K, Moody J, Hickok J, et al. Targeted versus universal decolonization to prevent ICU infection. N Engl J Med. 2013;368:2255–65.

  92.

    Taylor DR, Loh SF, Mulligan KT, Pulver LK, Tompson AJ, et al. Management of acute postoperative pain in Australian hospitals: Room for improvement. Journal of the Australasian Association for Quality in Health Care. 2010;20:29–36.

  93.

    Dykes PC, Carroll DL, Hurley A, Lipsitz S, Benoit A, et al. Fall prevention in acute care hospitals: a randomized trial. JAMA. 2010;304:1912–8.

  94.

    Wirtschafter DD, Danielsen BH, Main EK, Korst LM, Gregory KD, et al. Promoting antenatal steroid use for fetal maturation: results from the California Perinatal Quality Care Collaborative. J Pediatr. 2006;148:606–12.

  95.

    Jack BW, Chetty VK, Anthony D, Greenwald JL, Sanchez GM, et al. A reengineered hospital discharge program to decrease rehospitalization: a randomized trial. Ann Intern Med. 2009;150:178–87.

  96.

    Nowalk MP, Nutini J, Raymund M, Ahmed F, Albert SM, et al. Evaluation of a toolkit to introduce standing orders for influenza and pneumococcal vaccination in adults: a multimodal pilot project. Vaccine. 2012;30:5978–82.

  97.

    Brooks AC, Diguiseppi G, Laudet A, Rosenwasser B, Knoblach D, et al. Developing an evidence-based, multimedia group counseling curriculum toolkit. J Subst Abuse Treat. 2012;43:178–89.

  98.

    Carise D, Brooks A, Alterman A, McLellan AT, Hoover V, et al. Implementing evidence-based practices in community treatment programs: initial feasibility of a counselor "toolkit". Subst Abus. 2009;30:239–43.

  99.

    Davis MM, Howk S, Spurlock M, McGinnis PB, Cohen DJ, et al. A qualitative study of clinic and community member perspectives on intervention toolkits: "Unless the toolkit is used it won't help solve the problem". BMC Health Serv Res. 2017;17:497.

  100.

    Hempel S, Miake-Lye I, Brega AG, Buckhold F 3rd, Hassell S, et al. Quality improvement toolkits: recommendations for development. Am J Med Qual. 2019:1062860618822102.

  101.

    Rubenstein L, Khodyakov D, Hempel S, Danz M, Salem-Schatz S, et al. How can we recognize continuous quality improvement? Int J Qual Health Care. 2014;26:6–15.

  102.

    Danz MS, Hempel S, Lim YW, Shanman R, Motala A, et al. Incorporating evidence review into quality improvement: meeting the needs of innovators. BMJ Qual Saf. 2013;22:931–9.

  103.

    O'Neill SM, Hempel S, Lim YW, Danz MS, Foy R, et al. Identifying continuous quality improvement publications: what makes an improvement intervention 'CQI'? BMJ Qual Saf. 2011;20:1011–9.

  104.

    Soban LM, Hempel S, Munjas BA, Miles J, Rubenstein LV. Preventing pressure ulcers in hospitals: A systematic review of nurse-focused quality improvement interventions. Jt Comm J Qual Patient Saf. 2011;37:245–52.

  105.

    Hempel S, Rubenstein LV, Shanman RM, Foy R, Golder S, et al. Identifying quality improvement intervention publications--a comparison of electronic search strategies. Implement Sci. 2011;6:85.

  106.

    Rubenstein LV, Hempel S, Farmer MM, Asch SM, Yano EM, et al. Finding order in heterogeneity: types of quality-improvement intervention publications. Qual Saf Health Care. 2008;17:403–8.

  107.

    Yamada J, Shorkey A, Barwick M, Widger K, Stevens BJ. The effectiveness of toolkits as knowledge translation strategies for integrating evidence into clinical care: a systematic review. BMJ Open. 2015;5:e006808.

  108.

    Tapp H, Shade L, Mahabaleshwarkar R, Taylor YJ, Ludden T, et al. Results from a pragmatic prospective cohort study: Shared decision making improves outcomes for children with asthma. J Asthma. 2017;54:392–402.

  109.

    Zuyev L, Benoit AN, Chang FY, Dykes PC. Tailored prevention of inpatient falls: development and usability testing of the fall TIPS toolkit. Comput Inform Nurs. 2011;29:93–100.

  110.

    MacDonald-Wilson KL, Hutchison SL, Karpov I, Wittman P, Deegan PE. A successful implementation strategy to support adoption of decision making in mental health services. Community Ment Health J. 2016.

  111.

    Lin CJ, Nowalk MP, Zimmerman RK, Moehling KK, Conti T, et al. Reducing racial disparities in influenza vaccination among children with asthma. J Pediatr Health Care. 2016;30:208–15.


Acknowledgements

We thank Jessica Beroes, Isomi Miake-Lye, and Aneesa Motala for administrative assistance and Susan Stockdale for her collaboration and access to VA toolkit documents.

Funding

This work was undertaken as part of the Veterans Affairs (VA) Patient Aligned Care Team (PACT) Demonstration Laboratory initiative, supporting and evaluating VA transition to a patient-centered medical home. Funding for the PACT Demonstration Laboratory initiative is provided by the VA Office of Patient Care Services. Study# 2011-0521: Patient Centered Medical Home Innovation Evidence-Based Support and Evaluation. Dr. O'Hanlon was supported by the VA Office of Academic Affiliations through the VA Advanced Fellowship in Health Services Research. The funding agency had no role in the design, collection, analysis, or interpretation of the data, or in the writing of the manuscript and the decision to submit the manuscript for publication. The findings and conclusions are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs.

Author information

SH designed the review, screened publications for inclusion, checked the data extraction, drafted the manuscript, and is the guarantor of the review. COH screened publications for inclusion and extracted the data. YWL, MD, and LR provided critical input into the design of the review and the manuscript. JL designed and executed the search strategy. LR obtained funding. All authors read and approved the final manuscript.

Correspondence to Susanne Hempel.

Ethics declarations

Ethics approval and consent to participate

The study was reviewed by the Human Subject Protection Committee (HSPC) of the RAND Corporation and determined to be exempt.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

PRISMA checklist. (DOCX 26 kb)

Additional file 2:

Appendix A: Search terms. Appendix B: Identified publicly available toolkits. Appendix C: Toolkits included in the review. Appendix D: Critical Appraisal QI-MQCS. (DOCX 199 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Keywords

  • Spread
  • Diffusion of innovation
  • Quality improvement
  • Toolkit
  • Implementation