
The GUIDES checklist: development of a tool to improve the successful use of guideline-based computerised clinical decision support

Abstract

Background

Computerised decision support (CDS) based on trustworthy clinical guidelines is a key component of a learning healthcare system. Research shows that the effectiveness of CDS is mixed.

Multifaceted context, system, recommendation and implementation factors may affect the success of CDS interventions. This paper describes the development of a checklist intended to support professionals in implementing CDS successfully.

Methods

We developed the checklist through an iterative process that involved a systematic review of evidence and frameworks, a synthesis of the success factors identified in the review, feedback from an international expert panel that evaluated the checklist in relation to a list of desirable framework attributes, consultations with patients and healthcare consumers and pilot testing of the checklist.

Results

We screened 5347 papers and selected 71 papers with relevant information on success factors for guideline-based CDS. From the selected papers, we developed a 16-factor checklist that is divided into four domains: the CDS context, content, system and implementation domains. The panel of experts evaluated the checklist positively as an instrument that could support people implementing guideline-based CDS across a wide range of settings globally. Patients and healthcare consumers identified guideline-based CDS as an important quality improvement intervention and perceived the GUIDES checklist as a suitable and useful strategy.

Conclusions

The GUIDES checklist can support professionals in considering the factors that affect the success of CDS interventions. It may facilitate a deeper and more accurate understanding of the factors shaping CDS effectiveness. Relying on a structured approach may prevent important factors from being missed.


Background

Increasing the value and reducing waste in healthcare are important issues within resource-constrained systems globally [1]. Both the underuse and overuse of healthcare services can have serious consequences for people’s health and for healthcare spending [2,3,4]. The most important drivers of poor care fall into three domains: money and finance; knowledge, bias and uncertainty; and power and human relationships [5]. Computerised decision support (CDS) based on trustworthy clinical guidelines can play a key role in addressing some of these problems by improving knowledge, accelerating the adoption of new evidence and helping to better manage beliefs, assumptions and uncertainty among healthcare providers and patients [3, 5]. The US National Academy of Medicine (formerly the Institute of Medicine) has identified CDS as a key component of a learning healthcare system [6].

CDS is a technology that provides patient-specific medical knowledge at the point of need. Research into guideline-based CDS over the last 40 years has shown that the effectiveness and success of CDS have been mixed [7]. Systematic reviews of CDS effectiveness estimate, on average, modest increases in guideline adherence and modest reductions in morbidity [8, 9]. CDS obviously does not work in isolation; it is part of a complicated interplay of determinants. Variations in success may be due to problems with the CDS system, the CDS content or how the CDS is implemented, or because CDS may not be able to provide a complete solution to complex problems [10]. Any decision to use CDS, or other additional interventions, should be based on an assessment of the determinants of healthcare practice that affect whether the desired changes can be achieved [11]. Multiple reviews have evaluated determinants of the successful development and use of CDS, for example implementation of CDS in inpatient versus outpatient settings, CDS aimed at prevention or disease management, integration of CDS in charting or order entry systems, and providing CDS to both clinicians and patients [12,13,14,15]. An understanding of which factors or combinations of factors make CDS more or less effective is still being developed [12, 16].

The aim of the GUIDES project was to improve the successful use of guideline-based CDS through the development of a checklist. The purpose of the checklist was to facilitate a deeper and more accurate understanding of which factors make CDS more (or less) effective and to guide CDS implementation by preventing key factors from being overlooked.

Methods

We developed the checklist in the following steps, as previously published in a detailed protocol for the GUIDES project [17]: (1) a systematic review of the research evidence and frameworks on the factors affecting the success of guideline-based CDS, (2) a synthesis of the identified factors and creation of the checklist and (3) pilot testing of the checklist, both in a systematic review of CDS trials and in focus group discussions about CDS success factors. There is currently no standardised methodology for developing a checklist. However, the steps included in this process allowed broad input from diverse sources regarding checklist content and design, which matches the available guidance for checklist development [18, 19].

Step 1: We (SV and SF) systematically reviewed frameworks, systematic reviews, process evaluations and qualitative evidence pertaining to factors for successful CDS implementation according to a predefined protocol. In an additional review, we focussed on head-to-head trials evaluating success factors of interventions with CDS.

Step 2: We extracted all the factors identified in the selected papers to develop the first version of the GUIDES checklist. An international group of experts with experience in CDS and/or guidelines provided input and feedback in two rounds of consultation and one approval round. To compose the expert group, we first invited the corresponding authors of every relevant paper found in the review phase and combined this with suggestions from members of the GUIDES project, in order to achieve a group with balanced multidisciplinary and international representation. The experts gave their feedback through a structured online feedback form that related to a list of desirable framework attributes. We also collected the views of patients and healthcare consumers in one round, through an additional survey, to ensure that the adopted strategies are relevant and acceptable to patients. The detailed questions for each survey are available in Additional files 1 and 2. The group of authors discussed the feedback from every consultation round and agreed on the revisions that we made. We also asked the panel of experts to judge whether the included factors were ‘always’, ‘sometimes’ or ‘never’ important for the success of guideline-based CDS interventions. Further, we asked the experts to select what they regarded as the five most important factors and to rank their selection from the most important to the fifth most important. In parallel with the checklist, we also asked the expert panel for feedback on three support worksheets intended to help users (1) select the most important recommendations for implementation, (2) evaluate which implementation strategies are appropriate for the prioritised recommendations and (3) assess whether CDS is appropriate for the selected recommendations.

Step 3: We invited six colleagues with research backgrounds to pilot the GUIDES checklist by rating a random sample of 30 trial reports that we identified during our review of head-to-head trials. We asked the members of this group to think aloud as they applied the checklist for the first time, and we made notes about how they interacted with the checklist and how they interpreted the different checklist factors. Two participants independently assessed each trial report to make judgements about how well the CDS interventions performed in relation to the GUIDES factors for successful CDS. We calculated interrater agreement to assess the consistency of ratings across the researchers.
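
Interrater agreement between two raters over categorical judgements is typically summarised with Cohen's kappa, which corrects observed agreement for the agreement expected by chance. The sketch below is illustrative only; the rating categories and data are hypothetical and are not taken from the GUIDES pilot.

```python
# Illustrative only: Cohen's kappa for two raters who judged the same trial reports.
# kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement
# and p_e the agreement expected by chance from each rater's marginal frequencies.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(rater_a) | set(rater_b))
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Hypothetical judgements about whether a checklist factor was addressed in a trial report.
rater_1 = ["yes", "no", "unclear", "yes", "no", "unclear", "yes", "yes"]
rater_2 = ["yes", "unclear", "unclear", "no", "no", "yes", "yes", "no"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")
```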

We further piloted the checklist during six focus groups with general practitioners and patients who discussed a CDS intervention intended to improve care and outcomes for patients with knee osteoarthritis. In the focus groups, participants discussed the factors that determine the successful use of CDS during an initial brainstorming phase. In a subsequent structured phase, the moderator used the GUIDES checklist to ask probing questions about factors that had not yet been discussed. At the end of the interview, the participants individually selected the five factors that they considered most important for the successful use of the CDS strategy. We evaluated to what extent the checklist allowed us to identify additional factors and how many of these factors were rated as important by the participants. The focus groups took place in Norway, Belgium and Finland. The moderators emphasised that both positive and negative feedback about the CDS intervention was important. We audio-recorded each focus group, and an observer took notes. We transcribed key parts of the focus groups but did not do a full transcription of the recordings. Further details are available in a separate report [20]. Here we report the findings of an evaluation of the process we used to identify factors that could affect the success of the suggested strategy.

Results

Characteristics of the expert panel

A panel of 49 experts provided online feedback to the GUIDES project. Twenty-two of the 49 experts had a background in evidence-based guidelines, 12 were healthcare providers and 42 had a background in CDS. Eight experts were active in CDS system development, implementation and evaluation. Twelve were involved in education, and four had governmental backgrounds, either as CDS programme funders or as health policy analysts. The majority of the experts (84%) had a research background. Seven participants had no experience of CDS. The areas of expertise were self-reported by the panel members.

The panel consisted of experts from 18 countries across five continents. Ten experts had CDS experience in low- and middle-income countries. Eight reported financial relationships with entities in the health informatics arena; three reported other relationships or activities that may have been potential conflicts of interest. In both feedback rounds, the proportion of participants who responded was above 90%. Four patients and healthcare consumers also provided feedback through an additional survey.

Development of the checklist

Here we describe the results of the literature review, the consultation rounds and the pilot testing. Tables 1 and 2, Figs. 1 and 2 and Additional file 3 provide further details about the content of the GUIDES checklist we developed.

Step 1: Review of the research evidence and frameworks

Table 1 An overview of the GUIDES checklist
Table 2 Overview of the GUIDES checklist factors and how-to-evaluate questions

Fig. 1 Diagram presenting the four domains that are important for successful implementation of guideline-based CDS. This diagram is adapted from the formula by Fixsen on successful uses of evidence-based programs in human service settings [100]

Fig. 2 Screenshot of the electronic version of the GUIDES checklist illustrating the domains and factors
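
Fixsen's formula is not reproduced in this article; as a point of reference, the form commonly cited from the work referenced in [100] (our paraphrase, included here as an assumption rather than as content of this paper) is:

```latex
\text{effective interventions} \times \text{effective implementation}
\times \text{enabling contexts} = \text{socially significant outcomes}
```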

The search identified 5347 papers. The final selection included 71 papers (including 21 frameworks, 16 systematic reviews, 7 qualitative studies and 27 process evaluations related to CDS trials). We excluded several other systematic reviews because their scope was too narrow, focusing for example on a specific clinical condition, setting or CDS function. We also excluded some older systematic reviews because more recent reviews of the same research question were available. In both cases, we double-checked that no information was lost by excluding these reviews.

The included frameworks most frequently discussed items related to the ease of use of CDS systems, stepwise implementation of CDS interventions, continuous improvements of CDS systems, CDS delivery methods, relevance and accuracy of CDS and the trustworthiness of CDS content [21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41]. The frameworks were often rooted in the experiences of particular national or regional settings. Fifteen frameworks were developed in the USA, five in Europe and one in Australia. None of the included frameworks met all the preferred framework attributes. Many frameworks focussed on a specific clinical setting, type of practice or type of CDS function. The diverse goals of the frameworks made it difficult to evaluate their usability. Most of the frameworks were presented in research papers which detailed CDS classifications, the lessons learned or recommendations. In only two instances was it explicitly mentioned that CDS developers and/or implementers were the primary target audience of the frameworks.

The systematic reviews evaluated a wide range of aspects related to the success of CDS interventions [12, 42,43,44,45,46,47,48,49,50,51,52,53,54,55,56]. The reviews focussed most on the role of the CDS setting and target, the methods to deliver the CDS and the trustworthiness of the decision support. In a separate publication, we discuss these results in relation to the findings of another systematic review that we conducted (Unpublished research submitted to Implementation Science). That report also rates the certainty of the evidence for every factor using the GRADE approach [57].

The qualitative studies and process evaluations that we included often discussed issues related to the acceptance of CDS, the ease of use of a CDS system and the role of other factors influencing compliance with decision support advice [58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91].

Step 2: Synthesis of CDS success factors and development of GUIDES checklist

From the selected papers, we extracted 816 excerpts related to factors affecting CDS success. We coded all the information contained in the excerpts and used this to construct a first version of the checklist. GUIDES checklist v1.1 included 38 items classified into four domains, namely the CDS context, content, system and implementation domains. The expert panel provided input on this draft checklist, and we made a number of important adjustments.

In the GUIDES checklist v1.2, we reduced the number of factors to 19 by grouping items under factors representing ‘higher-level’ concepts. No items were removed. For each factor, we included a set of questions to help users in their evaluation. Examples of positive and negative impacts were included. Items that were complicated, unclear or ambiguous were reworded.

The expert feedback on v1.2 was positive overall. Eighty-nine percent of the experts judged the checklist to be an appropriate tool for helping to identify factors that should be considered when implementing guideline-based CDS. The remaining 11% were either unable to judge this or commented that the checklist does not suggest potential solutions for the problems identified or does not address the implementability of guidelines. The experts agreed that the checklist would be applicable across different settings and different types of practices (82%). The remaining experts were uncertain how the checklist would perform for the implementation of CDS in small practices, in tertiary care and in low- and middle-income countries. The experts found that the checklist was logically organised in a way that was easy to understand (93%) and that the factors were labelled and explained in ways that were easy to understand (84%). Twenty-nine percent of the experts indicated that they felt that parts of the checklist were too complicated. Sixty-four percent of the experts found that no potentially important factors were missing. The other experts either found that some descriptions were implicit or suggested discussing additional concepts such as equitable implementation of CDS, patient involvement or use of CDS on handheld devices. Eighty-four percent concluded that the checklist did not include irrelevant factors. Some other experts pointed to interrelations or overlap between the factors. The panel noted that the checklist was suitable for people with a research background (80%). Other experts suggested further adjustments or asked for data on how well the checklist achieves its purpose. Forty-two percent observed that pilot testing would be required to evaluate whether the checklist is suitable for use by people who are not researchers. Ninety percent of the experts indicated that they intend to use the checklist. Other experts were uncertain, and one expert mentioned not needing the checklist given the personal expertise acquired through many CDS implementations.

Additional file 1 provides more details from the expert panel feedback.

In the GUIDES checklist v1.3, we regrouped the checklist into a list of 16 factors and made other adjustments in response to the expert feedback. We presented the checklist once more to the panel, which approved it unanimously. Table 1 provides a short overview of the checklist, and the full checklist is available in Additional file 3. The response of the expert panel to the three additional worksheets was mixed. We therefore decided not to include these with the GUIDES checklist; however, they are available on request.

Patients and healthcare consumers strongly agreed that guideline-based CDS is an important quality improvement intervention (100%) and that the GUIDES checklist is a suitable (75%) and potentially beneficial tool (100%). They agreed that there is a need to provide CDS on treatment options to both healthcare providers and patients (sometimes simultaneously for shared decision making). Additional file 2 provides more detailed results for this survey.

Step 3: Pilot testing the GUIDES checklist

Pilot testing the checklist on a sample of CDS trial reports showed that the participants took 60–90 min to complete the checklist. We noted and adjusted some redundancies across the checklist factors. Interrater agreement was generally poor, with kappa values lower than 0.60. The raters often found the quality of the reporting and the information about the trials to be ambiguous or unclear, which led to notable variations between the evaluations. After the initial ratings were completed, the six raters resolved disagreements by discussion. They agreed that, for half of the trial reports, there was insufficient information to make judgements about the quality of the CDS intervention. Another frequent cause of disagreement was that some raters had overlooked specific information available in the trial reports. In other instances, raters made different judgements in relation to specific factors. The pilot testing indicated that the GUIDES checklist could be applied to a range of different settings and could be used to classify all the descriptive information on interventions and contexts.

Eight healthcare providers and 22 patients participated in the focus group interviews. The focus groups yielded 211 factors potentially affecting CDS success. After combining related factors, we identified 59 unique factors. All the suggested factors could be classified under the GUIDES checklist factors. Half of the suggested factors were related to five checklist items: 1.1 ‘CDS can achieve the planned quality objectives’, 1.2 ‘The quality of the patient data is sufficient’, 1.4 ‘CDS can be added to the existing workload, workflows and systems’, 2.2 ‘The content is relevant and accurate’ and 3.2 ‘The decision support is well delivered’. The participants suggested a median of 17 unique factors during the brainstorming phase and 13 additional factors during the structured phase when the GUIDES checklist was used. Per focus group, the participants prioritised a median of eight factors that were suggested during the brainstorming phase and a median of six factors that were suggested during the structured phase. Overall, ten factors were suggested only during the structured interview phase, of which six were prioritised.

The most important GUIDES checklist factors

The panel indicated that all the factors we had included were important and that a failure to consider all of them could potentially cause a CDS intervention to fail. They noted, however, that the level of importance of each factor might vary across circumstances and settings. For example, 30% of the experts indicated that factors in the CDS implementation domain were only sometimes important.

The survey of the expert panel showed that they considered factor 2.1 ‘The content provides trustworthy evidence-based information’ to be the most important factor. Other factors that scored high were 2.2 ‘The decision support is relevant and accurate’, 1.4 ‘CDS can be added to the existing workload, workflows and systems’ and 1.1 ‘CDS can achieve the defined quality objectives’.

Additional file 4 provides more detailed results on the importance ratings by the expert panel.

Discussion

The goal of the GUIDES project was to increase the success of guideline-based CDS. By developing a checklist, we aimed to assist those involved in the implementation of CDS interventions to consider success factors for guideline-based CDS in a structured way. When designing the checklist, our biggest challenge was to make it comprehensive and, at the same time, concise and easy to apply. The expert panel agreed that the final checklist would be potentially beneficial to people implementing guideline-based CDS across a wide range of settings globally. Ninety percent of the panel members indicated that they intend to use the checklist.

Relation to findings from studies in related fields

We identified two studies in related fields. Ross et al. conducted an overview of reviews focusing on factors affecting the implementation of e-health in general [92]. The review did not include any factors beyond those found in the GUIDES checklist. Wu et al. conducted an interdisciplinary systematic review and identified factors shaping the success of CDS interventions which are generalisable across healthcare and non-healthcare fields (for example, defence, finance, aviation and the environment) [93]. The seven features they listed correspond closely with the factors included in the GUIDES checklist.

Limitations and perspectives

The checklist may have design shortcomings because it has not been extensively tested with people who are not researchers. While the majority of the panel members were involved in research, more than a third were active in both research and clinical practice or implementation work. We plan to evaluate the experiences of users in the diverse project settings in which the GUIDES checklist will be used. The electronic GUIDES platform will also help us to collect user feedback systematically. A first test case is the ELMO trial, in which GUIDES was used to plan the implementation strategy for CDS aimed at improving the appropriateness of diagnostic testing [94]. We recognise that some checklist users may need training in how to use it, and we encourage people to contact us for assistance.

In most instances, it is clear why each factor has been assigned to a particular domain. We recognise that in a few instances factors could be assigned to more than one domain (e.g. the factor ‘The system delivers the decision support to the right target person’ relates to both the CDS system domain and the implementation domain). Some factors are also interrelated, such as ‘Stakeholders and users accept CDS’, which is also affected by factors in the content, system and implementation domains. Other potential limitations of our checklist are that it does not include strategies to mitigate risks for CDS interventions or to solve problems identified when applying the checklist. For example, the checklist does not address the requirements for guideline design and authoring that facilitate the use of guidelines within CDS [95]. We recognise that it would be useful to add information on mitigation strategies in future updates of the GUIDES checklist. Information on strategies to evaluate the success of CDS could also be added in an update. Although the checklist provides a comprehensive overview of the factors affecting the success of CDS interventions, it does not define the minimum requirements for every factor. How, for example, can one determine when a CDS intervention is sufficiently relevant and accurate? What amount of decision support is appropriate? Providing comprehensive detail is challenging given the large variation in CDS contexts and systems, and using the checklist requires careful judgement. Similar limitations have also been identified in other checklist projects for complex interventions [96].

Members of the expert panel gave broad support for the GUIDES checklist. However, we do not yet have evidence that the use of the GUIDES checklist will increase the success of guideline-based CDS interventions.

We are currently undertaking a qualitative evidence synthesis of perceptions and experiences on using CDS to implement recommendations. We will apply the GRADE-CERQual system to assess the confidence that we can have in these findings [97].

How CDS implementers can use the GUIDES checklist

We think that the GUIDES checklist is best used by multidisciplinary teams. Each GUIDES domain is important for the outcomes of a CDS intervention, and we recommend considering all of them. For each factor, users can indicate the degree to which they think the issue has been addressed. When concerns are identified that could negatively affect the success of a CDS intervention, we recommend that the multidisciplinary group discuss the importance of these concerns and reach a consensus about the follow-up actions that are required. Further instructions are provided in the checklist itself (Additional file 3).
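
As an illustration of how a team might record its judgements while working through the checklist, the sketch below uses the four GUIDES domains and a few factor labels quoted in this paper (the full set of 16 factors is in Additional file 3). The three-level ‘addressed’ scale and the flagging rule are our own assumptions for the example, not part of the checklist.

```python
from dataclasses import dataclass

# Illustrative only: factor labels are those quoted in this paper; the rating
# scale and the follow-up rule below are assumptions, not GUIDES requirements.
@dataclass
class FactorAssessment:
    domain: str      # 'context', 'content', 'system' or 'implementation'
    factor: str
    addressed: int   # 0 = not addressed, 1 = partly addressed, 2 = fully addressed
    notes: str = ""

assessments = [
    FactorAssessment("context", "1.2 The quality of the patient data is sufficient", 1,
                     "allergy coding incomplete"),
    FactorAssessment("context", "1.4 CDS can be added to the existing workload, workflows and systems", 0,
                     "not yet integrated with order entry"),
    FactorAssessment("content", "2.1 The content provides trustworthy evidence-based information", 2),
    FactorAssessment("system", "3.2 The decision support is well delivered", 1),
]

# Concerns the multidisciplinary group should discuss and agree follow-up actions on.
for a in assessments:
    if a.addressed < 2:
        print(f"[{a.domain}] {a.factor} -> follow-up needed ({a.notes or 'no notes'})")
```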

How researchers can use the GUIDES checklist

The checklist may be useful for researchers seeking to identify, examine and synthesise factors potentially affecting the success of CDS. Our pilot testing with the focus group showed that the checklist could help to identify and classify important factors shaping the success of CDS interventions.

Appraisals of CDS interventions are difficult when reporting is poor. We perceive this as an important reason for the low interrater reliability that we obtained during the pilot testing. Fewer than 30% of the trials we included provided sufficient details about the contexts in which the CDS took place or the strategy used to implement the CDS. Approximately 75% of the trials described the CDS system in sufficient detail, but half did not offer sufficient information about the advice provided by the CDS system. Without such detail, it was difficult to assess the applicability and transferability of the CDS to other settings or to identify the factors that contributed to the success or failure of the CDS interventions [98].

The STARE-HI reporting standards are a resource for improving the reporting of evaluation studies in health informatics [99]. These reporting standards recommend the inclusion of a description of the study context and details about the informatics system used. We would advise including additional criteria in the STARE-HI standards or developing specific reporting criteria for CDS interventions. The GUIDES checklist may be useful when considering desirable reporting criteria.

Conclusions

We designed the GUIDES checklist to support professionals in considering, in a more structured way, the factors that may affect the success of CDS interventions. In addition to CDS implementation teams, the checklist might also support CDS developers, researchers, funders and educators. Existing frameworks, evidence and expert input informed the development of the checklist. We believe that CDS implementers who use the GUIDES checklist will gain a deeper and more accurate understanding of the factors shaping CDS effectiveness and will be less likely to overlook important factors.

Abbreviations

CDS:

Computerised decision support

GUIDES:

GUideline Implementation with Decision Support

References

  1. Saini V, Brownlee S, Elshaug AG, Glasziou P, Heath I. Addressing overuse and underuse around the world. Lancet. 2017;390(10090):105–7.

  2. Brownlee S, Chalkidou K, Doust J, Elshaug AG, Glasziou P, Heath I, et al. Evidence for overuse of medical services around the world. Lancet. 2017;390:156.

  3. Elshaug AG, Rosenthal MB, Lavis JN, Brownlee S, Schmidt H, Nagpal S, et al. Levers for addressing medical underuse and overuse: achieving high-value health care. Lancet. 2017;390(10090):191–202.

  4. Glasziou P, Straus S, Brownlee S, Trevena L, Dans L, Guyatt G, et al. Evidence for underuse of effective medical services around the world. Lancet. 2017;390:169.

  5. Saini V, Garcia-Armesto S, Klemperer D, Paris V, Elshaug AG, Brownlee S, et al. Drivers of poor medical care. Lancet. 2017;390(10090):178–90.

  6. Olsen LA, Aisner D, McGinnis JM, editors. The learning healthcare system: workshop summary. The National Academies Collection. Washington (DC): National Academies Press; 2007.

  7. Owens DK, Bravata DM. Computer-based decision support: wishing on a star? Eff Clin Pract. 2001;4(1):34–8.

  8. Fretheim A, Flottorp S, Oxman AD. Effect of interventions for implementing clinical practice guidelines. Oslo, Norway: Norwegian Knowledge Centre for the Health Services; 2015. https://www.fhi.no/publ/2015/effekt-av-tiltak-for-implementering-av-kliniske-retningslinjer/.

  9. Moja L, Kwag KH, Lytras T, Bertizzolo L, Brandt L, Pecoraro V, et al. Effectiveness of computerized decision support systems linked to electronic health records: a systematic review and meta-analysis. Am J Public Health. 2014;104(12):e12–22.

  10. Bonetti D, Johnston M, Pitts NB, Deery C, Ricketts I, Tilley C, et al. Knowledge may not be the best target for strategies to influence evidence-based practice: using psychological models to understand RCT effects. Int J Behav Med. 2009;16(3):287–93.

  11. Flottorp SA, Oxman AD, Krause J, Musila NR, Wensing M, Godycki-Cwirko M, et al. A checklist for identifying determinants of practice: a systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice. Implement Sci. 2013;8:35.

  12. Roshanov PS, Fernandes N, Wilczynski JM, Hemens BJ, You JJ, Handler SM, et al. Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials. BMJ. 2013;346:f657.

  13. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005;330(7494):765.

  14. Damiani G, Pinnarelli L, Colosimo SC, Almiento R, Sicuro L, Galasso R, et al. The effectiveness of computerized clinical guidelines in the process of care: a systematic review. BMC Health Serv Res. 2010;10:2.

  15. Nuckols TK, Smith-Spangler C, Morton SC, Asch SM, Patel VM, Anderson LJ, et al. The effectiveness of computerized order entry at reducing preventable adverse drug events and medication errors in hospital settings: a systematic review and meta-analysis. Syst Rev. 2014;3:56.

  16. Lobach DF. The road to effective clinical decision support: are we there yet? BMJ. 2013;346:f1616.

  17. Van de Velde S, Roshanov P, Kortteisto T, Kunnamo I, Aertgeerts B, Vandvik PO, Flottorp S. Tailoring implementation strategies for evidence-based recommendations using computerised clinical decision support systems: protocol for the development of the GUIDES tools. Implement Sci. 2016;11(1):29.

  18. Winters BD, Gurses AP, Lehmann H, Sexton JB, Rampersad CJ, Pronovost PJ. Clinical review: checklists—translating evidence into practice. Crit Care. 2009;13(6):210.

  19. Burian BK, Clebone A, Dismukes K, Ruskin KJ. More than a tick box: medical checklist development, design, and use. Anesth Analg. 2018;126(1):223–32.

  20. Van de Velde S, Kortteisto T, Spitaels D, Jamtvedt G, Roshanov P, Kunnamo I, Aertgeerts B, Vandvik PO, Flottorp S. Development of a tailored intervention with computerized clinical decision support to improve quality of care for patients with knee osteoarthritis: multi-method study. JMIR Res Protoc. 2018;7(6):e154.

  21. Ash JS, Sittig DF, Guappone KP, Dykstra RH, Richardson J, Wright A, et al. Recommended practices for computerized clinical decision support and knowledge management in community settings: a qualitative study. BMC Med Inform Decis Mak. 2012;12:6.

  22. Bates DW, Kuperman GJ, Wang S, Gandhi T, Kittler A, Volk L, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc. 2003;10(6):523–30.

  23. Berlin A, Sorani M, Sim I. A taxonomic description of computer-based clinical decision support systems. J Biomed Inform. 2006;39(6):656–67.

  24. Coleman J, Slee A. Guidelines for hazard review of eprescribing decision support. 2009.

  25. Handler JA, Feied CF, Coonan K, Vozenilek J, Gillam M, Peacock PR Jr, et al. Computerized physician order entry and online decision support. Acad Emerg Med Off J Soc Acad Emerg Med. 2004;11(11):1135–41.

  26. Horsky J, Schiff GD, Johnston D, Mercincavage L, Bell D, Middleton B. Interface design principles for usable decision support: a targeted review of best practices for clinical prescribing interventions. J Biomed Inform. 2012;45(6):1202–16.

  27. Jenders RA, Osheroff JA, Sittig DF, Pifer EA, Teich JM. Recommendations for clinical decision support deployment: synthesis of a roundtable of medical directors of information systems. AMIA Annu Symp Proc. 2007;2007:359–63.

  28. Kilsdonk E, Peute LW, Jaspers MW. Factors influencing implementation success of guideline-based clinical decision support systems: a systematic review and gaps analysis. Int J Med Inform. 2017;98:56–64.

  29. Kilsdonk E, Peute LWP, Knijnenburg SL, Jaspers MWM. Factors known to influence acceptance of clinical decision support systems. Stud Health Technol Inform. 2011;169:150–4.

  30. McCormack JL, Ash JS. Clinician perspectives on the quality of patient data used for clinical decision support: a qualitative study. AMIA Annu Symp Proc. 2012;2012:1302–9.

  31. Osheroff JA, Healthcare Information and Management Systems Society. Improving outcomes with clinical decision support : an implementer's guide. 2nd ed. Chicago: HIMSS; 2012.

  32. Payne TH, Hines LE, Chan RC, Hartman S, Kapusnik-Uner J, Russ AL, et al. Recommendations to improve the usability of drug-drug interaction clinical decision support alerts. J Am Med Inform Assoc. 2015;22(6):1243–50.

  33. Pelayo S, Marcilly R, Bernonville S, Leroy N, Beuscart-Zephir M-C. Human factors based recommendations for the design of medication related clinical decision support systems (CDSS). Stud Health Technol Inform. 2011;169:412–6.

  34. Sittig DF, Singh H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care. 2010;19(Suppl 3):i68–74.

  35. Sittig DF, Wright A, Osheroff JA, Middleton B, Teich JM, Ash JS, et al. Grand challenges in clinical decision support. J Biomed Inform. 2008;41(2):387–92.

  36. Sweidan M, Williamson M, Reeve JF, Harvey K, O'Neill JA, Schattner P, et al. Identification of features of electronic prescribing systems to support quality and safety in primary care using a modified Delphi process. BMC Med Inform Decis Mak. 2010;10(1):21.

  37. Teich JM, Osheroff JA, Pifer EA, Sittig DF, Jenders RA, Panel CDSER. Clinical decision support in electronic prescribing: recommendations and an action plan: report of the joint clinical decision support workgroup. J Am Med Inform Assoc. 2005;12(4):365–76.

  38. van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2006;13(2):138–47.

  39. Wright A, Ash JS, Erickson JL, Wasserman J, Bunce A, Stanescu A, et al. A qualitative study of the activities performed by people involved in clinical decision support: recommended practices for success. J Am Med Inform Assoc. 2014;21(3):464–72.

  40. Wright A, Sittig DF, Ash JS, Bates DW, Feblowitz J, Fraser G, et al. Governance for clinical decision support: case studies and recommended practices from leading institutions. J Am Med Inform Assoc. 2011;18(2):187–94.

  41. Osheroff JA, editor. Improving medication use and outcomes with clinical decision support: a step-by-step guide. HIMSS; 2009.

  42. Ahmadian L, van Engen-Verheul M, Bakhshi-Raiez F, Peek N, Cornet R, de Keizer NF. The role of standardized data and terminological systems in computerized clinical decision support systems: literature review and survey. Int J Med Inform. 2011;80(2):81–93.

  43. Mollon B, Chong JJR, Holbrook AM, Sung M, Thabane L, Foster G. Features predicting the success of computerized decision support for prescribing: a systematic review of randomized controlled trials. BMC Med Inform Decis Mak. 2009;9(1):11.

  44. Holt TA, Thorogood M, Griffiths F. Changing clinical practice through patient specific reminders available at the time of the clinical encounter: systematic review and meta-analysis. J Gen Intern Med. 2012;27(8):974–84.

  45. Arditi C, Rege-Walther M, Wyatt JC, Durieux P, Burnand B. Computer-generated reminders delivered on paper to healthcare professionals: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2012;12:CD001175.

  46. Damiani G, Pinnarelli L, Colosimo SC, Almiento R, Sicuro L, Galasso R, et al. The effectiveness of computerized clinical guidelines in the process of care: a systematic review. BMC Health Serv Res. 2010;10(1):2.

  47. Fillmore CL, Bray BE, Kawamoto K. Systematic review of clinical decision support interventions with potential for inpatient cost reduction. BMC Med Inform Decis Mak. 2013;13(1):135.

  48. Goddard K, Roudsari A, Wyatt JC. Automation bias: a systematic review of frequency, effect mediators, and mitigators. J Am Med Inform Assoc. 2012;19(1):121–7.

  49. Lobach D, Sanders GD, Bright TJ, Wong A, Dhurjati R, Bristow E, et al. Enabling health care decisionmaking through clinical decision support and knowledge management. Evid Rep Technol Assess (Full Rep). 2012;203(203):1–784.

  50. Nuckols TK, Smith-Spangler C, Morton SC, Asch SM, Patel VM, Anderson LJ, et al. The effectiveness of computerized order entry at reducing preventable adverse drug events and medication errors in hospital settings: a systematic review and meta-analysis. Syst Rev. 2014;3(1):56.

  51. Shojania KG, Jennings A, Mayhew A, Ramsay C, Eccles M, Grimshaw J. Effect of point-of-care computer reminders on physician behaviour: a systematic review. CMAJ. 2010;182(5):E216–25.

  52. Gillaizeau F, Chan E, Trinquart L, Colombet I, Walton RT, Rege-Walther M, et al. Computerized advice on drug dosage to improve prescribing practice. Cochrane Database Syst Rev. 2013;11:CD002894.

  53. Goldzweig CL, Orshansky G, Paige NM, Miake-Lye IM, Beroes JM, Ewing BA, et al. Electronic health record-based interventions for improving appropriate diagnostic imaging: a systematic review and meta-analysis. Ann Intern Med. 2015;162(8):557–65.

  54. Main C, Moxham T, Wyatt JC, Kay J, Anderson R, Stein K. Computerised decision support systems in order communication for diagnostic, screening or monitoring test ordering: systematic reviews of the effects and cost-effectiveness of systems. Health Technol Assess. 2010;14(48):1–227.

  55. Pearson S-A, Moxey A, Robertson J, Hains I, Williamson M, Reeve J, et al. Do computerised clinical decision support systems for prescribing change practice? A systematic review of the literature (1990-2007). BMC Health Serv Res. 2009;9:154.

  56. Schedlbauer A, Prasad V, Mulvaney C, Phansalkar S, Stanton W, Bates DW, et al. What evidence supports the use of computerized alerts and prompts to improve clinicians’ prescribing behavior? J Am Med Inform Assoc. 2009;16(4):531–8.

  57. Balshem H, Helfand M, Schunemann HJ, Oxman AD, Kunz R, Brozek J, et al. GRADE guidelines: 3. Rating the quality of evidence. J Clin Epidemiol. 2011;64(4):401–6.

  58. Goud R, van Engen-Verheul M, de Keizer NF, Bal R, Hasman A, Hellemans IM, et al. The effect of computerized decision support on barriers to guideline implementation: a qualitative study in outpatient cardiac rehabilitation. Int J Med Inform. 2010;79(6):430–7.

  59. Martens JD, van der Aa A, Panis B, van der Weijden T, Winkens RA, Severens JL. Design and evaluation of a computer reminder system to improve prescribing behaviour of GPs. Stud Health Technol Inform. 2006;124:617–23.

  60. Tai SS, Nazareth I, Donegan C, Haines A. Evaluation of general practice computer templates. Lessons from a pilot randomised controlled trial. Methods Inf Med. 1999;38(3):177–81.

  61. McDermott L, Yardley L, Little P, van Staa T, Dregan A, McCann G, et al. Process evaluation of a point-of-care cluster randomised trial using a computer-delivered intervention to reduce antibiotic prescribing in primary care. BMC Health Serv Res. 2014;14:594.

  62. Miller A, Moon B, Anders S, Walden R, Brown S, Montella D. Integrating computerized clinical decision support systems into clinical work: a meta-synthesis of qualitative research. Int J Med Inform. 2015;84(12):1009–18.

  63. Ackerman SL, Gonzales R, Stahl MS, Metlay JP. One size does not fit all: evaluating an intervention to reduce antibiotic prescribing for acute bronchitis. BMC Health Serv Res. 2013;13(1):462.

  64. Lugtenberg M, Weenink JW, van der Weijden T, Westert GP, Kool RB. Implementation of multiple-domain covering computerized decision support systems in primary care: a focus group study on perceived barriers. BMC Med Inform Decis Mak. 2015;15:82.

  65. Flottorp S, Havelsrud K, Oxman AD. Process evaluation of a cluster randomized trial of tailored interventions to implement guidelines in primary care—why is it so hard to change practice? Fam Pract. 2003;20(3):333–9.

  66. Keeffe B, Subramanian U, Tierney WM, Udris E, Willems J, McDonell M, et al. Provider response to computer-based care suggestions for chronic heart failure. Med Care. 2005;43(5):461–5.

  67. Moxey A, Robertson J, Newby D, Hains I, Williamson M, Pearson S-A. Computerized clinical decision support for prescribing: provision does not guarantee uptake. J Am Med Inform Assoc. 2010;17(1):25–33.

  68. Roumie CL, Elasy TA, Wallston KA, Pratt S, Greevy RA, Liu X, et al. Clinical inertia: a common barrier to changing provider prescribing behavior. Jt Comm J Qual Patient Saf. 2007;33(5):277–85.

  69. Wilson BJ, Torrance N, Mollison J, Watson MS, Douglas A, Miedzybrodzka Z, et al. Cluster randomized trial of a multifaceted primary care decision-support intervention for inherited breast cancer risk. Fam Pract. 2006;23(5):537–44.

  70. Tierney WM, Miller ME, Overhage JM, McDonald CJ. Physician inpatient order writing on microcomputer workstations. Effects on resource utilization. JAMA. 1993;269(3):379–83.

  71. Bloomfield HE, Nelson DB, van Ryn M, Neil BJ, Koets NJ, Basile JN, et al. A trial of education, prompts, and opinion leaders to improve prescription of lipid modifying therapy by primary care physicians for patients with ischemic heart disease. Qual Saf Health Care. 2005;14(4):258–63.

  72. Tierney WM, Overhage JM, Murray MD, Harris LE, Zhou XH, Eckert GJ, et al. Can computer-generated evidence-based care suggestions enhance evidence-based management of asthma and chronic obstructive pulmonary disease? A randomized, controlled trial. Health Serv Res. 2005;40(2):477–97.

  73. Hobbs FD, Delaney BC, Carson A, Kenkre JE. A prospective controlled trial of computerized decision support for lipid management in primary care. Fam Pract. 1996;13(2):133–7.

  74. Apkon M, Mattera JA, Lin Z, Herrin J, Bradley EH, Carbone M, et al. A randomized outpatient trial of a decision-support information technology tool. Arch Intern Med. 2005;165(20):2388–94.

  75. Sequist TD, Gandhi TK, Karson AS, Fiskio JM, Bugbee D, Sperling M, et al. A randomized trial of electronic clinical reminders to improve quality of care for diabetes and coronary artery disease. J Am Med Inform Assoc. 2005;12(4):431–7.

  76. Eccles M, McColl E, Steen N, Rousseau N, Grimshaw J, Parkin D, et al. Effect of computerised evidence based guidelines on management of asthma and angina in adults in primary care: cluster randomised controlled trial. BMJ. 2002;325(7370):941.

  77. Hetlevik I, Holmen J, Kruger O, Kristensen P, Iversen H. Implementing clinical guidelines in the treatment of hypertension in general practice. Blood Press. 1998;7(5–6):270–6.

  78. Fortuna RJ, Zhang F, Ross-Degnan D, Campion FX, Finkelstein JA, Kotch JB, et al. Reducing the prescribing of heavily marketed medications: a randomized controlled trial. J Gen Intern Med. 2009;24(8):897–903.

  79. McCowan C, Neville RG, Ricketts IW, Warner FC, Hoskins G, Thomas GE. Lessons from a randomized controlled trial designed to evaluate computer decision support software to improve the management of asthma. Med Inform Internet Med. 2001;26(3):191–201.

  80. Bindels R, Hasman A, Derickx M, van Wersch JWJ, Winkens RAG. User satisfaction with a real-time automated feedback system for general practitioners: a quantitative and qualitative study. Int J Qual Health Care. 2003;15(6):501–8.

  81. Lugtenberg M, Pasveer D, van der Weijden T, Westert GP, Kool RB. Exposure to and experiences with a computerized decision support intervention in primary care: results from a process evaluation. BMC Fam Pract. 2015;16(1):141.

  82. Rotman BL, Sullivan AN, McDonald TW, Brown BW, DeSmedt P, Goodnature D, et al. A randomized controlled trial of a computer-based physician workstation in an outpatient setting: implementation barriers to outcome evaluation. J Am Med Inform Assoc. 1996;3(5):340–8.

  83. Fretheim A, Havelsrud K, Oxman A. Rational Prescribing in Primary care (RaPP): process evaluation of an intervention to improve prescribing of antihypertensive and cholesterol-lowering drugs. Implement Sci. 2006;1(1):19.

  84. Semler MW, Weavind L, Hooper MH, Rice TW, Gowda SS, Nadas A, et al. An electronic tool for the evaluation and treatment of sepsis in the ICU: a randomized controlled trial. Crit Care Med. 2015;43(8):1595–602.

  85. Bindels R, Hasman A, van Wersch JW, Talmon J, Winkens RA. Evaluation of an automated test ordering and feedback system for general practitioners in daily practice. Int J Med Inform. 2004;73(9–10):705–12.

  86. Bourgeois FC, Linder J, Johnson SA, Co JP, Fiskio J, Ferris TG. Impact of a computerized template on antibiotic prescribing for acute respiratory infections in children and adolescents. Clin Pediatr (Phila). 2010;49(10):976–83.

  87. El-Kareh RE, Gandhi TK, Poon EG, Newmark LP, Ungar J, Orav EJ, et al. Actionable reminders did not improve performance over passive reminders for overdue tests in the primary care setting. J Am Med Inform Assoc. 2011;18(2):160–3.

  88. Barnard KD, Cradock S, Parkin T, Skinner TC. Effectiveness of a computerised assessment tool to prompt individuals with diabetes to be more active in consultations. Pract Diabetes Int. 2007;24(1):36–41. 6p.

  89. Keely E, Clark H, Karovitch A, Graham I. Screening for type 2 diabetes following gestational diabetes: family physician and patient perspectives. Can Fam Physician. 2010;56(6):558–63.

  90. Johnson KB, Ho YX, Cala CM, Davison C. Showing your work: impact of annotating electronic prescriptions with decision support results. J Biomed Inform. 2010;43(2):321–5.

  91. Lin ND, Martins SB, Chan AS, Coleman RW, Bosworth HB, Oddone EZ, et al. Identifying barriers to hypertension guideline adherence using clinician feedback at the point of care. AMIA Annu Symp Proc. 2006;2006:494–8.

  92. Ross J, Stevenson F, Lau R, Murray E. Factors that influence the implementation of e-health: a systematic review of systematic reviews (an update). Implement Sci. 2016;11(1):146.

  93. Wu HW, Davis PK, Bell DS. Advancing clinical decision support using lessons from outside of healthcare: an interdisciplinary systematic review. BMC Med Inform Decis Mak. 2012;12:90.

  94. Delvaux N, De Sutter A, Van de Velde S, Ramaekers D, Fieuws S, Aertgeerts B. Electronic laboratory medicine ordering with evidence-based order sets in primary care (ELMO study): protocol for a cluster randomised trial. Implement Sci. 2017;12(1):147.

  95. Peleg M. Computer-interpretable clinical guidelines: a methodological review. J Biomed Inform. 2013;46(4):744–63.

  96. Egan M, Bambra C, Petticrew M, Whitehead M. Reviewing evidence on complex social interventions: appraising implementation in systematic reviews of the health effects of organisational-level workplace interventions. J Epidemiol Community Health. 2009;63(1):4–11.

  97. Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gulmezoglu M, et al. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Med. 2015;12(10):e1001895.

  98. Cambon L, Minary L, Ridde V, Alla F. A tool to analyze the transferability of health promotion interventions. BMC Public Health. 2013;13:1184.

  99. Brender J, Talmon J, de Keizer N, Nykanen P, Rigby M, Ammenwerth E. STARE-HI—Statement on Reporting of Evaluation Studies in Health Informatics: explanation and elaboration. Appl Clin Inform. 2013;4(3):331–58.

  100. Fixsen DL, Blase KA, Duda MA, Naoom SF, Van Dyke M. Implementation of evidence-based treatments for children and adolescents: research findings and their implications for the future. In: Weisz JR, Kazdin AE, editors. Evidence-based psychotherapies for children and adolescents. 2nd ed. New York: Guilford Press; 2010. p. 435–50.


Acknowledgements

We would like to thank the GUIDES Expert Panel for their expertise in this project and the patients and public participants who shared their views and experiences with us. We also want to thank the persons who participated in the pilot testing of the checklist: Nicolas Delvaux, Hanne Cloetens, Linn Brandt, David Spitaels, Annemie Heselmans and Luis Marco-Ruiz. The authors also thank Andrew Garratt for his advice on rating scales and psychometric testing.

Public and patient participants: Tom Codron, Kari Håvelsrud, Sarah Rosenbaum and Satu Salonen.

Collaborating authors: Smisha Agarwal, Leila Ahmadian, David Bates, Linn Brandt, Romina Brignardello-Petersen, Carl Cauwenbergh, Yaolong Chen, Nicholas Conway, Nicolas Delvaux, Pierre Durieux, Robert El-Kareh, Atle Fretheim, Robert Greenes, Robert Brian Haynes, Annemie Heselmans, Tim Holt, Robert A. Jenders, Kensaku Kawamoto, Tamara Kredo, Edwin Lomotan, Marjolein Lugtenberg, Luis Marco-Ruiz, Colin McCowan, Lisa McDermott, Stephanie Medlock, Maria Michaels, Blackford Middleton, Marc Mitchell, Lorenzo Moja, Michael Mugisha, Jerome A. Osheroff, Pablo Alonso Coello, Sallie-Anne Pearson, Sylvia Pelayo, Joshua Richardson, Peeter Ross, Nard Schreurs, Matthew Semler, Dean Sittig, Tigest Tamrat, Madis Tiik, Anna Turusheva, Heleen van der Sijs, Robert Vander Stichele, Mieke Vermandere, Abigail Viall, Mahima Venkateswaran, Adam Wright and Taryn Young.

Funding

This project has received funding from the EU’s Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement No 654981. The funder was not involved in any parts of the research.

Availability of data and materials

All data generated or analysed during this study are included in this published article and its supplementary information files.

Author information

Contributions

SVDV wrote this paper, with all authors commenting on drafts and approving the final version. SVDV and SF reviewed the research evidence and frameworks, SVDV extracted and synthesised the CDS success factors, and SF double-checked this work. SVDV coordinated the research, and SF is the guarantor for this study.

Corresponding author

Correspondence to Stijn Van de Velde.

Ethics declarations

Ethics approval and consent to participate

Not applicable

Competing interests

Ilkka Kunnamo, Per Olav Vandvik, Blackford Middleton, David Bates, Jerome Osheroff, Kensaku Kawamoto, Lorenzo Moja, Nard Schreurs, Nicolas Delvaux and Smisha Agarwal declared financial relationships that present or might present a potential conflict of interest. Other relationships that present or might present a potential conflict of interest were declared by Linn Brandt (doing research about a specific type of CDS), Nard Schreurs (relationships with software vendors) and Mieke Vermandere (implementation of a specific type of CDS). The other authors had no potential conflicts of interest.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Expert panel feedback. (DOCX 73 kb)

Additional file 2:

Patient and health consumers feedback. (DOCX 33 kb)

Additional file 3:

GUIDES checklist. (PDF 901 kb)

Additional file 4:

Importance ratings of GUIDES factors. (DOCX 28 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


Cite this article

Van de Velde, S., Kunnamo, I., Roshanov, P. et al. The GUIDES checklist: development of a tool to improve the successful use of guideline-based computerised clinical decision support. Implementation Sci 13, 86 (2018). https://doi.org/10.1186/s13012-018-0772-3
