Table 1 CDSS uptake features

From: Do providers use computerized clinical decision support systems? A systematic review and meta-regression of clinical decision support uptake

CDSS context

1. Was there a formal process to identify barriers and enablers to current behaviour prior to the CDSS study? e.g. mapping barriers and enablers to intervention components

2. Was there a previous study using a CDSS targeting the same primary outcome as the current study, and was that outcome significantly improved by the CDSS?

3. Is there a gap between the desired and the baseline clinical behaviour identified by study authors?

4. Has the availability and quality of the patient data needed to inform the CDSS been formally evaluated? e.g. chart review, validation of patient-facing electronic questionnaires

5. Does use of the CDSS enable improvement of the quality of patient data compared to current standard of care? e.g. electronic collection of data, including patient-reported outcomes

6. Was there a formal pre-study evaluation of user perceptions that assessed informational needs and/or perceived benefit to using CDSS, and if so, was it positive?

7. Was specific additional hardware (other than what was already present as part of usual care) required and available for the CDSS?

8. Does the use of the CDSS negatively impact the function of existing information systems? e.g. causing new technical issues or slower electronic health record function

9. Was a formal workflow analysis conducted prior to formalization of the intervention and did it demonstrate intervention feasibility?

10. If a workflow analysis was performed, did it demonstrate that baseline workflow would allow the introduction of the CDSS?

CDSS content

11. Are the developers from an academic centre, and do they report no significant conflict of interest?

12. Is the CDSS advice based on disease-specific guidelines?

13. Does the CDSS present its reasoning and/or cite research evidence to the user at the time of advice?

14. Does the CDSS present the harms/benefits of provided guidance?

15. Was the CDSS pilot tested and was the accuracy of information specifically assessed?

16. Was there a post-study evaluation of users and were their information needs addressed?

17. Does the CDSS clearly explain/indicate why it was triggered for specific patients/situations?

18. Was there a pre- or post-study evaluation of users and was CDSS information/advice clear?

19. Is the CDSS advice available in the location and software system in which it will be implemented?

20. Does the CDSS advice contradict any current guidelines?

21. If the CDSS was pilot tested, were there any issues with the amount of decision support delivered?

CDSS system

22. Was there a formal usability evaluation performed for the CDSS and was it found to be usable?

23. Was there a pre- or post-study evaluation of users and was workflow facilitation found to be positive?

24. Can the system be customized to provide improved user functionality?

25. Is the system always up and running?

26. Was there a pre- or post-study evaluation of users and was the advice delivery format found to be appropriate?

27. Was there a pre- or post-study evaluation of users and was the visual display/graphic design of CDSS advice found to be appropriate?

28. If the CDSS used specific functions for prioritized decision support (e.g. pop-ups), were they pilot tested or assessed in a post-study evaluation?

29. Does the CDSS provide advice directly to users who will be making the relevant clinical decisions?

30. Does the CDSS facilitate collaboration between healthcare providers?

31. Does the CDSS provide advice at the moment and point-of-need?

CDSS implementation

32. Was information about the CDSS available to users (e.g. practical instructions)?

33. Are dedicated personnel and/or web- or paper-based resources available to CDSS users for technical support (e.g. a help desk)?

34. Was user training provided for the CDSS?

35. Were other barriers to the behaviour changes targeted by the CDSS discussed (e.g. medication costs), and if so, were strategies implemented to address those barriers?

36. Was the CDSS implemented in temporal steps?

37. Were CDSS usage and performance evaluated during the study?

38. If CDSS usage and performance were monitored during the study, were there strategies in place to fix any identified problems?

39. Were local users consulted during the intervention planning or implementation?

40. Was there discussion of an overall strategy (e.g. a Knowledge Translation strategy) to guide the CDSS initiative?

Additional factors from Roshanov et al.

41. Was the CDSS developer involved in authorship of the study?

42. Was CDSS advice provided automatically in the practitioner’s workflow?

43. Did the CDSS provide advice for patients (e.g. educational material)?

44. Did the CDSS require a reason for overriding its use or recommendations?

45. Does the CDSS have a critiquing function for actions (i.e. does it activate after orders are entered, suggesting they be cancelled or revised)?

46. Does the practitioner have to enter data directly into the CDSS?

47. Does the CDSS provide advice or reminders directly to patients (i.e. independent of the clinician)?

48. Was the CDSS a commercial product?

49. Did practitioners receive advice directly through an electronic interface?

50. Did the CDSS target healthcare practitioners other than physicians?

51. Was periodic performance feedback provided in addition to patient-specific system advice?

52. Was there a co-intervention in the CDSS group?
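
The review treats the 52 items above as trial-level features for a meta-regression of CDSS uptake. As a minimal, purely illustrative sketch of how such an instrument might be coded for analysis, the Python below records a handful of items as yes/no/not-reported values and converts them to a numeric covariate row. The class, field names, item selection, and example values are assumptions for illustration, not the authors' actual data model.

```python
# Illustrative sketch only: encoding a few of the 52 appraisal items as
# trial-level covariates. All names and values here are hypothetical.

from dataclasses import dataclass
from typing import Optional


@dataclass
class CdssTrialFeatures:
    """One appraised CDSS trial; True/False per item, None if not reported."""
    trial_id: str
    # CDSS context, e.g. item 1: formal barrier/enabler identification
    item_01_barrier_analysis: Optional[bool] = None
    # CDSS system, e.g. item 31: advice at the moment and point of need
    item_31_point_of_need: Optional[bool] = None
    # Additional factors from Roshanov et al., e.g. item 42
    item_42_automatic_in_workflow: Optional[bool] = None
    uptake_proportion: Optional[float] = None  # observed CDSS uptake


def to_covariate_row(t: CdssTrialFeatures) -> dict:
    """Convert to a 1/0 numeric row (missing kept as None) for regression."""
    def code(v: Optional[bool]) -> Optional[int]:
        return None if v is None else int(v)

    return {
        "trial_id": t.trial_id,
        "x01": code(t.item_01_barrier_analysis),
        "x31": code(t.item_31_point_of_need),
        "x42": code(t.item_42_automatic_in_workflow),
        "y_uptake": t.uptake_proportion,
    }


row = to_covariate_row(
    CdssTrialFeatures("trial-A", item_31_point_of_need=True, uptake_proportion=0.62)
)
print(row)
# {'trial_id': 'trial-A', 'x01': None, 'x31': 1, 'x42': None, 'y_uptake': 0.62}
```

Keeping "not reported" distinct from "no" matters for an instrument like this, since many trials will simply not describe an item and conflating the two would bias any regression on these features.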