Implementation costs of hospital-based computerised decision support systems: a systematic review
Implementation Science volume 18, Article number: 7 (2023)
The importance of accurately costing implementation strategies is increasingly recognised within the field of implementation science. However, there is a lack of methodological guidance for costing implementation, particularly within digital health settings. This study reports on a systematic review of costing analyses conducted alongside implementation of hospital-based computerised decision support systems.
PubMed, Embase, Scopus and CINAHL databases were searched between January 2010 and August 2021. Two reviewers independently screened and selected original research studies that were conducted in a hospital setting, examined the implementation of a computerised decision support system and reported implementation costs. The Expert Recommendations for Implementing Change Framework was used to identify and categorise implementation strategies into clusters. A previously published costing framework was applied to describe the methods used to measure and value implementation costs. The reporting quality of included studies was assessed using the Consolidated Health Economic Evaluation Reporting Standards checklist.
Titles and abstracts of 1836 articles were screened, with nine articles eligible for inclusion in the review. Implementation costs were most frequently reported under the ‘evaluative and iterative strategies’ cluster, followed by ‘provide interactive assistance’. Labour was the largest implementation-related cost in the included papers, irrespective of implementation strategy. Other reported costs included consumables, durable assets and physical space, which were mostly associated with stakeholder training. The methods used to cost implementation were often unclear. There was variation across studies in the overall quality of reporting.
A relatively small number of papers have described computerised decision support systems implementation costs, and the methods used to measure and value these costs were not well reported. Priorities for future research should include establishing consistent terminology and appropriate methods for estimating and reporting on implementation costs.
The review protocol is registered with PROSPERO (ID: CRD42021272948).
Computerised decision support systems (CDSS) are digital health technologies designed to analyse patients' clinical data to assist in clinical decision-making at the point of care. CDSS can be knowledge based, which involves predetermined rules based on literature, clinical practice or patients, or non-knowledge based, which utilises advanced statistical pattern recognition, including artificial intelligence and machine learning, to produce an action or output from the data. The use of CDSS, and other digital health innovations, is increasing as hospitals move towards fully electronic systems for capturing and communicating patient data. However, the complexity of health systems presents challenges for CDSS implementation. Commonly reported barriers include the costs of implementing new systems, as well as technology-specific concerns. Changes to workflow practices associated with the adoption of new digital health initiatives often require behavioural change from healthcare workers to ensure implementation is successful and sustainable.
Implementation science provides a method to understand the factors that support the successful adoption of evidenced-based innovations. Implementation strategies can then be used to target these factors from an innovation, individual, organisational or system perspective. In evaluating implementation, cost is understood to be a necessary element for success and sustainability. However, costs associated with implementation strategies are under-reported in the literature and often excluded from economic evaluations despite being a recognised implementation outcome [7–9].
A lack of common language surrounding the term ‘implementation’ across disciplines can lead to ambiguity about what constitutes an implementation cost. In the implementation science literature, ‘implementation’ refers to the set of actions/methods used to facilitate behaviour change to support the adoption, integration and sustainment of innovations into clinical practice. Conversely, in an IT context ‘implementation’ more commonly refers to the operationalisation of an innovation, while in software engineering, ‘implementation’ often includes the software coding process. This review aligns with the implementation science definition and is concerned with the costs associated with implementation strategies.
Approaches to costing implementation strategies are emerging; however, none have been designed for the unique context of digital health innovations. Technological considerations are inter-related with other social and organisational considerations, and unintended consequences can emerge if these dimensions are not balanced, which can lead to implementation failure. An enhanced understanding of the resourcing requirements associated with the implementation of digital health initiatives would provide decision-makers with more accurate and transparent information that would likely support successful implementation efforts.
The aim of this review was to describe the nature of implementation costs that have been estimated within hospital-based CDSS initiatives and to document the methods that have been used to measure and value these costs. This review may inform ongoing research efforts to establish appropriate methodology for costing implementation efforts within digital health and healthcare settings more broadly.
This systematic review is reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for systematic reviews and is registered with PROSPERO (ID: CRD42021272948).
The population, intervention, comparator, outcomes, study design (PICOS) framework was used to define the eligibility criteria. Studies that met the following criteria were included:
Population—Conducted in a hospital setting, encompassing both inpatient (tertiary) and outpatient (secondary) services.
Intervention/exposure—CDSS defined as a computerised system used by healthcare professionals to assist in clinical decision-making at the point of care. This excludes technologies that do not involve a decision support element, including those concerned only with medical records, communication, information exchange, clinical performance, infection outbreak warning and shared decision-making.
Outcomes—Reported the direct and indirect costs of implementation strategies associated with the implementation of CDSS.
Study design—Studies reporting original research.
Articles were excluded if: they were published prior to 2010 (for pragmatic reasons and due to the rapidly developing literature in this field); they were an abstract or dissertation; the setting was primary care, a virtual hospital, hospital in the home, other in-home services, or not specified as a hospital; they reported on CDSS development but not implementation; they reported on a digital health intervention that did not involve a decision support element (consistent with our definition of CDSS); or no implementation strategies were costed. No language restrictions were applied.
PubMed, Embase (Elsevier), Scopus (Elsevier) and CINAHL (EBSCOhost) databases were searched using a combination of subject headings and keywords across four categories: ‘cost’, ‘implementation’, ‘CDSS’ and ‘hospital’. The search results were restricted to journal articles published between January 2010 and 02 August 2021 (last date searched). Forward and backward citation searching was performed on included papers. The detailed search strategy is outlined in Additional file 1.
All records retrieved from the search were imported into EndNote, deduplicated and then imported into Rayyan for screening. Two authors (T. D. and M. F.) independently performed title and abstract screening of all records in Rayyan. Abstracts that met the inclusion criteria progressed to full-text screening, which was independently conducted by two authors (T. D. and H. C.). Discrepancies at any stage in the screening process were resolved through discussion with T. D., M. F. and H. C. to reach consensus on which articles to include. Study investigators were not contacted for more information.
Data collection and analysis
One author (T. D.) extracted the data from the included studies using a form that was piloted and revised by T. D. and H. C. The data extraction form included study characteristics (author, year published, country, study design) as well as the clinical setting, clinical condition, CDSS type, implementation science theory/model/framework employed, economic study design, implementation strategies used and implementation costs.
The Expert Recommendations for Implementing Change (ERIC) framework was used to identify implementation strategies and related CDSS implementation costs. Implementation strategies reported in the included papers were mapped to one or more discrete implementation strategies defined in the ERIC framework. These individual strategies were then categorised into one of nine clusters of implementation strategies. The ‘humans, “things” and space’ costing framework was applied to identify and describe the methods used to measure CDSS implementation costs. The framework costs hospital interventions by capturing the staff time of ‘humans’ involved, ‘things’ including durable assets and consumables, and the physical space utilised.
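As a concrete illustration, this categorisation step can be thought of as a lookup from each reported strategy to its ERIC cluster. The sketch below uses a small, illustrative subset of strategy and cluster names, not the full ERIC taxonomy.

```python
# Hypothetical sketch of mapping reported strategies to ERIC clusters.
# The lookup table below is a small illustrative subset, not the full
# 73-strategy ERIC compilation.

ERIC_CLUSTER = {
    "audit and provide feedback": "use evaluative and iterative strategies",
    "centralize technical assistance": "provide interactive assistance",
    "conduct ongoing training": "train and educate stakeholders",
    "identify and prepare champions": "develop stakeholder interrelationships",
}

def map_to_clusters(reported_strategies):
    """Group a paper's reported strategies by their ERIC cluster."""
    clusters = {}
    for strategy in reported_strategies:
        cluster = ERIC_CLUSTER.get(strategy, "unmapped")
        clusters.setdefault(cluster, []).append(strategy)
    return clusters

paper = ["conduct ongoing training", "identify and prepare champions"]
# map_to_clusters(paper) groups each strategy under its cluster,
# with unrecognised entries collected under "unmapped"
```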
Data were collated and synthesised using narrative and descriptive summaries. No attempt at meta-analysis was made given the heterogeneity in target population, intervention, study design and outcome measures across included studies.
The reporting quality of included papers was assessed using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist because most were economic evaluations. Two authors (T. D. and H. C.) independently conducted the quality assessment, and discrepancies were resolved through discussion.
Figure 1 illustrates the article selection process. The search strategy yielded 1836 articles after removing duplicates, from which we reviewed the full text of 133 articles against the defined inclusion criteria. We excluded 125 of these for the following reasons: it was a duplicate (n = 1); it was not an original study (n = 1); the setting was primary care (n = 4); the CDSS was developed but not implemented (n = 20); or CDSS implementation was not costed (n = 99). The remaining 8 articles were included in the review [23–30]. Forward citation searching (n = 166) and backward citation searching (n = 240) were conducted on the 8 included papers, identifying 1 additional paper that met the inclusion criteria. Therefore, 9 papers were included in this review.
Key characteristics of the included studies are summarised in Table 1. All except one of the nine included studies were from high-income countries. Two included papers conducted prospective studies across multiple centres [30, 31]. All other studies were retrospective observational single-centre designs that described implementation within a single service [23–29].
All articles were conducted within the context of a health economic study design. Four of the studies were cost-effectiveness analyses [23, 27, 29, 31], three were cost analyses [25, 26, 28] and two were cost-benefit analyses [24, 30]. Economic evaluations took the perspective of the health service [23, 27–29, 31] or did not specify a perspective [24–26, 30]. None of the articles reported using an implementation science theory, model or framework to assist CDSS implementation.
The type of CDSS implemented varied across the included studies. Three papers included an alert or early warning system that aided in the identification of patients at risk according to the respective clinical guideline [23, 24, 26]. For example, one of the papers investigated a paediatric early warning system that used a multicomponent scoring tool with an action algorithm to identify hospitalised children with clinical deterioration. Three papers implemented a computerised provider order entry (CPOE) system, which aims to reduce medication errors and adverse drug events by minimising illegible writing, unstructured orders and dosing variability [27, 30, 31]. Another study implemented order sets, which are bundles of clinical orders grouped together to improve adherence to clinical guidelines. Three papers implemented a management system for specific clinical contexts. Management systems are multi-functional, typically multidisciplinary and often integrate CPOE systems. One included paper investigated a patient data management system that supported bedside clinical documentation. Its functionalities included data acquisition from monitoring and medical devices; bedside CPOE facilitating calculations of drug doses and fluid balances; import interfaces for laboratory/microbiology/radiology data and surgery reports; automated calculation of ventilation times; automated scoring; and semiautomated coding of diagnoses/procedures with an interface for exporting data directly to the electronic billing system. Another included study implemented a patient blood management system, defined as "an evidence-based, multidisciplinary approach to optimising the care of patients who might need transfusion". It aimed to ensure optimal treatment was given while reducing avoidable/inappropriate use of blood and blood components. An electronic medication management system (eMMS) was the final management system studied in an included paper.
The eMMS interfaced with the existing CPOE, allowing doctors to prescribe medication electronically, with additional functionalities alerting for drug allergies, pregnancy warnings, therapeutic duplication and some dose-range checking.
CDSS implementation strategies and cost
Implementation strategies were extracted from the information contained in the included papers and categorised against the ERIC framework. Identified implementation strategies were then cross-checked with reported costs in the included papers to determine if the strategy had been costed. A summary of these findings is presented in Table 2. Figure 2 provides a high-level illustration of the number of papers that costed, partially costed or did not report costs for each cluster of implementation strategies.
Use of evaluative and iterative strategies
Eight papers reported ‘using evaluative and iterative strategies’ [23–29, 31]. The most common implementation strategy in this cluster was staging implementation scale-up, which was conducted in five of the papers [24, 25, 27, 29, 31] and costed in four [25, 27, 29, 31]. Staged scale-up ranged from 6 months to 6 years. Audit and provide feedback [23, 28] and purposely re-examine the implementation [24, 31] were implementation strategies present in more than one included paper and consistently costed. Agulnik et al. were the only paper to partially cost the strategies reported under this cluster: the purposeful re-examination of implementation was costed, but the paper did not report costs for the three other strategies (assessing for readiness and identifying barriers and facilitators, staging implementation scale-up, and developing and organising a quality monitoring system).
Provide interactive assistance
Seven papers ‘provided interactive assistance’ [23–25, 27, 28, 30, 31]. Centralising technical assistance was the most common implementation strategy in this cluster and was costed in all papers reporting it [27, 28, 30] except one. Centralised technical assistance included centralised IT staff to maintain the CDSS, help desk support [27, 30] and "a member of the [electronic patient record (EPR)] staff was responsible for implementing it on the EPR system". Three papers reported providing clinical supervision, and all three provided costs for the strategy [23, 24, 28].
Train and educate stakeholders
Seven papers reported strategies in the ‘train and educate stakeholders’ cluster, and all seven conducted ongoing training as an implementation strategy [23, 24, 27–31]. Training included a nursing skills fair, a 30-min training workshop [24, 28] and annual clinical personnel training. Three papers did not provide details of the training [27, 30, 31]. Most papers costed ongoing training, except for two papers that partially costed it by reporting the costs for some clinicians (including pharmacists and nurses) but not for physicians [24, 30]. The final paper that partially costed implementation strategies reported under this cluster was Swart et al.; ongoing training and educational meetings were costed, but train the trainer was not. Developing educational materials was present in more than one included paper and was consistently costed [23, 24].
Develop stakeholder interrelationships
Six papers ‘developed stakeholder interrelationships’ by employing five different implementation strategies [23, 24, 27–30]. Identifying and preparing champions was a common strategy in this cluster and was consistently costed across the papers [24, 28, 30]. Champions and their cost allocations included the following: a nurse educator with 50% salary support, a transfusion practitioner at 0.8 full-time equivalent (FTE) and a physician champion with an unclear cost allocation. Only two papers partially costed this cluster [23, 24]. Afshar et al. costed two strategies (using advisory boards and workgroups and involving executive boards) but did not cost the final strategy in this cluster (using an implementation advisor). As mentioned, Agulnik et al. costed identifying and preparing champions but did not cost two other strategies: modelling and simulating change and using advisory boards and workgroups.
Adapt and tailor to context
Five papers promoted adaptability of the CDSS and consulted clinicians in the process [24, 26, 28, 29, 31]. Clinician consultation occurred regardless of whether the CDSS was commercially sourced ("configuration…required the equivalent of one full-time pharmacist") or built in-house ("the Transfusion Practitioner was also involved in the design of the CDSS"). One of the five papers also tailored strategies by leveraging an identified facilitator to substitute training with an informational memo sent to the competent clinician group. Only one implementation strategy in this cluster was not costed.
Other implementation strategies
Three papers ‘supported clinicians’ by facilitating the relay of clinical data to providers [23, 28] and creating new clinical teams. All implementation strategies in the cluster to support clinicians were costed [23, 28, 31]. Two papers reported implementation strategies under the cluster ‘utilise financial strategies’, but only one costed the strategies [25, 27]. Two papers described mandating change under the ‘change infrastructure’ cluster, and both papers costed this implementation strategy [23, 31]. None of the included studies reported any implementation strategies categorised in the ‘engage consumers’ cluster.
Six of the nine studies mentioned that workflows or protocols required alteration to assist with the uptake of the CDSS [23, 24, 27, 29–31]. This implementation activity had no associated code under the ERIC framework; thus, the category ‘workflow alterations’ was created. One study explained that implementing the CPOE required a fundamental shift in workflow to allow "prescribing at the point of care … and required a computer to be installed in each examination room". Prior to this CPOE implementation, prescribing occurred in offices or at workstations away from patients. One study did not expand on the extent of workflow alteration but mentioned "workflow-related issues". Three papers costed this implementation strategy [23, 29, 31].
Measuring CDSS implementation costs
Table 3 outlines the approaches used by each study to measure and value implementation costs categorised as either ‘humans’, ‘things’ or ‘space’.
All papers reported ‘humans’ to be an implementation cost. Two papers measured staff labour directly using activity diaries [26, 29], and one paper also interviewed staff. The activity diary used by Field et al. comprised weekly reports of staff members' time spent on a set of predefined activities. Westbrook et al. used work diaries and clinical personnel training schedules, as well as interviews with hospital pharmacists and with clinical information technology (IT) and hospital managers who were involved in eMMS implementation and maintenance, to confirm the accuracy of the data. Although Westbrook et al. recorded human-related implementation costs, these costs were not incorporated into the economic evaluation model, as the authors assumed that "staff time spent in attending eMMS training sessions was incorporated into their existing workloads as no new staff were employed to cover their time". The authors further explained that they used an incremental approach to costing, accounting only for additional staff time associated with the eMMS implementation.
The remaining seven papers did not clearly report the methods they used to cost ‘humans’ [23–25, 27, 28, 30, 31]. Four papers costed personnel time using the scheduled duration of planned implementation strategies; however, the authors did not explore the possibility of variations to scheduled strategies, for example the need for additional training sessions or meetings that ran over the allotted time [23, 24, 27, 28]. Agulnik et al. measured training costs "using the mean base salary for nurses at [National Paediatric Oncology] multiplied by the amount of time required for individual training", and Swart et al. measured ongoing personnel costs from listed activities including training and meetings. Similarly, three papers reported the FTE portion of staff contributing to implementation strategies, but how the authors determined the FTE quota was not clear [24, 28, 29]. For example, Swart et al. costed their transfusion practitioner, who assisted with implementation strategies, "at 0.8 FTE as this was appropriate to the project at the time these data were recorded". Two papers did not report any information on how personnel implementation costs were measured [30, 31].
The included papers valued staff time in three different ways. Four papers obtained the actual staff salaries at their respective hospitals [24, 28, 29, 31]. Four papers, all from the USA, valued staff time using average salary rates for the region or nationwide obtained from the Bureau of Labor Statistics [23, 26, 27, 30]. The study by Castellanos et al. described the salary rates applied in the costing but did not mention where these rates were obtained.
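The two labour-costing approaches described above, planned activity time multiplied by a salary rate and an FTE allocation multiplied by an annual salary, reduce to simple arithmetic. The sketch below illustrates both with entirely hypothetical figures, not values taken from any included study.

```python
# Illustrative sketch only: two common ways the included papers costed
# 'humans'. All figures are hypothetical.

def activity_cost(hours_planned: float, staff_count: int,
                  hourly_salary: float) -> float:
    """Planned activity time x number of staff x hourly salary rate."""
    return hours_planned * staff_count * hourly_salary

def fte_cost(fte_fraction: float, annual_salary: float, years: float) -> float:
    """A partial full-time-equivalent allocation x annual salary x duration."""
    return fte_fraction * annual_salary * years

# e.g. a 30-min training workshop for 40 nurses at $45/h
training = activity_cost(0.5, 40, 45.0)    # 900.0
# e.g. a champion allocated at 0.8 FTE on a $90,000 annual salary for 1 year
champion = fte_cost(0.8, 90_000, 1)        # 72000.0
total_labour = training + champion         # 72900.0
```

Note that this approach only captures scheduled time; as observed above, sessions that run over the allotted time or require repetition would be missed unless measured directly (e.g. with activity diaries).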
Things and space
Three papers costed ‘things’ associated with implementation strategies [23, 24, 29]. ‘Things’ were classified as consumables or durable assets. Two papers costed consumables associated with education or training activities, including "pocket sepsis reference cards" and "training materials and study supplies (including paper, copies, educational materials, computer, Internet)". It was unclear how the consumables were costed, but sunk costs appear to have been reported. Westbrook et al. costed durable assets, including furniture and equipment for training purposes as well as custom equipment to improve usability. The durable assets were estimated over the time horizon using the unit cost, useful lifespan and quantities reported. All ‘things’ were valued at market price. Only one paper costed ‘space’ associated with the implementation effort: Westbrook et al. included the rental opportunity cost of the space where training took place, measured at market price and attributed over a 15-year time horizon.
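An amortisation consistent with the description above (unit cost spread over useful lifespan, counted within the evaluation time horizon) can be sketched as follows. The straight-line method and all figures here are assumptions for illustration, not reported values from any included study.

```python
# Hypothetical sketch of straight-line amortisation of a durable asset:
# the purchase cost is spread over the asset's useful lifespan, and only
# the years of use falling within the evaluation time horizon are counted.

def amortised_cost(unit_cost: float, quantity: int,
                   lifespan_years: float, horizon_years: float) -> float:
    """Straight-line annual cost x years of use within the time horizon."""
    annual = (unit_cost * quantity) / lifespan_years
    return annual * min(lifespan_years, horizon_years)

# e.g. 10 training laptops at $1,200 each with a 5-year lifespan, evaluated
# over a 15-year horizon: the full purchase price falls within the horizon
laptops = amortised_cost(1_200, 10, 5, 15)    # 12000.0
# e.g. a rental opportunity cost for training space at $8,000/year, attributed
# over the same 15-year horizon
space = 8_000 * 15                            # 120000.0
```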
We used the 2022 CHEERS checklist to assess the reporting quality of economic evaluations in the included studies. The percentage of applicable items not reported ranged from 12 to 42% across the papers. The following items were generally reported well: setting and location; outcome selection, measurement and valuation; costs and resource measurement and valuation; model rationale and description; characterising heterogeneity and uncertainty; and the abstract, introduction, main results and discussion. Common limitations across all papers included the absence of a health economic analysis plan and of any description of the approach to, and effect of, engaging with patients and others affected by the study. Additional limitations included not identifying the study as an economic evaluation in the title and not reporting the study population, comparators, perspective, time horizon or discount rate. Additional file 2 contains detailed scores for each included study.
This systematic review identified nine papers that reported costs of implementation strategies relating to the introduction of a CDSS in a hospital setting. The relatively low number of included papers demonstrates the lack of reporting on implementation costs in this field. Implementation strategies and costs detailed in the included papers spanned all clusters of implementation strategies as defined within the ERIC framework, except for ‘engage consumers’. However, the methods used to cost implementation strategies were not well reported. Labour was the main implementation cost reported across the papers, irrespective of implementation strategy or cluster.
An absence of implementation costs in economic evaluations is problematic as it can contribute to an underestimation of costs, falsely optimistic cost-effectiveness estimates and a disconnect between published evidence and public health decision-making. Nonetheless, the limited reporting on CDSS implementation costs that we describe here is consistent with studies evaluating implementation of digital health technologies and clinical innovations more broadly. A 2006 systematic review investigated the impact of health information technologies on healthcare quality, efficacy and costs and was unable to determine cost efficiency as only 3 of 54 included studies reported implementation costs. A 2016 systematic review that evaluated the application of economic analysis within the field of improvement and implementation science found that only six of the thirty included studies described implementation costs. These costs included preparatory work, training, education and ongoing costs regarding care quality and outcomes. All costs and outcomes were evaluated retrospectively, leading the authors to propose that prospective economic evaluations in implementation studies may not be conducted due to a lack of awareness, a perceived lack of importance, political or social sensitivities related to digital health investments, or simply that they are not reported in academic publications or other public forums.
Another explanation for the lack of reporting on implementation costs may be a lack of capability among digital health implementation evaluators, compounded by a lack of appropriate methodological direction and practical tools for use in prospective economic evaluations within this field [38, 39]. Approaches to costing implementation strategies are still emerging. Saldana et al. developed a cost-mapping tool called the Cost of Implementing New Strategies (COINS). The tool records and maps costs and resources used in an implementation process to the Stages of Implementation Completion (SIC) framework, in a manner similar to the time-driven activity-based costing approach, an established business accounting method. COINS can measure the direct costs and indirect costs (e.g. personnel effort) required throughout implementation stages. Further work by Sohn et al. developed a conceptual framework for assessing implementation costs. The framework classifies implementation costs by resource type, key activities, implementation stage (design, initiation and maintenance), site level (site specific vs ‘above service’ or ‘central’ costs) and as programmatic versus non-programmatic (research), to allow for generalisation to other settings and for key drivers of implementation costs to be identified. Another recent approach by Cidav et al. combines time-driven activity-based costing with an implementation science framework, by Proctor et al., typically used to identify, specify and report implementation strategies and evaluate implementation effectiveness. This pragmatic method outlines the names, actions, actors and temporality of each implementation strategy, determines the duration of each action and then assigns a dollar value to the resources that each action consumes. The resulting data show how specific components of an implementation strategy influence its overall cost.
Finally, two papers have recently been published describing key considerations when costing implementation [10, 42]. Gold and colleagues explained how cost can be included and measured in implementation studies by utilising traditional economic evaluations that compare the costs and effectiveness of health interventions. Eisman and colleagues outlined that economic information should be presented in clear and meaningful ways to different stakeholder groups throughout the implementation effort and provided recommendations for cost assessment activities. Despite these recent advances, there are no currently available methods or tools designed for the unique context of digital health implementation.
The digital health setting contains a range of additional challenges when costing implementation. In the included studies, we observed that ‘intervention’ versus ‘implementation’ costs were not clearly defined and were often study specific. This ambiguity makes costing implementation strategies challenging. For example, software and hardware are integral to the digital health innovation itself and would therefore typically be considered an intervention cost. However, Westbrook et al. described the introduction of long-life battery laptops fixed to custom trolleys in response to specific hardware issues that were identified in focus groups during the 1-year evaluation [29, 43]. In this sense, the laptops could be considered an adaptation strategy to support implementation and therefore be included as an implementation cost. A clear set of common definitions around implementation in these contexts would minimise confusion.
Challenges defining CDSS implementation costs in a generalisable way can also be compounded by complexity associated with background levels of organisational digital maturity. Among organisations with higher levels of digital maturity, existing enterprise architecture and related processes, including business-as-usual workforce capacity building activities, may impact measurement of the resource use and costs required for the implementation of new CDSS initiatives. Evaluating the costs of implementing a similar CDSS initiative in an organisation with a lower level of digital maturity may require additional types and volumes of implementation activities to achieve the same level of CDSS effectiveness. This also highlights the challenge of dealing with sunk implementation costs (e.g. prior investments in lifespan-limited information technology implementations on which a new CDSS is dependent) and opportunity costs (e.g. labour time diverted away from productive clinical care activities to participate in CDSS-related education and training activities). These issues were largely not addressed with methodological rigour among the included studies.
Implications for costing implementation strategies in digital innovation
The methods used to cost implementation in the included papers adopted an ‘accounting’ approach to estimating costs; these approaches involve a financial analysis of direct operating costs. However, for economic evaluations to inform healthcare investment policy, consideration of whether equal or greater health gains could have been achieved elsewhere for the same levels of investment is required [20, 44]. This may include consideration of indirect costs such as lost productivity. This is important, as economic costs have been shown to be 4 to 15% greater than accounting costs. None of the current approaches or tools available to cost implementation considers opportunity costs [8, 35, 40]; the importance of opportunity costs has been highlighted in recent contributions to the literature [10, 42]. While considering stakeholders with diverse perspectives and key cost components during implementation, Eisman et al. stated that in implementation efforts which largely involve the time of frontline staff, productivity is a trade-off that should be estimated as an opportunity cost. Additionally, Gold and colleagues have outlined that opportunity costs are specific to the decision-maker and may change over the time horizon.
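The distinction between accounting and economic costs can be made concrete with a minimal sketch: direct outlays plus the opportunity cost of frontline staff time diverted from clinical care. All values below are hypothetical and chosen only to illustrate the calculation.

```python
# Minimal sketch (hypothetical values) of the accounting vs economic cost
# distinction: economic cost adds the opportunity cost of frontline staff
# time diverted from productive clinical care to implementation activities.

def economic_cost(accounting_cost: float, hours_diverted: float,
                  value_per_clinical_hour: float) -> float:
    """Direct operating costs plus the opportunity cost of diverted labour."""
    return accounting_cost + hours_diverted * value_per_clinical_hour

direct = 50_000                        # e.g. trainers, materials, licences
econ = economic_cost(direct, 100, 60)  # 56000.0
uplift = (econ - direct) / direct      # 0.12, i.e. 12% above the accounting cost
```

Under these assumed figures the economic cost exceeds the accounting cost by 12%, within the 4 to 15% range cited above; in practice the uplift depends on how much frontline time the implementation consumes and how that time is valued.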
CDSS are often implemented to increase service productivity. However, the introduction of new software or a change to established workflows is often accompanied by a ‘learning curve’ effect, where productivity may decline in the short term as staff adapt to the change [45, 46]. In the case of clinicians, the learning curve can make it difficult to provide the same standard of care in the same amount of time. A separate study that investigated barriers to CDSS implementation outlined that because a clinician’s pay schedule is attached to the number of patients seen daily, the learning curve becomes a barrier to implementation. Additionally, a systematic review that investigated the perceived barriers and facilitators of electronic health record (EHR) implementation found that loss of clinical productivity was perceived by physicians as a significant barrier. Conversely, productivity was perceived as a facilitator in studies that explored the perceptions of health professionals, managers and patients, which demonstrated that EHRs were seen as positively influencing workplace efficiency and communication. Two papers included in our review mentioned a learning curve but did not attempt to assign a cost to it, either because it was assumed to be incorporated into existing workloads or because it was not present due to a 2-year incremental implementation approach.
This review emphasises the importance of ‘workflow alterations’ as an implementation strategy in digital health innovation. While this is not a strategy previously identified in the ERIC framework, a newly developed evaluation framework for EHR-integrated innovations emphasises the importance of having an innovation integrate seamlessly into existing clinical and information system workflows. Additionally, workflow challenges have been cited as a barrier to EHR adoption. The papers included in this review only mentioned alterations to clinical workflows and not to information system workflows. Notably, only two of the five papers that mentioned workflow alterations reported that their CDSS was integrated with an EHR system [23, 27], and only one paper interviewed IT staff. Workflow challenges appear to be a factor contributing to the complexity of implementing technology and should be considered when adopting digital health innovations.
An additional indirect cost consideration within the field of digital health relates to the expected life cycle of technology. Technology hardware has a relatively limited lifespan, typically between 2 and 7 years depending on replacement practices. Only two of the studies included within this review considered technology life cycle costs. Westbrook et al. calculated the total cost of hardware using its cost per unit and a 3-year useful life, repeated over a 15-year time horizon, and then determined the annualised cost attributed to the cardiology ward using that ward’s proportion of hospital beds. In the other study, the expected life of hardware was the same as the time horizon, 5 years, and no specific lifespan considerations were incorporated into the cost analysis.
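As a minimal sketch of the annualisation approach described above (using hypothetical figures, not values drawn from the included studies): hardware purchases are repeated over the time horizon according to the asset’s useful life, the ward’s share is apportioned by its fraction of hospital beds, and that share is divided by the horizon to annualise.

```python
def ward_hardware_cost(unit_cost, n_units, useful_life, horizon,
                       ward_beds, total_beds):
    """Total and annualised hardware cost attributed to one ward.

    All figures are illustrative assumptions; a real analysis would use
    observed purchase prices and the hospital's replacement schedule.
    """
    # Number of purchase cycles needed to cover the horizon
    # (e.g. a 3-year useful life over a 15-year horizon -> 5 purchases).
    cycles = -(-horizon // useful_life)  # ceiling division
    total = unit_cost * n_units * cycles
    # Apportion the hospital-wide cost to the ward by bed share.
    ward_share = total * (ward_beds / total_beds)
    return ward_share, ward_share / horizon

# Hypothetical example: 50 devices at $2000 each, 3-year life,
# 15-year horizon, a 30-bed ward in a 600-bed hospital.
total_ward, annualised = ward_hardware_cost(
    unit_cost=2_000, n_units=50, useful_life=3, horizon=15,
    ward_beds=30, total_beds=600)
print(total_ward, annualised)
```

Note that this simple straight-line apportionment ignores discounting; a full economic evaluation would typically also discount the cost of future replacement cycles to present values.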
Software can also have a limited life cycle and can become obsolete or be iterated over time. This has been identified as an issue when developing the evidence base for specific programmes, as subsequent software generations can become available before a traditional randomised controlled trial is completed [49, 50]. Depending on the nature of hardware and software assets, the costs of de-implementing technology may need to be considered within the context of economic evaluation. De-implementation is its own process, separate from implementation, with relevant behaviour change strategies incurring additional resource use and associated costs. Cost is often used as a justification for de-implementation but, as with implementation, little research exists on the costs associated with de-implementation strategies. None of the included papers considered de-implementation, despite one analysing a time horizon of 15 years.
The strength of our findings is limited by the quality of the included studies. The quality of reporting varied across the papers, and missing information was not uncommon. Implementation strategies were not always clearly or comprehensively reported, although this is a common limitation in studies reporting on complex behaviour change interventions [53, 54]. None of the included studies employed an implementation science theory, model or framework; however, this trend may change with the recent development of a framework that addresses the unique challenges of implementing digital health innovations: the nonadoption, abandonment, scale-up, spread and sustainability (NASSS) framework, developed to predict and evaluate the success of technology-supported health (and social) care programmes. Finally, all data in the included papers were collected retrospectively. Prospective data collection about implementation may allow for more comprehensive and accurate costings in future studies.
Few papers have reported the costs associated with the implementation of CDSS in hospitals. Where these costs have been reported, there have been inconsistencies in terminology and approaches, and the methods used to assign costs were generally not well reported. Future research is needed to establish consistent terminology and appropriate methods for estimating and reporting on implementation costs within the context of digital health. Specific areas of focus should include accounting for technology life cycles including de-implementation costs, as well as the workforce productivity impacts associated with adapting to new technologies or processes.
Availability of data and materials
The authors declare that the data supporting the findings of this review are available within the paper.
CDSS: Computerised decision support system
CHEERS: Consolidated Health Economic Evaluation Reporting Standards
COINS: Cost of implementing new strategies
CPOE: Computerised provider order entry
EHR: Electronic health record
EMMS: Electronic medication management system
EPR: Electronic patient record
ERIC: Expert Recommendations for Implementing Change framework
PICOS: Population, intervention, comparator, outcomes, study design
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
Kim S, Kim EH, Kim HS. Physician knowledge base: clinical decision support systems. Yonsei Med J. 2022;63(1):8–15.
Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med. 2020;3:17.
Marwaha JS, Landman AB, Brat GA, Dunn T, Gordon WJ. Deploying digital health tools within large, complex health systems: key considerations for adoption and implementation. NPJ Digit Med. 2022;5(1):13.
Kruse CS, Kristof C, Jones B, Mitchell E, Martinez A. Barriers to electronic health record adoption: a systematic literature review. J Med Syst. 2016;40(12):252.
Pieterse M, Kip H, Cruz-Martínez RR. The complexity of ehealth implementation: a theoretical and practical perspective. eHealth Research, Theory and Development: A Multi-Disciplinary Approach. London: Routledge; 2018. p. 247–70.
Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76.
Bowser DM, Henry BF, McCollister KE. Cost analysis in implementation studies of evidence-based practices for mental health and substance use disorders: a systematic review. Implement Sci. 2021;16(1):26.
Cidav Z, Mandell D, Pyne J, Beidas R, Curran G, Marcus S. A pragmatic method for costing implementation strategies using time-driven activity-based costing. Implement Sci. 2020;15(1):28.
Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8(6):iii–iv 1-72.
Eisman AB, Quanbeck A, Bounthavong M, Panattoni L, Glasgow RE. Implementation science issues in understanding, collecting, and using cost estimates: a multi-stakeholder perspective. Implement Sci. 2021;16(1):75.
Lutkevich B. What is implementation? Tech Target; 2022. Available from: https://www.techtarget.com/searchcustomerexperience/definition/implementation#:~:text=Implementation%20is%20the%20execution%20or,for%20something%20to%20actually%20happen.
Kukhareva PV, Weir C, Del Fiol G, Aarons GA, Taft TY, Schlechter CR, et al. Evaluation in life cycle of information technology (ELICIT) framework: supporting the innovation life cycle from business case assessment to summative evaluation. J Biomed Inform. 2022;127:104014.
Cresswell K, Sheikh A. Organizational issues in the implementation and adoption of health information technology innovations: an interpretative review. Int J Med Inform. 2013;82(5):e73–86.
Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA statement. Int J Surg. 2010;8(5):336–41.
Akers J. Systematic reviews: CRD’s guidance for undertaking reviews in health care. York: Centre for Reviews and Dissemination, University of York; 2008.
Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210.
Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.
Drummond M, Sculpher M, Claxton K, Stoddart G, Torrance G. Methods for economic evaluation of health care programmes. USA: Oxford University Press; 2015.
Waltz TJ, Powell BJ, Matthieu MM, Damschroder LJ, Chinman MJ, Smith JL, et al. Use of concept mapping to characterize relationships among implementation strategies and assess their feasibility and importance: results from the Expert Recommendations for Implementing Change (ERIC) study. Implement Sci. 2015;10(1):109.
Page K, Graves N, Halton K, Barnett AG. Humans, ‘things’ and space: costing hospital infection control interventions. J Hosp Infect. 2013;84(3):200–5.
Husereau D, Drummond M, Augustovski F, de Bekker-Grob E, Briggs AH, Carswell C, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) 2022 explanation and elaboration: a report of the ISPOR CHEERS II Good Practices Task Force. Value Health. 2022;25(1):10–31.
Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.
Afshar M, Arain E, Ye C, Gilbert E, Xie M, Lee J, et al. Patient outcomes and cost-effectiveness of a sepsis care quality improvement program in a health system. Crit Care Med. 2019;47(10):1371–9.
Agulnik A, Antillon-Klussmann F, Soberanis Vasquez DJ, Arango R, Moran E, Lopez V, et al. Cost-benefit analysis of implementing a pediatric early warning system at a pediatric oncology hospital in a low-middle income country. Cancer. 2019;125(22):4052–8.
Castellanos I, Schuttler J, Prokosch HU, Burkle T. Does introduction of a patient data management system (PDMS) improve the financial situation of an intensive care unit? BMC Med Inform Decis Mak. 2013;13:107.
Field TS, Garber L, Gagne SJ, Tjia J, Preusse P, Donovan JL, et al. Technological resources and personnel costs required to implement an automated alert system for ambulatory physicians when patients are discharged from hospitals to home. Inform Prim Care. 2012;20(2):87–93.
Forrester SH, Hepp Z, Roth JA, Wirtz HS, Devine EB. Cost-effectiveness of a computerized provider order entry system in improving medication safety ambulatory care. Value Health. 2014;17(4):340–9.
Swart N, Morris S, Murphy MF. Economic value of clinical decision support allied to direct data feedback to clinicians: blood usage in haematology. Vox Sang. 2020;115(4):293–302.
Westbrook JI, Gospodarevskaya E, Li L, Richardson KL, Roffe D, Heywood M, et al. Cost-effectiveness analysis of a hospital electronic medication management system. J Am Med Inform Assoc. 2015;22(4):784–93.
Zimlichman E, Keohane C, Franz C, Everett WL, Seger DL, Yoon C, et al. Return on investment for vendor computerized physician order entry in four community hospitals: the importance of decision support. Jt Comm J Qual Patient Saf. 2013;39(7):312–8.
Vermeulen KM, van Doormaal JE, Zaal RJ, Mol PGM, Lenderink AW, Haaijer-Ruskamp FM, et al. Cost-effectiveness of an electronic medication ordering system (CPOE/CDSS) in hospitalized patients. Int J Med Inform. 2014;83(8):572–80.
Agulnik A, Mora Robles LN, Forbes PW, Soberanis Vasquez DJ, Mack R, Antillon-Klussmann F, et al. Improved outcomes after successful implementation of a pediatric early warning system (PEWS) in a resource-limited pediatric oncology hospital. Cancer. 2017;123(15):2965–74.
van Doormaal JE, Mol PG, Zaal RJ, van den Bemt PM, Kosterink JG, Vermeulen KM, et al. Computerized physician order entry (CPOE) system: expectations and experiences of users. J Eval Clin Pract. 2010;16(4):738–43.
Devine EB, Hollingworth W, Hansen RN, Lawless NM, Wilson-Norton JL, Martin DP, et al. Electronic prescribing at the point of care: a time-motion study in the primary care setting. Health Serv Res. 2010;45(1):152–71.
Sohn H, Tucker A, Ferguson O, Gomes I, Dowdy D. Costing the implementation of public health interventions in resource-limited settings: a conceptual framework. Implement Sci. 2020;15(1):86.
Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742–52.
Roberts SLE, Healey A, Sevdalis N. Use of health economic evaluation in the implementation and improvement science fields—a systematic literature review. Implement Sci. 2019;14(1):72.
Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, et al. Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health. 2019;7:3.
Eisman AB, Kilbourne AM, Dopp AR, Saldana L, Eisenberg D. Economic evaluation in implementation science: making the business case for implementation strategies. Psychiatry Res. 2020;283:112433.
Saldana L, Chamberlain P, Bradford WD, Campbell M, Landsverk J. The cost of implementing new strategies (COINS): a method for mapping implementation resources using the stages of implementation completion. Child Youth Serv Rev. 2014;39:177–82.
Saldana L, Ritzwoller DP, Campbell M, Block EP. Using economic evaluations in implementation science to increase transparency in costs and outcomes for organizational decision-makers. Implement Sci Commun. 2022;3(1):40.
Gold HT, McDermott C, Hoomans T, Wagner TH. Cost data in implementation science: categories and approaches to costing. Implement Sci. 2022;17(1):11.
Day RO, Roffe DJ, Richardson KL, Baysari MT, Brennan NJ, Beveridge S, et al. Implementing electronic medication management at an Australian teaching hospital. Med J Aust. 2011;195(9):498–502.
Soares MO, Sculpher MJ, Claxton K. Health opportunity costs: assessing the implications of uncertainty using elicitation methods with experts. Med Decis Making. 2020;40(4):448–59.
Trivedi MH, Kern JK, Marcee A, Grannemann B, Kleiber B, Bettinger T, et al. Development and implementation of computerized clinical guidelines: barriers and solutions. Methods Inf Med. 2002;41(5):435–42.
Almutairi MS, Alseghayyir RM, Al-Alshikh AA, Arafah HM, Househ MS. Implementation of computerized physician order entry (CPOE) with clinical decision support (CDS) features in Riyadh hospitals to improve quality of information. Stud Health Technol Inform. 2012;180:776–80.
Trivedi MH, Daly EJ, Kern JK, Grannemann BD, Sunderajan P, Claassen CA. Barriers to implementation of a computerized decision support system for depression: an observational report on lessons learned in “real world” clinical settings. BMC Med Inform Decis Mak. 2009;9:6.
McGinn CA, Grenier S, Duplantie J, Shaw N, Sicotte C, Mathieu L, et al. Comparison of user groups’ perspectives of barriers and facilitators to implementing electronic health records: a systematic review. BMC Med. 2011;9:46.
Klonoff DC, Kerr D, Wong JC, Pavlovic Y, Koliwad S, Hu J, et al. Digital Diabetes Congress 2017. J Diabetes Sci Technol. 2017;11(5):1045–52.
Vervoort D, Tam DY, Wijeysundera HC. Health Technology Assessment for Cardiovascular Digital Health Technologies and Artificial Intelligence: why is it different? Can J Cardiol. 2022;38(2):259–66.
Walsh-Bailey C, Tsai E, Tabak RG, Morshed AB, Norton WE, McKay VR, et al. A scoping review of de-implementation frameworks and models. Implement Sci. 2021;16(1):100.
Voorn VMA, van Bodegom-Vos L, So-Osman C. Towards a systematic approach for (de)implementation of patient blood management strategies. Transfus Med. 2018;28(2):158–67.
Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.
Michie S, Fixsen D, Grimshaw JM, Eccles MP. Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci. 2009;4:40.
Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A'Court C, et al. Beyond adoption: a new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. J Med Internet Res. 2017;19(11):e367.
The authors gratefully thank the Queensland University of Technology and Australian Government Research Training Program Scholarship for supporting this work.
This work was supported by the Digital Health Cooperative Research Centre (DHCRC). DHCRC is funded under the Commonwealth’s Cooperative Research Centres (CRC) Program. SM is supported by a National Health and Medical Research Council (NHMRC) administered fellowship (no. 1181138). The funders had no role in study design or decision to submit for publication.
Ethics approval and consent to participate
Not applicable. This article does not contain any studies with human participants or animals performed by any of the authors.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional file 1.
Systematic review search string for each database.
Additional file 2.
Assessment of included studies against the 2022 Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Donovan, T., Abell, B., Fernando, M. et al. Implementation costs of hospital-based computerised decision support systems: a systematic review. Implementation Sci 18, 7 (2023). https://doi.org/10.1186/s13012-023-01261-8
- Implementation costs
- Computerised decision support systems
- Digital health
- Economic evaluation