- Open Access
Economic evaluation of implementation strategies in health care
© Hoomans and Severens; licensee BioMed Central Ltd. 2014
Received: 8 August 2014
Accepted: 18 September 2014
Published: 18 December 2014
Economic evaluations can inform decisions about the efficiency and allocation of resources to implementation strategies—strategies explicitly designed to inform care providers and patients about the best available research evidence and to enhance its use in their practices. These strategies are increasingly popular in health care, especially in light of growing concerns about quality of care and limits on resources. Yet such concerns have rarely prompted health authorities and other decision-makers to include some form of economic evaluation in their assessments of implementation strategies. This editorial addresses the importance of economic evaluation in the context of implementation science—particularly, how these analyses can be most efficiently incorporated into decision-making processes about implementation strategies.
Economic evaluation assesses the efficiency and allocation of resources to interventions that may improve health care and health outcomes. Economic evaluation applies not only to decisions about interventions or services that directly target patients, like pharmacological treatments and medical devices, but also to decisions about implementation strategies, which are explicitly designed to inform care providers and patients about the best available research evidence and to enhance its use in their practices.
Many inefficiencies in health-care delivery result from overuse of unnecessary services, underuse of beneficial interventions, or medical errors. In light of growing concerns about the quality of care and budgetary pressures, implementation strategies are used to improve service delivery and outcomes. Potentially effective strategies to promote the uptake of services can be as straightforward as clinical decision support, education, and financial incentives, or as complex as total quality management and reforms of health-care systems.
Empirical studies of the effects of implementation strategies on behavior change and health outcomes have become more numerous. Many explore how to most effectively address particular problems of implementation, a question that may be answered with insights into the mechanisms by which implementation works and the use of behavior change theory from disciplines such as psychology and sociology.
But although the prominence of implementation science in health services is increasing, relatively little attention has been paid to another important aspect of implementation strategies: these efforts demand resources and thus have costs. Depending on the perspective of the decision-maker and their objective(s), the cost of implementation may include the following: 1) costs associated with executing implementation strategies; 2) the excess cost of service delivery as uptake or implementation changes; 3) the opportunity cost to providers and patients partaking in the implementation activities; and 4) research and development-related expenses resulting from the process of implementing change in health care. Unless the budget for implementation is sufficient, not all possible implementation projects can be supported. Trade-offs must be made, and these trade-offs merit an analysis that can compare costs to benefits and identify the opportunity cost of choices—in other words, an economic evaluation.
Use of economic evaluation
Despite the prevalence of economic evaluation in health services research, its use is not standard practice in assessing implementation strategies. Recent reviews revealed fewer than 60 studies of the efficiency of strategies for implementing clinical practice guidance before 2008, with no substantial progress since then [a]. The number of economic evaluations contrasts sharply with the number of studies on implementation strategies that assess only their effects on behavior change and health outcomes.
Why are implementation decisions so seldom guided by economic evaluation? Some of the more plausible reasons include divergent views on cost and cost-effectiveness, limited resources for evaluative research, and the paucity of data for decision-making.
Views on the appropriate role of economics in evaluating implementation strategies may differ importantly between people and over time. Some revert to basic ethical tenets and moral obligations to discard information on cost completely; others, more realistically, view considerations of cost as secondary or complementary to other criteria, such as clinical effectiveness. But regardless of these differences of opinion and how they affect evaluation and decision-making, particular implementation strategies do have costs. Simply ignoring these implications can have undesirable consequences, such as inefficiencies and inequities that compromise the accessibility and delivery of health services—the very problems that motivate spending on the implementation of research evidence in the first place.
Limits on research capacity and appropriate data seem plausible reasons for decision-makers not to base implementation decisions on some form of economic evaluation, but in fact conceal a paradox: implementation decisions need economic evaluations that produce good-quality data for these decisions to be well-informed; economic evaluations need decisions that utilize their results for these evaluations to be supported.
Methods of economic evaluation
Overview of forms of economic evaluation
| Form of evaluation | Use for decision making | Measurement of health effects | Economic summary measure |
|---|---|---|---|
| Cost-consequences analysis | Comparison of implementation strategies that have disparate outcomes | — | — |
| Cost-effectiveness analysis | Comparison of implementation strategies that produce a common outcome | Process measures (e.g., professional guidance adherence, patient compliance to medication) or health effects (intermediate or final), measured in natural units | Cost-effectiveness ratio (e.g., cost per case averted, cost per life-year saved), at patient or population level |
| Cost-utility analysis | Comparison of implementation strategies that have morbidity and mortality outcomes | Final health outcomes, including health status, patient preferences, utilities | Cost per quality-adjusted life-year, at patient or population level |
| Cost-benefit analysis | Comparison of implementation strategies with different units of outcome (health and nonhealth) | — | Net health benefit or net monetary benefit, at patient or population level |
| Cost-minimization analysis | Comparison of net cost of implementation strategies with equivalent outcomes | — | Net cost or cost of illness, at patient or population level |
Cost and cost-consequences analysis
Common to all forms of economic evaluation is the analysis of cost. Properly conducted—collecting adequate data on the use of all relevant resources for implementation and assigning appropriate tariffs or prices to those resources—cost analysis can help decision-makers answer an important question: How much more will it cost to pursue implementation efforts? (Depending on the question and the purpose of the cost information, costing may require detailed analysis and accuracy-focused methods such as micro-costing or activity-based costing.) But information from cost analysis alone, such as budget impact analyses or patient-level cost-minimization studies, is generally insufficient to determine whether intervening in an implementation problem makes economic sense. Unless the potential alternative strategies are certain to have the same health outcomes across provider and patient practices over time—which is unlikely—implementation decisions will require a joint comparison of costs and outcomes by full economic evaluation.
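A micro- or activity-based costing exercise amounts to multiplying each resource's measured use by a unit tariff and summing. The sketch below illustrates this; all activities and prices are hypothetical assumptions, not tariffs from any source cited here.

```python
# Illustrative micro-costing of an implementation strategy: multiply each
# resource's measured use by a unit tariff and sum. All activities and
# prices below are hypothetical assumptions.

resource_use = {                      # units of each resource consumed
    "educational_sessions": 12,       # outreach sessions held
    "facilitator_hours": 40,          # facilitator time, in hours
    "provider_hours_attended": 96,    # providers' time (opportunity cost)
    "printed_materials": 500,         # guideline summaries distributed
}
unit_tariffs = {                      # assumed price per unit, in euros
    "educational_sessions": 350.0,
    "facilitator_hours": 60.0,
    "provider_hours_attended": 110.0,
    "printed_materials": 2.5,
}

total_cost = sum(units * unit_tariffs[r] for r, units in resource_use.items())
print(f"Total implementation cost: EUR {total_cost:,.2f}")
```

In practice, the opportunity-cost line for provider time is often the largest and most contested item, which is why the perspective of the analysis must be stated up front.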
At the simplest level, economic evaluation entails listing all the cost and outcome implications of each potential choice, as in cost-consequences analysis. This form of economic evaluation was applied in a trial-based study of task substitution in the diagnosis of fibromyalgia. Compared to a specialist-led process, a nurse-led process was reported to have higher patient satisfaction scores, equivalent health outcomes, and lower consumption of care and other resources.
Analyses like these have distinct uses. They provide information for spending decisions to address problems in health care when the possible implementation strategies are expected to have outcomes too disparate to be combined meaningfully. Cost-consequences analyses permit value judgments without having to fully specify a relation among all the different outcome measures. And yet merely listing the cost and outcome implications of implementation strategies fails to meet an important objective of economic evaluation—to make explicit the opportunity costs of alternative resource uses.
Cost-effectiveness and cost-utility analyses
These opportunity costs can be assessed directly using other forms of economic evaluation: cost-effectiveness or cost-utility analysis. Incremental cost-effectiveness ratios are established by dividing the difference in costs between implementation strategies by the corresponding difference in health outcomes. Again, the measure(s) of outcome most appropriate for ratio calculations depends, to an important extent, on the objective of decision-making and the perspective of analysis. Common metrics include incremental cost per life-year gained or per quality-adjusted life-year (QALY).
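As a minimal sketch, the incremental cost-effectiveness ratio is the cost difference divided by the effect difference; the strategy costs and QALY totals below are hypothetical illustrations, not data from any study cited in the text.

```python
# Minimal incremental cost-effectiveness ratio (ICER) sketch with
# hypothetical inputs.

def icer(cost_new, cost_usual, effect_new, effect_usual):
    """Incremental cost per unit of health effect (e.g., per QALY)."""
    delta_cost = cost_new - cost_usual
    delta_effect = effect_new - effect_usual
    if delta_effect == 0:
        raise ValueError("equal effects: compare costs directly instead")
    return delta_cost / delta_effect

# Hypothetical implementation strategy versus usual care
ratio = icer(cost_new=1_200_000, cost_usual=900_000,
             effect_new=520.0, effect_usual=500.0)
print(f"ICER: {ratio:,.0f} per QALY gained")  # prints "ICER: 15,000 per QALY gained"
```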
Examples of incremental cost-effectiveness ratios and suggested decisions about implementation strategies
| Study | Comparison of implementation strategies | Intervention considered for implementation | Incremental cost-effectiveness ratio | Suggestion for implementation decision |
|---|---|---|---|---|
| Mason et al. 2005 | Specialist-nurse led clinics versus usual care | Lipid control in patients with diabetes versus no lipid control | $19,950 per quality-adjusted life-year | Use of specialist-nurse led clinics for implementing lipid control is cost-effective |
| Scheeres et al. 2008 | Multifaceted strategy, including health professional and patient education and instruction, versus usual care | Cognitive behavior therapy for chronic fatigue syndrome versus regular counseling | €5,320 per recovered patient | Use of multifaceted strategy for implementing cognitive behavior therapy is cost-effective |
| Walker et al. 2009 | Financial incentives to primary care practices versus usual care | Use of ACE inhibitor and other quality indicators versus conventional care | £5,623 per quality-adjusted life-year | Use of financial incentives for implementing ACE inhibitor and other quality indicators is cost-effective |
| Hoomans et al. 2009 | Audit and feedback to primary care physicians versus usual care | Intensive control of blood glucose in patients with type 2 diabetes versus conventional control | €25,640 per quality-adjusted life-year | Use of audit and feedback for implementing intensified control of blood glucose is cost-effective |
| Choudhry et al. 2011 | No co-payments for patients versus co-payments | Preventive medication after myocardial infarction versus no preventive medication | $54 per nonfatal vascular event or vascularization averted (cost-saving) | Use of no co-payments for implementing preventive medication is cost-effective |
| Mortimer et al. 2013 | Multifaceted strategy targeting primary care physicians, including interactive workshops, versus guideline dissemination alone | Evidence-based care for acute low back pain versus conventional care | −AU$108 per x-ray referral avoided (cost-saving) | Use of multifaceted strategy for implementing evidence-based care is cost-effective |
| Gillespie et al. 2014 | Structured patient education with group follow-up versus individual follow-up | Self-management in type 1 diabetes versus conventional care | €19,300 per quality-adjusted life-year | Use of structured patient education with group follow-up for implementing self-management is not cost-effective |
Wide variation in outcomes is not uncommon given the many types of information inputs used in cost-effectiveness studies. One obvious and important determinant is the cost of implementation; economies of scale and scope may apply as implementation strategies target larger groups of providers and patients, and multiple behaviors and practices. But the cost-effectiveness of such strategies also depends critically on the effect they have on provider and patient behaviors—as measured by guideline adherence and patient compliance—and on the differential outcomes of care between services being implemented. The greater the difference in expected outcomes between usual care and the change being implemented, and the more widespread the implementation, the more likely a strategy is to be cost-effective.
Translating cost-effectiveness ratios into resource allocation decisions can be difficult—even when potential implementation projects are comparable in scale and scope, and information about all the analysis inputs is so precise that the outcomes can be regarded as certain. Strategies that improve health and lower costs should be accepted; those that worsen outcomes at higher cost should be rejected. But what if a strategy is expected to improve (worsen) health outcomes while also costing more (less)?
In such cases, health or safety gains from implementation strategies need to be valued in monetary units, reflecting the budget constraints and opportunity cost of alternative resource uses. Common thresholds for choices of pharmacological treatments and other health services range from €20,000 to €80,000 per life-year or QALY—and similar threshold values may well apply to accept-reject decisions in an implementation context. Yet the critical question is: Do current thresholds fully incorporate the cost of implementation of ‘cost-effective’ health services?
Given appropriate thresholds, cost-effectiveness data can be transformed into a more comprehensive measure of the efficiency of implementation strategies—the net benefit. As earlier applications of this concept suggested, the barriers to using so-called cost-benefit analysis (it is information-intensive and computationally complex) may often be outweighed by the analytic benefits it offers: i) direct comparison of implementation projects of varying scale and scope, and ii) detailed assessment of uncertainty in implementation decision outcomes.
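A common formulation is the net monetary benefit, NMB = threshold × ΔQALYs − Δcost, which turns an ICER comparison into a simple sign test. The sketch below uses hypothetical figures and the per-QALY threshold range quoted in the text; it is an illustration, not a recommendation.

```python
# Net monetary benefit (NMB) sketch: NMB = threshold * dQALYs - dCost.
# Figures and thresholds are illustrative assumptions only.

def net_monetary_benefit(delta_cost, delta_qalys, threshold):
    """Positive NMB means the health gain is worth its extra cost."""
    return threshold * delta_qalys - delta_cost

# A hypothetical strategy costing 300,000 more and yielding 10 extra QALYs
for threshold in (20_000, 80_000):  # the per-QALY range quoted in the text
    nmb = net_monetary_benefit(300_000, 10, threshold)
    verdict = "accept" if nmb > 0 else "reject"
    print(f"threshold {threshold:,}: NMB {nmb:,} -> {verdict}")
```

Note how the same strategy is rejected at the lower threshold and accepted at the higher one, which is precisely why the choice of threshold matters for implementation decisions.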
Toward efficient use of economic evaluation
Once the methods for evaluation (including cost-effectiveness thresholds) are agreed upon, economic evaluation becomes a useful tool in studying and planning strategies for implementing change in health care. The question then becomes: How can economic evaluation be performed most efficiently?
Limited collection of economic data
Economic evaluations are more efficient if data collection on outcomes is limited. One approach is to confine the study to measures of the care process—say, cost per change in professional guidance adherence or in patient compliance to medication—instead of measuring actual health outcomes. For example, in a Dutch quasi-experiment on the use of financial incentives as an implementation strategy, incentivizing primary care providers was found to reduce prescriptions of targeted drugs, saving costs in comparison to usual care. Because there was also good evidence that forgoing these prescriptions did not have (long-term) effects on health outcomes, the incentive plan was likely to be a cost-effective strategy for implementing more conservative prescribing practices in primary care.
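When only a process measure is available, efficiency can be summarized as net cost per unit change in that measure. A minimal sketch with hypothetical figures (not results of the cited Dutch study):

```python
# Process-measure efficiency sketch for an incentive scheme: net cost
# (payments minus drug savings) per unit change in the process measure.
# All numbers are hypothetical assumptions.

incentive_payments = 80_000.0     # paid to practices
drug_cost_avoided = 140_000.0     # saved on targeted prescriptions
prescriptions_avoided = 3_000     # observed change in the process measure

net_cost = incentive_payments - drug_cost_avoided   # negative = net saving
cost_per_unit = net_cost / prescriptions_avoided
print(f"Net cost per prescription avoided: {cost_per_unit:.0f}")  # prints -20
```

A negative result indicates the scheme saves money per prescription avoided; whether that constitutes good value still depends on the health consequences of the avoided prescriptions, as discussed above.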
Other ways to improve efficiency by limiting data collection include shortening the length of patient or provider follow-up or relying on studies of less rigorous design.
However, limiting data collection can have undesirable consequences, such as reducing confidence in the accuracy of the conclusions drawn from the analysis. Consider the Dutch study of financial incentives: can the established cost-effectiveness ratios be considered the same across all targeted care providers and prescription drugs? Or do these ratios actually vary by provider, drug, baseline prescription rate, or some other source of heterogeneity? Has enough information been collected to ascertain whether incentivizing providers’ prescribing practices will have spillover effects on non-targeted behaviors and practices? Limiting the collection of economic data can increase evaluative efficiency, but the potential biases in the assessment of strategies need to be carefully considered.
Use of decision analytic models
Practical considerations suggest yet another approach to economic evaluation and efficiency improvement: the use of decision analytic models. In modeling studies, economic data on implementation strategies may be synthesized from a range of sources, including theory on behavioral change, rather than drawn from a single trial or observational study. For example, in a Dutch study of the implementation of intensified glycemic control in type 2 diabetes, the comparison of audit and feedback to primary care providers versus usual care was based on an economic model. The model permitted estimating incremental cost per QALY ratios by linking behavior change to health-related outcomes through simulation.
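A toy model in this spirit links a strategy's effect on provider behavior (the probability of adopting intensified control) to downstream costs and QALYs; every parameter below is a hypothetical placeholder, not a value from the cited diabetes study.

```python
# Toy decision-analytic model: behavior change drives the patient mix
# between conventional and intensified care, which drives expected costs
# and QALYs. All parameters are hypothetical placeholders.

n_patients = 10_000
adoption = {"usual care": 0.40, "audit and feedback": 0.65}
strategy_cost = {"usual care": 0.0, "audit and feedback": 150_000.0}
cost_per_patient = {"conventional": 2_000.0, "intensified": 2_400.0}
qalys_per_patient = {"conventional": 8.00, "intensified": 8.05}

def expected_outcomes(arm):
    """Expected total cost and QALYs for one implementation arm."""
    p = adoption[arm]  # share of patients receiving intensified control
    cost = strategy_cost[arm] + n_patients * (
        p * cost_per_patient["intensified"] + (1 - p) * cost_per_patient["conventional"])
    qalys = n_patients * (
        p * qalys_per_patient["intensified"] + (1 - p) * qalys_per_patient["conventional"])
    return cost, qalys

c0, e0 = expected_outcomes("usual care")
c1, e1 = expected_outcomes("audit and feedback")
print(f"Incremental cost per QALY: {(c1 - c0) / (e1 - e0):,.0f}")
```

The design choice worth noting is the separation of the implementation layer (adoption probabilities and strategy cost) from the clinical layer (per-patient costs and QALYs), so evidence from different sources can feed each layer independently.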
Decision analytic models have many more uses along these lines—for example, they can provide estimates of the expected value of information, which form a basis for deciding whether additional data collection is necessary. Despite these apparent benefits, the use of models in economic evaluation and implementation choices remains uncommon.
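One such quantity, the per-decision expected value of perfect information (EVPI), can be approximated by Monte Carlo simulation: it is the difference between the expected net benefit when the choice can be made after uncertainty resolves and the expected net benefit of choosing now. The distributions below are hypothetical assumptions.

```python
# Monte Carlo sketch of per-decision EVPI for a candidate implementation
# strategy versus a baseline. All distributions are hypothetical.
import random

random.seed(1)
N = 50_000
THRESHOLD = 30_000  # assumed monetary value per QALY

def simulate_net_benefits():
    """One joint draw of each strategy's net monetary benefit."""
    d_qalys = random.gauss(10.0, 6.0)        # uncertain health gain
    d_cost = random.gauss(250_000, 50_000)   # uncertain incremental cost
    nb_candidate = THRESHOLD * d_qalys - d_cost
    nb_baseline = 0.0                        # reference strategy
    return nb_baseline, nb_candidate

draws = [simulate_net_benefits() for _ in range(N)]
ev_with_perfect_info = sum(max(d) for d in draws) / N       # choose per draw
ev_current_info = max(sum(col) / N for col in zip(*draws))  # choose once
print(f"EVPI ~ {ev_with_perfect_info - ev_current_info:,.0f} per decision")
```

If the EVPI (scaled to the affected population) exceeds the cost of further research, collecting additional data before deciding may be worthwhile; otherwise, deciding on current evidence is the efficient course.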
Early assessment of implementation decisions
The greatest gain in efficiency for implementation decisions may lie not in the method of economic evaluation but in its timing. Economic evaluations are typically performed ex post—after some inefficiency has been identified as a problem of implementation, and after a set of strategies has been designed and tested.
Instead of carrying out economic evaluations after the fact, a 3-step ex-ante process of evaluation and decision-making is potentially much more efficient:
Step 1: Assess the expected returns (as measured by net benefit on a monetary or health scale) from promoting the implementation of research evidence, or any further change in clinical management or health policy, through value of implementation analysis.
Step 2: Predict the cost of implementation, which may include research and development-related expenses and the opportunity cost to care providers and patients partaking in the implementation activities.
Step 3: Compare the expected returns from Step 1 with the predicted implementation cost from Step 2 to determine whether, and through which strategy, promoting the change is worthwhile.
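The ex-ante process can be sketched numerically as follows; all figures are hypothetical assumptions, not estimates from any cited study.

```python
# Ex-ante sketch: compare expected returns from implementing a change
# (Step 1) with predicted implementation cost (Step 2) before committing
# resources (Step 3). All figures are hypothetical assumptions.

def value_of_implementation(pop_size, net_benefit_per_patient, uptake_gain):
    """Step 1: expected return = population x per-patient NB x extra uptake."""
    return pop_size * net_benefit_per_patient * uptake_gain

expected_return = value_of_implementation(
    pop_size=50_000,                # patients affected per year
    net_benefit_per_patient=45.0,   # monetary net benefit per treated patient
    uptake_gain=0.20,               # extra uptake the strategy may achieve
)
implementation_cost = 300_000.0     # Step 2: predicted strategy cost

proceed = expected_return > implementation_cost  # Step 3: compare
print(f"expected return {expected_return:,.0f} vs cost {implementation_cost:,.0f}:"
      f" {'proceed' if proceed else 'reconsider'}")
```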
Early economic assessment can help screen out cost-ineffective implementation projects early on, allowing resources to be directed toward problems in care where intervening is likely to be more beneficial. Knowledge of the practicality of health service implementation is essential. If valid and meaningful estimates of the resources required for implementation can be made, the cost of implementation could be considered much sooner in the health-care decision-making process—namely, when decisions about the components of care are being made.
Performing economic evaluations is not rocket science. Already commonly used in the assessment of health services, economic evaluation is a potentially useful tool for making decisions on strategies for implementing research knowledge into clinical practice, management, or health policy. Confronted with a problem of implementation, decision-makers who wish to derive the greatest benefits from available resources must spend on some form of economic evaluation. Only by comparing each potential choice across both costs and benefits can the opportunity cost of implementation strategies be assessed. As more economic evaluations are performed and decision-makers leverage analytic techniques such as modeling and value of implementation analysis, the contribution of economic evaluation to decision-making processes will increase. The question is how economic evaluation can most efficiently be incorporated into implementation decisions, not whether it should.
[a] A comprehensive search in PubMed for English-language publications from 1 January 2008 to 1 July 2014, using the terms (in all fields) ‘economic eval* OR cost-effect*’ and ‘implementation strateg* OR quality improv*’, returned 284 records, most of which were studies that did not report any cost of implementation or that did not perform an economic evaluation of the reported implementation or quality improvement strategy.
- McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, Kerr EA: The quality of health care delivered to adults in the United States. N Engl J Med. 2003, 348(26): 2635-2645. doi:10.1056/NEJMsa022615.
- Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, Whitty P, Eccles MP, Matowe L, Shirran L, Wensing M, Dijkstra R, Donaldson C: Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004, 8(6): iii-iv, 1-72.
- Grol R, Wensing M, Eccles M, Davis D: Improving Patient Care: The Implementation of Change in Clinical Practice. 2nd edition. 2013, Wiley-Blackwell, Chichester, West Sussex.
- Grol R, Bosch MC, Hulscher ME, Eccles MP, Wensing M: Planning and studying improvement in patient care: the use of theoretical perspectives. Milbank Q. 2007, 85(1): 93-138. doi:10.1111/j.1468-0009.2007.00478.x.
- Davies P, Walker AE, Grimshaw JM: A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implement Sci. 2010, 5: 14. doi:10.1186/1748-5908-5-14.
- Vale L, Thomas R, MacLennan G, Grimshaw J: Systematic review of economic evaluations and cost analyses of guideline implementation strategies. Eur J Health Econ. 2007, 8(2): 111-121. doi:10.1007/s10198-007-0043-8.
- Hoomans T, Evers S, Ament A, Hübben M, Van der Weijden T, Grimshaw J, Severens J: The methodological quality of economic evaluations of guideline implementation into clinical practice: a systematic review of empiric studies. Value Health. 2007, 10(4): 305-316. doi:10.1111/j.1524-4733.2007.00175.x.
- Mason J, Wood J, Freemantle N: Designing evaluations of interventions to change professional practice. J Health Serv Res Policy. 1999, 4(2): 106-111. Erratum in: J Health Serv Res Policy. 1999, 4(3): 192.
- Sculpher M: Evaluating the cost-effectiveness of interventions designed to increase the utilization of evidence-based guidelines. Fam Pract. 2000, 17(Suppl 1): S26-S32. doi:10.1093/fampra/17.suppl_1.S26.
- McIntosh E: Economic evaluations of guideline implementation strategies. In Changing Professional Practice: Theory and Practice of Clinical Guidelines Implementation. Edited by Thorsen T, Makela M. 1999, Danish Institute for Health Services Research and Development, Copenhagen.
- Kroese ME, Severens JL, Schulpen GJ, Bessems MC, Nijhuis FJ, Landewé RB: Specialized rheumatology nurse substitutes for rheumatologists in the diagnostic process of fibromyalgia: a cost-consequence analysis and a randomized controlled trial. J Rheumatol. 2011, 38(7): 1413-1422. doi:10.3899/jrheum.100753.
- Mason JM, Freemantle N, Gibson JM, New JP, SPLINT trial: Specialist nurse-led clinics to improve control of hypertension and hyperlipidemia in diabetes: economic analysis of the SPLINT trial. Diabetes Care. 2005, 28(1): 40-46. doi:10.2337/diacare.28.1.40.
- Scheeres K, Wensing M, Bleijenberg G, Severens J: Implementing cognitive behaviour therapy for chronic fatigue syndrome in mental health care: a costs and outcomes analysis. BMC Health Serv Res. 2008, 8: 175. doi:10.1186/1472-6963-8-175.
- Walker S, Mason AR, Claxton K, Cookson R, Fenwick E, Fleetcroft R, Sculpher M: Value for money and the quality and outcomes framework in primary care in the UK NHS. Br J Gen Pract. 2010, 60(574): e213-e220. doi:10.3399/bjgp10X501859.
- Hoomans T, Abrams KR, Ament AJHA, Evers SMAA, Severens JL: Modelling the value for money of changing clinical practice change: a stochastic application in diabetes care. Med Care. 2009, 47(10): 1053-1061. doi:10.1097/MLR.0b013e31819e1f2b.
- Choudhry NK, Avorn J, Glynn RJ, Antman EM, Schneeweiss S, Toscano M, Reisman L, Fernandes J, Spettell C, Lee JL, Levin R, Brennan T, Shrank WH, Post-Myocardial Infarction Free Rx Event and Economic Evaluation (MI FREEE) trial: Full coverage for preventive medications after myocardial infarction. N Engl J Med. 2011, 365(22): 2088-2097. doi:10.1056/NEJMsa1107913.
- Mortimer D, French SD, McKenzie JE, O’Connor DA, Green SE: Economic evaluation of active implementation versus guideline dissemination for evidence-based care of acute low-back pain in a general practice setting. PLoS One. 2013, 8(10): e75647. doi:10.1371/journal.pone.0075647.
- Gillespie P, O’Shea E, O’Hara MC, Dinneen SF: Cost effectiveness of group follow-up after structured education for type 1 diabetes: a cluster randomised controlled trial. Trials. 2014, 15(1): 227. doi:10.1186/1745-6215-15-227.
- Martens JD, Werkhoven MJ, Severens JL, Winkens RA: Effects of a behaviour independent financial incentive on prescribing behaviour of general practitioners. J Eval Clin Pract. 2007, 13(3): 369-373. doi:10.1111/j.1365-2753.2006.00707.x.
- Fenwick E, Claxton K, Sculpher M: The value of implementation and the value of information: combined and uneven development. Med Decis Making. 2008, 28: 21-32. doi:10.1177/0272989X07308751.
- Hoomans T, Fenwick EA, Palmer S, Claxton K: Value of information and value of implementation: application of an analytic framework to inform resource allocation decisions in metastatic hormone-refractory prostate cancer. Value Health. 2009, 12(2): 315-324. doi:10.1111/j.1524-4733.2008.00431.x.
- Hoomans T, Severens JL, Evers SMAA, Ament AJHA: Value for money in changing clinical practice: should decisions about guidelines and implementation strategies be made sequentially or simultaneously? Med Decis Making. 2009, 29: 207-216. doi:10.1177/0272989X08327397.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.