Examining and addressing evidence-practice gaps in cancer care: a systematic review
© Bryant et al.; licensee BioMed Central Ltd. 2014
Received: 20 September 2013
Accepted: 21 March 2014
Published: 25 March 2014
There is increasing recognition of gaps between best scientific evidence and clinical practice. This systematic review aimed to assess the volume and scope of peer-reviewed research output examining evidence-practice gaps in cancer care in the years 2000, 2005, and 2010.
Eligible papers were published in English and reported on evidence-practice gaps in cancer care. The electronic database Medline was searched for three time periods using MeSH headings and keywords. Abstracts were assessed against eligibility criteria by one reviewer and checked by a second. Papers meeting eligibility criteria were coded as data-based or non-data-based, and by cancer type of focus. All data-based papers were then further classified as descriptive studies documenting the extent of, or barriers to addressing, the evidence-practice gap; or intervention studies examining the effectiveness of strategies to reduce the evidence-practice gap.
A total of 176 eligible papers were identified. The number of publications significantly increased over time, from 25 in 2000 to 100 in 2010 (p < 0.001). Of the 176 identified papers, 160 were data-based. The majority of these (n = 150) reported descriptive studies. Only 10 studies examined the effectiveness of interventions designed to reduce discrepancies between evidence and clinical practice. Of these, only one was a randomized controlled trial. Of all data-based studies, almost one-third (n = 48) examined breast cancer care.
While the number of publications investigating evidence-practice gaps in cancer care increased over a ten-year period, most studies continued to describe gaps between best evidence and clinical practice, rather than rigorously testing interventions to reduce the gap.
Keywords: Evidence-based practice; Guideline adherence; Translational medical research; Medical oncology; Neoplasms
The importance of reducing evidence-practice gaps in healthcare
Evidence-based practice is the ‘conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients’. There is increasing recognition of gaps between best scientific evidence and clinical practice in many fields of healthcare [2–5]. It has been estimated that up to 40% of patients fail to receive treatments shown to be effective, while 20% to 25% receive treatments that are not needed or are potentially harmful. Reducing the gap between best evidence and clinical practice is associated with reductions in patient morbidity and mortality [6–8], and with reduced healthcare costs. In the past ten years, increasing attention has been directed to addressing barriers to the translation of research into clinical practice to improve patient outcomes [10–12].
Concern about evidence-based practice in cancer care
The past decade has been marked by increases in both cancer incidence and survival [13, 14]. However, concern about disparities between best-evidence practice and cancer care has persisted for some time. In the 1999 report ‘Ensuring Quality Cancer Care’, the US National Cancer Policy Board stated that ‘reasons for failure to deliver high-quality care have not been studied adequately’ (p. 4). The report made a number of recommendations, including that clinical practice guidelines be developed and implemented to ensure optimal care is provided to patients, and that the quality of care provided be measured and monitored. Awareness of the need to address evidence-practice gaps in cancer care was further heightened by the landmark 2001 Institute of Medicine report ‘Crossing the Quality Chasm’, which identified that high rates of misuse, underuse, and overuse of health services have created a ‘chasm’ between best evidence and medical practice.
Volume of research output as a measure of research effort
Given increased acknowledgement of the need to address evidence-practice gaps in cancer care, it might be expected that research efforts to ensure effective translation of knowledge into clinical practice would also have increased over this time period. Although not without limitations, the volume of peer-reviewed research output, examined using bibliometric methods, is a proxy indicator of scientific productivity [18, 19]. Volume of research output provides information about the research capacity of a field, including the areas where research funding has been allocated, and where clinician and researcher effort have been directed.
Research design as a measure of progression of research effort
While volume of research output provides an indication of the amount of work being conducted in a particular field, it fails to provide information about the type or quality of work. The effective translation of evidence into practice is complex and depends on a number of factors, one of which is the type of research evidence generated. To advance evidence-based practice, a logical progression of research is required. First, there is a need to examine whether there is a discrepancy between current practice and best-evidence practice, and if so, the magnitude of the discrepancy. If a gap is found that is likely to have clinically or economically important implications, research is then needed to develop evidence about how best to effectively reduce this gap. Methodologically rigorous intervention studies are critical to produce evidence about the most effective strategies for delivering best practice healthcare. Interventions might aim to educate clinicians, change the practice environment, or change contingencies.
In 2010, Evensen et al. examined the number of published studies related to the evidence-practice gap across nine specific guidelines related to family medicine, three of which related to cancer. While a large number of studies were identified, only 15% were intervention studies, and few met the quality criteria for methodological rigor established by the Cochrane Effective Practice and Organisation of Care (EPOC) Group. Providing optimal care to the increasing population of cancer patients and survivors has been described as a public health challenge. Translational step three (T3) of the National Institutes of Health Roadmap argues that research is needed to examine the translation of generated evidence into the clinical care provided to patients. However, to date, there has been no examination of the research attention given to implementation of evidence into clinical care at the T3 level in the wider cancer literature.
This review therefore aimed to determine, for the years 2000, 2005, and 2010:
- The number of publications examining evidence-practice gaps;
- The number of data-based versus non-data-based publications examining evidence-practice gaps, including the number describing evidence-practice gaps compared to the number evaluating interventions to reduce them;
- The number of data-based publications examining evidence-practice gaps by cancer type and research design.
Inclusion and exclusion criteria
Eligible papers were those published in English in 2000, 2005, and 2010 that reported on evidence-practice gaps in cancer care. The year 2000 was selected as the starting point for this review given that the influential report by the National Cancer Policy Board was published in 1999. Studies examining the effectiveness of treatments on cancer recurrence, disease-free survival, or overall survival were excluded, as were studies examining evidence-practice gaps for cancer screening, editorials, letters to the editor, dissertations, and protocol papers.
The electronic database Medline was searched using the OVID platform. The search strategy included three categories of search terms: guideline adherence/evidence based practice, cancer, and treatment types (full search strategy available in Additional file 1). Medline was selected as the database of choice given its focus on biomedicine and health publications in scholarly journals. Searches were restricted to English language publications and human studies. A Google Scholar search using combinations of the above keywords was also conducted to ensure relevant papers were not missed. A copy of the review protocol is available in Additional file 2.
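The three-category structure of the search can be illustrated as a boolean query in which terms are combined with OR within each category and AND across categories. The sketch below is illustrative only: the term lists are hypothetical placeholders, not the published strategy, which is available in Additional file 1.

```python
# Illustrative sketch of a three-category boolean search, as described above.
# The terms below are examples only; the actual strategy is in Additional file 1.

categories = {
    "gap": ["guideline adherence", "evidence-based practice", "evidence-practice gap"],
    "cancer": ["neoplasms", "cancer", "oncology"],
    "treatment": ["chemotherapy", "radiotherapy", "surgery"],
}

def build_query(categories):
    """OR the terms within each category, then AND the categories together."""
    blocks = [
        "(" + " OR ".join(f'"{term}"' for term in terms) + ")"
        for terms in categories.values()
    ]
    return " AND ".join(blocks)

query = build_query(categories)
print(query)
```

This AND-of-ORs structure ensures a paper must match at least one term from every category (evidence-practice gap, cancer, and treatment type) to be retrieved.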
Data-based or non-data-based: Data-based publications were those reporting new data or new analysis of existing data. Non-data-based publications included review, commentary, discussion, or summary papers.
Cancer type: All data-based papers were classified according to the cancer type of the study sample according to the International Classification of Diseases for Oncology and body system-specific cancer classification.
Research design: All data-based papers were further classified into one of the following categories: Descriptive studies using cross-sectional study designs to document or describe the evidence-practice gap or barriers to addressing the evidence-practice gap; Intervention studies using experimental designs to test strategies to reduce the evidence-practice gap. Studies using an experimental design were also assessed as to whether the design was one of the four types allowed by the EPOC criteria: randomized controlled trials, controlled clinical trials, controlled before-and-after studies, or interrupted time series studies.
A random sample of 10% of papers identified as eligible was checked for relevance and double-coded by a second reviewer (MC).
Chi-square tests for equal proportions were conducted to determine whether the total volume of research output changed over time, and whether the number of descriptive studies changed over time.
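The trend test can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the counts for 2000 and 2010 are taken from the Results, and the 2005 count is inferred from the reported total of 176 eligible papers (176 - 25 - 100 = 51).

```python
# Chi-square goodness-of-fit test for equal proportions across the three
# survey years (illustrative; 2005 count inferred from the reported total).

def chi_square_equal_proportions(observed):
    """Chi-square statistic against equal expected counts in each cell."""
    expected = sum(observed) / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

counts = [25, 51, 100]  # papers published in 2000, 2005, 2010
stat = chi_square_equal_proportions(counts)

# The critical value for p = 0.001 with df = 2 is 13.82; the observed
# statistic far exceeds it, consistent with the reported p < 0.001.
print(f"chi2 = {stat:.1f} (df = 2)")
```

Under the null hypothesis, publications would be spread evenly across the three years (about 58.7 per year); the large statistic reflects the steep rise from 25 to 100 publications.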
Volume of research output over time
Number of data-based versus non-data-based publications
The numbers of data-based and non-data-based publications addressing evidence-practice gaps in cancer care are reported in Figure 2. A total of 160 data-based and 16 non-data-based publications were identified.
Number of data-based publications by cancer type
Number of data-based publications by research design
Reducing the gap between best evidence and clinical practice in cancer care is expected to reduce patient morbidity and mortality [6–8] and healthcare costs. This review identified a small volume of research in cancer care addressing this important issue. Of the 176 publications identified, the majority (91%) provided new data. Only 9% of publications were reviews, commentaries, or summaries of the existing evidence base. This is an encouraging finding and suggests that a substantial proportion of research effort is directed toward empirical work. However, further examination revealed that 94% of data-based publications were descriptive studies, with only ten intervention studies identified. The number of intervention studies did not increase over time. Studies most commonly focused on breast cancer care.
While descriptive research provides important information about current practice, it does not maximize research benefits by comparing the effectiveness of approaches to improve care. The small number of intervention studies, and in particular the small number that used rigorous evaluation designs, may be explained by the difficulty of carrying out well-controlled intervention trials in this field. Changes to clinical practice may require changes to processes of care, clinician knowledge, and organizational culture, so the unit of analysis for such studies is often the hospital, ward, or clinic. These types of system-focused interventions are often incompatible with randomized controlled designs where the unit of randomization is the individual. An alternative to the traditional randomized controlled trial suitable for evaluating interventions involving organizational change is the cluster randomized controlled trial. However, this design poses complex logistical challenges, such as the need to obtain agreement for implementation from all clinicians within the cluster, difficulties obtaining a large enough sample of hospitals or clinics, and the substantial costs associated with evaluating the intervention across many sites. Intervention studies also require multi-disciplinary collaboration and a specific repertoire of research skills, while descriptive research requires relatively less time and fewer resources. In a professional environment where both volume and impact of research output are valued, researchers may be more motivated to undertake descriptive work rather than complex and time-intensive intervention studies, or to focus their effort on randomized controlled trials of individual-level interventions. The design challenges may be overcome by the use of alternative research designs such as interrupted time series or multiple baseline designs, and by commitment from funding bodies to finance robust intervention trials.
Targeted funding rounds that prioritize research on strategies to close the evidence-practice gap may also be important to increasing productivity in this area.
The small number of intervention trials may also be the result of a perception in clinical oncology that optimal cancer care is already being delivered to patients and, therefore, research to ensure translation of clinical research into practice is not needed. It is simplistic to presume, however, that new evidence is routinely integrated into clinical practice guidelines, policy, and then clinical care [31, 32]. Changing established patterns of care is difficult, and often necessitates the involvement of cancer patients and their advocacy groups, individual practitioners, senior administrators of healthcare organizations, and policy makers. Passive dissemination of evidence via clinical practice guidelines and publications produces only small changes in clinical practice, and there is insufficient evidence to show that such strategies have any effect on patient outcomes. Further, uptake of new evidence is inconsistent even when active implementation strategies are undertaken [4, 34]. For example, one descriptive study examined the impact of the American Society of Clinical Oncology (ASCO) guidelines regarding use of hematopoietic colony-stimulating factors (CSF) on cancer care. Six months after active dissemination and implementation of the guidelines, only 61% of CSF prescriptions complied with ASCO guidelines. In addition, the finding that 94% of data-based papers identified in this review described evidence-practice gaps in cancer care highlights the importance of increasing implementation research effort in the field.
The finding that a large proportion of research effort has been directed toward breast cancer care is in accordance with previous findings that breast cancer research dominates the quality of life research field. Prostate cancer is similar to breast cancer in terms of incidence, and other high-incidence cancers such as lung cancer and bowel cancer have higher mortality rates than breast cancer. Therefore, the focus on breast cancer is not likely to reflect disease burden, but rather the high profile of this disease and the availability of specific funding for breast cancer research. This suggests a need for more effort toward examining and addressing evidence-practice gaps in care for other cancers with poor outcomes.
These results should be considered in light of several limitations. First, grey literature such as reports, policy documents, and dissertations were not included, nor were protocol papers. While this information may be relevant, grey literature is not peer-reviewed and therefore may not meet the high standards of quality associated with peer-reviewed publication. Inclusion of grey literature may also have biased the review, given that papers related to work known by the authors and their network would have been more likely to be identified than other works. Second, only one author screened the retrieved citations and abstracts. While a random 10% of included studies were double-coded by a second reviewer with perfect agreement, single-reviewer screening remains a limitation of the search execution. Third, there are limitations to using volume of research output as a measure of research effort [36, 38]. Due to publication bias, studies with unfavorable results may not be published, leading to under-representation of the true amount of work carried out in the field [39, 40]. Finally, only studies that self-identified as addressing the evidence-practice gap in cancer care were included. While this may have resulted in some relevant studies not being included, it was not feasible to retrospectively define whether studies were addressing evidence-practice gaps. The use of a large number of search terms covering evidence-based practice, guideline development, and the evidence-practice gap is likely to have limited the number of relevant studies missed. However, the large number of ways in which papers relating to evidence-practice gaps or implementation science are indexed is a limitation, compromising the ability of the field to advance in an optimal fashion.
Reducing discrepancies between best evidence and clinical practice is an area of ongoing need across all fields of healthcare. A prerequisite for the effective transfer of evidence into practice is methodologically rigorous research that identifies where evidence-practice gaps exist, and develops and tests interventions to close the gap. The small number of intervention studies addressing evidence-practice gaps in cancer care highlights a clear need to shift research efforts from descriptive studies to robust experimental studies if patients are to receive optimal care. The relatively high number of studies on the translation of evidence into practice in breast cancer care suggests that greater research effort should be directed towards other cancers with high incidence and disease burden such as lung cancer.
This research was supported by a Strategic Research Partnership Grant from the Cancer Council NSW to the Newcastle Cancer Control Collaborative. Dr Jamie Bryant is supported by an Australian Research Council Post Doctoral Industry Fellowship.
- Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS: Evidence based medicine: what it is and what it isn’t. Br Med J. 1996, 312: 71-72. 10.1136/bmj.312.7023.71.
- Cochrane LJ, Olson CA, Murray S, Dupuis M, Tooman T, Hayes S: Gaps between knowing and doing: understanding and assessing the barriers to optimal health care. J Contin Educ Health Prof. 2007, 27 (2): 94-102. 10.1002/chp.106.
- Buchan H: Gaps between best evidence and practice: causes for concern. Med J Aust. 2004, 180 (15): S47-S48.
- Grol R: Successes and failures in the implementation of evidence based guidelines for clinical practice. Med Care. 2001, 39 (8): II46-II54.
- McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, Kerr EA: The quality of health care delivered to adults in the United States. N Engl J Med. 2003, 348 (26): 2635-2645. 10.1056/NEJMsa022615.
- Saslow D, Solomon D, Lawson HW, Killackey M, Kulasingam SL, Cain J, Garcia FA, Moriarty AT, Waxman AG, Wilbur DC: American Cancer Society, American Society for Colposcopy and Cervical Pathology, and American Society for Clinical Pathology screening guidelines for the prevention and early detection of cervical cancer. CA Cancer J Clin. 2012, 62 (3): 147-172. 10.3322/caac.21139.
- Komajda M, Lapuerta P, Hermans N, Gonzalez-Juanatey JR, van Veldhuisen DJ, Erdmann E, Tavazzi L, Poole-Wilson P, Le Pen C: Adherence to guidelines is a predictor of outcome in chronic heart failure: the MAHLER survey. Eur Heart J. 2005, 26 (16): 1653-1659. 10.1093/eurheartj/ehi251.
- McCullough ML, Patel AV, Kushi LH, Patel R, Willett WC, Doyle C, Thun MJ, Gapstur SM: Following cancer prevention guidelines reduces risk of cancer, cardiovascular disease, and all-cause mortality. Cancer Epidemiol Biomarkers Prev. 2011, 20 (6): 1089-1097. 10.1158/1055-9965.EPI-10-1173.
- Shapiro D, Lasker R, Bindman A, Lee P: Containing costs while improving quality of care: the role of profiling and practice guidelines. Annu Rev Public Health. 1993, 14 (1): 219-241. 10.1146/annurev.pu.14.050193.001251.
- Grol R, Wensing M: What drives change? Barriers to and incentives for achieving evidence-based practice. Med J Aust. 2004, 180 (6): S57.
- Harrison MB, Légaré F, Graham ID, Fervers B: Adapting clinical practice guidelines to local context and assessing barriers to their use. Can Med Assoc J. 2010, 182 (2): E78-E84. 10.1503/cmaj.081232.
- Rainbird K, Sanson-Fisher R, Buchan H: Identifying barriers to evidence uptake. 2006, Melbourne: National Institute of Clinical Studies.
- Australian Institute of Health and Welfare & Australasian Association of Cancer Registries: Cancer in Australia: an overview. 2012, Canberra: AIHW.
- Coleman M, Forman D, Bryant H, Butler J, Rachet B, Maringe C, Nur U, Tracey E, Coory M, Hatcher J: Cancer survival in Australia, Canada, Denmark, Norway, Sweden, and the UK, 1995–2007 (the International Cancer Benchmarking Partnership): an analysis of population-based cancer registry data. Lancet. 2011, 377 (9760): 127-138. 10.1016/S0140-6736(10)62231-3.
- National Cancer Policy Board, Institute of Medicine, National Research Council: Ensuring Quality Cancer Care. 1999.
- Institute of Medicine, Committee on Quality of Health Care in America: Crossing the Quality Chasm: A New Health System for the 21st Century. 2001, Washington, D.C.: National Academies Press.
- King J: A review of bibliometric and other science indicators and their role in research evaluation. J Inform Sci. 1987, 13: 261-276. 10.1177/016555158701300501.
- National Health Service: Putting NHS research on the map: an analysis of scientific publications in England 1990–97. 2001, London: The Wellcome Trust.
- Davies S: Research and Development. The ‘Better Metrics Project’. Version 7. Edited by: Whitty P. 2006, United Kingdom: Healthcare Commission.
- Le May A, Mulhall A, Alexander C: Bridging the research–practice gap: exploring the research cultures of practitioners and managers. J Adv Nurs. 1998, 28 (2): 428-437. 10.1046/j.1365-2648.1998.00634.x.
- Marinopoulos SS, Dorman T, Neda R, Wilson LM, Ashar BH, Magaziner JL, Miller RG, Thomas PA, Prokopowicz GP, Qayyum R, Bass EB: Effectiveness of continuing medical education. Evidence report. 2007, Rockville, MD: Agency for Healthcare Research and Quality.
- Wensing M, Klazinga N, Wollersheim H, Grol R: Organisational and financial interventions. In Improving Patient Care: The Implementation of Change in Clinical Practice. Edited by: Grol R, Wensing M, Eccles M. 2005, Sydney: Elsevier.
- Scott A, Sivey P, Ait Ouakrim D, Willenberg L, Naccarella L, Furler J, Young D: The effect of financial incentives on the quality of health care provided by primary care physicians. Cochrane Database Syst Rev. 2011, 9: CD008451.
- Evensen AE, Sanson-Fisher R, D’Este C, Fitzgerald M: Trends in publications regarding evidence-practice gaps: a literature review. Implement Sci. 2010, 5 (11): 1-10.
- Cochrane Effective Practice and Organisation of Care Review Group (EPOC): Data collection checklist. 2002. Accessed at http://epoc.cochrane.org/sites/epoc.cochrane.org/files/uploads/datacollectionchecklist.pdf.
- Boyle P, Levin B: World Cancer Report 2008. 2008, Lyon: IARC.
- Westfall JM, Mold J, Fagan L: Practice-based research: ‘Blue highways’ on the NIH Roadmap. J Am Med Assoc. 2007, 297 (4): 403-406. 10.1001/jama.297.4.403.
- World Health Organization: International Classification of Diseases for Oncology. 3rd edition. 2000, Geneva: WHO.
- Heller R, Page J: A population perspective to evidence based medicine: ‘evidence for population health’. J Epidemiol Community Health. 2002, 56 (1): 45-47. 10.1136/jech.56.1.45.
- Sanson-Fisher RW, Bonevski B, Green LW, D’Este C: Limitations of the randomized controlled trial in evaluating population-based health interventions. Am J Prev Med. 2007, 33 (2): 155-161. 10.1016/j.amepre.2007.04.007.
- Lean MEJ, Mann JI, Hoek JA, Elliot RM, Schofield G: Translational research. Br Med J. 2008, 337: a863. 10.1136/bmj.a863.
- National Health and Medical Research Council: How to put the evidence into practice: implementation and dissemination strategies. 2000, Canberra: NHMRC.
- Giguère A, Légaré F, Grimshaw J, Turcotte S, Fiander M, Grudniewicz A, Makosso-Kallyth S, Wolf FM, Farmer AP, Gagnon MP: Printed educational materials: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012, 10: CD004398.
- Buchan H: Gaps between best evidence and practice: causes for concern. Med J Aust. 2004, 180: S48-S49.
- Debrix I, Tilleul P, Milleron B, Grené N, Bouleuc C, Roux D, Lioté H, Madelaine I, Bellanger A, Conort O, Fontan JE, Le Mercier F, Bardin C, Becker A: The relationship between introduction of American Society of Clinical Oncology guidelines and the use of colony-stimulating factors in clinical practice in a Paris university hospital. Clin Ther. 2001, 23 (7): 1116-1127. 10.1016/S0149-2918(01)80095-3.
- Sanson-Fisher R, Bailey LJ, Aranda S, D’Este C, Stojanovski E, Sharkey K, Schofield P: Quality of life research: is there a difference in output between the major cancer types?. Eur J Cancer Care (Engl). 2009, 19 (6): 714-720.
- Siegel R, Naishadham D, Jemal A: Cancer statistics, 2012. CA Cancer J Clin. 2012, 62: 10-29. 10.3322/caac.20138.
- Abbott M, Doucouliagos H: Research output of Australian universities. Edu Econ. 2004, 12 (3): 251-265. 10.1080/0964529042000258608.
- Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K: Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database Syst Rev. 2009, 1 (1): MR000006.
- Easterbrook PJ, Gopalan R, Berlin J, Matthews DR: Publication bias in clinical research. Lancet. 1991, 337 (8746): 867-872. 10.1016/0140-6736(91)90201-Y.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.