
Elusive search for effective provider interventions: a systematic review of provider interventions to increase adherence to evidence-based treatment for depression



Background

Depression is a common mental health disorder for which clinical practice guidelines have been developed. Prior systematic reviews have identified complex organizational interventions, such as collaborative care, as effective for guideline implementation; yet, many healthcare delivery organizations are interested in less resource-intensive methods to increase provider adherence to guidelines and guideline-concordant practices. The objective of this systematic review was to assess the effectiveness of healthcare provider interventions that aim to increase adherence to evidence-based treatment of depression in routine clinical practice.


Methods

We searched five databases through August 2017 using a comprehensive search strategy to identify English-language randomized controlled trials (RCTs) in the quality improvement, implementation science, and behavior change literature that evaluated outpatient provider interventions, in the absence of practice redesign efforts, to increase adherence to treatment guidelines or guideline-concordant practices for depression. We used meta-analysis to summarize odds ratios, standardized mean differences, and incidence rate ratios, and assessed quality of evidence (QoE) using the GRADE approach.


Results

Twenty-two RCTs promoting adherence to clinical practice guidelines or guideline-concordant practices met inclusion criteria. Studies evaluated diverse provider interventions, including distributing guidelines to providers, education/training such as academic detailing, and combinations of education with other components such as targeting implementation barriers. Results were heterogeneous, and analyses comparing provider interventions with usual clinical practice did not indicate a statistically significant difference in guideline adherence across studies. There was some evidence that provider interventions improved individual outcomes such as medication prescribing, and indirect comparisons indicated that more complex provider interventions may be associated with more favorable outcomes. We did not identify types of provider interventions that were consistently associated with improvements across indicators of adherence and across studies. Effects on patients’ health in these RCTs were inconsistent across studies and outcomes.


Conclusions

Existing RCTs describe a range of provider interventions to increase adherence to depression guidelines. Low QoE and lack of replication of specific intervention strategies across studies limited the conclusions that can be drawn from the existing research. Continued efforts are needed to identify successful strategies to maximize the impact of provider interventions on increasing adherence to evidence-based treatment for depression.

Trial registration

PROSPERO record CRD42017060460, registered on 3/29/17



Background

Depression is one of the most common mental health disorders worldwide, affecting about 7% of the adult populations in the USA and the European Union [1, 2]. Depression is associated with poor quality of life and significantly decreased psychosocial functioning [3]; high societal costs related to patient care, unstable or unproductive employment, marital and relationship disruption [4,5,6]; and mortality [7, 8]. Depression is most often identified by practitioners in primary care settings [9, 10]. Collaborative care interventions in primary care can significantly and cost-effectively improve depression care outcomes [11,12,13,14] and can improve adherence to clinical guidelines for effective psychological and pharmacological treatments for depression [15,16,17]. However, collaborative care interventions require major commitment to organizational change, including commitment by mental health specialists to support the revamped system. Levels of organizational [18] and provider [19] readiness significantly influence any potential positive effect of collaborative care on outcomes. Given that not all organizations will have the resources or readiness to implement large system redesign efforts, it is important to understand how and whether less intensive intervention efforts that may be easier to adopt can influence provider behavior.

Although most complex interventions that aim to improve depression care include some elements related to guideline-based education [20,21,22,23,24], further research is needed to evaluate the comparative effects of different educational interventions, which do not require organization change, on specific provider behaviors. Knowledge transfer is a burgeoning field that seeks to reduce the gap between research on evidence-based interventions and use of these interventions by generating, sharing, and applying research knowledge in practice [25]. However, knowledge transfer work is only beginning to systematically address methods for achieving clinical guideline-based provider behavior change. Based on conclusions that passive dissemination in educational and quality assurance interventions (e.g., mailing guidelines to providers with no reminders or follow-up) is generally ineffective [20], researchers have emphasized system-level strategies that require restructuring care processes, extensive time for planning, financial reorganization, and establishing new clinics and staff [26, 27]. Yet, education and dissemination interventions may have greater feasibility than large-scale organizational change, may be necessary for promoting organizational and provider readiness, and are often critical components of the broader system-level interventions. In addition, in settings outside of primary care, there is still much reliance on direct knowledge transfer paradigms. Adoption of evidence-based care and fidelity to manualized treatment are among the biggest challenges in specialty care settings [28, 29].

Two prior reviews of organizational and education interventions implemented exclusively in primary care settings concluded that provider training alone does not improve depression care [30, 31]. The more comprehensive review [30] was conducted in 2003, and updated reviews are necessary to increase understanding of which effective behavior change strategies [20] can promote adherence to guidelines and guideline-concordant practices in specific settings. The most recent review [31] focused only on physician (e.g., general practitioners) education and did not address other interventions or providers, including those working in specialty care settings. Moreover, neither review focused on guideline adherence by providers; instead, they included only patient outcomes. Improved guideline adherence is a critical step in the path toward depression outcome improvement. Thus, the results of interventions targeting provider behavior change are important to policy makers, administrators, and providers in assessing how best to increase the use of evidence-based care for depression. Lastly, both reviews assessed only interventions within primary care settings. Because depression is most often identified by practitioners in primary care settings [9, 10], understanding which provider interventions work in these settings is essential. However, the majority of RCTs evaluating medication and behavioral treatments for depression are conducted in specialty care settings [32,33,34,35]. Therefore, specialty care settings (e.g., managed behavioral health care organizations, psychiatry private practice) are also important settings in which to assess the effectiveness of interventions [36].

In this systematic review, we synthesize estimates of the effects of provider interventions, with a specific focus on behavioral health provider change, to promote adherence to evidence-based treatments for depression. We purposefully focus on RCTs with provider outcomes as the primary outcome, and we include both specialty and primary care settings, given the large number of behavioral change strategies that have been proposed to encourage providers to adopt evidence-based treatments for depression in practice [30, 37,38,39]. We also examine whether provider intervention effects vary across provider target of the intervention (i.e., a sole provider or a team of providers).



Methods

The review is based on a registered systematic review protocol (PROSPERO record CRD42017060460).

Search strategy

In August 2017, we searched the databases PubMed, PsycINFO, the Cumulative Index of Nursing and Allied Health Literature, the Cochrane Central Register of Controlled Trials, and the Cochrane Database of Systematic Reviews to identify English-language reports of RCTs that evaluated the effects of provider interventions. Searches included depression terms (e.g., depress$, mood dysregulation), general terms for knowledge transfer and organizational quality improvement (e.g., evidence-based guideline, research to practice) [40], terms related to provider interventions for clinical practice guidelines and implementation strategies (e.g., academic detailing, reminder systems), approaches for continuous quality improvement (e.g., CQI; quality manage$, model for improvement), terms for continuous professional education (e.g., continuing education, learning collaborative), and behavior change terms (e.g., reframing, incentive) [24, 41,42,43] (see Additional file 1: Appendix A for full search strategy). We also searched bibliographies of existing systematic reviews and included studies.


Eligibility criteria

Eligible participants were healthcare providers responsible for patient care in the outpatient setting (e.g., primary care physicians, psychiatrists, mental health professionals, nurse practitioners, other general practitioners such as physician assistants). Eligible interventions aimed to increase adherence to depression guidelines and guideline-concordant practice (e.g., continuing education, quality improvement projects, and financial, organizational, or regulatory interventions that used knowledge translation strategies). To determine the effect of interventions on provider behavior change, we excluded studies that primarily assessed the effects of large system redesign efforts, such as collaborative care, where new clinics are established, care is reorganized (e.g., implementing dedicated care managers), and training of existing providers is only a minor component of the larger intervention. We included only interventions aimed at improving depression treatment, excluding studies focused solely on improving screening/assessment or referral behavior. Eligible comparators were no intervention, usual care practice (UCP), wait list control, or other provider interventions (e.g., organizational system redesign or an out-of-scope intervention). Outcomes documented the adherence of providers to guidelines or to guideline-concordant practices. We evaluated observable, objective changes in provider behavior because they are better markers of intervention success than provider knowledge, attitudes, satisfaction, or perceived changes, which occur earlier in the change process [44]; while these are often precursors to change, they may not progress to the observable changes in behavior necessary for impacting patient outcomes. Timing involved any intervention duration and any follow-up period, and the setting was any outpatient healthcare delivery facility or other physician practice setting.
The review was restricted to RCT study design, with studies randomizing provider participants or practice sites to interventions. We aimed to identify the presence and absence of evidence from this robust research design, which allows for the development of the confident evidence statements desired for policy changes.

Data extraction and critical appraisal

We used a standardized approach for systematic reviews with detailed instructions for reviewers to reduce ambiguities. Following a pilot session to ensure similar interpretation of the inclusion and exclusion criteria, two reviewers independently screened all titles and abstracts of retrieved citations. Citations judged as potentially eligible by one or both reviewers were obtained as full text. Both reviewers then screened the full-text publications against the specified inclusion and exclusion criteria, abstracted data from the studies that met the inclusion criteria, and assessed their risk of bias. All disagreements were resolved through author discussions. Critical appraisal included the Cochrane Risk of Bias tool [45], to address internal validity, and the Quality Improvement Minimum Quality Criteria Set (QI-MQCS) [46], which provides study-design-independent criteria for interventions aiming to improve healthcare.

Analytic plan

We used the Hartung-Knapp-Sidik-Jonkman method for random effects meta-analysis to summarize odds ratios (OR), standardized mean differences (SMD), and incidence rate ratios (IRR), together with 95% confidence intervals (CI). Heterogeneity was quantified using the I2 statistic [47]: values of 30–60% may represent moderate heterogeneity, 50–90% substantial heterogeneity, and 75–100% considerable heterogeneity [48]. We assessed the quality of evidence (QoE) using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.
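As a minimal sketch of this analytic approach, the snippet below pools hypothetical study log odds ratios under a random-effects model and reports an HKSJ-adjusted confidence interval and I2. The data are invented for illustration (not the review's extracted results), and the DerSimonian-Laird estimator of the between-study variance is an assumption, since the text does not specify which tau-squared estimator was used:

```python
import numpy as np
from scipy import stats

def hksj_pool(log_or, var):
    """Random-effects pooling of log odds ratios: DerSimonian-Laird tau^2
    with Hartung-Knapp-Sidik-Jonkman (HKSJ) confidence intervals."""
    y = np.asarray(log_or, dtype=float)
    v = np.asarray(var, dtype=float)
    k = len(y)
    w = 1.0 / v                                   # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)              # Cochran's Q
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    ws = 1.0 / (v + tau2)                         # random-effects weights
    mu = np.sum(ws * y) / np.sum(ws)
    # HKSJ adjustment: variance from weighted residuals, t(k-1) quantile
    se2 = np.sum(ws * (y - mu) ** 2) / ((k - 1) * np.sum(ws))
    half = stats.t.ppf(0.975, k - 1) * np.sqrt(se2)
    return {"or": float(np.exp(mu)),
            "ci": (float(np.exp(mu - half)), float(np.exp(mu + half))),
            "i2": float(i2), "tau2": float(tau2)}

# Hypothetical log ORs and variances for five studies (illustration only)
print(hksj_pool([0.40, 0.10, 0.85, -0.05, 0.30],
                [0.04, 0.09, 0.06, 0.12, 0.05]))
```

The HKSJ step widens the interval relative to the usual Wald interval when the studies disagree, which is why it is often preferred for meta-analyses with few, heterogeneous trials.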

The identified studies assessed a variety of study-specific outcomes. To facilitate comparisons across studies with the available data, we selected a dichotomous, continuous, and incidence rate variable as the main indications of adherence to depression guidelines or guideline-concordant practices (see Table 1). The lead author reviewed the intervention content and selected these main indicators of adherence from a list of all reported outcomes in the study. A content expert checked the selection. To avoid bias, the specific outcomes were selected before analyses. We also analyzed individual provider behavior change outcomes that were reported in more than one study: medication prescribing, contact with patients, specific intervention adherence, and referral to mental healthcare specialists offered to patients. We analyzed the effects on patient outcomes when available. We differentiated comparators as UCP (e.g., no intervention or interventions not aimed at depression treatment), practice redesign efforts (e.g., introduction of a nurse disease manager or a continuous quality improvement team), or other active provider interventions (e.g., training providers in a specific type of behavioral therapy, such as motivational interviewing [49]).

Table 1 Evidence table of included studies


Results

The literature search results are documented in a PRISMA [50] literature flow diagram (see Fig. 1). We reviewed 1737 titles and abstracts, and, of these, we reviewed full texts for 365 citations, identifying 22 eligible studies reported in 34 publications. Studies took place in nine countries and included 2149 providers and 239,477 patients. Twenty studies took place in primary care settings, ranging from primary care offices and academically affiliated primary care practices to family medicine research network practices and continuing medical education groups. Two studies took place in specialty care settings: a private psychiatry practice and a managed behavioral health care organization. Two studies included teams of providers and 20 included a single provider only: 16 studies with primary care physicians, two studies with mental health care providers, and two studies with other general practitioners or clinicians. Duration of the interventions, duration of the implementation periods, and the time points of outcome assessment following the end of the implementation phase were all variable. Studies evaluated many types of provider interventions, ranging from simply disseminating depression guidelines to education strategies such as academic detailing and multi-component strategies involving education plus additional components (e.g., reminders or strategies tailored to individual providers) (see Table 1).

Fig. 1

PRISMA flow diagram

The methodological rigor of the included studies was variable; however, all studies were rated at high risk of performance bias because intervention providers were not blinded: it was generally impossible for a provider to be blinded to delivery of the interventions of interest. With respect to the potential for contamination (i.e., both groups sharing material meant for the intervention group), only three of the 22 studies were judged to be at high risk of contamination bias. Half of the included studies described the context and organizational readiness for quality improvement, while the other half did not meet these criteria. Twelve studies met the criterion for the reach/penetration domain and described the number of providers or departments that participated in the study compared to the number of available and potentially eligible participants or departments. Only one study addressed the sustainability of the intervention. Details for all critical appraisal domains are shown in Additional file 1: Appendix B.

Main indications of adherence to depression guidelines

Table 1 outlines the findings from individual studies and Table 2 summarizes the evidence for the pooled analyses that utilized the available dichotomous, continuous, and IRR adherence outcomes. Additional file 1: Appendix C includes a summary of findings table with quality of evidence details. Thirteen studies with 3158 participants reported on the odds of achieving provider adherence by comparing a provider intervention to UCP (Fig. 2). Pooled analyses did not indicate a statistically significant difference in the main guideline adherence outcomes across studies (OR 1.60; CI 0.76, 3.37; 13 RCTs; I2 82%; moderate QoE). Pooled analyses of nine studies, with 1236 participants, using a continuous outcome also did not show a statistically significant difference compared to UCP (SMD 0.17; CI − 0.16, 0.50; I2 86%; low QoE) (Fig. 3). Four studies reporting IRRs also showed no difference between intervention and control groups (IRR 1.16; CI 0.63, 2.15; I2 91%; low QoE). However, all analyses showed substantial heterogeneity. Lastly, three studies with 867 participants reported on the odds of achieving provider adherence by comparing a provider intervention to practice redesign efforts; the difference was not statistically significant (OR 0.81; CI 0.30, 2.19; I2 20%).

Table 2 Summary of findings
Fig. 2

Odds of achieving provider adherence (main indication) compared to usual care practice by intervention type

Fig. 3

Mean difference in achieved provider adherence (main indication) compared to usual practice by intervention type

Medication prescribing

Eleven studies with 4116 participants reported on the odds of improved medication prescribing. The pooled analysis indicated a statistically significant effect favoring the intervention compared to UCP (OR 1.42; CI 1.04, 1.92; I2 53%): intervention providers were more likely to prescribe according to clinical practice guidelines. Three studies with 414 participants reporting a continuous outcome did not indicate a statistically significant effect (SMD 0.15; CI − 0.48, 0.79; I2 37%). Three studies reporting on IRRs did not show a difference between intervention and control groups (IRR 1.02; CI 0.44, 2.36; I2 90%). Two studies with 1738 participants with a practice redesign comparator showed conflicting results, and pooled results were not statistically significant (OR 0.96; CI 0.18, 5.08; I2 0%). Additional file 1: Appendix C contains further details of these analyses and of the analyses for all other individual outcomes summarized below.

Contact with patients

Three studies with 710 participants reported on the odds of increased patient contacts, comparing a provider intervention with UCP. The pooled analysis showed no statistically significant difference between intervention and control groups (OR 6.40; CI 0.13, 322.40; I2 75%). Three studies with 225 participants comparing a provider intervention with UCP similarly did not report a statistically significant difference using continuous outcomes (SMD 0.17; CI − 0.84, 1.19; I2 56%). One study [51] with 444 participants reported IRR data on the number of provider consultations at 6-month follow-up. This study reported a significant effect favoring the intervention, which consisted of training on guidelines and consultations from experts to address personal barriers to implementing guidelines (IRR 1.78; CI 1.14, 2.78). The only study reporting outcomes on contact with patients compared to a system redesign effort was a small study of 24 participants [52]; it found no difference for this outcome (SMD 0.07; CI − 0.73, 0.87). Providers in this study's intervention group received a detailed report on their patients that contained treatment recommendations based on a computerized algorithm, while the system redesign group received this feedback enhanced with care management of patients by care managers who helped to implement the physicians’ recommendations.

Specific intervention adherence

Indicators of this outcome were variable and included whether or not patients were treated using a specific treatment from the guidelines, the number of providers adhering to the specifications of the guidelines, and a performance score received by the providers on whether or not they offered appropriate treatment specified by the guidelines. Six studies with 1375 participants reported on the odds of adherence compared to UCP. The pooled analysis for dichotomous outcomes did not indicate a statistically significant intervention effect (OR 2.26; CI 0.50, 10.28; I2 90%). Three studies with 597 participants reporting on a continuous outcome also did not indicate a statistically significant difference (SMD 0.23; CI − 1.42, 1.89; I2 96%). No study reported on IRR outcomes. One small study at high risk of bias [53] reported on a measure of intervention adherence comparing a provider intervention with practice redesign. There was no statistically significant difference between the practice redesign comparator and the intervention, which consisted of education and distribution of practice guidelines to primary care providers (OR 0.30; CI 0.08, 1.14).

Referral offered to patients

Four studies with 896 participants and a UCP comparator reported on the odds of improved referral offered to patients by providers. The pooled analysis did not indicate a systematic intervention effect (OR 1.11; CI 0.33, 3.70; I2 41%). No identified studies reported continuous or IRR referral outcomes, or referral outcomes compared to practice redesign.

Patient outcomes

We identified 14 studies that reported patient outcomes. Specifically, these studies reported changes in depression rating scale scores, depression treatment response (i.e., proportion of patients with improvement, including remission), depression recovery (i.e., proportion of patients in remission/not meeting depression criteria at follow-up), and treatment adherence (e.g., medication adherence). Nine studies with 2196 participants reported on the mean difference in patient depression rating scales, such as the Hamilton Rating Scale for Depression [54] or the Center for Epidemiologic Studies Depression Scale [55]. A pooled analysis comparing provider interventions to UCP did not indicate a difference in patient outcomes associated with the intervention (SMD − 0.06; CI − 0.14, 0.01; I2 0%). Six studies with 1312 participants reported on depression treatment response (e.g., a score less than 11 on the Beck Depression Inventory [56]). The pooled analysis indicated a statistically significant effect favoring the provider interventions when compared to UCP (OR 1.12; CI 1.04, 1.21; I2 0%). Six studies with 1274 participants reported on patient recovery from depression; the pooled analysis did not indicate a statistically significant effect of the provider intervention when compared to UCP (OR 1.02; CI 0.91, 1.15; I2 0%). Two studies involving 281 participants and a UCP comparator reported on patient treatment adherence as indicated by the number or proportion of patients who took prescribed antidepressants as indicated. The pooled analysis did not indicate a statistically significant difference between the provider intervention and UCP (OR 1.52; CI 0.70, 3.31; I2 0%). Lastly, three studies with 861 participants reported on the mean difference in patient depression rating scales as compared to practice redesign efforts. 
The pooled analysis did not indicate a statistically significant difference between the provider intervention and practice redesign (SMD 0.09; CI − 0.48, 0.67; I2 52%). Two studies with 478 participants and a practice redesign comparator reported on depression treatment response; the analysis did not indicate a statistically significant difference (OR 0.53; CI 0.01, 40.38; I2 26%).

Findings by type of intervention

Ten studies provided direct comparative effectiveness results utilizing the main indications of adherence to depression guidelines outcomes [51,52,53, 57,58,59,60,61,62,63] (see Additional file 1: Appendix C), of which two reported significant differences. One study reporting a significant difference [51], involving 444 participants, compared provider training in guidelines plus tailored implementation to provider training alone. This study found no statistically significant difference in odds for achieving provider adherence at 6-month follow-up (OR 1.09; CI 0.62, 1.91) but did find a significant effect favoring the group that received provider training in guidelines plus tailored implementation for the incidence rate of achieving provider adherence (IRR 0.85; CI 0.43, 1.69). The other study reporting a significant difference [63], involving 389 participants, found a statistically significant mean difference between intervention and comparator groups (SMD 0.89; CI 0.59, 1.18). The intervention consisted of a 2-day continuing medical education course focused on treatment and differential diagnosis of depression disorders. Providers were assigned to groups in which the education component was tailored to the providers’ self-reported stage of change. The comparator group also received education on treatment and diagnosis of depression disorders (including the same 2-day continuing medical education course), but the education was not tailored to the providers’ stage of change.

We used meta-regression for indirect comparisons to examine whether, within the range of eligible provider interventions, those that combined education with other components, such as tailored implementation strategies, reported better results than education-only interventions. The analyses did not indicate that interventions classified as education only systematically reported different effects than interventions with additional components (dichotomous outcomes p = 0.574, continuous outcomes p = 0.238). We also compared unidimensional and multidimensional interventions. We found no statistically significant effect for dichotomous outcomes (p = 0.707), but the equivalent analysis for studies reporting continuous outcomes approached statistical significance (p = 0.055). To explore this finding further, we rated the intensity of each intervention on a 3-point scale. A meta-regression for the dichotomous adherence outcome did not show an effect (p = 0.973); however, the analysis of the continuous adherence outcome suggested that the intensity of the intervention was associated with the effect size (i.e., the greater the intensity, the greater the adherence; p = 0.033). These analyses should be interpreted with caution because of the small number of studies contributing to individual intensity rating categories.
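The kind of meta-regression used for these indirect comparisons can be sketched as a weighted least squares regression of study log odds ratios on a binary moderator (1 = education plus other components, 0 = education only). The data below are hypothetical, and the simple fixed-effect weighting is an assumption for illustration; a full analysis would incorporate the between-study variance in the weights and typically use dedicated software such as R's metafor:

```python
import numpy as np
from scipy import stats

# Hypothetical study-level data (NOT the review's extracted results)
log_or = np.array([0.10, 0.25, -0.05, 0.60, 0.45, 0.30])  # study effects
var = np.array([0.05, 0.08, 0.06, 0.07, 0.04, 0.09])      # their variances
mod = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])            # moderator code

w = 1.0 / var                                  # inverse-variance weights
X = np.column_stack([np.ones_like(mod), mod])  # intercept + moderator
# Weighted least squares: beta = (X'WX)^-1 X'Wy, cov(beta) = (X'WX)^-1
xtwx_inv = np.linalg.inv(X.T @ (X * w[:, None]))
beta = xtwx_inv @ (X.T @ (w * log_or))
se = np.sqrt(np.diag(xtwx_inv))
z = beta[1] / se[1]                            # Wald test of the moderator
p = 2.0 * stats.norm.sf(abs(z))
print(f"moderator coefficient = {beta[1]:.3f} (SE {se[1]:.3f}), p = {p:.3f}")
```

A non-significant moderator coefficient corresponds to the kind of null indirect-comparison result reported above: the moderator does not systematically explain differences in study effects.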

For subgroup analyses, we stratified the included studies by those distributing guidelines to providers, those with education interventions, and those with more complex interventions that included, for example, an education component in addition to exploring and helping providers overcome barriers to guideline implementation. Figures 2 and 3 show the individual study results for the dichotomous and continuous main indication of adherence outcomes within these broad subgroups. All three subgroups reported no statistically significant differences between the intervention and the comparator groups. There was no statistically significant intervention effect in studies that simply distributed treatment guidelines (OR 1.28; CI 0.75, 2.19; 3 RCTs; I2 0%; SMD − 0.44; CI − 0.68, − 0.20; 1 RCT). Three studies evaluated an education intervention and showed conflicting results (OR 3.04; CI 0.01, 756.17; I2 95%; SMD 0.15; CI − 0.48, 0.79; I2 37%). The pooled analysis of studies of education plus other components also did not indicate a statistically significant intervention effect (OR 1.17; CI 0.62, 2.18; I2 44%; 7 RCTs; SMD 0.37; CI − 0.16, 0.90; I2 80%; 5 RCTs).

Findings by provider target

We did not identify any studies directly comparing effects for different types of healthcare providers. We indirectly compared the 20 interventions that targeted single providers and the two that targeted teams. For the dichotomous adherence outcome, a meta-regression indicated that the intervention effect systematically varied by the type of provider targeted (p = 0.034); yet, the analysis should be interpreted with caution because only one of the team studies contributed data to this analysis [64]. The effect was not replicated in an analysis based on IRR data that compared the other team intervention [65] with the three sole provider interventions that had count outcomes (p = 0.352).

For subgroup analyses, we stratified the results by interventions targeting single providers versus teams of providers (see Additional file 1: Appendix C). Pooled analyses of studies of interventions that targeted single providers did not report a statistically significant intervention effect on the main adherence outcome (OR 1.42; CI 0.74, 2.73; 12 RCTs; I2 80%). One study [64] compared a team intervention to a control, and the effect on provider follow-up with patients was significant in favor of the intervention group at 12-month follow-up (OR 101.34; CI 6.17, 1664.08). Given the wide confidence interval, we examined another main adherence outcome (i.e., whether the patient received medication plus counseling) and similarly found an effect favoring the team intervention, although it was not statistically significant (OR 1.50; CI 0.83, 2.73).

Findings by setting

We did not identify any studies directly comparing the effects of the setting. To assess whether effects varied by setting, we compared two studies conducted in specialty care settings with 20 conducted in primary care settings (see Additional file 1: Appendix C). A meta-regression on the main continuous adherence outcome did not suggest any systematic effects of the setting (p = 0.385), but the result should be interpreted with caution as only two studies provided data on specialty care settings.


Discussion

This systematic review compiles research evidence on the effects of healthcare provider interventions on adherence to guidelines or guideline-concordant behavior for depression treatment. We excluded system redesign efforts as interventions for our purposes (e.g., collaborative care where infrastructures are re-organized) and targeted studies that included provider behavior change outcomes. Our findings provide little support for the effectiveness of currently tested provider education or dissemination interventions on provider adherence to depression treatment guidelines; however, there was some evidence that provider interventions improved the outcomes of medication prescribing and patient depression treatment response. Results also suggested that some interventions that were tailored to providers’ needs and that went beyond simply distributing guidelines to providers may improve provider behavior and promote guideline adherence.

Our findings are important for several reasons. First, healthcare systems need to know whether the approaches identified in our review, all of which are less costly to implement than major systems change interventions such as collaborative care, can change provider behavior. Second, few interventions for depression, including collaborative care, are undertaken without an education-focused component; this study can help focus efforts to better evaluate and improve that component. Third, provider education can be a critical step in promoting readiness to improve depression care and, if carried out effectively, may often be the best first implementation step in depression care improvement initiatives. This study provides a foundation for further development of provider education and dissemination methods for improving depression care.

While the body of evidence on provider interventions is substantial in terms of research volume, it is noteworthy that we evaluated many unique interventions, ranging from simple distribution of guidelines, to education-only strategies, to education combined with multiple follow-up components and trainings. No two studies reported on the same intervention and comparator, which limits comparative analyses. We assessed whether educational interventions alone can change provider behavior in clinical practice and, in line with existing reviews [30, 31], our analyses did not find significant effects. Indirect comparisons across the identified studies to detect effect modifiers indicated that more complex interventions (i.e., provider education plus additional components and implementation strategies, such as tailoring training to address personal implementation barriers) may be associated with more favorable outcomes. However, we did not identify subgroups of interventions that were consistently associated with significant changes in provider behavior. The individually successful approaches observed for main adherence outcomes have not been investigated in more than one study, and findings have not been replicated across independent research groups. More studies are needed that attempt to isolate specific provider interventions, whether employed within system redesigns or evaluated on their own.

The methodological rigor of the included studies varied, but none of our analyses were based exclusively on poor-quality studies, and we found no indication of publication bias. Because the outcomes of interest here were often not the primary outcomes of the original studies (which frequently focused on patient impact), we are more confident that the estimates in our review are unlikely to be affected by publication bias. Still, given the diversity of the interventions evaluated in individual studies, the heterogeneity of results, findings based on single studies, imprecise effect estimates, and follow-up periods of 1 year or less, the quality of evidence remains very limited.

Although we did not identify statistically significant differences for the main adherence outcomes across interventions compared with usual clinical practice (UCP), the analyses showed heterogeneity and wide confidence intervals consistent with a large range of potential intervention effects. A pooled analysis of 11 RCTs indicated increased odds of improved medication prescribing, arguably the aspect of depression care most under healthcare providers’ control. There was no indication of publication bias; however, we detected considerable heterogeneity, and not all studies favored the intervention. We therefore judged the quality of evidence for this finding to be low. Furthermore, no statistically significant difference emerged in an analysis of improved medication prescribing using a continuous operationalization of the outcome. One study [51] showed an increased rate of contact with patients following training and expert consultations on guidelines that incorporated personal barriers to implementation, compared with UCP. Because this result is based on a single study, we have limited confidence in the finding. No other specific provider behavior outcome was significant for provider interventions compared with any comparator.

Due to the small number of studies reporting team interventions or interventions in specialty care, we did not find statistically robust evidence that intervention effects varied by targeted provider group or setting. Our findings suggest that interventions targeting multidisciplinary team members may be more effective than interventions targeting only individual healthcare providers, but additional studies are needed to confirm this. Given the lack of studies in specialty care, more research in such settings is also needed to understand how evidence-based interventions can best be adopted by providers outside of psychiatric research settings.

The review findings for effects on patient health were mixed. Although depression treatment response improved across the identified interventions, we did not find significant effects for other patient outcomes such as depression rating scale scores, depression recovery, or treatment adherence. The findings for patient outcomes should be interpreted in context, because we restricted the review to studies that reported provider outcomes. Prior reviews evaluating how provider interventions affect patient outcomes concluded that multi-faceted and system redesign approaches were more effective in improving patient outcomes than simpler or single-component interventions, such as distribution of guidelines or education alone [30, 31]. Our review, however, set out to identify interventions that can be implemented in healthcare organizations without practice redesign efforts. More studies reporting both provider behavior change outcomes and patient outcomes are needed to understand whether provider behavior is affected by the intervention and whether the change in provider behavior ultimately affects patient health.

This review has several strengths, including an a priori research design, duplicate reviewer study selection and data abstraction, a thorough literature search not restricted to a small set of known interventions, detailed critical appraisal, and comprehensive quality of evidence assessments used to formulate review conclusions. Yet, limitations remain. First, our review documents results of RCTs, a robust study design that allows confident evidence statements. Evidence from RCTs and pre-post studies cannot easily be combined methodologically, and we chose to restrict inclusion to RCTs; all forest plots are therefore based on studies with concurrent control groups randomly assigned to an intervention condition. We acknowledge that some authors [66] and the Cochrane Effective Practice and Organisation of Care Group have recommended other study designs, such as controlled before-after studies, in addition to RCTs when evaluating organizational interventions. However, we were specifically interested in the presence or absence of this strong and universally accepted study design to document the state of evidence for provider interventions. An exploratory search for non-RCT literature indicated that results appear to be similarly mixed in non-randomized controlled studies, time series, pre-post studies, and cohort studies [67,68,69,70,71,72,73]. Second, by including only studies that measured provider behavior and outcomes, we were able to judge whether interventions had the intended effects on their target (i.e., the providers); however, this restriction excluded a large number of studies that do not report on provider behavior. Third, to be included, studies had to report on depression treatment. Effects on recognition, screening, or diagnosis of patients, or on referral to specialty mental health care, should be assessed in future systematic reviews.

Lastly, the individual interventions and the promoted depression practice guidelines varied across studies. Our review included studies using prominent guidelines, such as the Agency for Healthcare Research and Quality (AHRQ) treatment guidelines for depression in primary care [74], as well as studies using treatment guidelines for which we could not verify an evidence base. The specific guidelines utilized within the interventions ranged from the American Psychiatric Association Practice Guidelines for the Treatment of Psychiatric Disorders, to the Dutch College of General Practitioners’ practice guideline for depression, to the Agency for Health Care Policy and Research practice guidelines for depression [75,76,77]. Many studies did not specify how lengthy the guidelines were or how much of a time commitment they represented for providers, which could account for some of the provider behavior change findings reported in the included studies. Some standardization across studies regarding which guidelines are used, and how, appears needed. Such standardization could help account for confounding factors, and the field may also benefit from a single source of information on best treatment practices for depression.

Conclusions and future directions

Fourteen years ago, Gilbody and colleagues [30] reviewed organizational and educational interventions targeted at primary care providers treating depressed patients. The authors concluded that effective strategies to improve depression management in this setting were multi-faceted (e.g., system redesign approaches including screening for depression, providing education to patients, and realignment of professional roles in an organization). Sikorski and colleagues [31] similarly concluded that provider training alone does not seem to improve depression care. Our review shows that, despite new research, provider interventions focused primarily on guideline distribution or education alone are unlikely to be effective in the absence of additional components. Our review did not identify subgroups or categories of interventions that were consistently associated with increased adherence to depression guidelines or guideline-concordant practices. These findings underscore the need for further research to better understand how to effectively change provider behavior in different care settings without organizational redesigns. Innovations are needed to support healthcare organizations that want to improve guideline adherence but do not intend to invest in restructuring how care is delivered. Research on provider interventions should be supported by a framework that allows a more structured assessment to identify successful intervention approaches and the effects of individual intervention components.


  1. 1.

    Center for Behavioral Health Statistics and Quality. Key substance use and mental health indicators in the United States: results from the 2015 National Survey on Drug Use and Health (HHS Publication No. SMA 16-4984, NSDUH Series H-51). Rockville: Substance Abuse and Mental Health Services Administration; 2016.

    Google Scholar 

  2. 2.

    Wittchen HU, Jacobi F, Rehm J, Gustavsson A, Svensson M, Jonsson B, Olesen J, Allgulander C, Alonso J, Faravelli C, et al. The size and burden of mental disorders and other disorders of the brain in Europe 2010. Eur Neuropsychopharmacol. 2011;21(9):655–79.

    Article  PubMed  CAS  Google Scholar 

  3. 3.

    Papakostas GI, Petersen T, Mahal Y, Mischoulon D, Nierenberg AA, Fava M. Quality of life assessments in major depressive disorder: a review of the literature. Gen Hosp Psychiatry. 2004;26(1):13–7.

    Article  PubMed  Google Scholar 

  4. 4.

    Kessler RC. The costs of depression. Psychiatr Clin N Am. 2012;35(1):1–14.

    Article  Google Scholar 

  5. 5.

    Wade AG, Haring J. A review of the costs associated with depression and treatment noncompliance: the potential benefits of online support. Int Clin Psychopharmacol. 2010;25(5):288–96.

    Article  PubMed  Google Scholar 

  6. 6.

    Mrazek DA, Hornberger JC, Altar CA, Degtiar I. A review of the clinical, economic, and societal burden of treatment-resistant depression: 1996–2013. Psychiatr Serv. 2014;

  7. 7.

    Chesney E, Goodwin GM, Fazel S. Risks of all-cause and suicide mortality in mental disorders: a meta-review. World Psychiatry. 2014;13(2):153–60.

    Article  PubMed  PubMed Central  Google Scholar 

  8. 8.

    Wulsin LR, Vaillant GE, Wells VE. A systematic review of the mortality of depression. Psychosom Med. 1999;61(1):6–17.

    Article  PubMed  CAS  Google Scholar 

  9. 9.

    Wittchen H-U, Holsboer F, Jacobi F. Met and unmet needs in the management of depressive disorder in the community and primary care: the size and breadth of the problem. J Clin Psychiatry. 2001;62:23–8.

    PubMed  Google Scholar 

  10. 10.

    Bijl RV, Ravelli A. Psychiatric morbidity, service use, and need for care in the general population: results of the Netherlands Mental Health Survey and Incidence Study. Am J Public Health. 2000;90(4):602.

    Article  PubMed  PubMed Central  CAS  Google Scholar 

  11. 11.

    Bower P, Gilbody S, Richards D, Fletcher J, Sutton A. Collaborative care for depression in primary care. Br J Psychiatry. 2006;189(6):484.

    Article  PubMed  Google Scholar 

  12. 12.

    Gilbody S, Bower P, Fletcher J, Richards D, Sutton AJ. Collaborative care for depression: a cumulative meta-analysis and review of longer-term outcomes. Arch Intern Med. 2006;166(21):2314–21.

    Article  PubMed  Google Scholar 

  13. 13.

    Huang H, Tabb KM, Cerimele JM, Ahmed N, Bhat A, Kester R. Collaborative care for women with depression: a systematic review. Psychosomatics. 2017;58(1):11–8.

    Article  PubMed  Google Scholar 

  14. 14.

    Thota AB, Sipe TA, Byard GJ, Zometa CS, Hahn RA, McKnight-Eily LR, Chapman DP, Abraido-Lanza AF, Pearson JL, Anderson CW, et al. Collaborative care to improve the management of depressive disorders: a community guide systematic review and meta-analysis. Am J Prev Med. 2012;42(5):525–38.

    Article  PubMed  Google Scholar 

  15. 15.

    Rubenstein LV, Jackson-Triche M, Unutzer J, Miranda J, Minnium K, Pearson ML, Wells KB. Evidence-based care for depression in managed primary care practices. Health Aff (Millwood). 1999;18(5):89–105.

    Article  CAS  Google Scholar 

  16. 16.

    Wells KB, Tang L, Miranda J, Benjamin B, Duan N, Sherbourne CD. The effects of quality improvement for depression in primary care at nine years: results from a randomized, controlled group-level trial. Health Serv Res. 2008;43(6):1952–74.

    Article  PubMed  PubMed Central  Google Scholar 

  17. 17.

    Rost K, Nutting P, Smith J, Werner J, Duan N. Improving depression outcomes in community primary care practice: a randomized trial of the QuEST intervention. J Gen Intern Med. 2001;16(3):143–9.

    Article  PubMed  PubMed Central  CAS  Google Scholar 

  18. 18.

    Rubenstein LV, Danz MS, Crain AL, Glasgow RE, Whitebird RR, Solberg LI. Assessing organizational readiness for depression care quality improvement: relative commitment and implementation capability. Implement Sci. 2014;9:173.

    Article  PubMed  PubMed Central  Google Scholar 

  19. 19.

    Chaney EF, Rubenstein LV, Liu C-F, Yano EM, Bolkan C, Lee M, Simon B, Lanto A, Felker B, Uman J. Implementing collaborative care for depression treatment in primary care: a cluster randomized evaluation of a quality improvement practice redesign. Implement Sci. 2011;6(1):121.

    Article  PubMed  PubMed Central  Google Scholar 

  20. 20.

    Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L, Grilli R, Harvey E, Oxman A, O'Brien MA. Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001;39(8):II2–II45.

    PubMed  CAS  Google Scholar 

  21. 21.

    Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients’ care. Lancet. 2003;362(9391):1225–30.

    Article  PubMed  Google Scholar 

  22. 22.

    Eccles M, Grimshaw J, Walker A, Johnston M, Pitts N. Changing the behavior of healthcare professionals: the use of theory in promoting the uptake of research findings. J Clin Epidemiol. 2005;58(2):107–12.

    Article  PubMed  Google Scholar 

  23. 23.

    Colquhoun H, Leeman J, Michie S, Lokker C, Bragge P, Hempel S, McKibbon KA, G-JY P, Stevens KR, Wilson MG, et al. Towards a common terminology: a simplified framework of interventions to promote and integrate evidence into health practices, systems, and policies. Implement Sci. 2014;9(1):781.

    Article  Google Scholar 

  24. 24.

    Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6:42.

  25. 25.

    Pentland D, Forsyth K, Maciver D, Walsh M, Murray R, Irvine L, Sikora S. Key characteristics of knowledge transfer and exchange in healthcare: integrative literature review. J Adv Nurs. 2011;67(7):1408–25.

    Article  PubMed  Google Scholar 

  26. 26.

    Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, Glass JE, York JL. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69(2):123–57.

    Article  PubMed  Google Scholar 

  27. 27.

    Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

  28. 28.

    Shiner B, D’Avolio LW, Nguyen TM, Zayed MH, Young-Xu Y, Desai RA, Schnurr PP, Fiore LD, Watts BV. Measuring use of evidence based psychotherapy for posttraumatic stress disorder. Adm Policy Ment Health Ment Health Serv Res. 2013;40(4):311–8.

    Article  Google Scholar 

  29. 29.

    Finley EP, Garcia HA, Ketchum NS, McGeary DD, McGeary CA, Stirman SW, Peterson AL. Utilization of evidence-based psychotherapies in Veterans Affairs posttraumatic stress disorder outpatient clinics. Psychol Serv. 2015;12(1):73.

    Article  PubMed  Google Scholar 

  30. 30.

    Gilbody S, Whitty P, Grimshaw J, Thomas R. Educational and organizational interventions to improve the management of depression in primary care: a systematic review. JAMA. 2003;289(23):3145–51.

    Article  PubMed  Google Scholar 

  31. 31.

    Sikorski C, Luppa M, Konig HH, van den Bussche H, Riedel-Heller SG. Does GP training in depression care affect patient outcome?—a systematic review and meta-analysis. BMC Health Serv Res. 2012;12:10.

    Article  PubMed  PubMed Central  Google Scholar 

  32. 32.

    Laoutidis ZG, Mathiak K. Antidepressants in the treatment of depression/depressive symptoms in cancer patients: a systematic review and meta-analysis. BMC Psychiatry. 2013;13(1):1.

    Article  CAS  Google Scholar 

  33. 33.

    Barth J, Munder T, Gerger H, Nuesch E, Trelle S, Znoj H, Juni P, Cuijpers P. Comparative efficacy of seven psychotherapeutic interventions for patients with depression: a network meta-analysis. PLoS Med. 2013;10(5):e1001454.

    Article  PubMed  PubMed Central  Google Scholar 

  34. 34.

    Cuijpers P, Andersson G, Donker T, van Straten A. Psychological treatment of depression: results of a series of meta-analyses. Nordic J Psychiatry. 2011;65(6):354–64.

    Article  Google Scholar 

  35. 35.

    Khan A, Faucett J, Lichtenberg P, Kirsch I, Brown WA. A systematic review of comparative efficacy of treatments and controls for depression. PLoS One. 2012;7(7):e41778.

    Article  PubMed  PubMed Central  CAS  Google Scholar 

  36. 36.

    Shidhaye R, Lund C, Chisholm D. Closing the treatment gap for mental, neurological and substance use disorders by strengthening existing health care platforms: strategies for delivery and integration of evidence-based interventions. Int J Ment Heal Syst. 2015;9(1):1.

    Article  Google Scholar 

  37. 37.

    Raney LE. Integrating primary care and behavioral health: the role of the psychiatrist in the collaborative care model. Am J Psychiatr. 2015;172(8):721–8.

    Article  PubMed  Google Scholar 

  38. 38.

    Unützer J, Park M. Strategies to improve the management of depression in primary care. Primary care. 2012;39(2):415–31.

    Article  PubMed  PubMed Central  Google Scholar 

  39. 39.

    Katon W, Unutzer J, Wells K, Jones L. Collaborative depression care: history, evolution and ways to enhance dissemination and sustainability. Gen Hosp Psychiatry. 2010;32(5):456–64.

    Article  PubMed  Google Scholar 

  40. 40.

    Hempel S, Rubenstein LV, Shanman RM, Foy R, Golder S, Danz M, Shekelle PG. Identifying quality improvement intervention publications—a comparison of electronic search strategies. Implement Sci. 2011;6:85.

    Article  PubMed  PubMed Central  Google Scholar 

  41. 41.

    Michie S, Richardson M, Johnston M, Abraham C, Francis JJ, Hardeman W, Eccles MP, Cane J, Wood CE. The behavior change technique taxonomy (v1) of 93 hierarchically-clustered techniques: building an international consensus for the reporting of behavior change interventions. Ann Beh Med. 2013;46:81–95.

  42. 42.

    Leeman J, Baernholdt M, Sandelowski M. Developing a theory-based taxonomy of methods for implementing change in practice. J Adv Nurs. 2007;58:191–200

  43. 43.

    Michie S, Wood CE, Johnston M, Abraham C, Francis JJ, Hardeman W. Behaviour change techniques: the development and evaluation of a taxonomic method for reporting and describing behaviour change interventions (a suite of five studies involving consensus methods, randomised controlled trials and analysis of qualitative data). Health Technology Assess. 2015;19(99):1–188.

    Article  Google Scholar 

  44. 44.

    Prochaska JO, Redding CA, Evers K. The transtheoretical model and stages of change. In: Glanz K, Rimer BK, Lewis FM, editors. Health Behavior and Health Education: Theory, Research, and Practice (3rd Ed). San Francisco: Jossey-Bass, Inc; 2002.

    Google Scholar 

  45. 45.

    Higgins J, Green S: Cochrane Handbook for Systematic Reviews of Interventions Version 51.0. 2011. Available from

  46. 46.

    Hempel S, Shekelle PG, Liu JL, Danz MS, Foy R, Lim Y-W, Motala A, Rubenstein LV. Development of the Quality Improvement Minimum Quality Criteria Set (QI-MQCS): a tool for critical appraisal of quality improvement intervention publications. BMJ Qual Saf. 2015;

  47. 47.

    Borenstein M, Higgins JP, Hedges LV, Rothstein HR. Basics of meta-analysis: I2 is not an absolute measure of heterogeneity. Res Synth Methods. 2017;8(1):5–18.

    Article  PubMed  Google Scholar 

  48. 48.

    Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327(7414):557–60.

    Article  PubMed  PubMed Central  Google Scholar 

  49. 49.

    Miller WM, Rollnick S. Motivational interviewing (3rd ed.): helping people change. New York: Guilford Press; 2013.

    Google Scholar 

  50. 50.

    Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9. w264

    Article  PubMed  Google Scholar 

  51. 51.

    Sinnema H, Majo MC, Volker D, Hoogendoorn A, Terluin B, Wensing M, van Balkom A. Effectiveness of a tailored implementation programme to improve recognition, diagnosis and treatment of anxiety and depression in general practice: a cluster randomised controlled trial. Implement Sci. 2015;10:33.

    Article  PubMed  PubMed Central  Google Scholar 

  52. 52.

    Simon GE, VonKorff M, Rutter C, Wagner E. Randomised trial of monitoring, feedback, and management of care by telephone to improve treatment of depression in primary care. BMJ. 2000;320(7234):550–4.

    Article  PubMed  PubMed Central  CAS  Google Scholar 

  53. 53.

    Datto CJ, Thompson R, Horowitz D, Disbot M, Oslin DW. The pilot study of a telephone disease management program for depression. Gen Hosp Psychiatry. 2003;25(3):169–77.

    Article  PubMed  Google Scholar 

  54. 54.

    Hamilton M. A rating scale for depression. J Neurol Neurosurg Psychiatry. 1960;23:56–62.

    Article  PubMed  PubMed Central  CAS  Google Scholar 

  55. 55.

    Radloff LS. The CES-D scale. Appl Psychol Meas. 1977;1(3):385–401.

    Article  Google Scholar 

  56. 56.

    Beck AT, Ward CH, Mendelson M, Mock J, Erbaugh J. An inventory for measuring depression. Arch Gen Psychiatry. 1961;4:561–71.

    Article  PubMed  CAS  Google Scholar 

  57. 57.

    Baker R, Reddish S, Robertson N, Hearnshaw H, Jones B. Randomised controlled trial of tailored strategies to implement guidelines for the management of patients with depression in general practice. Br J Gen Pract. 2001;51(470):737–41.

    PubMed  PubMed Central  CAS  Google Scholar 

  58. 58.

    Goldberg HI, Wagner EH, Fihn SD, Martin DP, Horowitz CR, Christensen DB, Cheadle AD, Diehr P, Simon G. A randomized controlled trial of CQI teams and academic detailing: can they alter compliance with guidelines? Jt Comm J Qual Improv. 1998;24(3):130–42.

    PubMed  CAS  Google Scholar 

  59. 59.

    Keeley RD, Burke BL, Brody D, Dimidjian S, Engel M, Emsermann C, deGruy F, Thomas M, Moralez E, Koester S, et al. Training to use motivational interviewing techniques for depression: a cluster randomized trial. J Am Board Fam Med. 2014;27(5):621–36.

    Article  PubMed  Google Scholar 

  60. 60.

    Kurian BT, Trivedi MH, Grannemann BD, Claassen CA, Daly EJ, Sunderajan P. A computerized decision support system for depression in primary care. Prim. 2009;11(4):140–6.

    Google Scholar 

  61. 61.

    Rollman BL, Hanusa BH, Gilbert T, Lowe HJ, Kapoor WN, Schulberg HC: The electronic medical record. A randomized trial of its impact on primary care physicians’ initial management of major depression [corrected].[Erratum appears in Arch Intern Med 2001 Mar 12;161(5):705]. Arch Intern Med 2001, 161(2):189–197.

  62. 62.

    Worrall G, Angel J, Chaulk P, Clarke C, Robbins M. Effectiveness of an educational strategy to improve family physicians’ detection and management of depression: a randomized controlled trial. CMAJ. 1999;161(1):37–40.

    PubMed  PubMed Central  CAS  Google Scholar 

  63. 63.

    Shirazi M, Lonka K, Parikh SV, Ristner G, Alaeddini F, Sadeghi M, Wahlstrom R. A tailored educational intervention improves doctor’s performance in managing depression: a randomized controlled trial. J Eval Clin Pract. 2013;19(1):16–24.

    Article  PubMed  Google Scholar 

  64. 64.

    Yawn BP, Dietrich AJ, Wollan P, Bertram S, Graham D, Huff J, Kurland M, Madison S, Pace WD, practices T: TRIPPD: a practice-based network effectiveness study of postpartum depression screening and management. Ann Fam Med 2012, 10(4):320–329.

  65. 65.

    van Eijk ME, Avorn J, Porsius AJ, de Boer A. Reducing prescribing of highly anticholinergic antidepressants for elderly people: randomised trial of group versus individual academic detailing. BMJ. 2001;322(7287):654–7.

    Article  PubMed  PubMed Central  CAS  Google Scholar 

  66. 66.

    Barkham M, Parry G. Balancing rigour and relevance in guideline development for depression: the case for comprehensive cohort studies. Psychol Psychother. 2008;81(Pt 4):399–417.

    Article  PubMed  Google Scholar 

  67. 67.

    Bermejo I, Schneider F, Kriston L, Gaebel W, Hegerl U, Berger M, Härter M. Improving outpatient care of depression by implementing practice guidelines: a controlled clinical trial. Int J Qual Health Care. 2009;21(1):29–36.

    Article  PubMed  Google Scholar 

  68. 68.

    Lai IC, Wang MT, Wu BJ, Wu HH, Lian PW. The use of benzodiazepine monotherapy for major depression before and after implementation of guidelines for benzodiazepine use. J Clin Pharm Ther. 2011;36(5):577–84.

    Article  PubMed  CAS  Google Scholar 

  69. 69.

    Lin E, Katon W, Simon G, Korff M, Bush T, Rutter C, Saunders K, Walker E. Achieving guidelines for the treatment of depression in primary care: is physician education enough? Med Care. 1997;35:831–42.

    Article  PubMed  CAS  Google Scholar 

  70. 70.

    Jones LE, Turvey C, Torner JC, Doebbeling CC. Nonadherence to depression treatment guidelines among veterans with diabetes mellitus. Am J Manag Care. 2006;12(12):701–10.

    PubMed  Google Scholar 

  71. 71.

    Sewitch MJ, Blais R, Rahme E, Bexton B, Galarneau S. Receiving guideline-concordant pharmacotherapy for major depression: impact on ambulatory and inpatient health service use. Can J Psychiatry. 2007;52(3):191–200.

    Article  PubMed  Google Scholar 

  72. 72.

    Smolders M, Laurant M, Verhaak P, Prins M, van Marwijk H, Penninx B, Wensing M, Grol R. Adherence to evidence-based guidelines for depression and anxiety disorders is associated with recording of the diagnosis. Gen Hosp Psychiatry. 2009;31(5):460–9.

    Article  PubMed  Google Scholar 

  73. 73.

    Furukawa TA, Onishi Y, Hinotsu S, Tajika A, Takeshima N, Shinohara K, Ogawa Y, Hayasaka Y, Kawakami K. Prescription patterns following first-line new generation antidepressants for depression in Japan: a naturalistic cohort study based on a large claims database. J Affect Disord. 2013;150(3):916–22.

    Article  PubMed  Google Scholar 

  74. 74.

    Depression Guideline Panel: Depression in primary care: volume 2. Treatment of major depression. Clinical Practice Guideline, Number 5. Rockville: US Department of Health and Human Services, Public Health Service, Agency for Health Care Policy and Research; 1993.

  75. 75.

    American Psychiatric Association. American Psychiatric Association Practice Guidelines for the treatment of psychiatric disorders: compendium 2006. American Psychiatric Pub, 2006.

  76. 76.

    Schulberg HC, Katon W, Simon GE, Rush AJ. Treating major depression in primary care practice: an update of the Agency for Health Care Policy and Research Practice Guidelines. Arch Gen Psychiatry. 1998;55(12):1121–7.

    Article  PubMed  CAS  Google Scholar 

  77. 77.

    van Avendonk M, van Weel-Baumgarten E, van der Weele G, Wiersma T, Burgers JS. Summary of the Dutch College of General Practitioners’ practice guideline ‘Depression’. Ned Tijdschr Geneeskd. 2012;156(38):A5101.

    PubMed  Google Scholar 

  78. 78.

    Lin EH, Simon GE, Katzelnick DJ, Pearson SD. Does physician education on depression management improve treatment in primary care? J Gen Intern Med. 2001;16(9):614–9.

    Article  PubMed  PubMed Central  CAS  Google Scholar 

  79. 79.

    Bosmans J, de Bruijne M, van Hout H, van Marwijk H, Beekman A, Bouter L, Stalman W, van Tulder M. Cost-effectiveness of a disease management program for major depression in elderly primary care patients. J Gen Intern Med. 2006;21(10):1020–6.

    Article  PubMed  PubMed Central  Google Scholar 

  80. 80.

    Bijl D, Van Marwijk H, Beekman A, De Haan M, Van Tilburg W: A randomized controlled trial to improve the recognition, diagnosis and treatment of major depression in elderly people in general practice: design, first results and feasibility of the West Friesland Study. Primary Care Psychiatry. 2002;8(4):135–40.

    Article  Google Scholar 

  81. 81.

    Callahan CM, Hendrie HC, Dittus RS, Brater DC, Hui SL, Tierney WM. Improving treatment of late life depression in primary care: a randomized clinical trial. J Am Geriatr Soc. 1994;42(8):839–46.

    Article  PubMed  CAS  Google Scholar 

  82.

    Gerrity MS, Cole SA, Dietrich AJ, Barrett JE. Improving the recognition and management of depression: is there a role for physician education? J Fam Pract. 1999;48(12):949–57.

  83.

    Freemantle N, Nazareth I, Eccles M, Wood J, Haines A; Evidence-based OutReach trialists. A randomised controlled trial of the effect of educational outreach by community pharmacists on prescribing in UK general practice. Br J Gen Pract. 2002;52(477):290–5.

  84.

    Aakhus E, Granlund I, Odgaard-Jensen J, Oxman AD, Flottorp SA. A tailored intervention to implement guideline recommendations for elderly patients with depression in primary care: a pragmatic cluster randomised trial. Implement Sci. 2016;11:32.

  85.

    Aakhus E, Granlund I, Odgaard-Jensen J, Wensing M, Oxman AD, Flottorp SA. Tailored interventions to implement recommendations for elderly patients with depression in primary care: a study protocol for a pragmatic cluster randomised controlled trial. Trials. 2014;15(1):16.

  86.

    Azocar F, Cuffel B, Goldman W, McCarter L. The impact of evidence-based guideline dissemination for the assessment and treatment of major depression in a managed behavioral health care organization. J Behav Health Serv Res. 2003;30(1):109–18.

  87.

    Eccles MP, Steen IN, Whitty PM, Hall L. Is untargeted educational outreach visiting delivered by pharmaceutical advisers effective in primary care? A pragmatic randomized controlled trial. Implement Sci. 2007;2:23.

  88.

    Nazareth I, Freemantle N, Duggan C, Mason J, Haines A. Evaluation of a complex intervention for changing professional behaviour: the Evidence Based Out Reach (EBOR) Trial. J Health Serv Res Policy. 2002;7(4):230–8.

  89.

    Horowitz CR, Goldberg HI, Martin DP, Wagner EH, Fihn SD, Christensen DB, Cheadle AD. Conducting a randomized controlled trial of CQI and academic detailing to implement clinical guidelines. Jt Comm J Qual Improv. 1996;22(11):734–50.

  90.

    Trivedi MH, Kern JK, Grannemann BD, Altshuler KZ, Sunderajan P. A computerized clinical decision support system as a means of implementing depression guidelines. Psychiatr Serv. 2004;55(8):879–85.

  91.

    Katzelnick DJ, Simon GE, Pearson SD, Manning WG, Helstad CP, Henk HJ, Cole SM, Lin EH, Taylor LH, Kobak KA. Randomized trial of a depression management program in high utilizers of medical care. Arch Fam Med. 2000;9(4):345.

  92.

    Linden M, Westram A, Schmidt LG, Haag C. Impact of the WHO depression guideline on patient care by psychiatrists: a randomized controlled trial. Eur Psychiatry. 2008;23(6):403–8.

  93.

    Nilsson G, Hjemdahl P, Hassler A, Vitols S, Wallen NH, Krakau I. Feedback on prescribing rate combined with problem-oriented pharmacotherapy education as a model to improve prescribing behaviour among general practitioners. Eur J Clin Pharmacol. 2001;56(11):843–8.

  94.

    Rollman BL, Hanusa BH, Lowe HJ, Gilbert T, Kapoor WN, Schulberg HC. A randomized trial using computerized decision support to improve treatment of major depression in primary care. J Gen Intern Med. 2002;17(7):493–503.

  95.

    Shirazi M, Parikh SV, Alaeddini F, Lonka K, Zeinaloo AA, Sadeghi M, Arbabi M, Nejatisafa AA, Shahrivar Z, Wahlström R. Effects on knowledge and attitudes of using stages of change to train general practitioners on management of depression: a randomized controlled study. Can J Psychiatry. 2009;54(10):693–700.



Acknowledgements

We are grateful to the Cochrane Effective Practice and Organisation of Care group and Jeremy Grimshaw for input and resources that shaped the scope of this review. We also thank John Williams and Thomas Concannon for their helpful comments and Patty Smith for administrative assistance.


Funding

The review was funded by the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury (DCoE). The findings and conclusions in this manuscript are those of the authors and do not necessarily represent the views of the DCoE.

Availability of data and materials

All data generated or analyzed during this study are included in this published article and its supplementary information files (see Additional file 1: Appendices A–C).

Author information




Contributions

All authors made substantial contributions to conception and design (EP, LR, MD, BB, SH), acquisition of data (EP, RK, AM, JL, SH), and analysis and interpretation of data (EP, LR, RK, AM, MB, SH). All were involved in drafting the manuscript (EP, LR, SH) and revising it (EP, LR, RK, MD, BB, AM, MB, JL, SH). All authors gave final approval of the version to be published. EP and SH are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Corresponding author

Correspondence to Eric R. Pedersen.

Ethics declarations

Ethics approval and consent to participate

All procedures for this review were approved by the RAND Human Subjects Review Committee.

Consent for publication

Not applicable.

Competing interests

LR is on the editorial board of Implementation Science. All authors declare no other financial or non-financial competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Appendix A: Search strategy. Appendix B: Critical appraisal ratings using the Cochrane Risk of Bias tool and the QI-MQCS. Appendix C: Detailed quality of evidence and summary of findings. (DOCX 707 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Pedersen, E.R., Rubenstein, L., Kandrack, R. et al. Elusive search for effective provider interventions: a systematic review of provider interventions to increase adherence to evidence-based treatment for depression. Implementation Sci 13, 99 (2018).

Keywords

  • Depression
  • Provider intervention
  • Guidelines
  • Evidence-based
  • Major depressive disorder
  • Primary care
  • Specialty care