Elusive search for effective provider interventions: a systematic review of provider interventions to increase adherence to evidence-based treatment for depression

Background: Depression is a common mental health disorder for which clinical practice guidelines have been developed. Prior systematic reviews have identified complex organizational interventions, such as collaborative care, as effective for guideline implementation; yet, many healthcare delivery organizations are interested in less resource-intensive methods to increase provider adherence to guidelines and guideline-concordant practices. The objective of this systematic review was to assess the effectiveness of healthcare provider interventions that aim to increase adherence to evidence-based treatment of depression in routine clinical practice.

Methods: We searched five databases through August 2017 using a comprehensive search strategy to identify English-language randomized controlled trials (RCTs) in the quality improvement, implementation science, and behavior change literature that evaluated outpatient provider interventions, in the absence of practice redesign efforts, to increase adherence to treatment guidelines or guideline-concordant practices for depression. We used meta-analysis to summarize odds ratios, standardized mean differences, and incidence rate ratios, and assessed quality of evidence (QoE) using the GRADE approach.

Results: Twenty-two RCTs promoting adherence to clinical practice guidelines or guideline-concordant practices met inclusion criteria. Studies evaluated diverse provider interventions, including distributing guidelines to providers, education/training such as academic detailing, and combinations of education with other components such as targeting implementation barriers. Results were heterogeneous, and analyses comparing provider interventions with usual clinical practice did not indicate a statistically significant difference in guideline adherence across studies. There was some evidence that provider interventions improved individual outcomes such as medication prescribing, and indirect comparisons indicated that more complex provider interventions may be associated with more favorable outcomes. We did not identify types of provider interventions that were consistently associated with improvements across indicators of adherence and across studies. Effects on patients' health in these RCTs were inconsistent across studies and outcomes.

Conclusions: Existing RCTs describe a range of provider interventions to increase adherence to depression guidelines. Low QoE and lack of replication of specific intervention strategies across studies limited conclusions that can be drawn from the existing research. Continued efforts are needed to identify successful strategies to maximize the impact of provider interventions on increasing adherence to evidence-based treatment for depression.

Trial registration: PROSPERO record CRD42017060460 on 3/29/17

Electronic supplementary material: The online version of this article (10.1186/s13012-018-0788-8) contains supplementary material, which is available to authorized users.


Keywords: Depression, Provider intervention, Guidelines, Evidence-based, Major depressive disorder, Primary care, Specialty care

Background

Depression is one of the most common mental health disorders worldwide, affecting about 7% of the adult populations in the USA and the European Union [1,2]. Depression is associated with poor quality of life and significantly decreased psychosocial functioning [3]; high societal costs related to patient care, unstable or unproductive employment, and marital and relationship disruption [4][5][6]; and mortality [7,8]. Depression is most often identified by practitioners in primary care settings [9,10]. Collaborative care interventions in primary care can significantly and cost-effectively improve depression care outcomes [11][12][13][14] and can improve adherence to clinical guidelines for effective psychological and pharmacological treatments for depression [15][16][17]. However, collaborative care interventions require a major commitment to organizational change, including commitment by mental health specialists to support the revamped system. Levels of organizational [18] and provider [19] readiness significantly influence any potential positive effect of collaborative care on outcomes. Given that not all organizations will have the resources or readiness to implement large system redesign efforts, it is important to understand how and whether less intensive intervention efforts that may be easier to adopt can influence provider behavior.
Although most complex interventions that aim to improve depression care include some elements related to guideline-based education [20][21][22][23][24], further research is needed to evaluate the comparative effects of different educational interventions, which do not require organizational change, on specific provider behaviors. Knowledge transfer is a burgeoning field that seeks to reduce the gap between research on evidence-based interventions and the use of these interventions by generating, sharing, and applying research knowledge in practice [25]. However, knowledge transfer work is only beginning to systematically address methods for achieving clinical guideline-based provider behavior change. Based on conclusions that passive dissemination in educational and quality assurance interventions (e.g., mailing guidelines to providers with no reminders or follow-up) is generally ineffective [20], researchers have emphasized system-level strategies that require restructuring care processes, extensive time for planning, financial reorganization, and establishing new clinics and staff [26,27]. Yet, education and dissemination interventions may have greater feasibility than large-scale organizational change, may be necessary for promoting organizational and provider readiness, and are often critical components of broader system-level interventions. In addition, settings outside of primary care still rely heavily on direct knowledge transfer paradigms: adoption of evidence-based care and fidelity to manualized treatment are among the biggest challenges in specialty care settings [28,29].
Two prior reviews of organizational and education interventions implemented exclusively in primary care settings concluded that provider training alone does not improve depression care [30,31]. The more comprehensive review [30] was conducted in 2003, and updated reviews are necessary to increase understanding of what effective behavior change strategies [20] can promote adherence to guidelines and guideline-concordant practices in specific settings. The most recent review [31] focused only on physician (e.g., general practitioner) education and did not address other interventions or providers, including those working in specialty care settings. Moreover, neither review focused on guideline adherence by providers; instead, they included only patient outcomes. Improved guideline adherence is a critical step in the path toward depression outcome improvement. Thus, the results of interventions targeting provider behavior change are important to policy makers, administrators, and providers in assessing how best to increase the use of evidence-based care for depression. Lastly, both reviews assessed only interventions within primary care settings. Since depression is most often identified by practitioners in primary care settings [9,10], understanding which provider interventions work in these settings is essential. However, the majority of RCTs evaluating medication and behavioral treatments for depression are conducted in specialty care settings [32][33][34][35]. Therefore, specialty care settings (e.g., managed behavioral health care organizations, psychiatry private practice) are also important settings in which to assess the effectiveness of interventions [36].
In this systematic review, we synthesize estimates of the effects of provider interventions, with a specific focus on behavioral health provider change, to promote adherence to evidence-based treatments for depression. We purposefully focus on RCTs with provider outcomes as the primary outcome, and we include both specialty and primary care settings, given the large number of behavioral change strategies that have been proposed to encourage providers to adopt evidence-based treatments for depression in practice [30,[37][38][39]. We also examine whether provider intervention effects vary across provider target of the intervention (i.e., a sole provider or a team of providers).

Registration
The review is based on a registered systematic review protocol (PROSPERO record CRD42017060460).

Search strategy
In August 2017, we searched the databases PubMed, PsycINFO, the Cumulative Index of Nursing and Allied Health Literature, the Cochrane Central Register of Controlled Trials, and the Cochrane Database of Systematic Reviews to identify English-language reports of RCTs that evaluated the effects of provider interventions. Searches included depression terms (e.g., depress$, mood dysregulation), general terms for knowledge transfer and organizational quality improvement (e.g., evidence-based guideline, research to practice) [40], terms related to provider interventions for clinical practice guidelines and implementation strategies (e.g., academic detailing, reminder systems), approaches for continuous quality improvement (e.g., CQI; quality manage$, model for improvement), terms for continuous professional education (e.g., continuing education, learning collaborative), and behavior change terms (e.g., reframing, incentive) [24,[41][42][43] (see Additional file 1: Appendix A for full search strategy). We also searched bibliographies of existing systematic reviews and included studies.

Eligibility
Eligible participants were healthcare providers responsible for patient care in the outpatient setting (e.g., primary care physicians, psychiatrists, mental health professionals, nurse practitioners, other general practitioners such as physician assistants). Eligible interventions aimed to increase adherence to depression guidelines and guideline-concordant practice (e.g., continuing education, quality improvement projects, and financial, organizational, or regulatory interventions that used knowledge translation strategies). To determine the effect of interventions on provider behavior change, we excluded studies that primarily assessed the effects of large system redesign efforts, such as collaborative care, where new clinics are established, care is reorganized (e.g., implementing dedicated care managers), and training of existing providers is only a minor component of the larger intervention. We included only interventions aimed at improving depression treatment, excluding studies focused solely on improving screening/assessment or referral behavior. Eligible comparators were no intervention, usual care practice (UCP), wait list control, or other provider interventions (e.g., organizational system redesign or an out-of-scope intervention). Outcomes documented the adherence of providers to guidelines or to guideline-concordant practices. We evaluated observable, objective changes in provider behavior because they are better markers of intervention success than provider knowledge, attitudes, satisfaction, or perceived changes. The latter occur earlier in the change process [44] and, while often precursors to change, may not progress to the observable changes in behavior necessary for impacting patient outcomes. Timing involved any intervention duration and any follow-up period, and the setting was any outpatient healthcare delivery facility or other physician practice setting.
The review was restricted to RCT study design, with studies randomizing provider participants or practice sites to interventions. We aimed to identify the presence and absence of evidence from this robust research design, which allows for the development of the confident evidence statements desired for policy changes.

Data extraction and critical appraisal
We used a standardized approach for systematic reviews with detailed instructions for reviewers to reduce ambiguities. Following a pilot session to ensure similar interpretation of the inclusion and exclusion criteria, two reviewers independently screened all titles and abstracts of retrieved citations. Citations judged as potentially eligible by one or both reviewers were obtained as full text. Both reviewers then screened full-text publications against the specified inclusion and exclusion criteria, abstracted data from the studies that met the inclusion criteria, and assessed their risk of bias. All disagreements were resolved through author discussions. Critical appraisal assessments included the Cochrane Risk of Bias tool [45] and the Quality Improvement Minimum Quality Criteria Set (QI-MQCS) [46] to address internal validity and study-design-independent criteria for interventions aiming to improve healthcare.

Analytic plan
We used the Hartung-Knapp-Sidik-Jonkman method for random effects meta-analysis to summarize odds ratios (OR), standardized mean differences (SMD), and incidence rate ratios (IRR), together with 95% confidence intervals (CI). Tests of heterogeneity were performed using the I² statistic [47]. Values of I² of 30-60% may represent moderate heterogeneity, 50-90% substantial heterogeneity, and 75-100% considerable heterogeneity [48]. We assessed the quality of evidence (QoE) using the Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) approach.
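The pooling steps described above (random-effects weights, the Hartung-Knapp-Sidik-Jonkman variance correction, and the I² heterogeneity statistic) can be sketched in a few lines of Python. The four study effect sizes, their variances, and the hardcoded t critical value below are hypothetical illustrations, not values from this review:

```python
import math

def dl_tau2(yi, vi):
    """DerSimonian-Laird estimate of between-study variance (tau^2),
    plus Cochran's Q computed with fixed-effect weights."""
    k = len(yi)
    wi = [1.0 / v for v in vi]
    mu_fe = sum(w * y for w, y in zip(wi, yi)) / sum(wi)
    q = sum(w * (y - mu_fe) ** 2 for w, y in zip(wi, yi))
    c = sum(wi) - sum(w ** 2 for w in wi) / sum(wi)
    return max(0.0, (q - (k - 1)) / c), q

def hksj_pool(yi, vi, t_crit):
    """Random-effects pooled estimate with the Hartung-Knapp-Sidik-Jonkman
    variance correction; t_crit is the two-sided 95% t quantile for k-1 df."""
    k = len(yi)
    tau2, q = dl_tau2(yi, vi)
    wstar = [1.0 / (v + tau2) for v in vi]
    mu = sum(w * y for w, y in zip(wstar, yi)) / sum(wstar)
    # HKSJ: weighted residual variance divided by (k-1) times the total weight
    se = math.sqrt(sum(w * (y - mu) ** 2 for w, y in zip(wstar, yi))
                   / ((k - 1) * sum(wstar)))
    i2 = max(0.0, (q - (k - 1)) / q) * 100.0 if q > 0 else 0.0
    return mu, (mu - t_crit * se, mu + t_crit * se), i2

# Hypothetical log odds ratios and their variances from four trials
log_or = [math.log(1.8), math.log(0.9), math.log(2.4), math.log(1.1)]
var_lor = [0.09, 0.12, 0.25, 0.07]
mu, ci, i2 = hksj_pool(log_or, var_lor, t_crit=3.182)  # t(0.975, df=3) from tables
print(f"pooled OR {math.exp(mu):.2f}, "
      f"95% CI {math.exp(ci[0]):.2f} to {math.exp(ci[1]):.2f}, I^2 {i2:.0f}%")
```

The confidence interval uses a t distribution with k − 1 degrees of freedom, which is what typically makes HKSJ intervals wider (and more conservative) than conventional Wald-type intervals when only a handful of studies are pooled.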
The identified studies assessed a variety of study-specific outcomes. To facilitate comparisons across studies with the available data, we selected a dichotomous, continuous, and incidence rate variable as the main indicators of adherence to depression guidelines or guideline-concordant practices (see Table 1). The lead author reviewed the intervention content and selected these main indicators of adherence from a list of all reported outcomes in the study. A content expert checked the selection. To avoid bias, the specific outcomes were selected before analyses. We also analyzed individual provider behavior change outcomes that were reported in more than one study: medication prescribing, contact with patients, specific intervention adherence, and referral to mental healthcare specialists offered to patients. We analyzed the effects on patient outcomes when available. We differentiated comparators as UCP (e.g., no intervention or interventions not aimed at depression treatment), practice redesign efforts (e.g., introduction of a nurse disease manager or a continuous quality improvement team), or other active provider interventions (e.g., training providers in a specific type of behavioral therapy, such as motivational interviewing [49]).

Results
The literature search results are documented in a PRISMA [50] literature flow diagram (see Fig. 1). We reviewed 1737 titles and abstracts, and, of these, we reviewed full texts for 365 citations, identifying 22 eligible studies reported in 34 publications. Studies took place in nine countries and included 2149 providers and 239,477 patients. Twenty studies took place in primary care settings, ranging from primary care offices and academically affiliated primary care practices to family medicine research network practices and continuing medical education groups. Two studies took place in specialty care settings: a private psychiatry practice and a managed behavioral health care organization. Two studies included teams of providers and 20 included a single provider only: 16 studies with primary care physicians, two studies with mental health care providers, and two studies with other general practitioners or clinicians. Duration of the interventions, duration of the implementation periods, and the time points of outcome assessment following the end of the implementation phase were all variable. Studies evaluated many types of provider interventions, ranging from simply disseminating depression guidelines to education strategies such as academic detailing and multi-component strategies involving education plus additional components (e.g., reminders or strategies tailored to individual providers) (see Table 1).
The methodological rigor of the included studies was variable; however, all studies were rated as high risk of performance bias related to the lack of blinding of intervention providers. It was generally impossible for a provider to be blinded to delivery of the interventions of interest. With respect to the potential for contamination (i.e., both groups sharing material meant for the intervention group), only three of the 22 studies were judged to be at high risk of contamination bias. Half of the included studies described the context and organizational readiness for quality improvement, while the other half did not meet these criteria. Twelve studies met the criterion for the reach/penetration domain and described the number of providers or departments that participated in the study compared to the number of available and potentially eligible participants or departments. Only one study addressed the sustainability of the intervention. Details for all critical appraisal domains are shown in Additional file 1: Appendix B.

Table 1 outlines the findings from individual studies and Table 2 summarizes the evidence for the pooled analyses that utilized the available dichotomous, continuous, and IRR adherence outcomes. Additional file 1: Appendix C includes a summary of findings table with quality of evidence details. Thirteen studies with 3158 participants reported on the odds of achieving provider adherence by comparing a provider intervention to UCP (Fig. 2). Pooled analyses did not indicate a statistically significant difference in the main guideline adherence outcomes across studies (OR 1.60; CI 0.76, 3.37; 13 RCTs; I² 82%; moderate QoE). Pooled analyses of nine studies, with 1236 participants, using a continuous outcome also did not show a statistically significant difference compared to UCP (SMD 0.17; CI − 0.16, 0.50; I² 86%; low QoE) (Fig. 3).
Four studies reporting IRRs also showed no difference between intervention and control groups (IRR 1.16; CI 0.63, 2.15; I² 91%; low QoE). However, all analyses showed substantial heterogeneity. Lastly, three studies with 867 participants reported on the odds of achieving provider adherence by comparing a provider intervention to practice redesign efforts; the difference was not statistically significant (OR 0.81; CI 0.30, 2.19; I² 20%).

Medication prescribing
Eleven studies with 4116 participants reported on the odds of improved medication prescribing. The pooled analysis indicated a statistically significant intervention effect, with increased odds of improved medication prescribing compared to UCP, although heterogeneity was considerable and not all studies favored the intervention.

Contact with patients
Three studies with 710 participants reported on the odds of increased patient contacts in studies comparing a provider intervention with UCP. The pooled analysis showed no statistically significant difference between intervention and control groups (OR 6.40; CI 0.13, 322.40; I² 75%). Similarly, three studies comparing a provider intervention with UCP, with 225 participants, also did not report a statistically significant difference using continuous outcomes (SMD 0.17; CI − 0.84, 1.19; I² 56%). One study [51] with 444 participants reported IRR data on the number of provider consultations at 6-month follow-up. This study reported a significant effect favoring the intervention, which consisted of training on guidelines and consultations from experts to address personal barriers to implementing guidelines (IRR 1.78; CI 1.14, 2.78). The only study reporting outcomes on contact with patients compared to a system redesign effort was a small study of 24 participants [52]; the study found no difference for the outcome (SMD 0.07; CI − 0.73, 0.87). Providers in this study's intervention group received a detailed report on their patients that contained treatment recommendations based on a computerized algorithm, while the system redesign group received this feedback enhanced with care management of patients by care managers who helped to implement the physicians' recommendations.

Specific intervention adherence
Indicators of this outcome varied and included whether patients were treated using a specific treatment from the guidelines, the number of providers adhering to the specifications of the guidelines, and a performance score reflecting whether providers offered the appropriate treatment specified by the guidelines. Six studies with 1375 participants reported on the odds of adherence compared to UCP. The pooled analysis for dichotomous outcomes did not indicate a statistically significant intervention effect (OR 2.26; CI 0.50, 10.28; I² 90%). Three studies with 597 participants reporting on a continuous outcome also did not indicate a statistically significant intervention effect. One study [53] reported on a measure of intervention adherence where a provider intervention was compared to practice redesign. There was no statistically significant difference between the comparator and the intervention, which offered education and distribution of practice guidelines to primary care providers (OR 0.30; CI 0.08, 1.14).

Referral offered to patients
Four studies with 896 participants and a UCP comparator reported on the odds of improved referral offered to patients by providers. The pooled analysis did not indicate a systematic intervention effect (OR 1.11; CI 0.33, 3.70; I² 41%). No identified studies reported continuous or IRR referral outcomes, or compared referral outcomes to practice redesign.

Patient outcomes
We identified 14 studies that reported patient outcomes. Specifically, these studies reported changes in depression rating scale scores, depression treatment response (i.e., proportion of patients with improvement, including remission), depression recovery (i.e., proportion of patients in remission/not meeting depression criteria at follow-up), and treatment adherence (e.g., medication adherence). Nine studies with 2196 participants reported on the mean difference in patient depression rating scales, such as the Hamilton Rating Scale for Depression [54] or the Center for Epidemiologic Studies Depression Scale [55]. A pooled analysis comparing provider interventions to UCP did not indicate a difference in patient outcomes associated with the intervention (SMD − 0.06; CI − 0.14, 0.01; I² 0%). Six studies with 1312 participants reported on depression treatment response (e.g., a score less than 11 on the Beck Depression Inventory [56]); pooled results for this outcome favored the provider interventions.

Findings by type of intervention
Ten studies provided direct comparative effectiveness results utilizing the main indicators of adherence to depression guidelines [51][52][53][57][58][59][60][61][62][63] (see Additional file 1: Appendix C), of which two reported significant differences. One study reporting a significant difference [51], involving 444 participants, compared provider training in guidelines plus tailored implementation to provider training alone. This study found no statistically significant difference in odds for achieving provider adherence at 6-month follow-up (OR 1.09; CI 0.62, 1.91) but did find a significant effect favoring the tailored intervention on the rate of provider consultations (IRR 1.78; CI 1.14, 2.78). The other study reporting a significant difference [63], involving 389 participants, found a statistically significant mean difference between intervention and comparator groups (SMD 0.89; CI 0.59, 1.18). The intervention consisted of a 2-day continuing medical education course focused on treatment and differential diagnosis of depression disorders. Providers were assigned to groups in which the education component was tailored to the providers' self-reported stage of change. The comparator group also received education on treatment and diagnosis of depression disorders (including the same 2-day continuing medical education course), but the education was not tailored to the providers' stage of change.
Indirect comparisons using meta-regression assessed whether, within the range of eligible provider interventions, those that combined education with other components, such as tailored implementation strategies, reported better results than education-only interventions. The analyses did not indicate that interventions classified as education only systematically reported different effects than interventions with additional components (dichotomous outcomes p = 0.574, continuous outcomes p = 0.238). We also compared unidimensional and multidimensional interventions. We found no statistically significant effect for dichotomous outcomes (p = 0.707), but the equivalent analysis for studies reporting continuous outcomes approached statistical significance (p = 0.055). To explore this finding further, we rated the intensity of the intervention on a 3-point scale. A meta-regression for the dichotomous adherence outcome did not show an effect (p = 0.973); however, the analysis of the continuous adherence outcome suggested that the intensity of the intervention was associated with the effect size (i.e., the greater the intensity, the greater the adherence; p = 0.033). This analysis should be interpreted with caution because of the small number of studies contributing to individual intensity rating categories.
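The meta-regression of effect size on intervention intensity described above can be illustrated, in a simplified form, as inverse-variance weighted least squares of each study's effect size on a moderator. The data below (a 3-point intensity rating and SMDs with variances) are hypothetical; a full random-effects meta-regression, as reported here, would additionally estimate between-study variance and a p value for the slope:

```python
def wls_metareg(x, y, v):
    """Inverse-variance weighted regression of effect sizes y on moderator x.
    A minimal fixed-effect meta-regression sketch; v holds per-study variances."""
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    # Weighted means of the moderator and the effect sizes
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    # Weighted sums of squares and cross-products
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    slope = sxy / sxx
    return slope, ybar - slope * xbar  # slope and intercept

# Hypothetical SMDs and variances for studies rated 1-3 on intervention intensity
intensity = [1, 1, 2, 2, 3, 3]
smd = [0.05, -0.10, 0.15, 0.25, 0.40, 0.55]
var_smd = [0.04, 0.05, 0.04, 0.06, 0.05, 0.04]
slope, intercept = wls_metareg(intensity, smd, var_smd)
print(f"estimated change in SMD per intensity point: {slope:.2f}")
```

A positive slope would correspond to the pattern reported for the continuous adherence outcome: greater intervention intensity associated with larger effects.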
For subgroup analyses, we stratified the included studies by those distributing guidelines to providers, those with education interventions, and those with more complex interventions that included, for example, an education component in addition to exploring and helping providers overcome barriers to guideline implementation. Figures 2 and 3 show the individual study results for the dichotomous and continuous main indicators of adherence within these broad subgroups. All three subgroups reported no statistically significant differences between the intervention and the comparator groups. There was no statistically significant intervention effect in studies that simply distributed treatment guidelines (OR 1.28; CI 0.75, …)

Findings by provider target
We did not identify any studies directly comparing effects for different types of healthcare providers. We indirectly compared the 20 interventions that targeted single providers and the two that targeted teams. For the dichotomous adherence outcome, a meta-regression indicated that the intervention effect systematically varied by the type of provider targeted (p = 0.034); yet, the analysis should be interpreted with caution because only one of the team studies contributed data to this analysis [64]. The effect was not replicated in an analysis based on IRR data that compared the other team intervention [65] with the three sole-provider interventions that had count outcomes (p = 0.352).
For subgroup analyses, we stratified the results by interventions on single providers versus teams of providers (see Additional file 1: Appendix C). Pooled analyses of studies of interventions that targeted single providers did not report a statistically significant intervention effect on the main adherence outcome (OR 1.42; CI 0.74, 2.73; 12 RCTs; I² 80%). One study [64] compared a team intervention to a control, and the effect for provider follow-up with patients was significant in favor of the intervention group at 12-month follow-up (OR 101.34; CI 6.17, 1664.08). Given the wide confidence interval, we examined another main adherence outcome (i.e., whether the patient received medication plus counseling) and found a point estimate favoring the team intervention, although the confidence interval included the null (OR 1.50; CI 0.83, 2.73).

Findings by setting
We did not identify any studies directly comparing the effects of the setting. To assess whether effects varied by setting, we compared two studies conducted in specialty care settings with 20 conducted in primary care settings (see Additional file 1: Appendix C). A meta-regression on the main continuous adherence outcome did not suggest any systematic effects of the setting (p = 0.385), but the result should be interpreted with caution as only two studies provided data on specialty care settings.

Discussion
This systematic review compiles research evidence on the effects of healthcare provider interventions on adherence to guidelines or guideline-concordant behavior for depression treatment. We excluded system redesign efforts as interventions for our purposes (e.g., collaborative care where infrastructures are re-organized) and targeted studies that included provider behavior change outcomes. Our findings provide little support for the effectiveness of currently tested provider education or dissemination interventions on provider adherence to depression treatment guidelines; however, there was some evidence that provider interventions improved the outcomes of medication prescribing and patient depression treatment response. Results also suggested that some interventions that were tailored to providers' needs and that went beyond simply distributing guidelines to providers may improve provider behavior and promote guideline adherence.
Our findings are important for several reasons. First, it is important for healthcare systems to know whether the approaches identified in our review, all of which are less costly to implement than major systems change interventions, such as collaborative care, can change provider behavior. Second, few, if any, interventions including collaborative care for depression are undertaken without an education-focused component. This study can help focus efforts to better evaluate and improve this component. Third, provider education can be a critical step in promoting readiness to improve depression care and, if carried out effectively, may often be the best first implementation step in depression care improvement initiatives. This study provides a foundation for further development of provider education and dissemination methods for improving depression care.
While there is a substantial body of evidence on provider interventions in terms of research volume, it is noteworthy that we evaluated many unique interventions, ranging from simple distribution of guidelines, to education-only strategies, to education that involved multiple follow-up components and trainings. No two studies reported on the same intervention and comparator, which limits comparative analyses. We assessed whether educational interventions alone can change provider behavior in clinical practice and, in line with existing reviews [30,31], our analyses did not find significant effects. Indirect comparisons across the identified studies to detect effect modifiers indicated that more complex interventions (i.e., those with provider education plus additional components and implementation strategies, such as tailoring training to address personal implementation barriers) may be associated with more favorable outcomes. However, we did not identify subgroups of interventions that were consistently associated with significant changes in provider behavior. The individual successful approaches observed for main adherence outcomes have not been investigated in more than one study, and findings have not been replicated across independent research groups. More studies are needed that attempt to isolate specific provider interventions, either within system redesigns or in studies that evaluate provider interventions specifically. As described, the methodological rigor of the included studies varied, but none of our analyses were based exclusively on poor-quality studies. We found no indication of publication bias. In addition, because the outcomes of interest were often not the primary outcomes of the included studies (which frequently focused on patient impact), we are more confident that the estimates in our review are unlikely to be strongly affected by publication bias.
Still, given the diversity of the interventions evaluated in individual studies, the heterogeneity in results, and the reliance on single studies, often with imprecise effect estimates and follow-up periods of 1 year or less, the quality of evidence remains very limited.
Though we did not identify statistically significant differences for the main adherence outcomes across interventions compared to UCP, analyses showed heterogeneity and wide confidence intervals consistent with a large range of potential intervention effects. A pooled analysis of 11 RCTs indicated increased odds of improved medication prescribing, arguably the aspect of depression care most under healthcare providers' control. There was no indication of publication bias; however, we detected considerable heterogeneity, and not all studies favored the intervention. We therefore judged this finding to be of low quality of evidence. Furthermore, no statistically significant difference emerged in an analysis of improved medication prescribing that used a continuous operationalization of the outcome. One study [51] showed an increased rate of contact with patients following training and expert consultations on guidelines that incorporated personal barriers to implementing the guidelines, compared to UCP. However, this result is based on a single study, and we therefore have limited confidence in the finding. No other specific provider behavior outcome differed significantly between provider interventions and any comparator.
Due to the small number of studies reporting team interventions or interventions in specialty care, we did not find statistically robust evidence that intervention effects varied by targeted provider group or setting. Our findings suggest that interventions targeting multidisciplinary team members may be more effective than interventions targeting only healthcare providers directly, but additional studies are needed to confirm this finding. Given the lack of studies in specialty care settings, more research in such settings is also needed to understand how evidence-based interventions can best be adopted by providers outside of psychiatric research settings. The review findings for effects on patient health were mixed. Although depression treatment response improved across the identified interventions, we did not find significant effects for other patient outcomes such as depression rating scale scores, depression recovery, or treatment adherence. The findings for patient outcomes should be interpreted in context, because we restricted the review to studies that reported on provider outcomes. Prior reviews have evaluated how provider interventions affect patient outcomes and concluded that multi-faceted and system redesign approaches were more effective in improving patient outcomes than simpler or single-component interventions, such as distribution of guidelines or education alone [30, 31]. Yet our review set out to identify interventions that can be implemented in healthcare organizations without practice redesign efforts. More studies are needed that report on both provider behavior change and patient outcomes, in order to better understand whether provider behavior is affected by the intervention and whether the change in provider behavior ultimately affects patient health.
This review has several strengths, including an a priori research design, duplicate-reviewer study selection and data abstraction, a thorough literature search not restricted to a small set of known interventions, detailed critical appraisal, and comprehensive quality of evidence assessments used to formulate review conclusions. Yet limitations remain. First, our review documents results of RCTs, a robust study design that allows confident evidence statements. Evidence from RCTs and pre-post studies cannot easily be combined methodologically, and we chose to restrict inclusion to RCTs; all forest plots are therefore based on studies with concurrent control groups randomly assigned to an intervention condition. We acknowledge that some authors [66] and the Cochrane Effective Practice and Organisation of Care Group have recommended other study designs, such as controlled before-after studies, in addition to RCTs when evaluating organizational interventions. However, we were specifically interested in the presence and absence of this strong and universally accepted study design to document the state of evidence for provider interventions. An exploratory search of the non-RCT literature indicated that results appear to be similarly mixed in non-randomized controlled studies, time-series, pre-post studies, and cohort studies [67][68][69][70][71][72][73]. Second, by including only studies that measured provider behavior and outcomes, we were able to judge whether the interventions had the intended effects on their target (i.e., the providers). Nonetheless, this restriction excluded a large number of existing studies that do not report on provider behavior. Third, to be included, studies had to report on depression treatment.
Effects on improving recognition, screening, or diagnosis of patients, or on increasing referrals to specialty mental health care settings, should be assessed in future systematic reviews. Lastly, the individual interventions and the promoted depression practice guidelines varied across studies. Our review included studies using prominent guidelines, such as the Agency for Healthcare Research and Quality (AHRQ) treatment guidelines for depression in primary care [74], in addition to studies using treatment guidelines for which we could not verify whether the guidelines were evidence-based. The specific guidelines promoted by the interventions ranged from the American Psychiatric Association Practice Guidelines for the Treatment of Psychiatric Disorders to the Dutch College of General Practitioners' Practice Guideline for Depression to the Agency for Health Care Policy and Research Practice Guidelines for Depression [75][76][77]. Many of the studies did not specify how lengthy the guidelines were or how much of a time commitment they required of providers, which could account for the differences in provider behavior change findings across the included studies. Some standardization across studies regarding which guidelines are used, and how they are applied in practice, appears needed. Such standardization could help account for confounding factors in research studies, but the field may also benefit from a single source of information on best treatment practices for depression.

Conclusions and future directions
Fourteen years ago, Gilbody and colleagues [30] reviewed organizational and educational interventions targeted at primary care providers treating depressed patients. The authors concluded that effective strategies to improve depression management in this setting were multi-faceted (e.g., system redesign approaches including screening for depression, providing education to patients, and realignment of professional roles in an organization). Sikorski and colleagues [31] similarly concluded that provider training alone does not seem to improve depression care. Our review shows that, despite new research, provider interventions focused primarily on guideline distribution or education alone are unlikely to be effective in the absence of additional components. Our review did not identify subgroups or categories of interventions that were consistently associated with increased adherence to depression guidelines or guideline-concordant practices. These findings underscore the need for further research to better understand how to effectively change provider behavior in different care settings without organizational redesigns. Innovations are needed to support healthcare organizations that want to improve guideline adherence but do not intend to invest in efforts to restructure how care is delivered. Research on provider interventions should be supported by a framework that allows for a more structured assessment of successful intervention approaches and of the effects of individual intervention components.