Mediators of measurement-based care implementation in community mental health settings: results from a mixed-methods evaluation

Abstract

Background

Tailored implementation approaches are touted as superior to standardized ones with the reasoning that tailored approaches afford opportunities to select strategies to resolve determinants of the local context. However, results from implementation trials on this topic are equivocal. Therefore, it is important to explore relevant contextual factors that function as determinants to evaluate if they are improved by tailoring and subsequently associated with changes in implementation outcomes (i.e., via statistical mediation) to better understand how tailoring achieves (or does not achieve) its effects. The present study examined the association between a tailored and standardized implementation approach, contextual factors that might mediate change, and a target implementation outcome in an initiative to implement measurement-based care (specifically the clinical integration of the Patient Health Questionnaire [PHQ-9] for depression) in a community mental health organization.

Methods

Using a cluster randomized control design, twelve community-based mental health clinics were assigned to a tailored or standardized implementation group. Clinicians completed a self-report battery assessing contextual factors that served as candidate mediators informed by the Framework for Dissemination at three time points: baseline, 5 months after active implementation support, and 10 months after sustainment monitoring. A subset of clinicians also participated in focus groups at 5 months. The routine use of the PHQ-9 (implementation outcome) was monitored during the 10-month sustainment period. Multi-level mediation analyses assessed the association between the implementation group and contextual factors and the association between contextual factors and PHQ-9 completion. Quantitative results were then elaborated by analyzing qualitative data from exemplar sites.

Results

Although tailored clinics outperformed standard clinics in terms of PHQ-9 completion at the end of active implementation, these group differences disappeared post sustainment monitoring. Perhaps related to this, no significant mediators emerged from our quantitative analyses. Exploratory qualitative analyses of focus group content emphasized the importance of support from colleagues, supervisors, and leadership when implementing clinical innovations in practice.

Conclusions

Although rates of PHQ-9 completion improved across the study, their sustained levels were roughly equivalent across groups and low overall. No mediators were established using quantitative methods; however, several partial quantitative pathways, as well as themes from the qualitative data, reveal fruitful areas for future research.

Trial registration

Standardized versus tailored implementation of measurement-based care for depression. ClinicalTrials.gov NCT02266134, first posted on October 16, 2014

Background

Tailored implementation approaches are touted as superior to standardized ones [1]. The rationale is that “one-size-fits-all,” or standardized, approaches ignore empirical evidence that “context matters” for effective implementation, whereas a tailored approach affords the opportunity to select and match (i.e., tailor) strategies to the determinants that serve as barriers to implementation within the local context [1]. A meta-analysis demonstrated superior implementation outcomes for tailored over standardized approaches [4], but more recent results from rigorous implementation trials are equivocal, with some offering null findings, such as the multinational Tailored Implementation for Chronic Diseases (TICD) trial [5]. In his reflection on TICD, Wensing [5] suggests that perceived determinants and tailored strategies were valid yet insufficient and that emergent determinants may demand adaptation and ongoing tailoring. A related critical unanswered question is whether tailored approaches that are responsive to relevant contextual factors, and that resolve those functioning as determinants, improve the likelihood of sustaining an evidence-based intervention past an active implementation period.

Tailored implementation can be defined as the process of matching strategies to address determinants (i.e., barriers) within the local context [6]. The idea is that effective tailored implementation occurs by improving or resolving determinants in a given context. This temporal relationship in which the resolution of determinants leads to improvements in intervention implementation can be explicitly evaluated using mediation analyses. Moreover, longitudinal studies in which experimental exposure differentially yields a change in determinants and outcomes can offer confidence in a causal relationship. To this end, relevant contextual factors need to be examined over time to determine if they are improved by tailoring and subsequently associated with changes in target implementation outcomes.

Current study

We set out to address pressing implementation questions in an initiative to bring measurement-based care (MBC) into community mental health settings. MBC is the systematic assessment of patient symptoms prior to clinical encounters to collaboratively inform care [7]. MBC can be conceptualized as having three components of fidelity. The foundational MBC component, per its evidence base, requires patient-reported outcome completion, using measures like the Patient Health Questionnaire (PHQ-9), prior to every encounter [8, 9]. This component alone—often referred to as “routine outcome monitoring” or “progress monitoring”—has been demonstrated to improve clinical outcomes [10, 11], and yet it is met with numerous implementation challenges in community mental health settings, where fewer than 20% of clinicians engage in MBC [7]. MBC fidelity is optimized when score trajectories are discussed with patients (second component) and treatment is adjusted according to the data (third component). In collaboration with Centerstone, one of the largest not-for-profit community behavioral health providers in the USA, we designed a study to compare the effectiveness of standardized versus tailored approaches to implementation (details about the activities in each group are found in the “Methods” section) to support MBC integration.

Our active implementation phase outcomes of this cluster randomized trial are published elsewhere [12]. We found that tailored implementation outperformed the standardized approach in terms of PHQ-9 measure completion (the foundational fidelity component), but overall fidelity remained suboptimal: score review and collaborative discussion did not significantly increase in either group, and MBC fidelity was not associated with improvements in patient depressive severity outcomes.

Aims and hypotheses

In the current study, we explored patient-reported PHQ-9 completion beyond this initial implementation phase and examined the degree to which contextual factors served as determinants of this implementation outcome over time. To do so, we used a mixed-methods approach in which we first conducted a quantitative mediation analysis, then analyzed qualitative data from four exemplar sites to explore possible novel determinants, and finally integrated the two datasets by merging them for analysis and comparison [14]. We hypothesized that tailored implementation would remain superior and that contextual factors (e.g., structure, norms) would emerge as mediators of patient-reported PHQ-9 completion in the tailored group, but not in the standardized group. This evaluation was largely exploratory given the dearth of implementation theory to inform hypotheses about which specific contextual factors might causally lead to the institutionalization of MBC.

Methods

Measurement-based care: intervention and outcome

MBC has been shown to improve treatment outcomes compared with treatment as usual and may be particularly beneficial for identifying and adjusting treatment when a patient has experienced a lack of improvement or deterioration [9]. In the current study, the PHQ-9 was used as the measure for routine administration to adult patients with a depression diagnosis [15, 16]. The PHQ-9 is a nine-item self-report measure that assesses depressive symptoms that align with the DSM-5 diagnostic criteria for a major depressive episode; scores reflect the severity of depressive symptoms. The PHQ-9 is a widely used measure with strong evidence of reliability and validity, including sensitivity to change [15, 17]. The extant literature indicates that routine administration of the PHQ-9 alone can lead to benefits over treatment as usual as it reveals patient deterioration or lack of progress [9, 15]. Results from our active implementation phase of this trial indicated that the tailored approach led to improvements in PHQ-9 completion, over that of the standardized approach. Accordingly, PHQ-9 completion was the primary outcome of the sustainment phase of this trial.

Design

This mixed-methods study was conducted in 12 community-based mental health clinics in two Midwestern states [18]. Participating clinics were assigned to the implementation group using a dynamic cluster randomized design similar to Chamberlain et al. [19]. Clinics were matched on urbanicity, number of clinicians, and number of patients seeking care for depression. Participants were not blind to the study group. All eligible clinicians were offered a 4-h MBC training followed by 5 months of active implementation support. The type of active implementation support was determined by the group. Details of the study protocol and methods have been described previously [20, 21]. This study was reviewed and approved by Indiana University’s IRB; clinician participants provided informed consent.

Implementation groups

In the tailored group, implementation teams of 5–8 members representing different clinic roles (e.g., clinicians, office professionals, directors) were formed and met approximately every 3 weeks during the 5 months of active support. Individuals identified as influential in the organization using a sociometric network analysis [22] and/or as having positive attitudes toward MBC via a survey [23] or self-nomination were invited to join. Meetings focused on reviewing data on clinic MBC use and strategies to further MBC implementation. Implementation teams could tailor the MBC guideline or subscribe to the standardized recommendation. Several teams opted to expand their guidelines to include individuals 12 years and older, given that the PHQ-9 is validated for use with youth, and/or to include all diagnoses, given high rates of comorbid depression in community-based populations. Five of six teams opted to tailor their guideline, with details offered in Table 1.

Table 1 PHQ9 administration guidelines specified by clinics in the tailored group

In the standardized group, consultation teams were formed and open to all participating clinicians. However, the schedules of individuals who were identified as influential and/or as having positive attitudes were prioritized. Meetings were led by an external consultant and focused on supporting clinician use of MBC as recommended by the study guideline.

Participants

Clinicians were included if they completed the baseline and 5-month assessment of the contextual measures and had one or more appointments with an eligible patient during the sustainment phase of the study. Of the 155 clinicians with sustainment phase data, 122 had sufficient data on one or more of the candidate measures to be included in the study. See Table 2 for the demographic characteristics of clinicians.

Table 2 Clinician demographics (N = 122)

Patient data pertaining to MBC fidelity were included in the sustainment phase if patients met clinic-specific eligibility criteria in the tailored group, or if in the standardized group, they were diagnosed with depression and were at least 21 years old at the time of their session. Data from a total of 6709 patients were included. These patients accounted for 33,354 sessions that served as the target for our primary implementation outcome (i.e., PHQ-9 completion).

Framework for dissemination

There are numerous determinant frameworks that can guide implementation and evaluation. Across all, it is clear that contextual factors ought to be considered at varying levels of the organization from the beliefs of individuals involved to the broader socio-political conditions in which the organization is embedded. The Framework for Dissemination was used as a guiding framework for the identification and evaluation of a relatively parsimonious set of contextual factors as candidate mediators because it grew from an academic-community partnership, offered a three-phased evaluation process, and integrated the best available evidence [24]. This framework outlines six contextual domains theorized to be associated with successful implementation of a new practice: (i) norms and attitudes: knowledge, beliefs, and values of stakeholders about the intervention as well as perceptions of organizational support for the intervention (i.e., organizational climate); (ii) structure and process: organizational operations and attributes such as decision making processes, types of services, and organization size; (iii) resources: financial, technological, human, social, and political capital that supports implementation; (iv) policies and incentives: the costs and benefits of use of the new intervention as determined by regulatory policies and funding mechanisms; (v) networks and linkages: connections between organizations and between individuals that facilitate knowledge sharing and social support related to the new intervention; and (vi) media and change agents: internal and external sources of credible information and influence such as external trainers, consultants, and internal champions of the intervention.

Measures

The battery of measures used to capture candidate mediators comprised six self-report instruments completed by clinicians at baseline, after 5 months of active implementation, and again 10 months later (i.e., at the end of sustainment monitoring). See Table 3 for a description of each measure, sample items for each subscale, and how subscales were mapped onto the Framework for Dissemination. Scale reliability estimates in the current sample are provided in Table 4.

Table 3 Measures used to assess candidate mediators
Table 4 Means and standard deviations of candidate mediators

Quantitative analyses

We assessed group main effects and group-by-time interactions across the sustainment phase using generalized linear mixed models. Following recommendations from Singer and Willett [25], we evaluated three unconditional growth models (i.e., models with only temporal variables) to determine the trajectory that best fit the data prior to adding other fixed effects. We compared linear, quadratic, and log-transformed unconditional growth models and selected the best-fitting model on the basis of the Bayesian Information Criterion (BIC). As a final step, we allowed time slopes to vary by each of the random effects and selected the model with the lowest BIC. After establishing the unconditional growth model, we fit a model in which implementation group was added as a main effect and a model in which a group-by-time interaction was added. All data analysis was conducted using R 4.0.3 in RStudio 1.4.1103 [26], and all models were fit using the glmer function in the R lme4 package. PHQ-9 completion was regressed on the predictors in models that contained intercepts random on patient, clinician, and clinic, reflecting multiple observations (i.e., sessions) within patients, multiple patients within clinicians, and multiple clinicians within clinics. Models assumed a binomial distribution with a logistic link function; each session represented an observation in which PHQ-9 completion was recorded.
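The three-level logistic model described above can be sketched as follows; the notation is ours, not the authors’. For session s of patient p seen by clinician t in clinic c, with Y indicating PHQ-9 completion:

```latex
\operatorname{logit}\Pr(Y_{sptc}=1) \;=\; \beta_0 \;+\; \beta_1\,\mathrm{Time}_{s} \;+\; u_{p} \;+\; v_{t} \;+\; w_{c},
\qquad
u_{p}\sim N(0,\sigma^2_{u}),\;\;
v_{t}\sim N(0,\sigma^2_{v}),\;\;
w_{c}\sim N(0,\sigma^2_{w}).
```

The group effects model adds a fixed term for group (e.g., \(\beta_2\,\mathrm{Group}_{c}\)), and the interaction model further adds \(\beta_3\,(\mathrm{Group}_{c}\times\mathrm{Time}_{s})\); the best-fitting specification also allowed the time slope to vary randomly across levels.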

We examined mediation effects in which contextual factors mediate the impact of implementation group on PHQ-9 completion. Mediators were obtained at the 5-month assessment (end of active implementation), and patient outcomes were data points collected between the 5- and 15-month assessments (during sustainment monitoring). This approach allowed us to establish temporal precedence in the path of the indirect effects, in which exposure to the implementation group preceded the assessment of mediators, and mediator assessment preceded outcomes. To assess mediation, we tested whether Path a: implementation group predicted differences in the mediator; Path b: the mediator predicted change in PHQ-9 completion; and Path c: group predicted change in the outcome. The first two conditions are the core relationships for establishing mediation [27]. Prior to assessing indirect effects, we assessed each of the a and b paths in individual models. All a path models were fit with the lmer function in the R lme4 package, version 1.1.26 [28], with degrees of freedom estimated using Satterthwaite’s method as implemented in the R merTools package, version 0.5.2 [29]. In each a path model, the mediator was regressed on implementation group in a model that contained an intercept random on the clinician’s clinic. Each b path model was fit using generalized linear mixed models with the same structure described above for PHQ-9 completion. If significant effects in the a and b paths were observed, indirect effects (i.e., the product of paths a and b) were tested using bias-corrected bootstrapped confidence intervals.
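In standard product-of-coefficients notation (again a sketch in our own symbols, with j indexing clinicians and k indexing clinics), the two core paths and the indirect effect take the form:

```latex
\text{Path } a:\quad
M_{jk} \;=\; \gamma_0 \;+\; a\,\mathrm{Group}_{k} \;+\; r_{k} \;+\; e_{jk},
\qquad r_{k}\sim N(0,\sigma^2_{r}) \ \text{(clinic random intercept)}

\text{Path } b:\quad
\operatorname{logit}\Pr(Y=1) \;=\; \beta_0 \;+\; b\,M_{jk} \;+\; \text{(patient, clinician, and clinic random intercepts)}

\text{Indirect effect:}\quad ab,
\ \text{supported when the bias-corrected bootstrapped CI for } ab \text{ excludes } 0.
```

Testing the a and b paths individually before computing the product ab mirrors the sequence described in the text: only when both paths were significant would the bootstrapped interval for the indirect effect be examined.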

Each null hypothesis significance test is treated individually, and thus alpha levels are not adjusted; this stands in contrast to a disjunctive test, in which rejection of any of multiple null hypothesis significance tests would result in rejection of a joint intersection null hypothesis. Alpha-level adjustments are not required for individual tests but are required for disjunctive tests [30]. Although alpha levels are not adjusted, the possibility of a type I error that can occur with a multiplicity of tests is an acceptable risk when balanced against type II errors. In accepting this risk [31], we acknowledge that the analyses are exploratory in nature.

Qualitative methods

Data collection

To complement the traditional quantitative mediation analysis, we used qualitative data to surface and explore possible novel mediators, sampling for extreme variation of our target implementation outcome. The qualitative data reported here are drawn from focus group (FG) data collection at the 5-month timepoint to be consistent with the quantitative candidate mediator data. Four clinics were selected based on their variation across implementation group (tailored/standardized) and trajectory of PHQ-9 completion during sustainment (increased/decreased). Clinics 11 (standardized) and 12 (tailored) evidenced increased PHQ-9 completion over time. Clinics 4 (standardized) and 10 (tailored) evidenced decreased PHQ-9 completion over time. Participating clinicians were predominantly female (61.1%) and non-Hispanic White (61.1%) or Black (22.2%), and on average 38.6 years old (SD = 10.4). More than half of clinicians were licensed (55.6%), and most identified their theoretical orientation as cognitive behavioral (55.6%), followed by integrative (16.7%). On average, clinicians had been working in the mental health field for 4.4 years (SD = 1.5).

A semi-structured FG guide was used to facilitate data collection. The guide was structured around the six domains of the Framework for Dissemination [24]. The study team saw gaps in the quantitative measures’ coverage of the framework’s core domains (e.g., policies and incentives) that qualitative data could fill more appropriately. Participating clinicians were identified through purposive sampling [32] for variation in attitudes toward MBC, in collaboration with the clinic director. Data collection was conducted by members of the study team trained in qualitative methods. Each FG lasted approximately 1 h; all were digitally recorded and transcribed verbatim.

Data analysis

Analysis occurred in an iterative and team-based process involving a directed qualitative content approach and reflexive team analysis [33, 34]. Transcripts were independently read multiple times by three qualitative analysts on the study team to achieve immersion prior to code development. Codes were derived both deductively and inductively, in order to characterize clinician experiences including both barriers and facilitators. Deductive a priori codes were based on the aforementioned core domains of the Framework for Dissemination that informed the FG guide. Inductive codes were wholly emergent from the data. Codes were independently applied by each analyst to 10% of the transcripts. Inter-coder reliability was then assessed, and following the resolution of any disagreements through discussion, the remaining transcripts were each coded by two analysts using the final coding schema [33]. Throughout the analytic process, team members met regularly to discuss emergent codes and themes and to assess the preliminary results [35]. Careful attention was given to the presence or absence of new and emerging themes throughout the analysis, and thematic saturation was achieved. Throughout the analytic process, the qualitative data software program ATLAS.ti version 7.0 was used for data organization and management.

Results

Quantitative mediation

The comparison of the linear, quadratic, and log-transformed unconditional growth models, fit prior to evaluating the effect of implementation group on PHQ-9 completion, indicated that the linear time effect with random slopes for patient and clinician was the best-fitting model (see Table 5). The implementation group effects model indicated that levels of PHQ-9 completion did not differ across the sustainment phase (z = 0.30, p = 0.765). The group-by-time interaction model revealed a significant interaction across the sustainment phase (z = −2.44, p = 0.015). We probed the interaction with simple effects for the implementation groups at the beginning and end of the sustainment window and with simple slopes for the standardized and tailored groups. PHQ-9 completion was higher at the beginning of the sustainment window for the tailored group and higher at the end of the sustainment window for the standardized group; however, there were no significant differences between the groups at either time point. Similarly, the tailored group exhibited a decreasing simple slope and the standardized group an increasing slope, though neither slope was significant. Thus, the interaction represents a crossover whereby the tailored group decreased across the sustainment window and the standardized group increased.

Table 5 Model results: Model 1—main effects model

Means (SD) and Cronbach’s alpha for each mediator are displayed in Table 4. Prior to fitting inferential models, we evaluated the reliability of each construct using Cronbach’s α and excluded scales with α < 0.60, the lower limit we set for exploratory research [36]. Excluded scales included several from the SOF.
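For reference, Cronbach’s α for a scale of k items, where \(\sigma^2_i\) is the variance of item i and \(\sigma^2_X\) is the variance of the total score, is

```latex
\alpha \;=\; \frac{k}{k-1}\left(1 \;-\; \frac{\sum_{i=1}^{k}\sigma^2_{i}}{\sigma^2_{X}}\right),
```

so under the exclusion rule described above, a scale entered the inferential models only if its α was at least 0.60 in the current sample.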

Results for the mediation paths are displayed in Table 6. Regarding Path a, controlling for baseline levels of the outcome, the tailored group exhibited higher ratings for program-level structures on the B&F than did the standardized group at 5 months (t[6] = 3.84, p = 0.005), suggesting greater levels of facilitation of implementation at the program or clinic level. Similarly, the tailored group exhibited greater perceptions on the SOF that change was possible and encouraged and that training utilization typically occurred at the program level than did the standardized group at 5 months (t[114] = 2.29, p = 0.024; t[113] = 1.99, p = 0.049). Regarding Path b, only 5-month “practicality,” assessed on the ASA, was a significant predictor of PHQ-9 completion across the sustainment phase of the study (z = 1.96, p < 0.050).

Table 6 Mediation paths using clinic-specific tailored guideline criteria

Qualitative results

Thematic analysis of the 5-month FGs with the four exemplar clinics illuminated additional clinic-level characteristics that may help to explain the quantitative results. Three themes emerged from qualitative analysis of these clinics, suggesting the central importance to implementation outcomes of (i) compelling communication from leadership, (ii) supportive supervision, and (iii) clinical consultation opportunities. A fourth potential theme, the presence of strong support staff, also emerged from the analysis, although it was only explicitly discussed at one clinic. Each of these themes is discussed further below, with quotes provided in Table 7.

Table 7 Exemplar quotes for qualitative results

Compelling leadership communication

One of the most salient themes emergent in the analysis was the importance of effective and compelling leadership communication around MBC. Clinicians viewed favorably, and were motivated to action by, leadership communication that provided clear incentives and/or rationale for the adoption of MBC at their clinics and thus contextualized its implementation. In contrast, leadership communication that was experienced as a top-down directive, delivered without appropriate explanation or contextualization, was viewed unfavorably and resulted in resentment and cynicism. In the four exemplar clinics analyzed, clinicians at both Clinic 11 (standardized/increased PHQ-9 completion) and Clinic 12 (tailored/increased PHQ-9 completion) reported compelling communication from leadership. Conversely, clinicians at Clinic 4 (standardized/decreased PHQ-9 completion) and Clinic 10 (tailored/decreased PHQ-9 completion) reported messaging from clinic leadership that failed to provide a clear or compelling rationale for the implementation of MBC.

Supportive supervision

Another emergent theme in the qualitative analysis was the critical role of the supervisor in implementation uptake. Effective, supportive supervision, as evidenced by supervisors’ willingness to devote time to clinical and logistical details relevant to clinicians’ implementation of MBC and their provision of encouragement and support, was described by clinicians at both Clinic 11 and Clinic 12. In contrast, clinicians at Clinics 4 and 10 described supervision that was lacking in these areas.

Clinical consultation opportunities

A third emergent theme in the focus group analysis focused on differential opportunities across clinics for clinicians to connect for consultation with colleagues. In clinics that provided such opportunities, including Clinics 11 and 12, clinicians were able to compare notes about MBC, provide and receive support and tips, and enjoy a sense of professional community. However, in clinics where such opportunities were not present, including Clinics 4 and 10, clinicians reported experiencing a sense of isolation that impeded the chance for professional support.

Strong support staff

A fourth potential mediating characteristic, the importance of strong support staff, also emerged in the analysis. Because it was explicitly discussed only by one clinic, it falls short of a theme, but we include it here because clinicians at one clinic, Clinic 4, reported the impact of its absence and openly compared it with another clinic whose staff they perceived as more helpful in supporting the implementation of MBC.

Discussion

In the present study, no significant mediators emerged from our quantitative analyses, but several partial pathways point toward critical areas for future research and might inform advancements in implementation theory. Our exploratory qualitative analyses tell a story of the importance of support from colleagues, supervisors, and leadership when implementing clinical innovations in practice.

Quantitative results

This study is one of the few longitudinal implementation trials with rich quantitative data to engage in rigorous mediation analyses. The experimental design of this study also fulfilled several key requirements for identifying mediators, including variable selection guided by implementation theory, a longitudinal design in which changes in predictors preceded changes in outcomes, random assignment to group, and experimental manipulation of groups [13]. Although no quantitative mediators emerged, three scales demonstrated significant improvement from baseline to 5 months in the tailored group: program-level training utilization and organizational climate for change from the SOF [37] and program-level structures from the B&F [38]. These constructs assess clinician perception of the extent to which new interventions are supported and used at the organizational level. More specifically, the program-level training utilization scale assesses clinician perception of the frequency with which new interventions are integrated into the organization following training (e.g., “How often do new ideas learned from workshops get discussed or presented at your staff meetings?”). The organizational climate for change scale assesses organizational expectations and encouragement of change (e.g., “You are encouraged here to try new and different techniques”), and the program-level structures scale assesses organizational infrastructure to support MBC (e.g., “We use fidelity reports to improve our MBC practice”).

A deeper look at how the tailored group unfolded over time provides possible explanations for why these scales were not associated with PHQ-9 completion from 5 to 15 months. During the 5-month active implementation period, clinics in the tailored group held approximately triweekly implementation team meetings. Implementation teams were tasked with selecting and executing strategies to facilitate MBC implementation. As reported in a previous study of these implementation teams’ strategy use, quality management and communication were among the most common categories of strategy use [39]. Quality management took the form of review and discussion of reports of MBC use at the clinic (i.e., “fidelity reports”), and communication strategies often took the form of encouragement to use the PHQ-9 in clinic team meetings, in the break room, and via email. When strategies used by the implementation team are compared with individual items from the above scales, it seems possible that clinicians at 5 months were reflecting on the efforts of their clinic implementation team. However, following the 5-month active implementation phase, the study team withdrew active facilitation support. Although some clinics opted to continue meeting, it was typically with reduced frequency and intensity. It is possible that across the next 10 months, these strategies were not consistently used, undermining lasting changes in MBC use. Community mental health clinicians are faced with competing and constantly changing demands. For example, as is common in many community mental health organizations, clinicians expressed concern, in focus groups and informally to study staff during training visits and meetings, about achieving productivity requirements for billing [40]. Continuing to devote unbillable time to implementation team meetings and to-dos generated during meetings might have felt unfeasible. If these program-level strategies were contingent upon the active support of the study team, then its discontinuation may well have undermined MBC sustainment [41].

This raises the question: does tailoring change the context, or does tailoring align strategies to be responsive to context and accommodate the barriers to implementation? As an analogy, if a person has the intention to go for a walk but has a broken leg, crutches can be used as a strategy to support the person in going for a walk. However, when this aid is removed, the person can no longer go for a walk. The context (broken leg) remains unchanged, but the strategy (crutches) supports movement toward the goal (going for a walk). In parallel, once active study team support was removed, clinics that previously struggled with unclear communication from leadership, administratively focused supervision, and competing demands faced these barriers again without the “crutch” of the implementation team. Successful repair of a broken leg that enables a person to walk freely requires setting and bracing the leg as well as taking weight off the leg for a sufficient period of time to heal. In parallel, it could be that strategies to permanently alter the context (setting and bracing the leg) and/or lengthier involvement by the study team (using the crutches for an extended period) are needed to sustain the use of measurement-based care. In the tailored group, clinic team members were empowered by the study team to direct implementation strategies given their site-specific expertise. However, teams drew on only a limited subset of all possible implementation strategies, which may not have supported sustainment optimally [39]. Although clinic implementation team members were experts in their own work context, understandably, they were not experts in the science (or practice) of implementation and thus not knowledgeable about more intensive strategies geared toward leadership and managerial change [42].

There are specific packages of implementation strategies, such as the Leadership and Organizational Change for Implementation (LOCI) intervention, that involve repeated training sessions, coaching calls, feedback reports about leadership, and individualized leadership development plans, and that have been shown to alter clinician perceptions of leadership style and leadership support for implementation efforts [43, 44]. Although clinic leadership and informal leaders (i.e., opinion leaders) were included on implementation teams, the present study made no explicit attempt to alter individuals’ approaches to leadership or supervision. And, in fact, candidate mediators reflecting aspects of leadership (i.e., the Director Leadership subscale of the SOF, the ILS, and the Agency Leadership Support subscale of the BFS) did not change as a result of tailoring during the 5-month active implementation period.

In addition to the three scales that demonstrated significant improvement from baseline to 5 months in the tailored group, the practicality subscale of the ASA measure at 5 months was associated with PHQ-9 completion at 15 months. The practicality subscale captures perceptions of the ease of using measures like the PHQ-9 (e.g., “standardized progress measures can efficiently gather information”), meaning that clinicians who found the PHQ-9 easy to use were more likely to sustain administration over time. As a reminder, the standardized group was not an inert control; rather, it represented certain implementation “best practices,” which included support for the use of MBC via triweekly consultation meetings. Consultation has been shown to be a promising strategy for supporting clinician skill acquisition and implementation of new practices [45]. It is therefore possible that clinicians across groups who used MBC during the 5-month active implementation period gained direct exposure to its ease of use and continued to incorporate it into their practice through the sustainment period. It is somewhat surprising that experimental group assignment (i.e., tailored versus standardized) did not alter perceptions of practicality across the 5-month active implementation period, given the concerted effort by several implementation teams in the tailored group to incorporate PHQ-9 administration into the front-office workflow. However, when office professionals administered the PHQ-9 at patient check-in, clinicians may not have been directly exposed to the speed and efficiency of measure completion.

Qualitative results

Given the variability in the trajectories of PHQ-9 completion during the sustainment phase that could not be explained by the study group, we undertook exploratory qualitative analyses of the focus groups conducted at the end of the 5-month active implementation period to characterize two exemplar clinics that demonstrated improvement in PHQ-9 completion versus two clinics that deteriorated, regardless of study group (see Table 8 for characteristics of exemplar clinics). A comparison of improving and deteriorating clinics revealed the role of leadership, supportive supervision, and clinical consultation with peers in the implementation of MBC. Specifically, clinicians at clinics that experienced increases in PHQ-9 completion, across both the standardized and tailored groups, described as facilitative: clear communication from leadership regarding expectations and the rationale for implementing MBC, incorporation of MBC discussion during clinical supervision, and opportunities to consult about MBC or receive general support from colleagues. Conversely, clinicians at clinics that experienced decreases in PHQ-9 completion described unclear top-down (or no) communication about MBC from leadership, confusion about MBC among supervisors, and isolation from colleagues. These findings map onto extant implementation research demonstrating the importance of strong leadership, and the role of supervisors as leaders, in supporting implementation efforts and skill acquisition [46,47,48]. The role of peer support is also reflected in the literature on the influence of social norms on behavior [49].

Table 8 Qualitative themes and characteristics of clinics

Although attempts were made to capture some of these domains, particularly leadership, via the quantitative data, baseline levels of these measures were controlled for when quantitatively assessing the association between the experimental group and candidate mediators, meaning it was not possible to determine whether pre-study strong (or weak) leadership and supervision played a role in PHQ-9 completion. In addition, measures that assessed leadership or clinician perceptions of the organization (i.e., the ILS and certain subscales of the SOF) did not orient the survey-taker to the level of analysis to which they should respond. For this reason, it is possible that some clinicians responded to survey questions with their direct supervisor in mind while others responded with upper-level organizational leadership in mind. This is especially plausible given that some clinics housed regional leadership, making that leadership more visible to clinicians, compared with clinics that were more disconnected from upper-level leadership. Future studies should therefore carefully define the level of analysis in the survey directions and, when appropriate, in the body of the survey [50].

Limitations and implications for implementation strategy mediation analyses

There were several limitations to the current study that surface implications for future implementation strategy mediation analyses. First, there is a host of measurement issues, such as the one described above, in which measures that assessed leadership or clinician perceptions of the organization did not orient the respondent to the level of analysis to which they should respond. Thinking carefully about the theory of change and the target social unit for an implementation strategy will be important for devising measure instructions, because no single study could assess perceptions of all stakeholder leader-like roles. Moreover, prior research has demonstrated that manipulating the language of the EBPAS, for example, to refer to general interventions versus specific evidence-based treatments, changes clinician responses, suggesting the importance of assessing intervention-specific rather than general attitudes [51]. Although we deployed surveys multiple times, it is possible that waiting some period of time after active implementation support ends is crucial when measuring candidate mediators; guidance is needed. In addition, even though our survey battery of relevant contextual factors was quite extensive, measures related to patients, their perspectives, and the clinical encounter itself (e.g., length of time, purpose) [52] were missed altogether.

Although qualitative inquiry presents a way to surface novel contextual factors where quantitative measures do not exist, the focus group interview guide used in this study queried the same domains outlined in the Framework for Dissemination [24] and, as such, may have missed relevant contextual factors not included in this framework. We did offer clinicians an opportunity at the end of each focus group to comment on areas not queried in the main portion, but relatively little new information emerged at that time. In addition, although attempts were made to include clinicians with varying use of and opinions about MBC, it is possible that capacity and willingness to attend resulted in a biased sample of respondents. Rapid ethnographic methods might offer a way to overcome some of the limitations of traditional focus group methodology, which might be important for the study of factors influencing sustainment [53].

Finally, this approach to tailoring the implementation of measurement-based care, although informed by a framework, was largely atheoretical. The field is at a point of readiness for centering theory to inform practical implementation [54, 55]. In the case of tailoring implementation, causal theory can be leveraged in at least two ways. First, determinants rarely act in isolation, yet commonly used approaches to prioritizing barriers for tailoring strategies invite stakeholders to rate each individual factor for its importance and/or feasibility, among other parameters [1]. Bringing theory to bear on this step, to generate a theory of the problem, could yield a more focused set of determinants. This could be done by leveraging existing (notably sparse) implementation theory or by theorizing the temporal interrelationships among local contextual factors and implementation outcomes [56, 57]. Second, strategies could be matched or tailored to determinants by articulating their mechanisms of action and aligning those with target barriers. This could be done by diagramming causal pathways of strategies that center determinants, although more empirical work is needed to bring confidence to this endeavor [58].

Conclusions

This study explored candidate mediators of MBC sustainment using a longitudinal design that enabled rigorous quantitative analyses, mixed with qualitative methods for additional insight. No mediators were established using quantitative methods, perhaps due to poor or insufficient measurement, or to a failure of the implementation strategies to fully resolve determinants and institutionalize MBC. Several partial pathways were identified, pointing to fruitful areas for future research. Specifically, quantitative analyses revealed associations between implementation group and program-level training utilization, organizational climate for change, and program-level structures, as well as an association between clinician perceptions of MBC practicality and sustainment of PHQ-9 completion over time. Future research should consider experimental manipulations to further explore the role of practicality and user experience in sustained evidence-based practice use. Future research may also be needed to disentangle strategies that accommodate barriers to implementation from those that permanently alter or remove them. Qualitative results emphasized the importance of leadership, supervision, and peer support for implementation, which aligns with past research. Future research is needed to clarify pragmatic operationalizations of strategies to alter or increase the potency of these determinants, particularly in the context of sustainment.

Availability of data and materials

A limited quantitative dataset analyzed in the current study may be available upon reasonable request made to the corresponding author.

Notes

  1. Context includes key features of the environment in which the work is immersed and which are interpreted as meaningful to the success, failure, and unexpected consequences of the intervention(s), as well as the relationship of these to stakeholders. These factors can surface as barriers or facilitators [2].

  2. Factors that obstruct or enable changes in targeted professional behaviors or healthcare delivery processes [3].

  3. A mediator is an intervening variable that may account (statistically) for the relationship between the independent and dependent variable. In this case, contextual factors that surface as barriers may emerge as mediators of the tailored implementation and its outcomes [13].
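To make the statistical sense of "mediator" in this note concrete, the sketch below computes a product-of-coefficients indirect effect. It uses made-up numbers (not study data), hypothetical variable names (X = implementation group, M = a contextual-factor score, Y = PHQ-9 completion rate), and plain single-level regressions rather than the multilevel models used in the study:

```python
def ols(y, cols):
    """Ordinary least squares via normal equations. cols is a list of
    predictor columns; an intercept column of 1s is added automatically.
    Returns the coefficient vector [intercept, b1, b2, ...]."""
    X = [[1.0] + [c[i] for c in cols] for i in range(len(y))]
    k = len(X[0])
    # Build the normal equations X'X beta = X'y
    A = [[sum(r[p] * r[q] for r in X) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(len(y))) for p in range(k)]
    # Gaussian elimination with partial pivoting, then back substitution
    for p in range(k):
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for q in range(p, k):
                A[r][q] -= f * A[p][q]
            b[r] -= f * b[p]
    beta = [0.0] * k
    for p in reversed(range(k)):
        beta[p] = (b[p] - sum(A[p][q] * beta[q] for q in range(p + 1, k))) / A[p][p]
    return beta

# Hypothetical illustration data (NOT from the study)
X = [0, 0, 0, 1, 1, 1, 0, 1]                            # group assignment
M = [2.0, 3.1, 2.4, 4.0, 4.8, 4.1, 2.9, 4.5]            # candidate mediator
Y = [0.30, 0.35, 0.28, 0.55, 0.62, 0.50, 0.33, 0.58]    # outcome

a = ols(M, [X])[1]           # a path: group -> mediator
b = ols(Y, [X, M])[2]        # b path: mediator -> outcome, adjusting for group
direct = ols(Y, [X, M])[1]   # c' path: direct effect of group on outcome
total = ols(Y, [X])[1]       # c path: total effect of group on outcome
indirect = a * b             # indirect (mediated) effect

# For linear OLS with intercepts, the decomposition c = c' + a*b is exact
print(f"a={a:.3f} b={b:.3f} indirect={indirect:.3f} "
      f"direct={direct:.3f} total={total:.3f}")
```

A nonzero a·b with a correspondingly reduced direct effect is the signature pattern a mediation analysis looks for; in practice (as in the study's multilevel analyses), both paths are tested with inferential statistics rather than read off point estimates.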

Abbreviations

FG:

Focus group

PHQ-9:

Patient Health Questionnaire-9

MBC:

Measurement-based care

SOF:

Survey of Organizational Functioning

EBPAS:

Evidence-Based Practice Attitudes Scale

ASA:

Attitudes Towards Standardized Assessment Scales-Monitoring and Feedback

MFA:

Monitoring and Feedback Attitudes Scale

ICS:

Implementation Climate Scale

ILS:

Implementation Leadership Scale

References

  1. Powell BJ, Beidas RS, Lewis CC, Aarons GA, McMillen JC, Proctor EK, et al. Methods to improve the selection and tailoring of implementation strategies. J Behav Health Serv Res. 2017;44(2):177–94.

  2. Goodman D, Ogrinc G, Davies L, Baker GR, Barnsteiner J, Foster TC, et al. Explanation and elaboration of the SQUIRE (Standards for Quality Improvement Reporting Excellence) Guidelines, V.2.0: examples of SQUIRE elements in the healthcare improvement literature. BMJ Qual Saf. 2016;25(12):e7.

  3. Krause J, Van Lieshout J, Klomp R, Huntink E, Aakhus E, Flottorp S, et al. Identifying determinants of care for tailoring implementation in chronic diseases: an evaluation of different methods. Implement Sci. 2014;9(1):1–12.

  4. Baker R, Camosso‐Stefinovic J, Gillies C, Shaw EJ, Cheater F, Flottorp S, Robertson N, Wensing M, Fiander M, Eccles MP, Godycki‐Cwirko M. Tailored interventions to address determinants of practice. Cochrane Database Syst Rev. 2015(4).

  5. Wensing M. The tailored implementation in chronic diseases (TICD) project: introduction and main findings. Implement Sci. 2017;12(1):5.

  6. Lewis CC, Scott K, Marriott BR. A methodology for generating a tailored implementation blueprint: an exemplar from a youth residential setting. Implement Sci. 2018;13(1):1–13.

  7. Lewis CC, Boyd M, Puspitasari A, Navarro E, Howard J, Kassab H, et al. Implementing measurement-based care in behavioral health: a review. JAMA Psychiat. 2019;76(3):324–35.

  8. Lambert MJ, Whipple JL, Hawkins EJ, Vermeersch DA, Nielsen SL, Smart DW. Is it time for clinicians to routinely track patient outcome? a meta-analysis. Clin Psychol Sci Pract. 2003;10(3):288.

  9. Lambert MJ, Whipple JL, Kleinstäuber M. Collecting and delivering progress feedback: a meta-analysis of routine outcome monitoring. Psychotherapy. 2018;55(4):520.

  10. Brattland H, Koksvik JM, Burkeland O, Gråwe RW, Klöckner C, Linaker OM, et al. The effects of routine outcome monitoring (ROM) on therapy outcomes in the course of an implementation process: a randomized clinical trial. J Couns Psychol. 2018;65(5):641.

  11. Peterson AP, Fagan C. Training the next generation in routine outcome monitoring: current practices in psychology training clinics. Train Educ Prof Psychol. 2017;11(3):182.

  12. Lewis CC, Marti CN, Scott K, Walker MR, Boyd M, Puspitasari A, et al. Standardized versus tailored implementation of measurement-based care for depression in community mental health clinics. Psychiatr Serv. 2022:appi.ps.202100284.

  13. Kazdin AE. Mediators and mechanisms of change in psychotherapy research. Annu Rev Clin Psychol. 2007;3:1–27.

  14. Fetters MD, Curry LA, Creswell JW. Achieving integration in mixed methods designs—principles and practices. Health Serv Res. 2013;48(6pt2):2134–56.

  15. Löwe B, Kroenke K, Herzog W, Gräfe K. Measuring depression outcome with a brief self-report instrument: sensitivity to change of the Patient Health Questionnaire (PHQ-9). J Affect Disord. 2004;81(1):61–6.

  16. Levis B, Benedetti A, Thombs BD. Accuracy of Patient Health Questionnaire-9 (PHQ-9) for screening to detect major depression: individual participant data meta-analysis. BMJ. 2019;365:l1476.

  17. Kroenke K, Spitzer RL, Williams JB, Löwe B. The patient health questionnaire somatic, anxiety, and depressive symptom scales: a systematic review. Gen Hosp Psychiatry. 2010;32(4):345–59.

  18. Schoonenboom J, Johnson RB. How to construct a mixed methods research design. KZfSS Kölner Zeitschrift für Soziologie und Sozialpsychologie. 2017;69(2):107–31.

  19. Chamberlain P, Brown CH, Saldana L, Reid J, Wang W, Marsenich L, et al. Engaging and recruiting counties in an experiment on implementing evidence-based practice in California. Admin Policy Ment Health Ment Health Serv Res. 2008;35(4):250–60.

  20. Lewis CC, Scott K, Marti CN, Marriott BR, Kroenke K, Putz JW, et al. Implementing measurement-based care (iMBC) for depression in community mental health: a dynamic cluster randomized trial study protocol. Implement Sci. 2015;10(1):1–14.

  21. Lewis CC, Puspitasari A, Boyd MR, Scott K, Marriott BR, Hoffman M, et al. Implementing measurement based care in community mental health: a description of tailored and standardized methods. BMC Res Notes. 2018;11(1):76.

  22. Valente TW, Chou CP, Pentz MA. Community coalitions as a system: effects of network change on adoption of evidence-based substance abuse prevention. Am J Public Health. 2007;97(5):880–6.

  23. Jensen-Doss A, Haimes EM, Smith AM, Lyon AR, Lewis CC, Stanick CF, Hawley KM. Monitoring treatment progress and providing feedback is viewed favorably but rarely used in practice. Adm Policy Mental Health Mental Health Serv Res. 2018;45(1):48–61.

  24. Mendel P, Meredith L, Schoenbaum M, Sherbourne C, Wells K. Interventions in organizational and community context: a framework for building evidence on dissemination and implementation in health services research. Admin Policy Ment Health Ment Health Serv Res. 2008;35(1–2):21–37.

  25. Singer JD, Willet JB. Applied longitudinal data analysis: modeling change and event occurrence. New York, NY: Oxford University Press; 2003.

  26. RStudio Team. RStudio: integrated development environment for R. RStudio, PBC. 2021.

  27. MacKinnon D. Introduction to statistical mediation analysis. New York, NY: Routledge Academic; 2008.

  28. Bates DM, Sarkar D. lme4: linear mixed-effects models using S4 classes. Package 0.99875–6 ed: Comprehensive R Archive Network; 2007.

  29. Knowles JE, Frederick C. merTools: tools for analyzing mixed effect regression models. 2019.

  30. Rubin M. When to adjust alpha during multiple testing: a consideration of disjunction, conjunction, and individual testing. Synthese. 2021;199(3):10969–11000.

  31. Gelman A, Hill J. Data analysis using regression and multilevel/hierarchical models. Cambridge University Press; 2007.

  32. Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Admin Policy Ment Health Ment Health Serv Res. 2015;42(5):533–44.

  33. Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today. 2004;24(2):105–12.

  34. Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

  35. Charmaz K. Constructing grounded theory: a practical guide through qualitative analysis. London: Sage Publications; 2006.

  36. Hair J, Anderson R, Black B, Babin B. Multivariate data analysis. Pearson Education; 2016.

  37. Broome KM, Flynn PM, Knight DK, Simpson DD. Program structure, staff perceptions, and client engagement in treatment. J Subst Abuse Treat. 2007;33(2):149–58.

  38. Salyers MP, Rollins AL, McGuire AB, Gearhart T. Barriers and facilitators in implementing illness management and recovery for consumers with severe mental illness: trainee perspectives. Adm Policy Ment Health. 2009;36(2):102–11.

  39. Boyd MR, Powell BJ, Endicott D, Lewis CC. A method for tracking implementation strategies: an exemplar implementing measurement-based care in community behavioral health clinics. Behav Ther. 2018;49(4):525–37.

  40. Franco GE. Productivity standards: do they result in less productive and satisfied therapists? US: Educational Publishing Foundation; 2016. p. 91–106.

  41. Burke LA, Saks AM. Accountability in training transfer: adapting schlenker’s model of responsibility to a persistent but solvable problem. Hum Resour Dev Rev. 2009;8(3):382–402.

  42. Westerlund A, Nilsen P, Sundberg L. Implementation of implementation science knowledge: the research-practice gap paradox. Worldviews Evid-Based Nurs. 2019;16(5):332.

  43. Aarons GA, Ehrhart MG, Farahnak LR, Hurlburt MS. Leadership and organizational change for implementation (LOCI): a randomized mixed method pilot study of a leadership and organization development intervention for evidence-based practice implementation. Implement Sci. 2015;10:11.

  44. Skar AS, Braathu N, Peters N, Bækkelund H, Endsjø M, Babaii A, et al. A stepped-wedge randomized trial investigating the effect of the Leadership and Organizational Change for Implementation (LOCI) intervention on implementation and transformational leadership, and implementation climate. BMC Health Serv Res. 2022;22(1):298.

  45. Lyon AR, Liu FF, Connors EH, King KM, Coifman JI, Cook H, McRee E, Ludwig K, Law A, Dorsey S, McCauley E. How low can you go? Examining the effects of brief online training and post-training consultation dose on implementation mechanisms and outcomes for measurement-based care. Implement Sci Commun. 2022;3(1):1–5.

  46. Aarons GA, Sommerfeld DH, Willging CE. The soft underbelly of system change: the role of leadership and organizational climate in turnover during statewide behavioral health reform. Psychol Serv. 2011;8(4):269–81.

  47. Bunger AC, Birken SA, Hoffman JA, MacDowell H, Choy-Brown M, Magier E. Elucidating the influence of supervisors’ roles on implementation climate. Implement Sci. 2019;14(1):93.

  48. Brimhall KC, Fenwick K, Farahnak LR, Hurlburt MS, Roesch SC, Aarons GA. Leadership, organizational climate, and perceived burden of evidence-based practice in mental health services. Adm Policy Ment Health. 2016;43(5):629–39.

  49. Tang MY, Rhodes S, Powell R, McGowan L, Howarth E, Brown B, et al. How effective are social norms interventions in changing the clinical behaviours of healthcare workers? A systematic review and meta-analysis. Implement Sci. 2021;16(1):8.

  50. Schneider B, Ehrhart MG, Macey WH. Organizational climate and culture. Annu Rev Psychol. 2013;64:361–88.

  51. Reding ME, Chorpita BF, Lau AS, Innes-Gomberg D. Providers’ attitudes toward evidence-based practices: is it just about providers, or do practices matter, too? Adm Policy Ment Health. 2014;41(6):767–76.

  52. Jones SM, Gaffney A, Unger JM. Using patient-reported outcomes in measurement-based care: perceived barriers and benefits in oncologists and mental health providers. J Public Health. 2021:1–7.

  53. Lewis CC, Hannon PA, Klasnja P, Baldwin L-M, Hawkes R, Blackmer J, et al. Optimizing implementation in cancer control (OPTICC): protocol for an implementation science center. Implement Sci Commun. 2021;2(1):44.

  54. Beidas RS, Dorsey S, Lewis CC, Lyon AR, Powell BJ, Purtle J, et al. Promises and pitfalls in implementation science from the perspective of US-based researchers: learning from a pre-mortem. Implement Sci. 2022;17(1):55.

  55. Weiner BJ, Lewis CC, Sherr K. Practical implementation science: moving evidence into action: Springer Publishing Company; 2022.

  56. Rogers EM, Singhal A, Quinlan MM. Diffusion of innovations. In: An integrated approach to communication theory and research. Routledge; 2014. p. 432–48.

  57. Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implement Sci. 2019;14(1):103.

  58. Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, et al. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6:136.

Acknowledgements

N/A

Funding

This research was supported by grant R01MH103310 from the NIMH and grant R13HS025632.

Author information

Contributions

CCL and MRB jointly led the conceptualization of the manuscript and drafted the introduction, method, and discussion. CNM led the quantitative analysis, interpretation, and results. KA led the qualitative design and mixed-methods analysis, interpretation, and results. The authors reviewed, edited, and approved the final content of the manuscript.

Corresponding author

Correspondence to Cara C. Lewis.

Ethics declarations

Ethics approval and consent to participate

This study was reviewed and approved by Indiana University’s IRB; clinician participants provided informed consent.

Consent for publication

N/A

Competing interests

The authors have no competing interests to report.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Lewis, C.C., Boyd, M.R., Marti, C.N. et al. Mediators of measurement-based care implementation in community mental health settings: results from a mixed-methods evaluation. Implementation Sci 17, 71 (2022). https://doi.org/10.1186/s13012-022-01244-1
