Open Access
This article has Open Peer Review reports available.

Improving and sustaining delivery of CPT for PTSD in mental health systems: a cluster randomized trial

  • Shannon Wiltsey Stirman1,
  • Erin P. Finley2, 13,
  • Norman Shields3,
  • Joan Cook4,
  • Rachel Haine-Schlagel5,
  • James F. Burgess Jr.6, 12,
  • Linda Dimeff7,
  • Kelly Koerner7,
  • Michael Suvak8,
  • Cassidy A. Gutner9, 14,
  • David Gagnon6,
  • Tasoula Masina10,
  • Matthew Beristianos11,
  • Kera Mallard11,
  • Vanessa Ramirez2, 13 and
  • Candice Monson10
Implementation Science 2017, 12:32

https://doi.org/10.1186/s13012-017-0544-5

Received: 20 January 2017

Accepted: 28 January 2017

Published: 6 March 2017

Abstract

Background

Large-scale implementation of evidence-based psychotherapies (EBPs) such as cognitive processing therapy (CPT) for posttraumatic stress disorder can have a tremendous impact on mental and physical health, healthcare utilization, and quality of life. While many mental health systems (MHS) have invested heavily in programs to implement EBPs, few eligible patients receive EBPs in routine care settings, and clinicians do not appear to deliver the full treatment protocol to many of their patients. Emerging evidence suggests that when CPT and other EBPs are delivered at low levels of fidelity, clinical outcomes are negatively impacted. Thus, identifying strategies to improve and sustain the delivery of CPT and other EBPs is critical. Existing literature has suggested two competing strategies to promote sustainability. One emphasizes fidelity to the treatment protocol through ongoing consultation and fidelity monitoring. The other focuses on improving the fit and effectiveness of these treatments through appropriate adaptations to the treatment or the clinical setting through a process of data-driven, continuous quality improvement. Neither has been evaluated in terms of impact on sustained implementation.

Methods

To compare these approaches on key sustainability outcomes and provide initial guidance on sustainability strategies, we propose a cluster randomized trial with mental health clinics (n = 32) in three diverse MHSs that have implemented CPT. Cohorts of clinicians and clinical managers will participate in 1 year of a fidelity-oriented learning collaborative or 1 year of a continuous quality improvement-oriented learning collaborative. Patient-level PTSD symptom change, CPT fidelity and adaptation, penetration, and clinics’ capacity to deliver EBPs will be examined. Survey and interview data will also be collected to investigate multilevel influences on the success of the two learning collaborative strategies. This research will be conducted by a team of investigators with expertise in CPT implementation, mixed method research strategies, quality improvement, and implementation science, with input from stakeholders in each participating MHS.

Discussion

This study will have broad implications for supporting ongoing delivery of EBPs in mental health and healthcare systems and settings. The resulting products have the potential to significantly improve efforts to ensure ongoing high-quality implementation and consumer access to EBPs.

Trial registration

NCT02449421. Registered 02/09/2015

Keywords

Implementation, Sustainability, Delivery, CPT, Fidelity, Learning collaborative, EBPs

Background

Thousands of public sector clinicians have been trained to deliver evidence-based psychotherapies (EBPs) due to recent mandates and investments in implementation [1, 2]. Patients treated during EBP training programs experience substantial symptom improvement [3–5]. Despite this progress, efforts to implement and sustain EBPs in these systems face significant challenges. One finding is that penetration (integration of a practice within a service setting and its subsystems) is low [6, 7]. Veterans Affairs clinicians rarely cite established contraindications as reasons not to offer EBPs [8, 9]; instead, they cite challenging symptom profiles, a need for more consultation, and clinic-level barriers [10, 11]. A second challenge is that research suggests that fidelity (adherence to prescribed elements of treatment and competence/skill of delivery) [12] may be low when these interventions are implemented [13, 14]. Challenges in implementing EBPs lead to adaptation without systematic efforts to understand the impact on symptoms and other outcomes [15, 16]. While some adaptation and latitude in EBP delivery may be appropriate [17], effective [18–21], or promote implementation [22], other adaptations result in discontinuation of core elements [15], integration of non-evidence-based strategies [13, 23], and worse outcomes [24, 25]. In addition to these challenges, local capacity can impact sustainability and quality of delivery. A significant minority of clinician trainees do not meet established EBP competency criteria but deliver EBP protocols or EBP elements in their practice [11, 13, 15]. Turnover and organizational context can also impact sustained EBP delivery [26, 27]. Patient access and clinical outcomes may suffer as a result [23, 24, 28–30]. Thus, in the context of routine care, improvement, rather than maintenance [31], of clinical outcomes may be an appropriate goal.
Finally, while EBPs are cost-effective [32], the budget impact of supporting implementation can be a barrier [33–35].

Failure to provide EBPs to those with PTSD can have a significant public health impact. Consequences of inadequate treatment include risk of suicide, overuse of healthcare, work absenteeism, reduced productivity, unemployment, and family disruption [36, 37]. Cognitive processing therapy (CPT), which has been implemented in at least eight countries and in mental health systems throughout the world, has a strong evidence base and is effective with a variety of patient populations [19, 20, 38–42]. The current study will aid in identifying strategies to promote sustainability of CPT and improve patient-level outcomes, thereby informing efforts to implement EBPs for psychiatric disorders more broadly.

Researchers have identified serious gaps in knowledge regarding best practices for improving and sustaining EBPs in routine care [43–45]. There have been few, if any, experimental investigations of strategies to sustain EBPs. Few studies on sustainability have assessed a full range of the key outcomes, such as patient-level mental health outcomes [46]. To address these gaps and identify effective strategies for sustaining EBPs in routine care, we seek to compare two promising yet different implementation strategies (ISs): fidelity-oriented consultation and continuous quality improvement.

Proposed interventions

Existing research and theory suggest two contrasting ISs, one oriented to fidelity and the other to mutual adaptation and quality improvement. These strategies have potentially different mechanisms, costs, and relative benefits that could impact their fit in differing contexts.

Fidelity-oriented learning collaborative (FID)

Studies show that without consistent follow-up or ongoing support of clinicians to promote fidelity after initial training, training effects quickly dissipate [47, 48]. A recent meta-analysis demonstrated no overall link between observer-rated fidelity and symptom change across a range of treatments and disorders. However, when analyses were conducted to avoid the potential temporal confound between fidelity and symptom change and to establish the temporal precedence of fidelity, two aspects of fidelity in early CBT sessions, adherence to the protocol and competence (skill of delivery), were associated with subsequent decreases in depression [49]. Later research suggested that fidelity to the key aspects of CPT, as opposed to prescribed session elements, was associated with symptom change [50]. Other studies demonstrated patient-level benefits of fidelity-oriented support for EBPs as compared to general professional development-oriented support [51–53]. A FID strategy has been shown to lead to clinician achievement of benchmark levels of fidelity [54]. We would thus expect FID to impact clinical outcomes through CPT fidelity as a mechanism of change. Fidelity support also appears to impact other implementation outcomes, most notably sustained EBP capacity, delivery [55], and support activities [56]. Fidelity support has also been associated with lower staff turnover, which improves workforce capacity [53].

Continuous quality improvement-oriented learning collaboratives (CQI)

Organization-level barriers to implementation are commonly found in routine care settings and may impact sustainability [4, 26, 57, 58]. In our studies of CPT training consultation in Canada [59], in Texas, and in the US VA [10, 11], clinicians cited organization-level challenges to delivering EBPs for PTSD and hesitance about offering EBPs, due in part to a perceived lack of fit between the treatment and the patients [9]. These problems may contribute to the low rates of penetration [6, 7]. The Dynamic Sustainability Framework (DSF) suggests that the dynamic “fit” between an intervention and its delivery context is critical to sustainability [60]. It rejects the assumption that deviation from the protocol leads to decreased benefit and advocates for mutual adaptation and continuous refinement of EBPs in real world contexts [60]. CQI, identified in the DSF as a facilitator of sustainability, has been used successfully in healthcare and mental health settings [61, 62] and has guided EBP adaptation [63]. CQI also fosters learning organizations, which are more likely to improve care and innovate [64]. We therefore expect that the mechanisms by which CQI impacts clinical outcomes are EBP adaptation and functioning as a learning organization.

While high-level leadership support is strong, and structural and policy-level changes and investments have been made to support implementation within many systems [2], the individual clinicians who must ultimately deliver treatments have identified idiosyncratic challenges at local levels. The locally oriented, clinician-led approach of CQI-oriented learning collaboratives (LCs) [61] may facilitate sustainability, but this possibility remains unexamined [65]. On the other hand, greater time, cost, and personnel burden [66, 67] may be a barrier to CQI, and whether the CQI process and resulting adaptations lead to more effective treatment and implementation is unknown. To answer these questions, we will use a mixed method, hybrid type 3 design, which allows simultaneous study of ISs, their mechanisms, and clinical outcomes [68].

Summary of the proposed project

Clinics in three mental health systems that have implemented CPT for PTSD will submit baseline session recordings and patient data for at least two patients (n = 192) before random assignment to either 12 months of FID (n = 16 clinics, 48 clinicians, 192 patients) or CQI (n = 16 clinics, 48 clinicians, 192 patients). Outcomes will include patient self-reported PTSD symptom outcomes (primary); independent fidelity ratings; and penetration, adaptation, and capacity to deliver CPT [46, 69]. We will also investigate engagement in, credibility and costs of, and satisfaction with each IS (mechanisms by which the interventions impact patient outcomes), as well as contextual factors that may impact sustainability. The mixed method design will include qualitative methods for a richer understanding of the process and outcomes of each IS [70]. This study will capitalize on infrastructure created for a previous trial comparing EBP consultation strategies [59], as well as existing infrastructure and strong collaborative relationships and experience within each participating mental health system. The study has the potential to advance implementation science beyond observational studies of EBP sustainability by providing much-needed information on the effectiveness of interventions to promote long-term EBP implementation.

Methods

Summary

We plan to randomize clinics and associated clinicians in three mental health systems that have implemented CPT for at least 2 years into one of two conditions: CQI-LC or FID. Implementation outcomes will be assessed. ISs will be compared over the course of the 1-year intervention phase and a 1-year follow-up. Mechanisms of action and budget impact will also be explored. A phased rollout of the ISs will allow baseline data to serve as within and across subject comparison data. Potential individual and contextual influences on sustainability will also be assessed through surveys and interviews to elucidate how these factors may impact CPT delivery and IS engagement and outcomes.

Recruitment

Participating clinics and clinicians will be drawn from the three mental health systems: the VA Canada Operational Stress Injury (OSI) National Network and affiliated clinics (VAC), the United States VA system, and the State of Texas (TX). Each of these systems has implemented CPT in the past several years, with standardized training across the networks which includes an initial workshop followed by 6 months of consultation. As Table 1 indicates, the systems are diverse in terms of clinician backgrounds, patient populations, and financing.
Table 1 Characteristics of participating mental health systems

| Source | No. of clinicians available for recruitment | No. of clinics (planned/available) | Clinic type/population | Clinician degrees | Payment/reimbursement | Penetration |
|---|---|---|---|---|---|---|
| VA Canada OSI Network (VAC) | 134 OSI clinicians and contracted community clinicians; 65% received CPT provider status | 11/38 | VAC clinics and community-based/veterans and civilians | M.A., PhD/PsyD | Government veteran benefits, private insurance, self-pay for civilians | ~60% of eligible patients (by self-report) |
| United States VA system (VA) | 2996 trained; 73% reached provider status | 11/138 | VA hospitals and clinics/veterans | PhD/PsyD, MSW, MD | Typically VA benefits, TriCare | 12–20% of eligible patients [6] |
| Texas (TX) Community MH Clinics | 253 trained; 53% reached CPT provider status | 10/115 | Civilians and rural veterans | MSW, LICSW, LPC, CART, MA, PhD/PsyD | Medicaid, VA TriCare, self-pay, private insurance | To be assessed |

We plan to recruit and enroll an average of 3–4 clinicians per clinic across 32 clinics over the first 2 years of the study.

Inclusion/exclusion criteria

We will recruit VAC, VA, TX, and Canadian community clinics that participated in CPT training and have at least four clinicians eligible to participate. Clinics are defined as work units under the same supervisor or service line within an organization. Clinicians and supervisors who maintain a caseload will be eligible to participate in the study if they (1) are CPT-trained and provide psychotherapy to individuals with PTSD, (2) are willing to record therapy sessions, provide work samples for baseline assessment, and complete baseline surveys, and (3) have internet access. We considered enrolling only clinicians who had achieved a certain level of CPT adherence (e.g., “provider status”) but recognized that stakeholders need to understand how to improve EBP delivery among all clinicians who deliver EBPs within the system, including the 23–47% of clinicians (Table 1) who receive training but do not achieve provider status. Their inclusion allows assessment of the ISs’ ability to improve capacity to deliver high quality CPT.

Eligible patient participants are veterans or non-veterans who (1) are diagnosed by their clinicians with current PTSD, with a minimum PTSD checklist (PCL) score of 33 (based on the most current validation and cutoff scores that indicate a likely PTSD diagnosis for the PCL-5) [71], (2) have not previously received CPT, and (3) are willing to consent to completing symptom measures and having their sessions audio recorded and reviewed. Patients are permitted to continue other psychotherapies if the treatment is not specifically focused on PTSD. Ineligible patients are those ineligible for CPT based on the state of research evidence, as follows: (1) current uncontrolled psychotic or bipolar disorder, (2) past-month substance dependence requiring daily use or detoxification (eligibility can be revisited if individuals enter recovery), (3) imminent suicide/homicide risk requiring immediate intervention, and (4) cognitive impairment precluding therapy engagement (e.g., significantly impaired memory or attention).

Randomization and timeline

Randomization will occur at the clinic level. Given the relatively small number of clinics, a stratified randomization procedure will be used to match the IS arms (after baseline) on clinic size (number of clinicians in each clinic) and system. The statistician will create a block for each combination of covariates and assign each clinic to the appropriate block. Simple randomization will occur for clinics within each block. As illustrated in the timeline, ISs will occur in three waves, and within each wave, clinics will be further randomized to an immediate or delayed (6 months) start. This will facilitate a comparison with no IS [72] using symptom and session data collected from the baseline phase of the delayed start group. Administrative data from before and during the IS phase will facilitate further comparison of symptom change for patients seen before vs. during ISs. During baseline, and to keep the delayed start group engaged, we will offer webinars to introduce study technology and provide general updates on CPT research findings (with no guidance for actual practice). Incentives for engagement in baseline include written fidelity feedback on one baseline session after the IS phase starts, clinician- and clinic-level gifts for providing baseline data, and CE credits for webinars.
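The stratified procedure described above can be sketched as follows. This is a minimal illustration, not the trial's actual randomization code; the clinic fields, seed, and arm labels are placeholders.

```python
import random
from collections import defaultdict

def stratified_randomize(clinics, seed=2015):
    """Assign clinics to FID or CQI within strata defined by
    (system, clinic size), keeping arms balanced on both covariates.
    `clinics` is a list of dicts with 'name', 'system', and
    'n_clinicians' keys (field names are illustrative)."""
    rng = random.Random(seed)
    # One block per combination of the stratification covariates.
    blocks = defaultdict(list)
    for clinic in clinics:
        blocks[(clinic["system"], clinic["n_clinicians"])].append(clinic["name"])
    assignment = {}
    for block in blocks.values():
        rng.shuffle(block)  # simple randomization within the block
        half = len(block) // 2
        for name in block[:half]:
            assignment[name] = "FID"
        for name in block[half:]:
            assignment[name] = "CQI"
    return assignment
```

With an even number of clinics per stratum, each block splits evenly between arms; odd blocks leave one extra clinic in CQI, which a real allocation scheme would handle explicitly.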

ISs

PracticeGround, an internet-based EBP training and learning community platform, will be used as the platform for IS meetings in this study. It has capacity for usage metrics and provision of CE credits. Previous research demonstrated its effectiveness as a platform for EBP training [73, 74]. A full range of materials for CPT online support (e.g., webinars, video clips, presentations, and handouts on CPT-related topics) have been integrated into PracticeGround.

Each IS cohort will include a maximum of eight clinicians across up to two clinics within the same system. After a 2-hour Web-based learning session (for CQI) or booster session (for FID), cohorts will meet biweekly for the first 3 months, then monthly over the last 9 months of the 12-month active intervention phase, with between-meeting message board correspondence and communication within individual clinics. We will review recordings to assess adherence to the FID or CQI manual and differentiation of the ISs with a checklist based on a review of CQI elements and the key IS elements identified in the manuals [75], and we will track correspondence within each condition to compare communication.

Fidelity-oriented consultation (FID)

FID will be led by a CPT expert using a structured format based on the effective standard CPT consultation model [4, 5, 76]. One-hour meetings will consist of fidelity (adherence and competence) feedback based on discussion of cases and session audio review during the meeting, with guidance on challenges to CPT fidelity (e.g., how to use CPT without deviating from the manual). Participants will also review CPT training modules to improve fidelity. Facilitators will be certified CPT consultants. PDSA cycles will not be used.

CQI learning collaborative (CQI)

CQI facilitators have been trained in CQI and LC strategies by study team members who have experience in facilitating CQI-oriented LCs. The framework for the CQIs is based on the Institute for Healthcare Improvement’s Breakthrough Series collaborative model for quality improvement [77], which has been used successfully in mental health [61]. Prior research indicated that in-person learning sessions do not add significant value relative to associated costs [78], so initial learning sessions will be trainings in CQI principles that will occur on PracticeGround. Action periods take place between subsequent hour-long Web-based meetings, allowing time for each team to implement change ideas identified during meetings and study the effectiveness of those change ideas using the Plan-Do-Study-Act (PDSA) cycle of learning. Examples of goals might include increasing CPT engagement, improving effectiveness for individuals with particular symptom profiles, or advocating for more frequent sessions within the clinic. Consistent with the CQI model, non-clinicians who are key to producing change can participate in the CQI meetings to facilitate change.

Hypotheses and assessment strategy

We propose to assess the impact of CQI and FID on a variety of key implementation outcomes with the following hypotheses:
  1. Patient symptom improvement
     (a) Our primary outcome, change in PCL-5 scores, will be non-inferior for CQI as compared to FID.
     (b) Outcomes for both ISs will be superior to outcomes for patients seen during the no-intervention phase.
     (c) The ISs will impact change in symptoms through different mechanisms: increased fidelity for FID [30, 79]; CPT adaptation and development of learning organizations for CQI [64].
  2. Cost, as measured by budget impact, will be greater for CQI, which requires increased personnel burden (management involvement and activities between meetings) [61], than for FID.
  3. Fidelity (adherence and competence) will be greater for FID, which targets fidelity [54], than for CQI [60].
  4. Adaptation (number of changes to CPT) will be greater for CQI than for FID due to planned adaptations in CQI [60].
  5. Penetration (% of eligible patients who receive CPT) will be greater for CQI than for FID, as CQI targets barriers to patient enrollment and retention and allows adaptations to improve fit [60].
  6. Workforce capacity (% of clinicians who achieve CPT provider status) will be greater for FID, which is intended to improve skills and promote adherence [54], the benchmarks for quality-rated provider status.
  7. Organizational capacity (learning organization status and implementation climate) will be greater after CQI, which facilitates development of learning organizations [64] and reduces barriers to implementation [60, 80].
Patient measures

To examine hypotheses 1A and 1B, we will use the PTSD checklist for DSM-5 (PCL-5), a validated 20-item self-report measure that assesses the 20 DSM-5 symptoms of PTSD [71] and that patients routinely complete in CPT. Each item is rated on a 5-point Likert scale. Scores are recorded in VA clinical records and can be extracted at the clinic and patient level. VAC collects the PCL-5 in its secure, Web-based Client-Reported Outcome Monitoring Information System (CROMIS), which is based on the OQ outcomes monitoring system [81]. For CPT patients, the PCL-5 is administered at pre-treatment, at every CPT session, at post-treatment, and at 3 months post-treatment, for a total of 12–15 administrations.
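As a concrete illustration of how the primary outcome is scored, the sketch below sums the 20 PCL-5 items and flags the study's eligibility cutoff of 33 (from the inclusion criteria). It assumes the standard 0–4 item scoring; the function name and return shape are ours, not part of the protocol.

```python
def pcl5_total(item_scores, cutoff=33):
    """Sum the 20 PCL-5 items (assumed rated 0-4, so totals range
    0-80) and flag whether the total meets the study's eligibility
    cutoff. Illustrative helper, not the trial's scoring pipeline."""
    if len(item_scores) != 20:
        raise ValueError("the PCL-5 has exactly 20 items")
    if any(not 0 <= s <= 4 for s in item_scores):
        raise ValueError("each item is rated on a 0-4 scale")
    total = sum(item_scores)
    return total, total >= cutoff
```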

Additional patient measures

Patient demographics and diagnoses will be collected from clinicians and used as covariates in analyses. Patients will also complete the Outcome Questionnaire-45 (OQ-45), a validated, 15-min measure of functioning and other key outcomes of interest in mental health (symptoms, interpersonal problems, social role functioning, and quality of life) [82, 83] (baseline, mid-, and post-treatment), and the Patient Health Questionnaire (PHQ-9) [84] to assess symptoms of depression (at every session). At the end of treatment, they will complete the Client Satisfaction Questionnaire (CSQ-8) [85] to assess satisfaction with treatment.

Cost (budget impact)

To test hypothesis 2 and lay the groundwork for future, more comprehensive economic evaluations, we will use a framework for best practices for budget impact analysis [86].

Clinician CPT fidelity assessment

Observer-rated fidelity measures

We will assess fidelity for hypothesis 3, and as a potential mechanism of change (hypothesis 1C), via observer ratings of a randomly selected subset of session recordings. The CPT fidelity measure (Nishith and Resick, Cognitive Processing Therapy Fidelity Scale, unpublished manuscript) examines clinicians’ adherence to, and competence in delivering, the specific CPT interventions prescribed in each session. Clinicians are rated on their adherence to the protocol (i.e., the degree to which they performed a particular task, on a 0–2 Likert-type scale) and competence in delivery of these elements (rated on a 7-point Likert-type scale). Sessions will be continuously uploaded by clinicians to a secure server (using procedures that were successfully used with geographically dispersed clinicians in our consultation study) [62, 87] and randomly selected for rating, to minimize the bias that may occur if clinicians were instructed to provide sessions only at designated time points. Rater agreement will be maintained through ongoing training, and at least 15% of rated sessions will be double-rated to assess reliability.

Self-reported adherence measure (secondary measure of fidelity)

As a routine aspect of documentation, at every session, clinicians will complete a 5–8-item adherence checklist for “unique and essential” CPT elements that are embedded in required VA template clinical notes and which will also be embedded in the CROMIS/OQ platforms.

Adaptation

To assess adaptation (hypothesis 4), we will use a framework and coding system of modifications and adaptations made to EBPs [88] that has previously been applied to interview and observation data on CPT adaptation [26]. Session recordings will be independently rated to identify modifications.

Penetration, clinician satisfaction, LC engagement, and CPT activity reporting (P&S)

As in our consultation study, at baseline, 4, 8, and 12 months, and at follow-up, clinicians will complete a survey on CPT activity. Questions will include the number/proportion of PTSD cases who received CPT since the last assessment; the number of CPT sessions delivered; satisfaction with the IS over the past month; frequency of non-study consultation or training; contextual adaptations; and confidence in CPT delivery. To assess the validity of self-reported penetration, detailed interviews with a subset of clinicians will assess eligibility of each patient with PTSD on the caseload and identify CPT elements utilized with each.

Capacity

To measure workforce capacity to deliver CPT (hypothesis 6), we will examine the proportion of clinicians in each clinic who qualify for a more advanced form of provider status, quality-rated CPT provider status (QRCPS), at each assessment. QRCPS is awarded to clinicians who have conducted CPT with at least three patients in the past year and who achieve an average adherence score of 2 or higher (on the 0–2 scale) and an average competence score of 4 or higher (out of 6, indicating good delivery) on a randomly selected session.
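The QRCPS criteria above can be expressed as a simple check. The thresholds come from the text; the function itself, its signature, and its inputs are illustrative.

```python
def meets_qrcps(n_patients_past_year, adherence_scores, competence_scores):
    """Return True if a clinician meets the quality-rated CPT provider
    status (QRCPS) criteria: at least three CPT patients in the past
    year, mean adherence >= 2 (on the 0-2 scale), and mean competence
    >= 4 (on the 0-6 scale) from the rated session. Illustrative only."""
    if n_patients_past_year < 3:
        return False
    mean_adherence = sum(adherence_scores) / len(adherence_scores)
    mean_competence = sum(competence_scores) / len(competence_scores)
    return mean_adherence >= 2 and mean_competence >= 4
```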

To assess organizational capacity and contextual factors (hypothesis 7), we will use two measures.

The Learning Organization Inventory (LOI) [80] is a psychometrically valid assessment of learning environment, concrete learning processes, and whether leaders reinforce learning. Given the potential for CQI to foster development of learning organizations, LOI will be assessed at two additional times for our analysis of potential mechanisms. We will also assess organization- and clinician-level constructs that have been associated with implementation success in prior research [27, 89]. The Implementation Climate Assessment (ICA) [89] will be used as an additional measure of capacity.

Potential moderators (exploratory)

Additional measures will be used to explore organization- and clinician-level moderating effects on the impact of the ISs and/or use of EBPs. They will be administered at the baseline, 4-, 6-, 12-, and 24-month timepoints of the study (also allowing examination of change over time) [90].

The Organizational Social Context (OSC) measures climate, culture, and work attitudes. It is psychometrically sound [91] and predicted sustainability in prior research [27].

The Implementation Leadership Scale (ILS) [92] is a brief measure of (1) proactive, (2) knowledgeable, (3) supportive, and (4) perseverant leadership, with excellent internal consistency and convergent and discriminant validity.

The Clinician Demographic Characteristics and Experience Questionnaire (CDCEQ) will be administered at baseline to assess relevant clinician demographic information and experience.

The Perceived Characteristics of Intervention Scale (PCIS) [93] is a brief survey that assesses innovation characteristics first identified by Rogers that are hypothesized to influence adoption and sustainability [93].

Interviews and qualitative strategy

We will also interview a subset of clinicians and administrators to contextualize and extend findings from the quantitative data collection. We will sample from each clinic at baseline. At post-LC and follow-up, a purposive sampling strategy will be utilized to ensure representation across systems and to capture perspectives of clinicians and administrators who experience different outcomes within each IS condition (e.g., low vs. high penetration, fidelity, and CPT utilization). The interviews will assess multilevel influences on engagement in ISs and on CPT delivery. The interview guide will be based on the tenets of the DSF [60] and the Consolidated Framework for Implementation Research [94].

Analytic strategy

We will examine descriptive statistics and distribution of all variables. Outcome variables will be transformed as needed if data violate assumptions of normality. We will evaluate effectiveness of randomization by comparing baseline measures of the key variables and demographics using χ2 and t tests.

Aim 1

The first aim is to examine the impact of two LC interventions for sustained EBP delivery on the use and effectiveness of CPT. We will test hypothesis 1A (non-inferiority of CQI relative to FID) by examining [95] whether the difference in change between the groups after the IS phase is complete (12 months) and at follow-up is smaller than a predetermined, clinically reliable difference (i.e., the non-inferiority margin, “delta”).
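The decision rule for this comparison can be sketched as follows: CQI is declared non-inferior when the upper confidence bound of the between-group difference in PCL-5 change falls below delta. The specific delta, standard error, and CI method are placeholders for the trial's analysis plan.

```python
def non_inferior(diff_estimate, diff_se, delta, z=1.96):
    """Declare CQI non-inferior to FID when the upper bound of the
    two-sided 95% CI for the between-group difference in PCL-5 change
    (CQI minus FID; positive values favor FID) lies below the
    prespecified non-inferiority margin delta. Illustrative sketch;
    the trial's margin and interval estimation are set elsewhere."""
    upper_bound = diff_estimate + z * diff_se
    return upper_bound < delta
```

For example, an estimated difference of 1 PCL-5 point with a standard error of 1 would support non-inferiority against a margin of 5, whereas an estimated difference of 4 with the same standard error would not.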

To estimate change, we will use multilevel regression (i.e., growth curve analyses, mixed effect regression, hierarchical linear modeling), which offers numerous strengths for analyzing change over time for nested data, including efficiency in handling missing data, powerful and accurate estimation procedures adjusting for clustering, and modeling flexibility (e.g., allows for the inclusion of continuous or categorical, time invariant or time varying, covariates and predictors). This approach is extremely robust to missing data and considered “state of the art” for analyzing unbalanced data sets due to dropouts and loss to follow-up [96].

A three-level model will be evaluated to test the impact of IS on change in PTSD symptoms. Time (since pre-treatment) will be entered as a level-1 predictor to estimate change in PCL-5 scores, with repeated assessments (level 1) nested within patients (level 2). Since randomization occurs at the clinic level, IS (a dummy-coded variable) will be entered as a level-3 predictor to examine the impact of IS on change over time in PTSD symptoms. Given the small number of systems, system will be included as a level-3 (clinic-level) variable to assess and adjust for its influence. We will examine the influence of potentially significant covariates (e.g., baseline PCL-5, VA vs. community, veteran status, and other clinician or patient characteristics/diagnoses) and include them in analyses if necessary.
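The three-level growth model described above can be written out as follows (notation ours; $t$ indexes assessments, $i$ patients, and $j$ clinics):

```latex
% Level 1: repeated assessments within patients
\text{PCL}_{tij} = \pi_{0ij} + \pi_{1ij}\,\text{Time}_{tij} + e_{tij}

% Level 2: patients within clinics
\pi_{0ij} = \beta_{00j} + r_{0ij}, \qquad
\pi_{1ij} = \beta_{10j} + r_{1ij}

% Level 3: clinics (IS assignment and system entered here)
\beta_{00j} = \gamma_{000} + \gamma_{001}\,\text{IS}_{j} + \gamma_{002}\,\text{System}_{j} + u_{00j}
\beta_{10j} = \gamma_{100} + \gamma_{101}\,\text{IS}_{j} + \gamma_{102}\,\text{System}_{j} + u_{10j}
```

Here $\gamma_{101}$, the cross-level effect of IS on the time slope, is the coefficient of interest for the impact of IS on symptom change.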

To examine hypothesis 1B (that both ISs will be superior to no intervention), we will conduct piecewise multilevel growth modeling, which allows us to estimate separate trajectories for each phase of the study by condition. This approach is increasingly used in randomized controlled trials to more clearly elucidate change across multiple phases [97].
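Piecewise growth rests on splitting time into phase-specific slope terms: one accrues during the first phase and a second begins accruing only at the phase break. A small sketch (assuming, purely for illustration, a break at 12 months):

```python
def piecewise(t, knot=12):
    """Split time t (months) into a first-phase slope term and a
    second-phase term that starts accruing only after the knot."""
    return min(t, knot), max(0, t - knot)

# months 0, 6, 12, 18, 24 -> (first-phase term, second-phase term)
coded = [piecewise(t) for t in (0, 6, 12, 18, 24)]
# -> [(0, 0), (6, 0), (12, 0), (12, 6), (12, 12)]
```

Entering both terms as level-1 predictors yields a separate estimated slope for each phase.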

To further compare the impact of the ISs to no intervention (hypothesis 1B), we will use PCL and diagnostic data extracted from VAC's CROMIS and VA records. We will separately examine administrative PCL data for all patients with PTSD seen by VAC and VA clinicians through CROMIS and VA's clinical CPT templates and clinical reminder data, both of which have capacity for clinic- and clinician-level linkage and data extraction.

Hypothesis 1C. Different mechanisms of change will be evident for the two ISs. We hypothesize that FID will impact outcomes through increased fidelity and that CQI will impact outcomes through adaptation and the development of learning organizations. Lagged regression modeling is a useful way to examine temporal precedence between contemporaneous time-varying phenomena: a causal effect of variable X1 on X2 is supported when X1 at time 1 significantly contributes to the behavior of X2 at time 2, controlling for the effect of X2 at time 1 [98, 99]. To evaluate temporal precedence, lagged multivariate multilevel models will be evaluated [100]. Multivariate models allow the inclusion of two outcomes in the same model [100]. Each outcome will be predicted by its own level at the previous time point (i.e., an autoregressive effect; e.g., PTSD at time 2 will be predicted by PTSD at time 1) and by the other variable in the model (observer-rated fidelity, adaptation, or LOS score) at the previous assessment occasion (i.e., cross-lagged paths: PTSD will be predicted by the proposed mechanism, and vice versa). We will also account for overall increases and decreases by including time as a predictor of each outcome.
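The cross-lagged logic can be illustrated on simulated data (all effects hypothetical, and a single-level OLS stand-in for the multilevel model): prior fidelity lowers subsequent PTSD severity while the reverse lagged path is null, and the lagged coefficients recover that asymmetry:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 300, 4
fid = rng.normal(size=(n, T))                 # time-varying fidelity scores
ptsd = np.zeros((n, T))
ptsd[:, 0] = rng.normal(size=n)
for t in range(1, T):
    # autoregressive PTSD plus a causal effect of fidelity at t-1
    ptsd[:, t] = 0.5 * ptsd[:, t - 1] - 0.6 * fid[:, t - 1] + rng.normal(size=n)

def lagged_coef(y_now, y_prev, x_prev):
    """OLS coefficient of x_prev predicting y_now, controlling for y_prev."""
    X = np.column_stack([np.ones_like(y_prev), y_prev, x_prev])
    beta, *_ = np.linalg.lstsq(X, y_now, rcond=None)
    return beta[2]

cross = lagged_coef(ptsd[:, 1:].ravel(), ptsd[:, :-1].ravel(), fid[:, :-1].ravel())
reverse = lagged_coef(fid[:, 1:].ravel(), fid[:, :-1].ravel(), ptsd[:, :-1].ravel())
# cross is clearly negative; reverse hovers near zero
```

In the planned multivariate multilevel version, both cross-lagged paths are estimated simultaneously rather than in separate regressions.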

Sample size justification and power calculations

We require a sufficient sample to detect non-inferiority based on a clinically significant change of 5 points on the PCL [101, 102]. Calculations of a design effect (a measure of how the design affects the standard errors of the parameters) [103, 104] accounted for clustering. We computed intraclass correlations (ICC) with repeated observations of CPT patients at the clinic (ρ = 0.12), clinician (ρ = 0.16), and patient (ρ = 0.38) levels in our prior research in Canada and VA, yielding a design effect of up to 2.32 [83]. Power to detect a clinically meaningful difference in PTSD symptom scores over time, taking the design effect into account, is 0.80 in a two-sided test using repeated measures, with an alpha level of 0.05 and a standard deviation (sd) of 7 points (based on prior research), for a sample size during the IS phase of 288 patients (144 per condition) within 32 clusters. In the proposed study, we expect to randomize 32 clinics, with an average of three clinicians per clinic and at least four patients per clinician during both the IS and follow-up phases. For each patient, PTSD symptoms will be measured at nine time points, as it is typical to assess symptoms frequently in CPT (baseline, every other session, post-treatment, and follow-up). To very conservatively account for potential patient-level attrition and missing data during treatment, we estimated an average of four observations per patient. Furthermore, in anticipating substantial but manageable attrition, we considered the longer study length, turnover rates in each system, and attrition rates in previous studies, balanced against participation in an active intervention, system support, and incentives.
Full participation in the IS and follow-up phases would yield 384 patients and at least 1536 observations; thus, we could sustain a clinician attrition rate of 33% while remaining adequately powered to detect non-inferiority of CQI and superiority of the ISs relative to the baseline/non-IS phase data (an additional 2 patients per clinician; up to 192 patients). This projection exceeds the attrition rates seen in our prior studies in these systems and is sufficient to test for non-inferiority on the OQ measure as a secondary analysis, as the OQ-45 has an RCI of 14 (sd = 16) [105]. Finally, the sample will allow for a stable estimate of fidelity based on the recommended G = 0.70–0.80 level of dependability [106].
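The arithmetic behind these figures can be sketched as follows (assuming, as one plausible reading, an average of about 12 patients per clinic entering the clinic-level design-effect formula):

```python
def design_effect(m, icc):
    """Approximate inflation of sampling variance due to clustering:
    1 + (m - 1) * ICC, where m is the average cluster size."""
    return 1 + (m - 1) * icc

deff = design_effect(m=12, icc=0.12)     # about 2.32, matching the text

# Projected sample under full participation in the IS and follow-up phases
clinics, clinicians_per_clinic, patients_per_clinician = 32, 3, 4
patients = clinics * clinicians_per_clinic * patients_per_clinician   # 384
observations = patients * 4              # >= 1536, at 4 conservative obs/patient
```

With 12 patients per clinic and the clinic-level ICC of 0.12 reported above, the formula reproduces the stated design effect of 2.32 and the 384-patient, 1536-observation projection.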

3.I.5 Testing of additional hypotheses and exploratory analyses

To compare the budgetary impact of each IS (hypothesis 2), a budget impact analysis (BIA) [107] will be conducted to estimate the cost to the system of implementing and sustaining each IS. Results will be interpreted in conjunction with the findings for hypothesis 1A. If evidence of non-inferiority is found, the less expensive IS may be warranted, but significant differences in favor of the more costly IS might justify the added expense.

Hypotheses 3–7 involve continuous outcome data and will employ a multilevel regression strategy similar to that described in section 3.I.1. We will also examine interactions between these variables and the ISs to assess for a moderating effect on patient outcomes, and will similarly explore potential moderating effects of attitudes (EBPAS), social context (OSC), and leadership (ILS) on the key outcomes of interest. We will explore whether the system impacts the effects of the ISs by testing interactions between system, IS, and time.

Aim 2

The second aim is to assess each IS’s acceptability and perceived fit in diverse mental health systems.

Qualitative and mixed method analytic strategy

We will use interviews, qualitative strategies, and surveys to understand barriers and facilitators to the ISs and to CPT. Qualitative and quantitative data will be integrated to facilitate a fine-grained understanding of the processes and characteristics that influence clinicians' use and perceptions of CPT, as well as the fit and effectiveness of the ISs within each system. A priori codes will be based on the DSF and Aarons et al.'s model of EBP implementation and sustainability [34, 85]. Using NVivo 11 software, we will code transcripts from different stakeholders and, through consensus, identify and operationalize additional emergent codes. We will identify central themes through diversity and triangulation across data sources, the frequency with which specific influences are identified, and attention to the key decision points and interactions described in the data sources. Survey data on contextual factors will be examined in conjunction with qualitative data for the purposes of contextualization, validation, and triangulation [70, 108] to understand how contextual factors influence experiences with CPT and the ISs.

Discussion

Little research has investigated different strategies to promote the sustained use of interventions. This study will contribute to a growing literature on sustainability by employing a mixed method approach to examine two different strategies across different mental health systems, offering several innovations. The application of a hybrid type III design, combined with data on the budgetary impact of each IS [35, 47], allows study of the relative advantages of the ISs and comparison of clinical outcomes in diverse care settings. Information about budget impact, time, personnel burden, perceived fit, and the key implementation outcomes will provide meaningful guidance for policy and decision-making on future implementation efforts. Penetration, fidelity, adaptation, capacity, and patient outcomes are key implementation outcomes [69] that have not been objectively examined in prior research [46]. Patient outcomes have not been routinely assessed in mental health sustainability research [46]. Fidelity and adaptation have rarely been objectively assessed, and prior efforts to objectively evaluate fidelity have lacked full participation. This study is also one of the first examinations of the tenets of the Dynamic Sustainability Framework (DSF) [60], which encourages CQI and adaptation to improve sustainability and clinical outcomes. Many previous implementation studies have focused either on initial implementation and changing clinician behavior and skill [51, 105], or on promoting change at the leadership, organization, and system levels [34]. Testing clinic-based CQI will allow us to examine the extent to which clinician-led, data-driven mutual adaptation can facilitate sustainability.
We anticipate that this research will yield several products: a manual and toolkit for the effective IS(s); papers describing implementation and patient-level outcomes; an examination of mechanisms of influence for each implementation strategy; papers reporting on multilevel predictors of sustained CPT implementation; a mixed method comparison of system, clinic, and clinician experiences of the ISs in three mental health systems; and comparisons of clinician- vs. observer-generated data on fidelity and adaptation.

Abbreviations

BIA: Budget impact analysis

CDCEQ: Clinician Demographic Characteristics and Experience Questionnaire

CPT: Cognitive processing therapy

CQI: Continuous quality improvement

CROMIS: Client Reported Outcome Monitoring Information System

DOD: Department of Defense

DSF: Dynamic Sustainability Framework

EBP: Evidence-based practice

EBPAS: Evidence-based practice attitudes scale

FID: Fidelity

ICA: Implementation climate assessment

ICC: Intraclass correlation

ILS: Implementation Leadership Scale

IS: Implementation strategy

LC: Learning collaborative

LOI: Learning Organization Inventory

LOS: Learning Organization Survey

MHS: Mental health system

OSC: Organizational social context

OSI: Operational stress injury clinic

PCIS: Perceived Characteristics of Intervention Scale

PCL: PTSD checklist

PDSA: Plan, Do, Study, Act

PE: Prolonged exposure

PTSD: Posttraumatic stress disorder

QRCPS: Quality-rated CPT provider status

TX: Texas

US: United States

VA: Veterans Affairs

VAC: Veterans Affairs Canada

Declarations

Acknowledgements

Not applicable.

Funding

This project is supported by the Canadian Institutes of Health Research, funding reference number 137012, and the National Institutes of Health, grant number 5R01MH106506-02.

Availability of data and materials

Data are being collected for this study. The information here is an abbreviated version of the full protocol, which can be obtained from the first author. Upon completion of the study, resources produced through this research (e.g., manuals, toolkits) will be made available.

Authors’ contributions

The study was conceptualized by SWS, CM, EF, and NS with input from JC, LD, KK, JB, DG, MS, RHS, and CG. Suggestions for refinement of study procedures and assessments were made by TM, KM, VR, and MB. All authors reviewed the manuscript for intellectual content and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Ethics approval and consent to participate

This study was approved by the institutional review boards at Stanford University and VA Palo Alto Healthcare System, Palo Alto, CA; Ryerson University, Toronto, Canada; University of Texas Health Sciences Center, San Antonio, TX; Boston University and VA Boston Healthcare System, Boston, MA; Suffolk University, Boston, MA; Yale University, New Haven, CT; and University of San Diego, San Diego, CA.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Authors’ Affiliations

(1) National Center for PTSD and Stanford University Department of Psychiatry and Behavioral Sciences
(2) The University of Texas Health Science Center at San Antonio, Department of Psychiatry and Medicine
(3) Divisional Psychologist Occupational Health and Safety, Royal Canadian Mounted Police
(4) Department of Psychiatry, Yale University
(5) San Diego State University
(6) Boston University School of Public Health, Department of Health Law, Policy and Management
(7) Evidence-Based Practice Institute
(8) Suffolk University
(9) National Center for PTSD, VA Boston Healthcare System
(10) Ryerson University
(11) National Center for PTSD and Palo Alto Veterans Institute of Research
(12) Center for Healthcare Organization and Implementation Research (CHOIR), Department of Veterans Affairs Boston Healthcare System
(13) South Texas Veterans Health Care System
(14) Boston University School of Medicine

References

  1. McHugh RK, Barlow DH. The dissemination and implementation of evidence-based psychological treatments: a review of current efforts. Am Psychol. 2010;65:73–84.
  2. Karlin BE, Cross G. From the laboratory to the therapy room: national dissemination and implementation of evidence-based psychotherapies in the US Department of Veterans Affairs Health Care System. Am Psychol. 2014;69(1):19–33.
  3. Karlin BE, Brown GK, Trockel M, Cunning D, Zeiss AM, Taylor CB. National dissemination of cognitive behavioral therapy for depression in the Department of Veterans Affairs Health Care System: therapist and patient level outcomes. J Consult Clin Psychol. 2012;80(5):707.
  4. Chard KM, Ricksecker EG, Healy ET, Karlin BE, Resick PA. Dissemination and experience with cognitive processing therapy. J Rehabil Res Dev. 2012;49(5):667–78.
  5. Eftekhari A, Ruzek JI, Crowley JJ, Rosen CS, Greenbaum MA, Karlin BE. Effectiveness of national implementation of prolonged exposure therapy in Veterans Affairs care. JAMA Psychiatry. 2013;70(9):949–55.
  6. Shiner B, D’Avolio LW, Nguyen TM, Zayed MH, Young-Xu Y, Desai RA, Schnurr PP, Fiore LD, Watts BV. Measuring use of evidence based psychotherapy for posttraumatic stress disorder. Adm Policy Ment Health Ment Health Serv Res. 2013;40(4):311–8.
  7. Lu MW, Plagge JM, Marsiglio MC, Dobscha SK. Clinician documentation on receipt of trauma-focused evidence-based psychotherapies in a VA PTSD clinic. J Behav Health Serv Res. 2016;43(1):71–87.
  8. Osei-Bonsu PE, Bolton RE, Stirman SW, Eisen SV, Herz L, Pellowe ME. Mental health providers’ decision-making around the implementation of evidence-based treatment for PTSD. J Behav Health Serv Res. 2016;7:1–1.
  9. Cook JM, Dinnen S, Simiola V, Thompson R, Schnurr PP. VA residential provider perceptions of dissuading factors to the use of two evidence-based PTSD treatments. Prof Psychol: Res Pract. 2014;45(2):136–42.
  10. Cook JM, Dinnen S, Simiola V, Thompson R, Schnurr PP. VA residential provider perceptions of dissuading factors to use of two evidence-based PTSD treatments. Clin Psychol. In press.
  11. Osei-Bonsu PE, Bokhour BG, Glickman ME, Rodrigues S, Mueller NM, Dell NS, Zhao S, Eisen SV, Elwy AR. The role of coping in depression treatment utilization for VA primary care patients. Patient Educ Couns. 2014;94(3):396–402.
  12. Dusenbury L, Brannigan R, Falco M, Lake A. A review of research on fidelity of implementation: implications for drug abuse prevention in school settings. Health Educ Res. 2003;18(2):237–56.
  13. Wilk JE, West JC, Duffy FF, Herrell RK, Rae DS, Hoge CW. Use of evidence-based treatment for posttraumatic stress disorder in Army behavioral healthcare. Psychiatry. 2013;76(4):336–48.
  14. Waller G, Stringer H, Meyer C. What cognitive behavioral techniques do therapists report using when delivering cognitive behavioral therapy for the eating disorders? J Consult Clin Psychol. 2012;80(1):171.
  15. Stirman SW, Calloway A, Toder K, Miller CJ, DeVito AK, Meisel SN, Xhezo R, Evans AC, Beck AT, Crits-Christoph P. Community mental health provider modifications to cognitive therapy: implications for sustainability. Psychiatr Serv. 2013;64(10):1056–9.
  16. Aarons GA, Miller EA, Green AE, Perrott JA, Bradway R. Adaptation happens: a qualitative case study of implementation of the incredible years evidence-based parent training program in a residential substance abuse treatment program. J Children’s Serv. 2012;7(4):233–45.
  17. Castro FG, Barrera Jr M, Martinez Jr CR. The cultural adaptation of prevention interventions: resolving tensions between fidelity and fit. Prev Sci. 2004;5(1):41–5.
  18. Griner D, Smith TB. Culturally adapted mental health intervention: a meta-analytic review. Psychother Theory Res Pract Train. 2006;43(4):531–48.
  19. Bass JK, Annan J, McIvor Murray S, Kaysen D, Griffiths S, Cetinoglu T, Wachter K, Murray LK, Bolton PA. Controlled trial of psychotherapy for Congolese survivors of sexual violence. N Engl J Med. 2013;368(23):2182–91.
  20. Schulz PM, Resick PA, Huber LC, Griffin MG. The effectiveness of cognitive processing therapy for PTSD with refugees in a community setting. Cogn Behav Pract. 2006;13(4):322.
  21. Galovski TE, Blain LM, Mott JM, Elwood L, Houle T. Manualized therapy for PTSD: flexing the structure of cognitive processing therapy. J Consult Clin Psychol. 2012;80(6):968–81.
  22. Weisz JR, Chorpita BF, Palinkas LA, Schoenwald SK, Miranda J, Bearman SK, Daleiden EL, Ugueto AM, Ho A, Martin J, et al. Testing standard and modular designs for psychotherapy treating depression, anxiety, and conduct problems in youth: a randomized effectiveness trial. Arch Gen Psychiatry. 2012;69(3):274–82.
  23. Von Ranson KM, Wallace LM, Stevenson A. Psychotherapies provided for eating disorders by community clinicians: infrequent use of evidence-based treatment. Psychother Res. 2013;23(3):333–43.
  24. Schulte D, Künzel R, Pepping G, Schulte-Bahrenberg T. Tailor-made versus standardized therapy of phobic patients. Adv Behav Res Ther. 1992;14(2):67.
  25. Kennedy MG, Mizuno Y, Hoffman R, Baume C, Strand J. The effect of tailoring a model HIV prevention program for local adolescent target audiences. AIDS Educ Prev. 2000;12(3):225–38.
  26. Cook JM, Dinnen S, Thompson R, Simiola V, Schnurr PP. Changes in implementation of two evidence-based psychotherapies for PTSD in VA residential treatment programs: a national investigation. J Trauma Stress. 2014:n/a-n/a.
  27. Glisson C, Schoenwald SK, Kelleher K, Landsverk J, Hoagwood KE, Mayberg S, Green P. Therapist turnover and new program sustainability in mental health clinics as a function of organizational culture, climate, and service structure. Adm Policy Ment Health Ment Health Serv Res. 2008;35(1–2):124.
  28. Ehlers A, Grey N, Wild J, Stott R, Liness S, Deale A, Handley R, Albert I, Cullen D, Hackmann A. Implementation of cognitive therapy for PTSD in routine clinical care: effectiveness and moderators of outcome in a consecutive sample. Behav Res Ther. 2013;51(11):742–52.
  29. Schoenwald SK, Garland AF, Chapman JE, Frazier SL, Sheidow AJ, Southam-Gerow MA. Toward the effective and efficient measurement of implementation fidelity. Adm Policy Ment Health. 2011;38(1):32–43.
  30. Surís A, Link-Malcolm J, Chard K, Ahn C, North C. A randomized clinical trial of cognitive processing therapy for Veterans with PTSD related to military sexual trauma. J Trauma Stress. 2013;26(1):28–37.
  31. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–7.
  32. Tanielian T, Jaycox L. Invisible wounds of war: psychological and cognitive injuries, their consequences, and services to assist recovery. Santa Monica: RAND Corporation; 2008.
  33. Smith MW, Barnett PG. The role of economics in the QUERI program: QUERI Series. Implement Sci. 2008;3:20.
  34. Aarons G, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health Ment Health Serv Res. 2011;38(1):4–23.
  35. Brown PM, Cameron LD, Ramondt S. Sustainability of behavioral interventions: beyond cost-effectiveness analysis. Int J Behav Med. 2015;22(3):425–33.
  36. Kessler RC. Posttraumatic stress disorder: the burden to the individual and to society. J Clin Psychiatry. 2000;61 Suppl 5:4–12.
  37. Lunney CA, Schnurr PP. Domains of quality of life and symptoms in male Veterans treated for posttraumatic stress disorder. J Trauma Stress. 2007;20(6):955–64.
  38. Ahrens J, Rexford L. Cognitive processing therapy for incarcerated adolescents with PTSD. J Aggress Maltreat Trauma. 2002;6(1):201–16.
  39. Chard KM. An evaluation of cognitive processing therapy for the treatment of posttraumatic stress disorder related to childhood sexual abuse. J Consult Clin Psychol. 2005;73(5):965.
  40. Resick PA, Nishith P, Weaver TL, Astin MC, Feuer CA. A comparison of cognitive-processing therapy with prolonged exposure and a waiting condition for the treatment of chronic posttraumatic stress disorder in female rape victims. J Consult Clin Psychol. 2002;70(4):867.
  41. Kaysen D, Lindgren K, Zangana GAS, Murray L, Bass J, Bolton P. Adaptation of cognitive processing therapy for treatment of torture victims: experience in Kurdistan, Iraq. Psychological Trauma: Theory, Research, Practice, and Policy. 2011;5(2):184.
  42. Kaysen D, Schumm J, Pedersen ER, Seim RW, Bedard-Gilligan M, Chard K. Cognitive processing therapy for veterans with comorbid PTSD and alcohol use disorders. Addict Behav. 2014;39(2):420–7.
  43. Beidas R, Kendall PC. Training therapists in evidence-based practice: a critical review of studies from a systems-contextual perspective. Clin Psychol Sci Pract. 2010;17(1):1.
  44. Herschell AD, Kolko DJ, Baumann BL, Davis AC. The role of therapist training in the implementation of psychosocial treatments: a review and critique with recommendations. Clin Psychol Rev. 2010;30(4):448–66.
  45. Rakovshik SG, McManus F. Establishing evidence-based training in cognitive behavioral therapy: a review of current empirical findings and theoretical guidance. Clin Psychol Rev. 2010;30(5):496–516.
  46. Stirman SW, Kimberly JR, Calloway A, Cook N, Castro F, Charns MP. The sustainability of new programs and interventions: a review of the empirical literature and recommendations for future research. Implement Sci. 2012;7:17.
  47. Bein E, Anderson T, Strupp HH, Henry WP, Schacht TE, Binder J, Butler S. The effects of training in time-limited dynamic psychotherapy: changes in therapeutic outcome. Psychother Res. 2000;10(2):119–32.
  48. Fauth J, Mathisen A, Smith S. Reflections on a time-limited dynamic psychotherapy training project: new directions for brief psychotherapy training. In: North American Society of Psychotherapy Research 2006 Annual Conference; Columbus, OH. 2006.
  49. Webb CA, Derubeis RJ, Barber JP. Therapist adherence/competence and treatment outcome: a meta-analytic review. J Consult Clin Psychol. 2010;78(2):200–11.
  50. Farmer CC, Mitchell KS, Parker-Guilbert K, Galovski TE. Fidelity to the cognitive processing therapy protocol: evaluation of critical elements. Behav Ther. 2016.
  51. Schoenwald SK, Sheidow AJ, Chapman JE. Clinical supervision in treatment transport: effects on adherence and outcomes. J Consult Clin Psychol. 2009;77(3):410–21.
  52. Watts BV, Shiner B, Zubkoff L, Carpenter-Song E, Ronconi JM, Coldwell CM. Implementation of evidence-based psychotherapies for posttraumatic stress disorder in VA specialty clinics. Psychiatr Serv. 2014;65(5):648–53.
  53. Aarons GA, Sommerfeld DH, Hecht DB, Silovsky JF, Chaffin MJ. The impact of evidence-based practice implementation and fidelity monitoring on staff turnover: evidence for a protective effect. J Consult Clin Psychol. 2009;77(2):270.
  54. Harned MS, Dimeff LA, Woodcock EA, Contreras I. Predicting adoption of exposure therapy in a randomized controlled dissemination trial. J Anxiety Disord. 2013;27(8):754–62.
  55. Bond GR, Drake RE, McHugo GJ, Peterson AE, Jones AM, Williams J. Long-term sustainability of evidence-based practices in community mental health agencies. Adm Policy Ment Health Ment Health Serv Res. 2014;41(2):228–36.
  56. Creed TA, Stirman SW, Evans AC, Beck AT. A model for implementation of cognitive therapy in community mental health: The Beck Initiative. Behav Therapist. 2014;37(3):56–64.
  57. Finley EP, Garcia HA, Ketchum NS, McGeary DD, McGeary CA, Stirman SW, Peterson AL. Utilization of evidence-based psychotherapies in Veterans Affairs posttraumatic stress disorder outpatient clinics. Psychol Serv. 2015;12(1):73.
  58. Stirman SW, Gutierrez-Colina A, Toder K, Castro F, Esposito G, Barg F, Beck AT, Crits-Christoph P. Clinicians’ perspectives on cognitive therapy in community mental health settings: implications for training and implementation. Adm Policy Ment Health Ment Health Serv Res. 2013;40(4):274–85.
  59. Stirman SW, Shields N, Deloriea J, Landy MSH, Belus JM, Maslej MM, Monson CM. A randomized controlled dismantling trial of post-workshop consultation strategies to increase effectiveness and fidelity to an evidence-based psychotherapy for posttraumatic stress disorder. Implement Sci. 2013;8(82):1–8.
  60. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainability amid ongoing change. Implement Sci. 2013;8(1):117.
  61. Haine-Schlagel R, Brookman-Frazee L, Janis B, Gordon J. Evaluating a learning collaborative to implement evidence-informed engagement strategies in community-based services for young children. Child Youth Care Forum. 2013;42(5):457–73.
  62. Chovil N. One small step at a time: implementing continuous quality improvement in child and youth mental health services. Child & Youth Services. 2010;31(1–2):21–34.
  63. Aarons GA, Green A, Palinkas LA, Self-Brown S, Whitaker DJ, Lutzker JR, Silovsky JF, Hecht DB, Chaffin MJ. Dynamic adaptation process to implement an evidence-based child maltreatment intervention. Implement Sci. 2012;7(1):32.
  64. LeBrasseur R, Whissell R, Ojha A. Organisational learning, transformational leadership and implementation of continuous quality improvement in Canadian hospitals. Aust J Manag. 2002;27(2):141–62.
  65. Nadeem E, Olin SS, Hill LC, Hoagwood KE, Horwitz SM. A literature review of learning collaboratives in mental health care: used but untested. Psychiatr Serv. 2014;65(9):1088–99.
  66. Raghavan R. The role of economic evaluation in dissemination and implementation research. In: Brownson RC, Colditz G, Proctor E, editors. Dissemination and implementation research in health: translating science into practice. Oxford University Press; 2012. p. 94–113.
  67. Raghavan R, Bright C, Shadoin A. Toward a policy ecology of implementation of evidence-based practices in public mental health settings. Implement Sci. 2008;3(1):26.
  68. Curran G, Bauer MS, Mittman BS, Pyne JM, Stetler CB. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012.
  69. Proctor E, Silmere H, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38(2):65–76.
  70. Spillane J, Pareja A, Dorner L, Barnes C, May H, Huff J, Camburn E. Mixing methods in randomized controlled trials (RCTs): validation, contextualization, triangulation, and control. Educ Assess Eval Account. 2010;22(1):5–28.
  71. Weathers FW, Litz BT, Keane TM, Palmieri PA, Marx BP, Schnurr PP. The PTSD Checklist for DSM-5 (PCL-5). Scale available from the National Center for PTSD website: http://www.ptsd.va.gov/professional/assessment/adult-sr/ptsd-checklist.asp; 2010.
  72. Brown CH, Ten Have TR, Jo B, Dagne G, Wyman PA, Muthén B, Gibbons RD. Adaptive designs for randomized trials in public health. Annu Rev Public Health. 2009;30:1.
  73. Kanter JW, Tsai M, Holman G, Koerner K. Preliminary data from a randomized pilot study of web-based functional analytic psychotherapy therapist training. Psychotherapy. 2013;50(2):248.
  74. Puspitasari A, Kanter JW, Murphy J, Crowe A, Koerner K. Developing an online, modular, active learning training program for behavioral activation. Psychotherapy. 2013;50(2):256.
  75. Nadeem E, Olin SS, Hill LC, Hoagwood KE, Horwitz SM. Understanding the components of quality improvement collaboratives: a systematic literature review. Milbank Q. 2013;91(2):354–94.
  76. Stirman SW, Shields N, Deloriea J, Landy MS, Belus JM, Maslej MM, Lane J, Monson CM. A randomized comparison of three post-workshop consultation strategies. In: Association for Behavioral and Cognitive Therapies; 2014; Philadelphia, PA.
  77. Institute for Healthcare Improvement. The Breakthrough Series: IHI’s Collaborative Model for Achieving Breakthrough Improvement. IHI Innovation Series white paper. Boston: Institute for Healthcare Improvement; 2003.
  78. Gustafson DH, Quanbeck AR, Robinson JM, Ford JH, Pulvermacher A, French MT, McConnell KJ, Batalden PB, Hoffman KA, McCarty D. Which elements of improvement collaboratives are most effective? A cluster-randomized trial. Addiction. 2013;108(6):1145–57.
  79. Stirman SW, Calloway A, Rasmusson AM, McDonald A, Monson CM, Resick PA. The relationship between treatment fidelity and clinical outcomes in the implementation of CPT in VA settings. In: Association for Behavioral and Cognitive Therapies; 2011; Toronto, Canada.
  80. Garvin DA, Edmondson AE, Gino F. Is yours a learning organization? Harvard Business Review. 2008;86(3):109.
  81. OQ Analyst. Salt Lake City, UT: www.OQmeasures.com.
  82. Lambert MJ, Finch AE. The outcome questionnaire. 1999.
  83. Lambert MJ. Administration and scoring manual for the OQ-45.2 (Outcome Questionnaire). OQ Measures, LLC; 2004.
  84. Kroenke K, Spitzer RL, Williams JBW. The PHQ-9. J Gen Intern Med. 2001;16:606–13. doi:10.1046/j.1525-1497.2001.016009606.x.
  85. Larsen DL, Attkisson CC, Hargreaves WA, Nguyen TD. Assessment of client/patient satisfaction: development of a general scale. Eval Program Plann. 1979;2(3):197–207.
  86. Mauskopf JA, Sullivan SD, Annemans L, Caro J, Mullins CD, Nuijten M, Orlewska E, Watkins J, Trueman P. Principles of good practice for budget impact analysis: report of the ISPOR task force on good research practices—budget impact analysis. Value Health. 2007;10(5):336–47.
  87. Vallis TM, Shaw BF, Dobson K. The cognitive therapy scale: psychometric properties. J Consult Clin Psychol. 1986;54:381–5.
  88. Stirman SW, Miller CJ, Toder K, Calloway A. Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implement Sci. 2013;8(65).
  89. Ehrhart MG, Aarons GA, Farahnak LR. Assessing the organizational context for EBP implementation: the development and validity testing of the Implementation Climate Scale (ICS). Implement Sci. 2014;9(1):157.
  90. Beidas R, Marcus S, Aarons GA, Hoagwood KE, Schoenwald SK, Evans AC, Hurford MO, Hadley T, Barg FK, Walsh LM, et al. Predictors of community therapists’ use of therapy techniques in a large public mental health system. JAMA Pediatr. 2015;169(4):374–82.
  90. Beidas R, Marcus S, Aarons GA, Hoagwood KE, Schoenwald SK, Evans AC, Hurford MO, Hadley T, Barg FK, Walsh LM, et al. Predictors of community therapists’ use of therapy techniques in a large public mental health system. JAMA Pediatr. 2015;169(4):374–82.View ArticlePubMedPubMed CentralGoogle Scholar
  91. Glisson C, Landsverk J, Schoenwald S, Kelleher K, Hoagwood K, Mayberg S, Green P. Assessing the organizational social context (OSC) of mental health services: implications for research and practice. Adm Policy Ment Health Ment Health Serv Res. 2008;35(1):98–113.View ArticleGoogle Scholar
  92. Aarons G, Ehrhart M, Farahnak L. The implementation leadership scale (ILS): development of a brief measure of unit level implementation leadership. Implement Sci. 2014;9(1):45.View ArticlePubMedPubMed CentralGoogle Scholar
  93. Cook JM, Thompson R, Schnurr PP. Perceived Characteristics of Intervention Scale: Development and Psychometric Properties. Assessment. 2015;22(6):704–14.View ArticlePubMedGoogle Scholar
  94. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.View ArticlePubMedPubMed CentralGoogle Scholar
  95. Blackwelder W. Current issues in clinical equivalence trials. J Dent Res. 2004;83 suppl 1:C113–5.View ArticlePubMedGoogle Scholar
  96. Schafer JL, Graham JW. Missing data: our view of the state of the art. Psychol Methods. 2002;7(2):147–77.View ArticlePubMedGoogle Scholar
  97. Foa EB, Yusko DA, McLean CP, et al. Concurrent naltrexone and prolonged exposure therapy for patients with comorbid alcohol dependence and PTSD: a randomized clinical trial. J Am Med Assoc. 2013;310(5):488–95.View ArticleGoogle Scholar
  98. Zalta AK, Gillihan SJ, Fisher AJ, Mintz J, McLean CP, Yehuda R, Foa EB. Change in negative cognitions associated with PTSD predicts symptom reduction in prolonged exposure. J Consult Clin Psychol. 2014;82(1):171.View ArticlePubMedGoogle Scholar
  99. Granger CW. Investigating causal relations by econometric models and cross-spectral methods. Econometrica: J Econometric Soc. 1969;37(3):424-38.Google Scholar
  100. Baldwin SA, Imel ZE, Braithwaite SR, Atkins DC. Analyzing multiple outcomes in clinical research using multivariate multilevel models. J Consult Clin Psychol. 2014;82(5):920.View ArticlePubMedPubMed CentralGoogle Scholar
  101. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale: Erlbaum; 1988.Google Scholar
  102. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–74.View ArticlePubMedGoogle Scholar
  103. Raudenbush SW. Statistical analysis and optimal design for cluster randomized trials. Psychol Methods. 1997;2(2):173.View ArticleGoogle Scholar
  104. Snijders TA. Power and sample size in multilevel linear models. Encyclopedia of Statistics in Behavioral Science. 2005;3:1570–73.Google Scholar
  105. Nadeem E, Gleacher A, Pimentel S, Hill LC, McHugh M, Hoagwood KE. The role of consultation calls for clinic supervisors in supporting large-scale dissemination of evidence-based treatments for children. Administration and Policy in Mental Health and Mental Health Services Research. 2013;40(6):530-40.Google Scholar
  106. Dennhag I, Gibbons MBC, Barber JP, Gallop R, Crits-Christoph P. How many treatment sessions and patients are needed to create a stable score of adherence and competence in the treatment of cocaine dependence? Psychother Res. 2012;22(4):475–88.View ArticlePubMedPubMed CentralGoogle Scholar
  107. Garattini L, van de Vooren K. Budget impact analysis in economic evaluation: a proposal for a clearer definition. Eur J Health Econ. 2011;12(6):499–502.View ArticlePubMedGoogle Scholar
  108. Palinkas L, Aarons G, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed method designs in implementation Research. Adm Policy Ment Health Ment Health Serv Res. 2010;38(1):44–53.View ArticleGoogle Scholar

Copyright

© The Author(s). 2017
