  • Study protocol
  • Open access

Stepped implementation-to-target: a study protocol of an adaptive trial to expand access to addiction medications



In response to the US opioid epidemic, significant national campaigns have been launched to expand access to medications for opioid use disorder (MOUD). While adoption has increased in general medical care settings, specialty addiction programs have lagged in both reach and adoption. Elevating the quality of implementation strategy research requires more precise methods for tailoring strategies rather than a one-size-fits-all approach, documenting participant engagement and fidelity to the delivery of the strategy, and conducting an economic analysis to inform decision making and policy. Research has yet to incorporate all three of these recommendations to address the challenges of implementing and sustaining MOUD in specialty addiction programs.


This project seeks to recruit 72 specialty addiction programs in partnership with the Washington State Health Care Authority and employs a measurement-based stepped implementation-to-target approach within an adaptive trial design. Programs will be exposed to a sequence of implementation strategies of increasing intensity and cost: (1) enhanced monitoring and feedback (EMF), (2) 2-day workshop, and then, if outcome targets are not achieved, randomization to either internal facilitation or external facilitation. The study has three aims: (1) evaluate the sequential impact of implementation strategies on target outcomes, (2) examine contextual moderators and mediators of outcomes in response to the strategies, and (3) document and model costs per implementation strategy. Target outcomes are organized by the RE-AIM framework and the Addiction Care Cascade.


This implementation project includes elements of a sequential multiple assignment randomized trial (SMART) design and a criterion-based design. In this innovative and efficient approach, participating programs receive only the implementation strategies they need to achieve target outcomes. Findings have the potential to inform implementation research and provide key decision-makers with evidence on how to address the opioid epidemic at a systems level.

Trial registration

This trial was registered at ClinicalTrials.gov (NCT05343793) on April 25, 2022.


The 21st Century Cures Act allocated unprecedented funding to combat the US opioid epidemic. Its mission is to reduce opioid over-prescribing, improve access to overdose rescue medications, and increase the adoption and delivery of medications for opioid use disorder (MOUD) [1, 2]. Efforts have been primarily focused on expanding access to MOUD. The Substance Abuse and Mental Health Services Administration (SAMHSA) State Targeted Response and the State Opioid Response grants to 57 US states and territories sought to expand access to methadone, buprenorphine, and naltrexone across a variety of settings and systems [2,3,4].

While indications of improved access across a variety of settings exist, an ironic gap persists [5,6,7,8,9,10]. Primary care practices, emergency departments, and the criminal justice system have increasingly adopted MOUD, but traditional specialty addiction treatment programs have not progressed. Currently, only 35.5% of addiction treatment programs offer MOUD [6]. As such, only 15% of patients in specialty addiction treatment with opioid use disorder (OUD) are receiving MOUD [5, 11]. In fact, general medical practice settings are now twice as likely to offer MOUD than specialty addiction treatment programs [12,13,14].

Specialty addiction treatment programs are under pressure to implement MOUD but face many barriers [7, 10, 15,16,17,18]. Public health systems now support MOUD in policy and financing, but multiple contextual barriers persist, including an abstinence-based philosophy that conflicts with pharmacological approaches, a lack of network connectivity with other health care organizations, and program structures and workflows based entirely on psychosocial interventions and peer recovery supports [19,20,21]. Although there has been progress among some specialty programs, most lag behind in their efforts to offer MOUD to their clients [11].

Implementation research to improve MOUD access

This study builds on our prior implementation research. In specialty addiction programs, a cluster randomized controlled trial with addiction treatment organizations located in the State of Washington utilized a multi-level strategy, NIATx (Network for the Improvement of Addiction Treatment) [22,23,24,25,26], delivered through external facilitation (NIATx-EF), to implement integrated mental health services for persons with co-occurring psychiatric and substance use disorders [27]. Overall, the NIATx-EF implementation strategy had a significant effect on improving integrated services [28] and increasing patient access to substance use and psychotropic medication, as well as decreasing wait times from diagnosis to medication receipt [29, 30]. We also found that NIATx-EF was effective, that it was more effective with greater adherence, and, surprisingly, that 10% of organizations achieved target implementation outcomes early on with only enhanced feedback using a standardized quality measure, the Dual Diagnosis Capability in Addiction Treatment (DDCAT) index.

Four system-level, naturalistic cohort evaluation studies focused on implementing MOUD in public health care systems. In Vermont, a learning collaborative implementation strategy showed increased guideline adherence and reduced variation but no change in reach or adoption [31]. Two California-based studies examined how exposure to different implementation strategies impacted reach (number of patients receiving MOUD) and adoption (number of eligible providers prescribing MOUD). While both studies showed a significant change in reach and adoption, the influence of specific implementation strategies was mixed [32,33,34]. The final study found that access to implementation support (external facilitation, learning sessions, didactic webinars, and enhanced monitoring and feedback [EMF]) resulted in significant improvement in implementation quality [35]. Although tempered by the lack of controlled experimental designs, these studies yielded two major takeaways: some programs make changes with “soft touch” implementation support, such as EMF, using standardized measures like the DDCAT and target performance measures; and offering a buffet of implementation support options is unnecessarily burdensome, inefficient, and costly.

Multi-level implementation strategies to improve MOUD access

Implementation science continues to enhance scientific rigor by systematically examining implementation strategies as the “interventions” of implementation. More recently, experts have called for implementation strategies to be examined by (1) their discrete and combined components, (2) their conceptualized mechanisms of action, (3) how they may be tailored or adapted to context or other factors, (4) how they may be delivered with fidelity, (5) how enrolled participants engage with and complete them, (6) how alternative strategies compare with one another on implementation outcomes, and (7) how much they cost [36,37,38,39]. Although exemplar studies featuring some of these expert recommendations for implementation strategies do exist, none to date features all, and none is in the critical field of implementing MOUD.

The new paradigm for implementation research (Fig. 1) in addiction health services must extend beyond just opening the “black box” of implementation context and outcomes [33]. Specifically, the quality of implementation strategy research must examine the contents of the “black box,” as depicted in the final level of Fig. 1, to better understand how and why an implementation endeavor succeeds or fails. In response to expert recommendations, more precise methods are necessary to tailor implementation strategies rather than apply a one-size-fits-all approach, to document participant engagement and fidelity to the delivery of the strategy, and to conduct economic evaluations to inform decision making and policy. Research has yet to incorporate all three of these recommendations to address the challenges of implementing and sustaining MOUD in specialty addiction programs. The Stagewise Implementation-To-Target–Medications for Addiction Treatment (SITT-MAT) study will address both the public health care and the scientific quality gaps.

Fig. 1 Opening the “black box” of implementation


Aims and objectives

The overarching goals of the SITT-MAT project are to equally advance implementation science and to solve a persistent public health care problem: access to addiction medications in specialty addiction treatment programs. The study aims to (1) evaluate the relative impact of a sequence of four implementation strategies on target outcome criteria within a stepped measurement approach that includes a randomized implementation trial, (2) examine contextual moderators and mediators of performance on target outcomes as a function of implementation strategy step, and (3) document the costs associated with participating in and delivering the sequence of implementation strategies and model costs per implementation strategy to achieve target outcome criteria.

SITT-MAT conceptual model

Advances in the characterization of context and the evaluation of outcomes demonstrate methodological progress for implementation science. Our dynamic, non-linear conceptual model is based upon matching implementation strategies to levels of contextual determinants (i.e., barriers or facilitators) to produce desired outcomes (Fig. 2) [38, 40, 41].

Fig. 2 SITT-MAT conceptual model: aligning multi-level strategies with contextual determinants

Contextual determinants can be characterized and measured at three levels: outer (systems and community), inner (the organization or setting), and individual (providers and staff). These perspectives, along with perceptions of the implemented intervention, are determinants of successful or unsuccessful implementation [40, 42]. These determinants can be assessed through the Consolidated Framework for Implementation Research (CFIR), which provides an inventory of barriers and facilitators (Table 1) across the three levels [40].

Table 1 System, organizational, and individual contextual barriers to MOUD

Implementation strategies might address potential barriers at each level to implement MOUD, but presently, no solid empirical bases for selection and tailoring based on contextual determinants exist [43, 44]. Instead, we were guided by four criteria in our selection of implementation strategies: (1) the evidence for each strategy, (2) the standard practice of using the strategy to address typical implementation problems at the determinant level, (3) our own research, and (4) the incremental effort and cost associated with each strategy.

Study design

We will deploy an adaptive implementation strategy design that incorporates a nonrandomized evaluation and a randomized implementation trial (Fig. 3). The stagewise implementation-to-target, stepped approach to implementation, with adaptation based on actual performance, is grounded in our prior research and a reasonable conceptual model. The stagewise design reflects the idea of offering a “light touch” implementation strategy to programs that require minimal support to achieve targets, and progressing to strategies that are more intense, resource-demanding, and costly only if needed. Four implementation strategies are staged sequentially, based on the program-level response (Table 2).

Fig. 3 SITT-MAT adaptive implementation strategy design

Table 2 SITT-MAT implementation strategies

Implementation strategies

The first implementation strategy is enhanced monitoring and feedback (EMF), which has some elements of the “audit and feedback” approach widely used in the Veterans Health Administration [44,45,46]. This strategy is provided to all participating programs. EMF incorporates performance data that are compiled and reflected back to programs with normative comparisons. It targets contextual determinants at the system and organizational levels. As we found in our prior studies to improve co-occurring capacity [28, 47], we hypothesize that the EMF condition, providing dashboard feedback on reach and adoption plus a measure of implementation quality, will produce quick outcomes for some programs. We predict that this strategy will most likely work for programs with fewer barriers that do not need as much support to meet target outcomes.

The second implementation strategy is the NIATx/MAT Academy, which is provided if EMF is not immediately successful. The academy is a 2-day training workshop that will bring together members of specialty addiction programs from across the state, debunk myths about MOUD and abstinence-based philosophy, and instruct on MOUD content and NIATx implementation processes. This strategy aligns with system and organizational determinants. Our prior research found a positive impact of learning sessions on reach and adoption, as well as implementation quality [31, 33, 35].

All programs that do not achieve or sustain target outcome criteria by the end of Phase 3 (12 months from baseline) will be randomized to either of the remaining two implementation strategies—NIATx Internal Facilitation (NIATx-IF) and NIATx External Facilitation (NIATx-EF). The NIATx-IF strategy features an internal change leader (employee) who will help other staff gain skills and efficacy to implement changes and address barriers in attitude and beliefs. Thus, determinants at the individual and organizational levels are the focus of the NIATx-IF implementation strategy (Fig. 2). NIATx-EF is an evidence-based implementation strategy that is widely used and studied [45]. NIATx-EF features an external expert or coach. The coach leverages their expertise and knowledge to guide staff in implementing changes, help employees gain confidence and skills, and address potential obstacles including stigma and attitudes [22, 48]. For the same reasons as NIATx-IF, NIATx-EF is tailored to issues at the organization and individual levels. IF and EF models have been systematically studied and compared [46, 49, 50]. Although there is evidence for both IF and EF, with better outcomes for EF, the cost for EF is likely greater than for IF [49]. Using the procedures employed in a previous implementation trial, we will balance the two groups before randomization [51]. We will then use a randomized cluster group trial design to compare NIATx-IF to NIATx-EF.

In our study design, programs that “graduate” to the sustainment/follow-up arm but then fail to maintain the criteria of success for two consecutive quarters will have two opportunities to rejoin the active implementation strategy phases (Fig. 3). The first opportunity where a program on Path 1 could re-enter is at the end of Phase 3 if they do not sustain improvements after receiving the EMF strategy. The second opportunity for a program on Path 2 to re-enter would be at the end of Phase 4 if they do not sustain improvements after receipt of the EMF and NIATx/MAT Academy implementation strategies. There are thus five possible paths a program might take in this study. Months in the active implementation stage range from 6 (Path 1) to 30 (Path 5). The sustainment period ranges from 12 months (Path 5) to 36 months (Path 1). The path followed by a program depends on when it meets (and sustains) implementation targets.

This study was reviewed by the Stanford University Institutional Review Board and approved as a minimal risk study (eProtocol # 66,398). The project is registered at ClinicalTrials.gov (NCT05343793). The Standards for Reporting Implementation Studies (StaRI) checklist (Additional File 1) was used [52].


We will enroll 72 addiction treatment programs from the State of Washington. Programs offering either residential (detoxification or rehabilitation) or outpatient (intensive outpatient or outpatient) levels of care are eligible. The community addiction treatment programs in Washington are comparable to addiction treatment programs nationwide in MOUD implementation. Specifically, 72% of all addiction treatment programs in the state do not offer MOUD. Therefore, despite the potential for limiting inference outside of one system, findings should be generalizable. Opioid treatment programs (i.e., methadone clinics) are not eligible because they are already engaged in MOUD.

Near the conclusion of the project, efforts will be made to examine aggregate administrative data obtained from the Washington State Health Care Authority regarding MOUD activity (reach, adoption) for the population of addiction treatment programs across the state. This will enable a naturalistic comparison that accounts for the volunteer effects of study participation and for the influence of historical factors (e.g., major changes in policy, financing, or public health circumstances such as viral epidemics) that might have impacted the entire state.

A qualitative approach to understanding the participating organizations’ experience with contextual factors, the sequence of strategies, and changes in the culture and practice would be of great methodological value. To achieve this goal, we will conduct key informant interviews with 10% of participating organizations, stratified to represent the range of participant characteristics, e.g., start-up vs. scale up, urban/rural, level of care (residential/inpatient; outpatient/intensive outpatient).

Outcomes and data collection

Three categories of measures are collected in the SITT-MAT study: (1) primary implementation outcomes (Aim 1), (2) contextual determinants (Aim 2), and (3) implementation strategy participation, fidelity, and cost measures (Aim 3). Although all measures interrelate to address the study aims, each category is principally associated with an aim as specified below.

Implementation outcomes (Aim 1)

Primary outcomes are organized by the RE-AIM taxonomy and the Addiction Care Cascade [53,54,55]. Reach, Effectiveness, Adoption, Implementation (quality of MOUD), and Maintenance (sustainment) are augmented by timely access and retention in care, the latter two being elements of the Addiction Care Cascade (Table 3). The Reach, Adoption, and Effectiveness outcomes will be collected using the identical procedure from our recently completed study [35].

Table 3 Primary outcome definitions

Implementation quality will be measured using the Integrating Medications for Addiction Treatment (IMAT) Index. The IMAT is a newer measure built upon the same framework and methodology as its widely adopted predecessors [26, 56, 57]. The IMAT, comprising 45 items clustered into seven dimensions (Additional File 2), integrates MOUD guidelines, expert consensus recommendations, the OUD care cascade, and best practice information into a team-assessment benchmark measure of current and future state MOUD capability and practice [58]. Confirmatory factor analyses support the validity of the IMAT Total Score, as well as the validity of each dimension. Reliability scores for each dimension range between 0.70 and 0.95. The IMAT has been useful to systems, organizations, and teams as an objective measure of current state MOUD practice and as a blueprint for measurable practice change. Like the DDCAT, the IMAT is transparent in depicting what is needed to score a 3 or 5 for practices seeking to improve implementation quality.

Three primary outcomes (Reach, Adoption, and Implementation [IMAT]) serve as implementation targets to determine the stagewise path of participating organizations. A program will meet the SITT-MAT adaptive assessment criteria if (1) at least 50% of its patients with OUD are receiving a MOUD (Reach), (2) the program has an integrated MOUD prescriber who is employed or contracted by the program (Adoption), and (3) the IMAT Total Score is greater than or equal to 3 (Implementation). By including a measure of intervention quality (IMAT), our study addresses the risk that implementation without fidelity is no better than usual practice, or even harmful [59, 60]. The Reach and Adoption primary outcome measures will be gathered at baseline and quarterly for the duration of the trial, through the end of the sustainment phase; the IMAT-SC will be assessed at baseline and at the end of each phase (Fig. 3).
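As an illustration, the three adaptive assessment criteria reduce to a simple decision rule. The function and field names below are hypothetical, not drawn from the study's data systems:

```python
def meets_target_criteria(reach: float,
                          has_integrated_prescriber: bool,
                          imat_total: float) -> bool:
    """Check the SITT-MAT adaptive assessment criteria for one program.

    reach: proportion of the program's patients with OUD receiving MOUD
    has_integrated_prescriber: MOUD prescriber employed/contracted (Adoption)
    imat_total: IMAT Total Score on the 1-5 scale (Implementation)
    """
    return (reach >= 0.50                  # Reach target
            and has_integrated_prescriber  # Adoption target
            and imat_total >= 3.0)         # Implementation quality target
```

A program meeting all three criteria would advance toward the sustainment/follow-up arm; any unmet criterion keeps it in the active implementation stages.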

Contextual determinants (Aim 2)

Contextual determinants are characteristics of the system, the organization, or the individuals working within the organization that influence implementation outcomes. In addition, there are physical characteristics of the organization itself, such as size, location (urban/rural), or level of care, that may influence implementation outcomes. We selected three commonly used measures associated with two major contextual determinant frameworks: the Consolidated Framework for Implementation Research (CFIR) and the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework [42].

In this study, we developed the contextual determinant inventory (CDI) following the CFIR, EPIS, and Health Equity Implementation Framework [61]. The CDI’s 42 items are organized in four levels: system, program, staff, and patient. The rating scale ranges from Strongly Disagree to Strongly Agree and includes a Does not apply option. A 43rd item is an open-ended question asking about other contextual determinants that may affect implementation but were not included in the CDI. Scores can be tallied overall, for the 42 items, by individual items, or by dimension. Like the IMAT, the CDI is a team-assessment measure completed by key staff in MOUD implementation.

Based on the EPIS framework, the Implementation Climate Scale (ICS) includes 6 subscales and 18 items scored on a 5-point scale from 0 (not at all) to 4 (very great extent) [62]. The Implementation Leadership Scale (ILS) is composed of 12 items divided into four subscales [63, 64], rated on the same 5-point scale. The ICS and ILS are surveys completed by at least 5 staff members at each organization. Each of these measures will be collected at baseline, at the four implementation intervals, and at 1 year after implementation support ends.

Implementation strategy participation, fidelity, and costs (Aim 3)

Advanced measures of implementation strategy participation, fidelity, and cost are the key study features that increase precision and scientific rigor. We will use four measures to detail the contents of the “black box” of implementation strategies: NIATx Fidelity Scale, Stages of Implementation Completion (SIC), Costs of Implementing New Strategies (COINS), and an Extended CONSORT diagram to describe how programs stepped through the different implementation strategies.

The NIATx Fidelity Scale is a 19-item observational measure of adherence to the NIATx model, rating activities on a 5-point scale from 1 (no evidence) to 5 (extensive evidence). Scoring reflects information obtained from IF and EF perspectives using a composite of interviews, review of walk-through results, change project forms, EF or IF notes, and sustainability plans. The research team will monitor NIATx delivery and formally assess fidelity using an online data collection tool with input from external facilitators (e.g., coaches) during the active NIATx-IF and NIATx-EF implementation phases. Where significant deviations are identified at the first fidelity check, corrective adjustments can be made.

The SITT-MAT adaptive implementation strategy design involves four strategies (EMF, NIATx/MAT Academy, NIATx-EF, NIATx-IF) and five possible pathways for participating programs. Therefore, the current project will include assessments of strategy adherence across the five possible paths using the Stages of Implementation Completion (SIC) and of the resources required using the Cost of Implementing New Strategies (COINS).

The SIC is an 8-stage assessment tool that is psychometrically valid and reliable [65,66,67,68,69]. The SIC is designed to measure and compare implementation strategies for scaling up proven interventions [51]. Stages range from engagement (Stage 1) to achievement of program delivery with competency (Stage 8) [67]. SIC data include a log of activities that operationalize the implementation process necessary to move toward successful program start-up and sustainment, along with their completion dates. Two scores are calculated for each SIC stage. The Proportion score calculates the proportion of key activities completed within a stage. The Duration score is calculated from the date of entry into a stage through the date of the final activity completed. The Duration score can account for activities not completed sequentially and for being in multiple stages at a given time. A third Final Stage score indicates the final stage achieved in the implementation process (Stages 1–8). The SIC has been adapted or customized for multiple evidence-based practices (EBPs) [70,71,72,73,74,75] and evaluated across different system settings [51, 76, 77].
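A minimal sketch of the two per-stage SIC scores, assuming a stage's activities are logged with completion dates; the Duration calculation here is a simplification of the SIC's entry/exit-date definitions:

```python
from datetime import date

def sic_stage_scores(activities):
    """Compute Proportion and Duration scores for one SIC stage.

    activities: list of (activity_name, completion_date or None) tuples,
    where None marks an activity that was never completed.
    """
    completed = [d for _, d in activities if d is not None]
    proportion = len(completed) / len(activities)  # share of activities done
    # Days from the first to the last completed activity; None if none done.
    duration = (max(completed) - min(completed)).days if completed else None
    return proportion, duration
```

For example, a stage with two of three activities completed between January 5 and February 10 would score a Proportion of 2/3 and a Duration of 36 days.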

The SIC has the ingredients of a map for costing implementation strategies. As a companion to the SIC, the Cost of Implementing New Strategies (COINS) was developed as a cost mapping procedure to collect implementation resource information and to assist in disentangling implementation costs from intervention costs. Since staff spend a considerable number of hours on implementation activities, the COINS was designed to calculate the fees, expenses, and person-hours necessary to complete each stage [78]. Its application will identify cost and resource differences between implementation strategies for the proposed cost analyses in Aim 3 [79, 80].

For the SITT-MAT study, the SIC and COINS will be customized using a standardized adaptation approach [72]. Each of the five possible paths of implementation strategies will be operationalized. This will allow for a comparison of implementation strategy approaches and costs across programs and possible paths. If a program stops implementation strategy support at any step, their SIC measurement will discontinue. If the next step implementation strategy is initiated, a new SIC will be used to monitor the programs’ associated efforts and costs. Online data collection systems will be created to continuously collect information about the activities and costs necessary to complete the SIC and COINS.

Data analysis

Specific aim 1

The analysis for Aim 1 will compare the randomized NIATx-IF versus NIATx-EF conditions on our primary implementation outcome: the proportion of patients receiving MOUD within 72 h of OUD diagnosis, over time. We expect 48 new patients with OUD each quarter at each of the 54 sites expected to be randomized to either IF or EF. We will use 2-level latent growth models of 10 quarters of outcome data with individual-level binary outcomes of MOUD within 72 h. Specifically, we assume that the numbers of patients who start MOUD at the i-th site and t-th time point, i = 1, …, 54 and t = 1, …, 10, are independent binomials with success probability p_it on 48 individual patient trials. Here logit(p_it) = a_i + b_i·t + ε_it, with random intercepts a_i and random slopes b_i = β_0 + β_1·F_i + ε_i, where F_i is the indicator of facilitation condition; the strategy effect is tested through the coefficient β_1.
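For intuition, the assumed data-generating model can be simulated as below. All parameter values are illustrative placeholders, not estimates from the protocol, and the residual (ε) terms are omitted for simplicity:

```python
import numpy as np

N_SITES, N_QUARTERS, N_PATIENTS = 54, 10, 48

def simulate_trial(beta0=0.05, beta1=0.04, mean_intercept=-1.4,
                   sd_intercept=0.4, sd_slope=0.02, seed=0):
    """Simulate quarterly counts of patients starting MOUD within 72 h.

    Sites are randomized 1:1 to facilitation conditions (F_i in {0, 1}).
    logit(p_it) = a_i + b_i * t, with random intercepts
    a_i ~ N(mean_intercept, sd_intercept) and random slopes
    b_i = beta0 + beta1 * F_i + N(0, sd_slope).
    """
    rng = np.random.default_rng(seed)
    F = rng.permutation(np.repeat([0, 1], N_SITES // 2))
    a = rng.normal(mean_intercept, sd_intercept, N_SITES)
    b = beta0 + beta1 * F + rng.normal(0, sd_slope, N_SITES)
    t = np.arange(N_QUARTERS)
    p = 1 / (1 + np.exp(-(a[:, None] + b[:, None] * t[None, :])))
    counts = rng.binomial(N_PATIENTS, p)  # sites x quarters count matrix
    return F, counts
```

Fitting a mixed-effects logistic model to such data and testing β_1 mirrors the planned analysis.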

To account for multiple comparisons in secondary analyses, we will use a false discovery rate correction that reflects how many subscales are assessed for each measure. In addition, we will use exploratory graphical methods, such as empirical q-q plots, to compare all the subscales’ differences by intervention condition [81]. For the analyses of the stagewise implementation-to-target outcomes, we will assess the degree to which all the sites eventually achieve the criterion, defined as an IMAT Total Score ≥ 3, the presence of an integrated prescriber for MAT, and at least 50% of patients with OUD on MOUD. We will account for this by computing the proportion of study sites that achieve criterion in their respective phase, computing two different proportions, one for each facilitation condition, much the same way that SMART designs are evaluated for overall optimal impact.

Specific aim 2

We will conduct mediation and moderation analyses in the randomized trial component. For the moderation analysis, we will assess whether the size of the patient population (i.e., bed size or annual admissions) moderates the impact by testing the interaction term of size by implementation strategy. A core mediator of access to MOUD within 72 h of OUD diagnosis is the Phase 3 IMAT score, which will be used to assess whether and how rapidly implementation occurs. For mediation, we will include the Phase 3 IMAT score as an intermediary outcome and assess the indirect effect of the implementation strategy condition on the MOUD rate through the Phase 3 IMAT score. We will use the “product of coefficients” method for analyzing impact with a Poisson model for the MOUD rate outcome, as we have shown this method is superior to the “difference of coefficients” method [82]. Contextual determinant measures (CDI, ICS, ILS) and the organizations’ characteristics will also be examined as moderators and mediators of implementation outcomes and in response to specific strategies.

Specific aim 3

We will estimate the costs for each of the five implementation strategy paths. We will focus on obtaining a complete picture of the relevant variable costs for the implementation strategies, based on staff time and other variable costs such as supplies, staff travel, related contracts, ongoing IT costs, and staff space [83]. Using the logged activities, we will estimate labor costs with wage rates from the US Bureau of Labor Statistics, which allow us to estimate costs for the local site as well as a national average. Both are useful for replication: local costs are informative to Washington State, while the national average is helpful for other systems or organizations interested in replicating this effort.

The costs will reflect the amount spent on implementation activities per time period. There is no empirical evidence on the optimal length of the time period, but there is a tension: narrow time periods require more data collection effort, while wider time periods require less effort but yield cost estimates that are less precise (and perhaps less accurate). We will use quarters (90 days) as the time period. We will track implementation activities for the major organizational actors and external team members (e.g., EFs) using the COINS. We will estimate costs for the five implementation strategy paths and the stages of implementation, as measured by the SIC. Our method will allow us to document effort as well as the time it takes to move through the stages of implementation.
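As a sketch, the quarterly variable cost of a strategy reduces to person-hours times wage rates plus other variable costs. The role names and rates below are hypothetical, not drawn from the study:

```python
def quarterly_strategy_cost(hours_by_role, wage_by_role, other_costs=0.0):
    """Variable cost of one implementation strategy for one quarter.

    hours_by_role: {role: person-hours logged via COINS activities}
    wage_by_role: {role: hourly wage, local or BLS national average}
    other_costs: supplies, travel, contracts, IT, space for the quarter
    """
    labor = sum(hours * wage_by_role[role]
                for role, hours in hours_by_role.items())
    return labor + other_costs

# Hypothetical quarter: an external facilitator plus program staff time.
cost = quarterly_strategy_cost({"coach": 10, "staff": 4},
                               {"coach": 60.0, "staff": 30.0},
                               other_costs=50.0)
```

Swapping local wage rates for BLS national averages in `wage_by_role` yields the two cost perspectives described above.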

We will then use multivariate regression models to analyze effort and costs; although these outcomes are linked, organizations often want to understand the effect on each. For the cost analyses, we will consider different models and choose the best fitting model based upon the modified Park test, Box-Cox regression, and Hosmer and Lemeshow tests [84]. We will then link the cost per strategy to the outcomes measured in specific aim 1 (i.e., “implementation cost-effectiveness,” not a full cost-effectiveness analysis).

Power calculations

For power analyses, we used Monte Carlo methods. Starting at randomization with both conditions at 20%, we have 80% power to detect a change where 54% of the patients in sites in the better facilitation condition are provided MOUD by the last quarter, compared to 45% of the patients in the poorer performing arm. This power calculation allows for moderate variation among the sites’ rates at baseline, in the slopes, and at each measurement time. The approach to the effectiveness measure, 6-month retention rate, will proceed similarly.
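A crude Monte Carlo check of this contrast, greatly simplified relative to the planned latent growth analysis (it compares only last-quarter site proportions with a two-sample test and ignores baseline and slope variation), might look like:

```python
import numpy as np

def power_last_quarter(p_better=0.54, p_worse=0.45, n_sites=27,
                       n_patients=48, n_sims=2000, crit_z=1.96, seed=1):
    """Estimate power to detect a last-quarter difference in MOUD rates.

    Simulates site-level binomial counts per arm and applies a Welch-type
    z-test to site proportions; all settings are illustrative.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.binomial(n_patients, p_better, n_sites) / n_patients
        y = rng.binomial(n_patients, p_worse, n_sites) / n_patients
        se = np.sqrt(x.var(ddof=1) / n_sites + y.var(ddof=1) / n_sites)
        if abs(x.mean() - y.mean()) / se > crit_z:
            hits += 1
    return hits / n_sims
```

Because this sketch omits between-site heterogeneity and the longitudinal model, it overstates power relative to the full simulation used in the protocol.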

We will use latent linear growth modeling to assess the impact on the secondary composite continuous outcome measures (e.g., IMAT; SIC). The impact will be identified as the difference in slopes between the two implementation strategy conditions, fit using maximum marginal likelihood and tested with a Wald-type statistic: the difference in slopes divided by its standard error. Monte Carlo simulation was used to examine how the model's parameters affect statistical power. With a type I error rate of 5% and conservative measurement error of 20% at each time point, we have 80% power to detect an effect size of 0.77 (54 randomized sites), or conservatively 0.81 (49 randomized sites). We note that in a previous head-to-head randomized implementation trial of 51 sites, we were able to detect significant differences in implementation outcomes.
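The Wald-type test on the slope difference reduces to a simple calculation once the growth model has produced slope estimates and standard errors for each condition. A minimal sketch, with illustrative numbers rather than study results:

```python
import math

def wald_slope_test(slope_a, se_a, slope_b, se_b):
    """Wald-type z statistic and two-sided p for a slope difference."""
    diff = slope_a - slope_b
    se_diff = math.sqrt(se_a**2 + se_b**2)  # assumes independent estimates
    z = diff / se_diff
    # Two-sided p value from the standard normal approximation.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical IMAT growth slopes per quarter for the two conditions.
z, p = wald_slope_test(slope_a=0.40, se_a=0.08, slope_b=0.15, se_b=0.07)
print(round(z, 2), round(p, 4))
```

In the fitted latent growth model the difference and its standard error come from the interaction term (condition × time), but the test statistic is formed the same way.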


We considered but rejected two alternative designs for this multi-level stagewise implementation strategy trial. First, we discussed a design contrasting two packaged multi-component implementation strategies tested head-to-head throughout the study for all sites, much like our earlier implementation trial across 51 counties in two states [51]. Unlike that study, it would be inefficient in the current trial to force organizations that do not need a full-blown strategy to receive one. Second, we discussed a SMART design that used multiple randomizations for sites not meeting criteria, rather than the single randomization used in this trial. While such a design would in theory yield more insights, multiple randomizations would sacrifice the expected sample size of 58 sites needed to powerfully compare NIATx-IF versus NIATx-EF. We concluded that a stepped and stagewise approach to implementation, embedded within an innovative adaptive implementation design, although unprecedented, offers a risk-reward potential for greater relevance and impact.

We also wrestled with a variety of imperfect approaches to evaluating outcomes, both as “measurement-based implementation targets” and as primary outcomes. We selected reach because it is standard in MOUD research and practice. The threshold of 50% of patients on MOUD is based on the current state (~ 15%), expert recommendations, allowance for patient preference, and benchmarks for system-level management of chronic conditions [85]. Since 50% may prove to be a minimum threshold, we may, as a contingency, adjust this proportion or instead examine reach as a continuous variable. Second, the criterion for adoption values an integrated prescriber over linked or coordinated access, which increases the likelihood that patients will not fall through the cracks between provider locations. Third, for implementation quality, we set a target IMAT Total Score of 3 or more, based on previous systems-level research with other organizational measures and our recent work with the IMAT. A score of 3 represents adequate capability; higher scores, up to 5, reflect enhanced practice, and, as a continuous measure, we expect IMAT scores to rise through the maintenance phase. Finally, as a proxy for effectiveness, the Addiction Care Cascade provides metrics for reach, timely access (< 72 h to start medication), and retention (continuous care for 6 months). New data on 6-month retention outcomes indicate that this is a bare minimum for quality MOUD care [86]. For this study, the 6-month retention rate is feasible and remains a quality standard for MOUD services research.
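The measurement-based targets above can be collected into a single check that identifies which criteria a program has not yet met, which is the decision point for stepping up to the next strategy. The field names and data structure here are hypothetical; only the thresholds come from the text.

```python
# Implementation-to-target thresholds drawn from the protocol text.
TARGETS = {
    "reach_moud": 0.50,         # >= 50% of OUD patients receiving MOUD
    "imat_total": 3.0,          # IMAT Total Score >= 3 (adequate capability)
    "timely_access_hours": 72,  # medication started within 72 hours
    "retention_months": 6,      # continuous care for 6 months
}

def unmet_targets(program):
    """Return the list of implementation-to-target criteria not yet met."""
    unmet = []
    if program["reach_moud"] < TARGETS["reach_moud"]:
        unmet.append("reach")
    if program["imat_total"] < TARGETS["imat_total"]:
        unmet.append("implementation quality")
    if program["median_hours_to_med"] > TARGETS["timely_access_hours"]:
        unmet.append("timely access")
    if program["retention_months"] < TARGETS["retention_months"]:
        unmet.append("retention")
    return unmet

program = {"reach_moud": 0.41, "imat_total": 3.2,
           "median_hours_to_med": 48, "retention_months": 6}
print(unmet_targets(program))  # ['reach']
```

An empty list would indicate the program has reached target and needs no further strategy escalation.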

Efforts to combat the US opioid epidemic have focused on expanding access to MOUD. While there are indications of improved reach and adoption, an ironic gap persists: only about one third of specialty addiction treatment organizations offer MOUD. The SITT-MAT study not only advances the science of implementation but also advances our empirical understanding of how best to respond to a substance-related epidemic, following expert recommendations to specify, tailor, and track implementation strategies while providing generalizable knowledge useful for systems and organizations. Our protocol brings the rigor of advanced implementation science approaches to the challenge of installing and sustaining MOUD in a real-world setting: specialty addiction programs. To advance implementation research, we not only open the “black box” of implementation strategies but also thoroughly study its contents. The study deploys a novel adaptive implementation strategy design that embeds the evaluation of non-randomized steps, as well as a randomized step, into the overall analysis. The approach has elements of both a sequential multiple assignment randomized trial (SMART) design [49, 50, 87,88,89] and a criterion-based design, employing a measurement-based stepped implementation-to-target approach within an adaptive trial to improve access to MOUD. The findings have the potential to advance drug abuse treatment research by identifying an optimized sequence of strategies to implement MOUD.

Availability of data and materials

Not applicable.



Abbreviations

CFIR: Consolidated Framework for Implementation Research
COINS: Costs of Implementing New Strategies
DDCAT: Dual Diagnosis Capability in Addiction Treatment
EF: External facilitation
EMF: Enhanced monitoring and feedback
EPIS: Exploration, Preparation, Implementation and Sustainment
HCA: Health Care Authority
HEAL: Helping to end addiction long term
ICS: Implementation Climate Scale
IF: Internal facilitation
ILS: Implementation Leadership Scale
IMAT: Integrating Medications for Addiction Treatment
IMAT-S: Integrating Medications for Addiction Treatment in Specialty Care
MAT: Medications for Addiction Treatment
MOUD: Medications for opioid use disorder
NIATx: Network for the Improvement of Addiction Treatment
NIDA: National Institute on Drug Abuse
OUD: Opioid use disorder
RE-AIM: Reach, Effectiveness, Adoption, Implementation and Maintenance
SIC: Stages of Implementation Completion
SITT-MAT: Stagewise Implementation-To-Target–Medications for Addiction Treatment
SMS: Sustainment Measurement System Scales


References

  1. NIH Categorical Spending - NIH Research Portfolio Online Reporting Tools (RePORT). Available from: Cited 2022 May 24.

  2. State Opioid Response Grants. Available from: Cited 2022 May 24.

  3. Collins FS, Koroshetz WJ, Volkow ND. Helping to end addiction over the long-term: the research plan for the NIH HEAL initiative. JAMA. 2018;320(2):129–30.


  4. Watson DP, Andraka-Christou B, Clarke T, Wiegandt J. Introduction to the special issue on innovative interventions and approaches to expand medication assisted treatment: seizing research opportunities made available by the opioid STR program. J Subst Abuse Treat. 2020;108:1–3.


  5. Huhn AS, Hobelmann JG, Strickland JC, Oyler GA, Bergeria CL, Umbricht A, et al. Differences in availability and use of medications for opioid use disorder in residential treatment settings in the United States. JAMA Netw Open. 2020;3(2):e1920843.


  6. Abraham AJ, Andrews CM, Harris SJ, Friedmann PD. Availability of medications for the treatment of alcohol and opioid use disorder in the USA. Neurotherapeutics. 2020;17(1):55–69.


  7. Abraham AJ, Knudsen HK, Rieckmann T, Roman PM. Disparities in access to physicians and medications for the treatment of substance use disorders between publicly and privately funded treatment programs in the United States. J Stud Alcohol Drugs. 2013;74(2):258–65.

  8. U.S. Department of Health and Human Services (HHS). HHS Awards Nearly $400 Million to Combat the Opioid Crisis. 2019. Available from: Cited 2022 May 24.

  9. HRSA. HRSA Fact Sheets. Available from: Cited 2022 May 24.

  10. Jones CM, Campopiano M, Baldwin G, McCance-Katz E. National and state treatment need and capacity for opioid agonist medication-assisted treatment. Am J Public Health. 2015;105(8):e55-63.


  11. Knudsen HK, Abraham AJ, Roman PM. Adoption and implementation of medications in addiction treatment programs. J Addict Med. 2011;5(1):21–7.


  12. Mojtabai R, Mauro C, Wall MM, Barry CL, Olfson M. Medication treatment for opioid use disorders in substance use treatment facilities. Health Aff (Millwood). 2019;38(1):14–23.


  13. Zur J, Tolbert J, Sharac J, Markus A. The role of community health centers in addressing the opioid epidemic. KFF. 2018. Available from: Cited 2022 May 24.

  14. National Academies of Sciences, Engineering, and Medicine. Medications for opioid use disorder save lives. Washington, DC: The National Academies Press; 2019. Available from: Cited 2022 May 24.

  15. Knudsen HK, Abraham AJ, Oser CB. Barriers to the implementation of medication-assisted treatment for substance use disorders: the importance of funding policies and medical infrastructure. Eval Program Plann. 2011;34(4):375–81.


  16. Knudsen HK, Roman PM, Oser CB. Facilitating factors and barriers to the use of medications in publicly funded addiction treatment organizations. J Addict Med. 2010;4(2):99–107.


  17. Knudsen HK, Havens JR, Lofwall MR, Studts JL, Walsh SL. Buprenorphine physician supply: relationship with state-level prescription opioid mortality. Drug Alcohol Depend. 2017;173(Suppl 1):S55-64.


  18. Roman PM, Abraham AJ, Knudsen HK. Using medication-assisted treatment for substance use disorders: evidence of barriers and facilitators of implementation. Addict Behav. 2011;36(6):584–9.


  19. Louie DL, Assefa MT, McGovern MP. Attitudes of primary care physicians toward prescribing buprenorphine: a narrative review. BMC Fam Pract. 2019;20(1):157.


  20. Huhn AS, Dunn KE. Why aren’t physicians prescribing more buprenorphine? J Subst Abuse Treat. 2017;78:1–7.


  21. Hutchinson E, Catlin M, Andrilla CHA, Baldwin LM, Rosenblatt RA. Barriers to primary care physicians prescribing buprenorphine. Ann Fam Med. 2014;12(2):128–33.


  22. Gustafson DH, Quanbeck AR, Robinson JM, Ford JH, Pulvermacher A, French MT, et al. Which elements of improvement collaboratives are most effective? A cluster-randomized trial. Addict Abingdon Engl. 2013;108(6):1145–57.


  23. Quanbeck AR, Gustafson DH, Ford JH, Pulvermacher A, French MT, McConnell KJ, et al. Disseminating quality improvement: study protocol for a large cluster-randomized trial. Implement Sci IS. 2011;27(6):44.


  24. Ford JH, Stumbo SP, Robinson JM. Assessing long-term sustainment of clinic participation in NIATx200: results and a new methodological approach. J Subst Abuse Treat. 2018;92:51–63.


  25. Roosa M, Scripa JS, Zastowny TR, Ford JH. Using a NIATx based local learning collaborative for performance improvement. Eval Program Plann. 2011;34(4):390–8.


  26. McGovern MP, Lambert-Harris C, McHugo GJ, Giard J, Mangrum L. Improving the dual diagnosis capability of addiction and mental health treatment services: implementation factors associated with program level changes. J Dual Diagn. 2010;6(3–4):237–50.


  27. Ford JH, Osborne EL, Assefa MT, McIlvaine AM, King AM, Campbell K, et al. Using NIATx strategies to implement integrated services in routine care: a study protocol. BMC Health Serv Res. 2018;18(1):431.


  28. Assefa MT, Ford JH, Osborne E, McIlvaine A, King A, Campbell K, et al. Implementing integrated services in routine behavioral health care: primary outcomes from a cluster randomized controlled trial. BMC Health Serv Res. 2019;19(1):749.


  29. Ford JH 2nd, Kaur A, Rao D, Gilson A, Bolt DM, Garneau HC, et al. Improving medication access within integrated treatment for individuals with co-occurring disorders in substance use treatment agencies. Implement Res Pract. 2021;2:263348952110336.


  30. Ford JH 2nd, Rao D, Gilson A, Kaur A, Chokron Garneau H, Saldana L, et al. Wait no longer: reducing medication wait-times for individuals with co-occurring disorders. J Dual Diagn. 2022;18(2):101–10.


  31. Nordstrom BR, Saunders EC, McLeman B, Meier A, Xie H, Lambert-Harris C, et al. Using a learning collaborative strategy with office-based practices to increase access and improve quality of care for patients with opioid use disorders. J Addict Med. 2016;10(2):117–23.


  32. Caton L, Assefa M, Chen I, McGovern MP. Expanding access to addiction medications for patients in primary care with opioid use disorders: an examination of the relative impact of common implementation strategies. J Addict Res Ther. 2020;11:6.


  33. Caton L, Shen H, Miele GM, Darfler K, Sandoval JR, Urada D, et al. Opening the “black box”: four common implementation strategies for expanding the use of medications for opioid use disorder in primary care. Implement Res Pract. 2021;2:263348952110058.


  34. Miele GM, Caton L, Freese TE, McGovern M, Darfler K, Antonini VP, et al. Implementation of the hub and spoke model for opioid use disorders in California: rationale, design and anticipated impact. J Subst Abuse Treat. 2020;108:20–5.


  35. Cheng H, McGovern MP, Garneau H, Hurley B, Fisher T, Copeland M, Almirall D. Expanding access to medications for opioid use disorder in primary care clinics: an evaluation of common implementation strategies and outcomes. Implement Sci Commun. 2022;3(1).

  36. Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, et al. Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health. 2019;7:3.


  37. Kirchner JE, Smith JL, Powell BJ, Waltz TJ, Proctor EK. Getting a clinical innovation into practice: an introduction to implementation strategies. Psychiatry Res. 2020;283:112467.


  38. Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, et al. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6.

  39. Powell BJ, Haley AD, Patel SV, Amaya-Jackson L, Glienke B, Blythe M, et al. Improving the implementation and sustainment of evidence-based practices in community mental health organizations: a study protocol for a matched-pair cluster randomized pilot study of the Collaborative Organizational Approach to Selecting and Tailoring Implementation Strategies (COAST-IS). Implement Sci Commun. 2020;1(1).

  40. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci IS. 2009;7(4):50.


  41. Powell BJ, Beidas RS, Lewis CC, Aarons GA, McMillen JC, Proctor EK, et al. Methods to improve the selection and tailoring of implementation strategies. J Behav Health Serv Res. 2017;44(2):177–94.


  42. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38(1):4–23.


  43. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci IS. 2015;12(10):21.


  44. Kirchner JE, Waltz TJ, Powell BJ, Smith JL, Proctor EK. Implementation Strategies. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and implementation research in health: translating science to practice. Oxford University Press; 2017. p. 245–66.

  45. Ritchie MJ, Parker LE, Edlund CN, Kirchner JE. Using implementation facilitation to foster clinical practice quality and adherence to evidence in challenged settings: a qualitative study. BMC Health Serv Res. 2017;17(1):294.


  46. Kilbourne AM, Elwy AR, Sales AE, Atkins D. Accelerating research impact in a learning health care system: VA’s quality enhancement research initiative in the choice act era. Med Care. 2017;55 Suppl 7 Suppl 1:S4-12.


  47. Chokron Garneau H, Assefa MT, Jo B, Ford JH 2nd, Saldana L, McGovern MP. Sustainment of integrated care in addiction treatment settings: primary outcomes from a cluster-randomized controlled trial. Psychiatr Serv. 2022;73(3):280–6.


  48. Baskerville NB, Liddy C, Hogg W. Systematic review and meta-analysis of practice facilitation within primary care settings. Ann Fam Med. 2012;10(1):63–74.


  49. Kilbourne AM, Almirall D, Eisenberg D, Waxmonsky J, Goodrich DE, Fortney JC, et al. Protocol: Adaptive Implementation of Effective Programs Trial (ADEPT): cluster randomized SMART trial comparing a standard versus enhanced implementation strategy to improve outcomes of a mood disorders program. Implement Sci IS. 2014;30(9):132.


  50. Kilbourne AM, Smith SN, Choi SY, Koschmann E, Liebrecht C, Rusch A, et al. Adaptive school-based Implementation of CBT (ASIC): clustered-SMART for building an optimized adaptive implementation intervention to improve uptake of mental health interventions in schools. Implement Sci IS. 2018;13(1):119.


  51. Brown CH, Chamberlain P, Saldana L, Padgett C, Wang W, Cruden G. Evaluation of two implementation strategies in 51 child county public service systems in two states: results of a cluster randomized head-to-head implementation trial. Implement Sci IS. 2014;14(9):134.


  52. Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for Reporting Implementation Studies (StaRI) statement. BMJ. 2017;6(356):i6795.


  53. Williams AR, Nunes EV, Bisaga A, Pincus HA, Johnson KA, Campbell AN, et al. Developing an opioid use disorder treatment cascade: a review of quality measures. J Subst Abuse Treat. 2018;91:57–68.


  54. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–7.


  55. Williams AR, Nunes EV, Bisaga A, Levin FR, Olfson M. Development of a cascade of care for responding to the opioid epidemic. Am J Drug Alcohol Abuse. 2019;45(1):1–10.


  56. McGovern MP, Urada D, Lambert-Harris C, Sullivan ST, Mazade NA. Development and initial feasibility of an organizational measure of behavioral health integration in medical care settings. J Subst Abuse Treat. 2012;43(4):402–9.


  57. McGovern MP, Matzkin AL, Giard J. Assessing the dual diagnosis capability of addiction treatment services: the dual diagnosis capability in addiction treatment (DDCAT) index. J Dual Diagn. 2007;3(2):111–23.


  58. Chokron Garneau H, Hurley B, Fisher T, Newman S, Copeland M, Caton L, et al. The Integrating Medications for Addiction Treatment (IMAT) Index: a measure of capability at the organizational level. J Subst Abuse Treat. 2021;1(126):108395.


  59. Woolf SH, Johnson RE. The break-even point: when medical advances are less important than improving the fidelity with which they are delivered. Ann Fam Med. 2005;3(6):545–52.


  60. Oxman TE, Schulberg HC, Greenberg RL, Dietrich AJ, Williams JW, Nutting PA, et al. A fidelity measure for integrated management of depression in primary care. Med Care. 2006;44(11):1030–7.


  61. Woodward EN, Matthieu MM, Uchendu US, Rogal S, Kirchner JE. The health equity implementation framework: proposal and preliminary study of hepatitis C virus treatment. Implement Sci. 2019;14(1):26.


  62. Ehrhart MG, Aarons GA, Farahnak LR. Assessing the organizational context for EBP implementation: the development and validity testing of the Implementation Climate Scale (ICS). Implement Sci IS. 2014;23(9):157.


  63. Aarons GA, Ehrhart MG, Torres EM, Finn NK, Roesch SC. Validation of the Implementation Leadership Scale (ILS) in substance use disorder treatment organizations. J Subst Abuse Treat. 2016;68:31–5.


  64. Aarons GA, Ehrhart MG, Farahnak LR. The Implementation Leadership Scale (ILS): development of a brief measure of unit level implementation leadership. Implement Sci IS. 2014;9(1):45.


  65. Chamberlain P, Brown CH, Saldana L. Observational measure of implementation progress in community based settings: the stages of implementation completion (SIC). Implement Sci. 2011;6(1):116.


  66. Chamberlain P, Roberts R, Jones H, Marsenich L, Sosna T, Price JM. Three collaborative models for scaling up evidence-based practices. Adm Policy Ment Health Ment Health Serv Res. 2012;39(4):278–90.


  67. Saldana L. The stages of implementation completion for evidence-based practice: protocol for a mixed methods study. Implement Sci IS. 2014;9(1):43.


  68. Saldana L, Chamberlain P. Supporting implementation: the role of community development teams to build infrastructure. Am J Community Psychol. 2012;50(3–4):334–46.


  69. Saldana L, Schaper H, Campbell M, Chapman J. Standardized measurement of implementation: the universal SIC. Implement Sci. 2015;10(S1).

  70. Dubowitz H, Saldana L, Magder LA, Palinkas LA, Landsverk JA, Belanger RL, et al. Protocol for comparing two training approaches for primary care professionals implementing the Safe Environment for Every Kid (SEEK) model. Implement Sci Commun. 2020;1(1):78.


  71. Nadeem E, Saldana L, Chapman J, Schaper H. A mixed methods study of the stages of implementation for an evidence-based trauma intervention in schools. Behav Ther. 2018;49(4):509–24.


  72. Saldana L, Bennett I, Powers D, Vredevoogd M, Grover T, Schaper H, et al. Scaling implementation of collaborative care for depression: adaptation of the Stages of Implementation Completion (SIC). Adm Policy Ment Health. 2020;47(2):188–96.


  73. Saldana L, Chamberlain P, Wang W, Brown CH. Predicting program start-up using the stages of implementation measure. Adm Policy Ment Health Ment Health Serv Res. 2012;39(6):419–25.


  74. Sterrett-Hong EM, Saldana L, Burek J, Schaper H, Karam E, Verbist AN, et al. An exploratory study of a training team-coordinated approach to implementation. Glob Implement Res Appl. 2021;1(1):17–29.


  75. Watson DP, Snow-Hill N, Saldana L, Walden AL, Staton M, Kong A, et al. A longitudinal mixed method approach for assessing implementation context and process factors: comparison of three sites from a Housing First implementation strategy pilot. Implement Res Pract. 2020;1:263348952097497.


  76. Aalsma MC, Dir AL, Zapolski TC, Hulvershorn LA, Monahan PO, Saldana L, et al. Implementing risk stratification to the treatment of adolescent substance use among youth involved in the juvenile justice system: protocol of a hybrid type I trial. Addict Sci Clin Pract. 2019;14(1):1–11.


  77. Palinkas LA, Saldana L, Chou C-P, Chamberlain P. Use of research evidence and implementation of evidence-based practices in youth-serving systems. Child Youth Serv Rev. 2017;83:242–7.


  78. Saldana L, Chamberlain P, Bradford WD, Campbell M, Landsverk J. The cost of implementing new strategies (COINS): a method for mapping implementation resources using the stages of Implementation Completion. Child Youth Serv Rev. 2014;39:177–82.


  79. Park LS, Chih M-Y, Stephenson C, Schumacher N, Brown R, Gustafson D, et al. Testing an mHealth system for individuals with mild to moderate alcohol use disorders: protocol for a type 1 hybrid effectiveness-implementation trial. JMIR Res Protoc. 2022;11(2): e31109.


  80. Saldana L, Ritzwoller DP, Campbell M, Block EP. Using economic evaluations in implementation science to increase transparency in costs and outcomes for organizational decision-makers. Implement Sci Commun. 2022;3(1):40.


  81. Brown CH. Statistical methods for preventive trials in mental health. Stat Med. 1993;12(3–4):289–300.


  82. MacKinnon DP, Lockwood CM, Brown CH, Wang W, Hoffman JM. The intermediate endpoint effect in logistic and probit regression. Clin Trials Lond Engl. 2007;4(5):499–513.


  83. Wagner TH, Yoon J, Jacobs JC, So A, Kilbourne AM, Yu W, et al. Estimating costs of an implementation intervention. Med Decis Making. 2020;40(8):959–67.


  84. Hosmer DW, Lemeshow S, Sturdivant RX. Applied logistic regression. 3rd ed. United Kingdom: Wiley; 2013.


  85. National Committee for Quality Assurance. Medicare special needs plans performance results: HEDIS 2016. Centers for Medicare & Medicaid Services Medicare Drug and Health Plan Contract Administration Group; 2017 Jan. Available from: Cited 2022 May 25.

  86. Williams AR, Samples H, Crystal S, Olfson M. Acute care, prescription opioid use, and overdose following discontinuation of long-term buprenorphine treatment for opioid use disorder. Am J Psychiatry. 2020;177(2):117–24.


  87. Lei H, Nahum-Shani I, Lynch K, Oslin D, Murphy SA. A “SMART” design for building individualized treatment sequences. Annu Rev Clin Psychol. 2012;8:21–48.


  88. Collins LM, Murphy SA, Strecher V. The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med. 2007;32(5 Suppl):S112-118.


  89. Smith SN, Almirall D, Prenovost K, Liebrecht C, Kyle J, Eisenberg D, et al. Change in patient outcomes after augmenting a low-level implementation strategy in community practices that are slow to adopt a collaborative chronic care model: a cluster randomized implementation trial. Med Care. 2019;57(7):503–11.


Acknowledgements


We would like to thank the Washington State Health Care Authority—Division of Behavioral Health staff who assisted with efforts to recruit participating community addiction treatment programs. We are grateful to all the addiction treatment programs that volunteered to participate.


Funding

This manuscript was supported by NIDA funding (R01DA052975-01A1; Drs. McGovern and Ford, principal investigators). NIDA had no role in the preparation of this report or in the decision to submit it for publication. The statements made here are those of the authors.

Author information

Authors and Affiliations



Contributions

JF and MM conceived the study design. JF, MM, HCG, HC, MG, and HF developed the study procedures and protocols. HC, HF, EM, RK, MM, and JF facilitated participant recruitment efforts. JF and MG drafted the manuscript. The authors have read, edited, and approved the final manuscript.

Corresponding author

Correspondence to James H. Ford II.

Ethics declarations

Ethics approval and consent to participate

This study was reviewed and approved by the Stanford University Institutional Review Board as a minimal-risk study (eProtocol #66398) on August 18, 2022.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. The Standards for Reporting Implementation Studies (StaRI) reporting standards and checklist.

Additional file 2. Integrating Medications for Addiction Treatment Index.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Ford, J.H., Cheng, H., Gassman, M. et al. Stepped implementation-to-target: a study protocol of an adaptive trial to expand access to addiction medications. Implementation Sci 17, 64 (2022).
