
A facilitation model for implementing quality improvement practices to enhance outpatient substance use disorder treatment outcomes: a stepped-wedge randomized controlled trial study protocol



The misuse of and addiction to opioids is a national crisis that affects public health as well as social and economic welfare. There is an urgent need for strategies to improve opioid use disorder treatment quality (e.g., 6-month retention). Substance use disorder treatment programs are challenged by limited resources and a workforce that does not have the requisite experience or education in quality improvement methods. The purpose of this study is to test a multicomponent clinic-level intervention designed to aid substance use disorder treatment clinics in implementing quality improvement processes to improve high-priority indicators of treatment quality for opioid use disorder.


A stepped-wedge randomized controlled trial with 30 outpatient treatment clinics serving approximately 2000 clients with opioid use disorder each year will test whether a clinic-level measurement-driven, quality improvement intervention, called Coaching for Addiction Recovery Enhancement (CARE), improves (a) treatment process quality measures (use of medications for opioid use disorder, in-treatment symptom and therapeutic progress, treatment retention) and (b) recovery outcomes (substance use, health, and healthcare utilization). The CARE intervention will have the following components: (1) staff clinical training and tools, (2) quality improvement and change management training, (3) external facilitation to support implementation and sustainability of quality improvement processes, and (4) an electronic client-reported treatment progress tool to support data-driven decision making and clinic-level quality measurement. The study will utilize multiple sources of data to test study aims, including state administrative data, client-reported survey and treatment progress data, and staff interview and survey data.


This study will provide the field with a strong test of a multicomponent intervention to improve providers’ capacity to make systematic changes tied to quality metrics. The study will also result in training and materials that can be shared widely to increase quality improvement implementation and enhance clinical practice in the substance use disorder treatment system.

Trial registration

Trial NCT04632238 registered at ClinicalTrials.gov on 17 November 2020


The misuse of and addiction to opioids is a national crisis that affects public health as well as social and economic welfare. Data from 2018 shows that 128 people in the USA died every day from opioid overdoses and approximately 2 million people had opioid use disorder (OUD) [1]. The opioid epidemic has strained the healthcare and substance use disorder (SUD) treatment systems. For example, from 2010 to 2017, OUD-related hospitalizations increased by 54% and emergency department visits increased by 109% [2]. During this same time, admissions to SUD treatment programs for opioid use grew by 42% [3]. The current COVID-19 pandemic has caused additional concern due to anticipated increases in opioid use and overdose deaths as well as decreases in treatment access [4,5,6,7,8,9,10].

Multiple systematic reviews have concluded that SUD treatment in the USA has large gaps in quality of care and limited capacity for systems-level improvement [11,12,13,14,15,16]. These reviews and other studies highlight a number of systemic issues in the SUD treatment system, including that (1) too few people with an SUD have access to treatment, (2) treatment is often not evidence based, and (3) treatment completion and retention rates are poor among those who do access treatment [17,18,19]. These quality gaps could be partially due to the SUD treatment workforce traditionally having limited professional education and training to provide scientifically supported practices or engage in quality improvement (QI) initiatives to change clinical outcomes [11, 14, 15]. Further, clients cite lack of flexibility and care that is not person-centered as reasons for dissatisfaction and lack of engagement in treatment [20,21,22,23]. Additionally, there have been few financial incentives to drive innovation among SUD treatment programs.

As a result of the issues described above, there have been consistent calls for new payment models that link payments to measures of quality instead of fee-for-service payment models that reward volume [11, 14]. In order to transition to such payment models, programs must incorporate or improve evidence-based practice offerings, offer more person-centered approaches, become more data- and outcome-driven, implement QI programs, and expand the use of technology [13, 24,25,26,27]. Yet, there is deep concern that SUD treatment programs cannot survive this transition and may need a great deal of support to improve their clinical and business operations [28, 29]. Therefore, the workforce could benefit from training on evidence-based practices and person-centered care, new tools to better monitor client treatment outcomes, and support to implement QI protocols.

There is limited research on how to best guide SUD treatment programs in implementing systematic QI processes [30, 31]. Challenges to implementing QI protocols in SUD treatment programs include non-prioritization of data collection, limited analytical capacity, poor IT systems, and lack of data-driven decision-making culture [32, 33]. Qualitative research has found that SUD treatment staff need basic training on data collection and use [34]. One of the most studied approaches to QI in SUD treatment, the Network for Improvement of Addiction Treatment (NIATx), emphasizes identifying key problems, engaging change leaders, and using a series of rapid cycle tests to make changes. While the NIATx approach has been shown to improve service provision in its targeted areas (i.e., wait times and retention), there are mixed results for other outcomes (e.g., increasing admissions) [34,35,36] as well as criticisms, including the level of commitment required by staff, scarce resources to implement burdensome processes, detailed data requirements, and issues with long-term sustainability [30, 34, 37]. New strategies are needed to guide SUD treatment programs through implementing sustainable QI processes that build on lessons learned from previous research and that are tailored to existing resource and workforce limitations.

Utilization of data and metrics as well as implementation of QI processes will require SUD clinics to make significant changes at various levels within their organizations. This study will use an external facilitation model guided by the Integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) framework to support clinic implementation of QI processes [38, 39]. i-PARIHS posits that optimal implementation occurs when practice facilitation promotes the acceptance and use of a new innovation by tailoring it to the recipient’s specific needs [39, 40]. Facilitators are the active ingredient that help navigate teams through complex change processes by addressing (a) the fit within the existing clinic, (b) motivations, beliefs, goals, and resources of intervention recipients, and (c) the inner and outer implementation context. In QI projects, the goal of facilitation is typically to support sustained focus and achievement of specific goals; facilitators work to enable individuals and teams to analyze, reflect, and change their way of working [41].

Patient monitoring tools are central to managing quality of care for other behavioral health conditions yet are not often systematically used in SUD treatment programs. For example, a hallmark of collaborative care models for depression is the use of evidence-based assessment tools, like the PHQ-9 depression scale, for ongoing monitoring and treatment adjustments. We recently developed an 8-item tool for SUD clinic settings, the Treatment Progress Assessment-8 (TPA8), that monitors SUD symptoms and treatment progress indicators that are associated with clinical outcomes and relapse [42]. We have been in the process of developing an electronic version, the eTPA8, to ease administration to clients as well as to provide a robust data reporting system that SUD clinics can use in client care as well as QI efforts. The eTPA8 will be part of the intervention package in the current study to support QI practices.

In this paper, we describe a study protocol for a stepped-wedge randomized controlled trial (SW-RCT) stemming from a state-academic partnership that will test a multicomponent, clinic-level intervention, called Coaching for Addiction Recovery Enhancement (CARE). We will test whether CARE improves treatment process quality and recovery outcomes over treatment as usual, particularly for OUD. The SW-RCT design was selected mainly because the intervention is conducted at the clinic-level and, from a resource perspective, it would be impractical to provide the intervention to all clinics at once given the facilitation model being utilized. Further, the stepped-wedge design was seen as more acceptable and ethical for key program stakeholders (notably the single state agency that regulates SUD treatment [SSA]) due to the potential benefits from the resources being provided as part of the intervention package (e.g., training support from the SSA) compared to a parallel randomization design. The CARE intervention includes the following components: (1) staff clinical training and tools, (2) quality improvement and change management training, (3) external facilitation to support implementation and sustainability of quality improvement, and (4) the eTPA8 client-reported treatment progress tool to support data-driven decision making and clinic-level quality measurement. See Fig. 1 for our intervention conceptual model. We describe intervention components in more detail in the method section. Study aims and hypotheses are the following:

  • Aim 1: Test the effect of CARE on treatment process quality measures and recovery outcomes.

    • Hypothesis 1: CARE will improve clinic-level rates of treatment process quality measures (i.e., initiation of and adherence to OUD medications, in-treatment symptom and therapeutic progress, 6-month treatment retention) and recovery outcomes (i.e., substance use, overdoses, self-reported health status, hepatitis C infection [HCV], and emergency department [ED] visits).

  • Aim 2: Examine association of changes in the eTPA8 client-reported treatment progress tool with treatment retention and substance use outcomes.

    • Hypothesis 2: Positive change in the eTPA8 scale will be associated with longer retention and reduced substance use at follow-up.

  • Aim 3: Examine whether client complexity at program admission (e.g., homelessness, psychiatric conditions) changes as clinics monitor quality measures, to test the impact on client access (i.e., adverse selection).

    • Hypothesis 3: Clinic rates of severe OUD (i.e., daily use and/or injection use), homelessness, and psychiatric conditions at admission during the year following the CARE intervention will decrease from baseline rates.

  • Aim 4: Examine staff attitudes, experiences, and behaviors related to (1) implementing the eTPA8 and clinical practice change, and (2) working with external facilitators to implement QI processes.

  • Aim 5: Estimate addiction program and state intervention costs of CARE and impact on Medicaid costs.

Fig. 1

Intervention conceptual model


Design and participants

We will employ a SW-RCT with 30 outpatient SUD treatment clinics that serve approximately 2000 clients with OUD per year to test whether CARE improves treatment process quality and recovery outcomes over treatment as usual. The study will benefit from multiple sources of data to test study aims, including state administrative data, client-reported survey and eTPA8 data, and staff interview and survey data. Therefore, this will be a robust study of the implementation of a measures-driven QI approach for SUD treatment. Figure 2 shows the stepped-wedge design. Each clinic will be randomly assigned to 1 of 6 steps, with 5 clinics starting CARE every 6 months over 3 years.

Fig. 2

Stepped-wedge design

Clinics are being recruited from various regions in New York State—Long Island, New York City and its suburbs, the Hudson Valley, and central New York—that have diversity in population density, resources, and racial/ethnic representation. Inclusion criteria will be outpatient treatment programs licensed by the NYS Office of Addiction Services and Supports (NYS OASAS) with a minimum of 50 OUD clients per year. For-profit clinics will be excluded because of insufficient numbers within New York and differences in organizational culture. To recruit clinics, we first identified 77 eligible clinics based on our inclusion criteria using state administrative data. The study PIs and project director then invited these identified clinics via email to informational webinars about the study, emailed them information sheets describing the study, and reached out via personal emails to clinic leaders. NYS OASAS representatives from the study regions also sent informational emails to eligible clinics. To finalize recruitment, clinics that agree to participate will sign a clinic participation agreement form. Clinic randomization will be stratified by downstate (Long Island, New York City, and its suburbs) and rest-of-state (Hudson Valley and central New York). We will use the SAS PROC PLAN procedure to generate the randomization schedule; a data analyst at NYS OASAS not associated with the study will conduct this procedure to minimize bias.

Intervention planning and pilot

This study was funded under a phased NIH award such that there was a 1-year developmental phase that was recently successfully completed, allowing for a second phase of funding for the SW-RCT described in this paper. During this 1-year developmental phase in 2019–2020, we conducted two stakeholder meetings, piloted CARE study components, tested and finalized the eTPA8 client-reported treatment progress tool, and created study materials and protocols. The stakeholder meetings were conducted to get input and feedback about the CARE intervention and included attendees from New York City, State and County agencies, clients with lived experience in SUD treatment, insurers, academic partners, and SUD treatment providers. A 3-month pilot was conducted in an outpatient clinic during which we tested components of the CARE intervention for feasibility, including engaging executive leadership, providing clinical and QI trainings, and using an external facilitator to engage clinic champions and a change team in a QI cycle. Focus groups were conducted to understand staff experiences during the CARE pilot, and intervention materials and protocols were finalized. Finally, we tested and finalized the eTPA8 client outcome monitoring tool by using cognitive testing procedures with pilot clinic clients and testing the tool on a web-based usability program [43,44,45]. We also asked pilot clinic staff for feedback on the eTPA8.

Description of the CARE intervention

The overall goal of the multicomponent, clinic-level CARE intervention is to help SUD clinics make improvements in rates of OUD medication initiation and adherence, rates of 6-month retention, and recovery outcomes by providing training and implementing QI procedures with support from an external facilitator. The CARE intervention will last 1 year for each participating clinic. Guided by phased implementation models [46], ‘active’ intervention by the external facilitator will take place during the first 6 months (see Table 1). The second 6 months will be a sustainment phase in which clinics continue QI activities with limited facilitator support (e.g., troubleshooting). Guided by the i-PARIHS model, facilitators will support clinics in making adjustments to clinical and program procedures. They will tailor their approach to fit each clinic’s workflow while addressing staff motivations and resources and attending to the inner and outer context of each clinic.

Table 1 CARE intervention components during 6-month active phase

Measures and materials

Data sources

The NYS OASAS registry of all admissions and discharges (irrespective of payer) to licensed addiction treatment facilities—the Client Data System (CDS)—forms the basis of our administrative data set. The CDS will record all treatment episodes that occur in the 30 clinics we include in this study, providing a rich source of data. All licensed providers of SUD treatment in NYS are mandated to submit admission and discharge data into the CDS. NYS OASAS tracks each individual by unique identifiers. When clients enter treatment, providers enter information on demographics, level of functioning (e.g., health status, comorbidities), criminal justice status, recent history and frequency of substance use, and recent SUD treatment history. Upon discharge, providers enter information on functional status (e.g., employment, housing), source of payment for treatment (e.g., self-pay, Medicaid, private insurance), criminal justice involvement during treatment, hospitalizations, and use of ED services. Additionally, the discharge information has type, frequency (i.e., number of sessions), and intensity of SUD treatment as well as discharge status (discharge disposition and goal achievement). NYS Medicaid claims will supplement the administrative dataset by providing data on OUD medications and health services use (e.g., ED visits). We will link clients in Medicaid to the CDS using the NYS OASAS identifier, augmented, in cases where there are no exact matches, by searching for the same treatment events recorded in each database. Additionally, we will employ probabilistic matching techniques to account for data entry errors in elements of the OASAS identifier. Our link rate in previous studies among individuals with OUD in Medicaid is 90% [47].
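The probabilistic matching step can be illustrated with a toy Fellegi-Sunter-style scoring function. The comparison fields and the m- and u-probabilities below are invented for illustration; they are not the study's actual linkage variables or parameters.

```python
from math import log2

# Illustrative comparison fields with assumed m- and u-probabilities:
#   m = P(field agrees | records are a true match)
#   u = P(field agrees | records are not a match)
FIELDS = {
    "last_name_soundex": (0.95, 0.02),
    "dob":               (0.97, 0.01),
    "gender":            (0.99, 0.50),
}

def match_weight(rec_a, rec_b):
    """Fellegi-Sunter style log2 match weight: agreement on a field adds
    log2(m/u); disagreement adds log2((1-m)/(1-u))."""
    w = 0.0
    for field, (m, u) in FIELDS.items():
        if rec_a.get(field) == rec_b.get(field):
            w += log2(m / u)
        else:
            w += log2((1 - m) / (1 - u))
    return w

# Candidate pairs scoring above a chosen weight threshold would be linked.
a = {"last_name_soundex": "S530", "dob": "1985-03-02", "gender": "F"}
b = {"last_name_soundex": "S530", "dob": "1985-03-02", "gender": "F"}
c = {"last_name_soundex": "J200", "dob": "1990-07-11", "gender": "F"}
```

In practice the field parameters and linkage threshold would be estimated from the data, and this scoring would complement the deterministic identifier and treatment-event matching described above.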

Two additional data sources will represent client-reported outcomes. First, the 8-item eTPA8 will be completed by clients in participating clinics every 2 weeks; data will be retained in the eTPA8 database for analysis. Second, a sample of 40 clients from each clinic will be recruited to complete a survey: 20 clients at the beginning of the intervention and another 20 clients 6 months after the intervention start date at their clinic. Each participant will be assessed three times: (1) at treatment entry, (2) 3 months post-entry, and (3) 6 months post-entry. We will use a mobile platform to collect these data. To recruit clients, clinic staff will present a recruitment flyer to randomly selected newly admitted clients with a primary or secondary OUD diagnosis. The flyer will describe the study and provide a number that clients can text to participate. Participants will digitally consent and complete the baseline assessment on their mobile phone. If they do not complete the assessment, they will be sent reminder text messages at 3 and 7 days. Clients will receive a text message with a link to the assessment at each follow-up point, as well as text reminders if they do not complete the follow-ups. Clients will be compensated with a $20 gift card for each assessment.

We will examine staff attitudes, experiences, and behaviors related to implementing CARE with the external facilitator. There will be three data sources: (1) surveys, (2) semi-structured interviews, and (3) external facilitator notes, surveys, and checklists [48]. Online surveys of clinic staff will be administered at baseline and 6 and 12 months post-baseline. All part- and full-time staff will be eligible. Semi-structured interviews will be conducted via phone with at least two clinical staff and one administrative staff member (i.e., clinic director or clinical supervisor) at each of the 30 participating clinics approximately 6 months after starting the CARE intervention, or until saturation is reached (no new themes or insights occurring). We will use purposeful, criteria sampling for the interviews to select staff who participated in the change team and/or implementation activities as part of CARE. We will follow best practices for phone-based qualitative interviews in health services [49, 50]. Staff members will provide verbal consent for interviews and digital consent for surveys. In order to inform replication and dissemination of this intervention model as well as to provide insight into CARE experiences, the external facilitator will complete surveys after each site visit/interaction, monthly narrative reports, and tracking of clinic participation in activities, modeled on other facilitation studies [48].


Based on treatment episode and client admission information in the CDS and claims information in Medicaid, clinic-level covariates, quality measures, and outcomes data will be computed in 6-month assessment windows. Client-level covariates from the CDS will include frequency of use, injection drug use, age, gender, race/ethnicity, education, residential stability/homelessness, and psychiatric comorbidities at admission. Medicaid claims will be used to identify OUD medications; various health conditions based on ICD-10 diagnosis fields; healthcare episodes based on procedure, rate, and NDC codes; and costs based on Medicaid payments for claims and encounters. Clinic rates of 6-month retention will be computed from admission and discharge dates in the CDS. OUD medication initiation and adherence will use data from Medicaid pharmacy claims during an episode. Initiation will be a new OUD medication pharmacy claim within 30 days and adherence will be computed using the NQF 3175 definition [51,52,53]. Overdoses will be identified by CDC-defined combinations of diagnoses and acute care procedure codes [54, 55]. All-cause ED visits will be defined by procedure and rate codes. HCV infections will be defined by diagnoses during any hospitalization or on two outpatient claims [56, 57]. Costs of healthcare services will be drawn from Medicaid claims and encounter data.

Provider-level covariates will also be calculated. SUD program characteristics will be drawn from computing provider-level statistics from claims as well as from state databases that contain addresses, affiliations, and licensing information. Provider variables will include geographical location, annual census, and hospital affiliation. Geographical information of provider location will be drawn from the Area Health Resources File: poverty, population density, ethnic/racial population, and health professional availability.

Client-reported treatment process improvement will be computed from change scores on the eTPA8 from baseline to the latest administration. Further, the client self-reported mobile survey will include the following measures: (1) past 2-week use of opioids and other substances based on the Addiction Severity Index (ASI) [58], (2) substance use consequences based on the Short Inventory of Problems-Revised (SIP-R) [59], (3) health-related quality of life based on the first item of the SF-36 [60], (4) functional outcomes using the Treatment Effectiveness Assessment [61] and past 30-day employment, incarceration, and housing status based on the ASI, (5) a visual analog scale measuring cravings to use substances [62], and (6) demographics.

Staff surveys will include measures on practice change capacity (i.e., the change process capability questionnaire) [63, 64], the Opinions About MAT Survey [65], knowledge and perceptions about person-centered care and shared-decision making, and use of data and quality metrics. An interview guide for staff qualitative interviews will be created to reflect the dimensions of the i-PARIHS model (e.g., inner context) [66].

The marginal cost of the CARE intervention will be estimated to inform future intervention development and implementation. We will collect detailed data on staffing and other resources to inform micro-costing computations and estimates of the marginal costs of deploying CARE under real-world conditions [67,68,69]. Cost data collection will follow generally accepted practice in cost-effectiveness research [67, 68]. In addition, the cost collection procedures and estimations will borrow from methods derived from process engineering [70] and managerial accounting [71,72,73,74]. Input prices (i.e., cost of each unit of resource input) will be derived from market rates when available, program budgets, or government pricing schedules (e.g., Bureau of Labor Statistics data on personnel costs for the Northeast region). The first step in the cost tracking will entail developing detailed process maps in conjunction with intervention developers and clinicians to identify all key processes [75]. The cost tracking system will track resources used to deliver the intervention as well as identify research protocol driven costs that would not be part of a real-world implementation [76]. The cost tracking system will identify intervention development costs as well as operational costs associated with running the intervention.

Analysis and power

The primary aim of this study is to test whether CARE is associated with greater use of pharmacotherapy for OUD, longer retention in care, and improved recovery outcomes (e.g., reduced drug use). We will employ multiple convergent analytical methods to (a) examine CARE implementation and (b) test the association between CARE and OUD treatment outcomes. Analyses will begin with examination of distributions of variables and transformations where appropriate, examination of patterns of missingness, and application of formal imputation methods [77,78,79] if deemed essential to project aims. Analyses for Aims 1, 2, and 3 will utilize general linear mixed models (GLMM) for stepped-wedge designs [80,81,82,83,84,85]. For example, to test the effect of CARE on treatment process quality measures (i.e., initiation of and adherence to OUD medications, in-treatment symptom and therapeutic progress, 6-month treatment retention) and recovery outcomes (substance use, overdoses, self-reported health status, HCV infection, and ED visits), we will employ stacked difference-in-difference models that test for within- and across-clinic changes while adjusting for secular trends across time [80, 86, 87]. The models will take the form: Y_ikt = μ + X_ktθ + β_t + Z_iktγ + α_k + e_ikt, where β_t is a fixed effect for time, Z_ikt is a matrix of covariates measuring individual, geographical, and clinic characteristics for person i in clinic k at time t (with coefficient vector γ), X_kt is an indicator for intervention in clinic k at time t (coded 0 at baseline and 1 from when the intervention begins in a clinic through the end of the study), and θ is the treatment effect. Y_ikt represents a matrix of binary variables: treatment process quality measures (i.e., pharmacotherapy initiation and adherence, in-treatment symptom and therapeutic progress, treatment retention) and recovery outcomes (i.e., substance use, overdoses, self-reported health status, HCV infection, and ED visits).
We will estimate the effect of CARE (θ) while controlling for secular trends (β_t) and adjusting for clustering within clinics (α_k) using robust standard errors. We will also conduct a set of falsification tests. These will include (1) testing for variation in program and client characteristics across time of enrollment; (2) examination of variation across study time in the study outcomes for clinics not participating in the study; and (3) examination of variation in the intervention effect based on clinics’ baseline levels of quality measures (e.g., comparing the top third to the bottom third in quality of care at baseline).
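As a sketch of the estimation logic (a clinic-level fixed-effects approximation, not the authors' exact GLMM or robust-variance specification), the model above can be demonstrated on simulated stepped-wedge data; every parameter value here is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_stepped_wedge(n_clinics=30, n_steps=6, m=132, theta=0.5):
    """Simulate cluster-period means Y_kt for a stepped-wedge design in
    which clinics cross over to the intervention in 6 waves of 5."""
    crossover = np.repeat(np.arange(1, n_steps + 1), n_clinics // n_steps)
    alpha = rng.normal(0, 0.3, n_clinics)        # clinic effects (alpha_k)
    beta = np.linspace(0, 0.2, n_steps + 1)      # secular trend (beta_t)
    rows = []
    for k in range(n_clinics):
        for t in range(n_steps + 1):
            x = 1.0 if t >= crossover[k] else 0.0   # intervention indicator X_kt
            y = 1.0 + theta * x + beta[t] + alpha[k] + rng.normal(0, 0.5, m)
            rows.append((k, t, x, y.mean()))
    return np.array(rows)

def estimate_theta(data, n_clinics=30, n_steps=6):
    """OLS with clinic and period fixed effects: recovers theta from
    Y_kt = mu + theta * X_kt + beta_t + alpha_k + e_kt."""
    k, t = data[:, 0].astype(int), data[:, 1].astype(int)
    x, y = data[:, 2], data[:, 3]
    design = np.column_stack(
        [np.ones(len(y)), x]
        + [(t == j).astype(float) for j in range(1, n_steps + 1)]   # period dummies
        + [(k == c).astype(float) for c in range(1, n_clinics)]     # clinic dummies
    )
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef[1]   # coefficient on the intervention indicator

theta_hat = estimate_theta(simulate_stepped_wedge())
```

With 30 clinics, 7 measurement periods, and 132 clients per cluster-period, the estimate lands close to the simulated effect of 0.5; individual-level covariates (Z_ikt) and clustered standard errors are omitted for brevity.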

To examine Aim 4, we will use a concurrent triangulation mixed-methods design such that staff quantitative and qualitative data collection will occur concurrently and results will be integrated after analysis of each [88]. The semi-structured interviews will be transcribed and analyzed using conventional qualitative content analysis [89] following the process outlined in Erlingsson and Brysiewicz [90]. ATLAS.ti will be used to manage the data. A codebook will initially be created based on i-PARIHS constructs and will be modified and amended as coding and analysis continue. For the quantitative results, multilevel modeling will be used to examine change over time in survey scores given the nested structure of the data (staff nested within clinics) [91].

For Aim 5, the cost analysis will focus on the provider and state agency perspective as the main decision-maker; consequently, other societal costs are not included in this preliminary analysis. Decision analytical models will be created using measures of central tendency (e.g., means, medians) of observed resource inputs as well as measured estimates of variances for sensitivity analyses. Using TreeAge Pro [92], we will estimate average costs as well as confidence intervals based on simulations drawing from variances of model inputs. We will conduct a series of one-way and two-way sensitivity analyses across model inputs to determine the impact of variability of individual model parameters. Additionally, we will use Monte Carlo simulation to build confidence intervals over which the cost of the intervention may vary. Costs will be presented as ratios: per clinic, per participant, and per unit of each quality measure.
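A minimal sketch of the Monte Carlo piece of this plan (the simulation-based confidence interval over intervention cost). The resource categories, quantities, and unit prices are hypothetical placeholders, not the study's actual micro-costing inputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-clinic resource inputs: (mean quantity, SD, unit price $).
# Real inputs would come from the study's process maps and cost tracking.
INPUTS = {
    "facilitator_hours": (120.0, 20.0, 65.0),
    "training_hours":    (40.0, 8.0, 80.0),
    "etpa8_licenses":    (12.0, 2.0, 150.0),
}

def simulate_per_clinic_cost(n_draws=10_000):
    """Draw resource quantities from their assumed distributions and
    return total per-clinic cost for each simulation draw."""
    total = np.zeros(n_draws)
    for mean, sd, price in INPUTS.values():
        qty = np.clip(rng.normal(mean, sd, n_draws), 0.0, None)  # no negative use
        total += qty * price
    return total

draws = simulate_per_clinic_cost()
point_estimate = draws.mean()
ci_low, ci_high = np.percentile(draws, [2.5, 97.5])
```

One-way and two-way sensitivity analyses then amount to re-running the simulation with one or two inputs fixed at their low or high values.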

We will also examine the association between CARE and total Medicaid costs. Costs will be modeled using generalized gamma models (GGM), which offer flexibility in modeling non-Gaussian outcomes by computing three parameters to fit the observed distribution: location, scale, and shape [93, 94]. One advantage of GGM is that it is not subject to the retransformation bias stemming from heteroscedasticity in the log-scale often associated with the more common OLS regression on log-transformed healthcare costs [94,95,96]. We will consider other GLM approaches to analyzing costs and subject these to goodness of fit tests as outlined by Manning [97].
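For intuition, here is one of the simpler GLM alternatives mentioned above (a gamma GLM with log link fit by IRLS), not the three-parameter generalized gamma model itself; the simulated cost data and the 0.4 log-scale treatment effect are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def gamma_glm_log_link(X, y, n_iter=25):
    """Fit a gamma GLM with log link by IRLS. For this family/link pair the
    IRLS weights are constant, so each iteration is an OLS fit to the
    working response z = eta + (y - mu) / mu."""
    beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)  # log-scale start
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu
        beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return beta

# Simulated right-skewed "Medicaid costs": treatment multiplies mean cost
# by exp(0.4); gamma shape 2 gives the heavy right tail typical of costs.
n = 5000
treated = rng.integers(0, 2, n).astype(float)
mean_cost = np.exp(7.0 + 0.4 * treated)
y = rng.gamma(2.0, mean_cost / 2.0)       # gamma(shape, scale); mean = mean_cost
X = np.column_stack([np.ones(n), treated])
beta = gamma_glm_log_link(X, y)           # beta[1] estimates the 0.4 effect
```

Because the model is fit on the natural scale with a log link, the coefficient is directly interpretable as a log cost ratio, avoiding the retransformation step (and its heteroscedasticity bias) required by OLS on log-transformed costs.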

To estimate study power for select outcomes, we drew from methods developed by Hussey and Hughes [80] as implemented in the Stata command steppedwedge [98]. Using administrative data to estimate sample sizes and baseline rates (μ), we computed detectable differences for power (1 − β) = 0.80, α = 0.01, ICC/ρ = 0.20, and a SW-RCT with 6 crossover points and 5 clinics randomized at each point. For 6-month retention, assuming step sample sizes (m_k) of 132 and a base rate (μ) of 25%, we will be able to find a 2.8 percentage point change in retention (i.e., 27.8%) to be statistically significant and clinically meaningful. We also conducted sensitivity analyses varying ρ (0.10–0.30) and power (0.90) and found detectable differences ranging from 2.6 to 3.2%. For OUD medication initiation, we assume m_k = 40 Medicaid clients with OUD and μ = 0.40, and find a detectable difference of 6.7%. For ED visits, we assume m_k = 79 Medicaid clients and μ = 0.40, and find a detectable difference of 3.9%. Consequently, with 30 clinics and 6 steps in our SW-RCT, we will have power to detect moderate and clinically meaningful differences in outcomes.
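The detectable-difference logic can be sketched with our reading of the Hussey and Hughes closed-form variance for a cross-sectional stepped-wedge design; treat this as an illustrative approximation (the design matrix, variance decomposition, and critical value are our assumptions), not a validated power calculation:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def hh_power(effect, n_clinics=30, n_steps=6, m=132, p0=0.25,
             icc=0.20, alpha=0.01):
    """Approximate power for a binary outcome using a Hussey-Hughes-style
    variance formula applied to cluster-period means."""
    T = n_steps + 1                              # periods incl. baseline
    I = n_clinics
    X = np.zeros((I, T))                         # treatment indicators X_kt
    per_wave = I // n_steps
    for i in range(I):
        X[i, (i // per_wave) + 1:] = 1.0         # wave g crosses over at period g+1
    sigma2_ind = p0 * (1.0 - p0)                 # individual-level variance
    tau2 = icc * sigma2_ind / (1.0 - icc)        # between-clinic variance
    s2 = sigma2_ind / m                          # variance of a cluster-period mean
    U = X.sum()
    W = (X.sum(axis=0) ** 2).sum()
    V = (X.sum(axis=1) ** 2).sum()
    var_theta = (I * s2 * (s2 + T * tau2)) / (
        (I * U - W) * s2 + (U**2 + I * T * U - T * W - I * V) * tau2
    )
    z_crit = 2.576 if alpha == 0.01 else 1.960   # two-sided critical value
    return norm_cdf(abs(effect) / sqrt(var_theta) - z_crit)
```

Searching over effect sizes for the smallest one reaching a target power then yields a detectable difference for each outcome.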


This study will provide a robust examination of the implementation of a measures-driven QI approach for SUD treatment. We will be able to provide insight into whether a multicomponent intervention that includes clinical training, training and support on QI processes, external facilitation, and an electronic client outcomes monitoring system improves quality and recovery outcomes. The study benefits from a strong academic-state partnership, an external facilitation model guided by the i-PARIHS implementation science framework, multiple sources of data to examine impacts of the intervention, as well as tools and trainings developed with input from a variety of stakeholders and a pilot during the first phase of this project.

The study team is currently finalizing clinic recruitment and randomization, with plans to start the first five clinics in the intervention in December 2020 and January 2021. The ongoing COVID-19 pandemic has forced a shift in SUD treatment service delivery, which has traditionally relied heavily on in-person services. Outpatient treatment clinics have had to rapidly and fundamentally change their service delivery models due to physical distancing mandates and new COVID-19 safety protocols. For example, programs have needed to change their workflows to accommodate telepractice for induction and management of medications for OUD, provide individual and group counseling via telemedicine, reduce reliance on drug screening (i.e., toxicology testing), and employ new strategies for treatment engagement. Therefore, while the pandemic continues, this study will not only adjust some of its facilitation and training activities to be conducted remotely rather than in person, but will also tailor facilitation to assist clinics in improving quality of care in the context of COVID-19-related shifts in service delivery and workflows. We will carefully document changes to our protocols and clinic procedures as the COVID-19 pandemic continues to affect NYS and our participating project sites.

While there are a number of strengths to this study, we acknowledge the following possible exogenous and endogenous threats to validity. First, administrative data are incomplete. We rely heavily on administrative data to increase the efficiency of our study, which tests the impact of CARE on treatment quality measures (e.g., pharmacotherapy use, retention) and outcomes (e.g., relapse) in 30 clinics. While there is strong precedent for using administrative data in research [99,100,101,102,103,104,105,106], these data will not capture all aspects of client experience (e.g., substance use, social consequences). To address this limitation, we are using a mobile platform that allows for collection of research data from a subsample of participants. These data will allow us to further study the intervention's impact on recovery outcomes not captured in the administrative data.

In addition, there may be variability in the effect of the intervention due to variation in individual clinics' capacity to integrate QI practices within their organizational structures. It will be critically important for future dissemination efforts to understand the factors implicated in the implementation of CARE. While the study has adequate statistical power to detect a main effect, it will be important to understand variability in the effect across clinics. To address this, we will draw on the embedded mixed-methods component of the study, which examines staff experiences. Additionally, we will explore potential intervention-related factors by conducting a series of post hoc analyses.

Finally, though the SW-RCT design offers a number of benefits, clinics will be randomized to a step at the start of the trial, with start dates staggered across 3 years. It may therefore be challenging to retain recruited clinics in the study while they wait to start the intervention. To reduce clinic attrition, we will employ strategies recommended in the stepped-wedge trial literature (e.g., keeping recruited sites informed of progress, holding regular pre-intervention meetings) [107]. In addition, recruited clinics will sign a study agreement indicating that they understand the randomization procedures and timeline.

In summary, SUD clinics are under pressure to transition into a system that values quality metrics and outcomes. However, the workforce has been under-resourced and undertrained in processes that could assist providers in making systemic changes to improve clinic functioning and client outcomes. This study will provide the field with a strong test of a multicomponent intervention to improve providers’ capacity to make systematic changes tied to quality metrics. The study will also result in training and materials that can be shared widely to improve QI implementation and clinical practice in the SUD treatment system.

Availability of data and materials

The full protocol can be accessed by contacting the corresponding author.



Abbreviations

CARE: Coaching for Addiction Recovery Enhancement

ED: Emergency department

GGM: Generalized gamma models

GLMM: General linear mixed models

HCV: Hepatitis C virus infection

i-PARIHS: Integrated Promoting Action on Research Implementation in Health Services

NIATx: Network for Improvement of Addiction Treatment

OASAS: New York State Office of Addiction Services and Supports

OUD: Opioid use disorder

QI: Quality improvement

SUD: Substance use disorder

SW-RCT: Stepped-wedge randomized controlled trial

TPA-8: Treatment Progress Assessment-8


  1. SAMHSA. Key substance use and mental health indicators in the United States: results from the 2018 National Survey on Drug Use and Health. Rockville, MD: Center for Behavioral Health Statistics and Quality, Substance Abuse and Mental Health Services Administration; 2019. Report No.: HHS Publication No. PEP19-5068, NSDUH Series H-54.

  2. AHRQ. HCUP Fast Stats: opioid-related hospital use. 2020 [Available from:

  3. SAMHSA. Treatment Episode Data Set (TEDS) 2017: admissions and discharges from publicly-funded substance use treatment. Rockville, MD; 2019.

  4. Czeisler M, Lane RI, Petrosky E, et al. Mental health, substance use, and suicidal ideation during the COVID-19 pandemic—United States, June 24–30, 2020. MMWR Morb Mortal Wkly Rep. 2020;69:1049–57.

  5. Volkow ND. Collision of the COVID-19 and addiction epidemics. Ann Intern Med. 2020;173(1):61–2.

  6. Slavova S, Rock P, Bush HM, Quesinberry D, Walsh SL. Signal of increased opioid overdose during COVID-19 from emergency medical services data. Drug Alcohol Depend. 2020;214:108176.

  7. Alexander GC, Stoller KB, Haffajee RL, Saloner B. An epidemic in the midst of a pandemic: opioid use disorder and COVID-19. Ann Intern Med. 2020;173(1):57–8.

  8. Murphy SM, Yoder J, Pathak J, Avery J. Healthcare utilization patterns among persons who use drugs during the COVID-19 pandemic. J Subst Abuse Treat. 2020:108177.

  9. Linas BP, Savinkina A, Barbosa C, Mueller PP, Cerdá M, Keyes K, et al. A clash of epidemics: impact of the COVID-19 pandemic response on opioid overdose. J Subst Abuse Treat. 2021;120:108158.

  10. Ahmad FB, Rossen LM, Sutton P. Provisional drug overdose death counts. National Center for Health Statistics; 2020.

  11. Institute of Medicine. Improving the quality of health care for mental and substance-use conditions. Washington, DC: National Academies Press; 2006.

  12. Office of the Surgeon General. Facing addiction in America: the Surgeon General's report on alcohol, drugs, and health. Washington, DC: US Department of Health and Human Services; 2016.

  13. Padwa H, Urada D, Gauthier P, Rieckmann T, Hurley B, Crevecouer-MacPhail D, et al. Organizing publicly funded substance use disorder treatment in the United States: moving toward a service system approach. J Subst Abuse Treat. 2016;69:9–18.

  14. National Center on Addiction and Substance Abuse at Columbia University. Addiction medicine: closing the gap between science and practice. June 2012.

  15. England MJ, Butler AS, Gonzalez ML, editors. Psychosocial interventions for mental and substance use disorders: a framework for establishing evidence-based standards. Washington, DC: National Academies Press; 2015.

  16. McLellan AT, Lewis DC, O'Brien CP, Kleber HD. Drug dependence, a chronic medical illness: implications for treatment, insurance, and outcomes evaluation. JAMA. 2000;284(13):1689–95.

  17. Carroll KM. New methods of treatment efficacy research: bridging clinical research and clinical practice. Alcohol Health Res World. 1997;21(4):352–9.

  18. Stark MJ. Dropping out of substance abuse treatment: a clinically oriented review. Clin Psychol Rev. 1992;12(1):93–116.

  19. Mattson ME, Del Boca FK, Carroll KM, Cooney NL, DiClemente CC, Donovan D, et al. Compliance with treatment and follow-up protocols in Project MATCH: predictors and relationship to outcome. Alcohol Clin Exp Res. 1998;22(6):1328–39.

  20. Kelly SM, O'Grady KE, Mitchell SG, Brown BS, Schwartz RP. Predictors of methadone treatment retention from a multi-site study: a survival analysis. Drug Alcohol Depend. 2011;117(2-3):170–5.

  21. Laudet AB, Stanick V, Sands B. What could the program have done differently? A qualitative examination of reasons for leaving outpatient treatment. J Subst Abuse Treat. 2009;37(2):182–90.

  22. Teruya C, Schwartz RP, Mitchell SG, Hasson AL, Thomas C, Buoncristiani SH, et al. Patient perspectives on buprenorphine/naloxone: a qualitative study of retention during the Starting Treatment with Agonist Replacement Therapies (START) study. J Psychoactive Drugs. 2014;46(5):412–26.

  23. Thylstrup B. Numbers and narratives. Relations between patient satisfaction, retention, outcome and program factors in outpatient substance abuse treatment. Nordic Stud Alcohol Drugs. 2011;28(5-6):471–86.

  24. Buck JA. The looming expansion and transformation of public substance abuse treatment under the Affordable Care Act. Health Aff (Millwood). 2011;30(8):1402–10.

  25. Roy AK, Miller MM. The medicalization of addiction treatment professionals. J Psychoactive Drugs. 2012;44(2):107–18.

  26. Pating DR, Miller MM, Goplerud E, Martin J, Ziedonis DM. New systems of care for substance use disorders: treatment, finance, and technology under health care reform. Psychiatr Clin North Am. 2012;35(2):327–56.

  27. Soper MH, Matulis R, Menschner C. Moving toward value-based payment for Medicaid behavioral health services. Center for Health Care Strategies, Inc.; 2017.

  28. Andrews C, Abraham A, Grogan CM, Pollack HA, Bersamira C, Humphreys K, et al. Despite resources from the ACA, most states do little to help addiction treatment programs implement health care reform. Health Aff (Millwood). 2015;34(5):828–35.

  29. O’Grady MA, Lincourt P, Gilmer E, Kwan M, Burke C, Lisio C, et al. How are substance use disorder treatment programs adjusting to value-based payment? A statewide qualitative study. Subst Abuse Res Treat. 2020;14:1178221820924026.

  30. Hunter SB, Ober AJ, Paddock SM, Hunt PE, Levan D. Continuous quality improvement (CQI) in addiction treatment settings: design and intervention protocol of a group randomized pilot study. Addict Sci Clin Pract. 2014;9(1):4.

  31. Hunter SB, Rutter CM, Ober AJ, Booth MS. Building capacity for continuous quality improvement (CQI): a pilot study. J Subst Abuse Treat. 2017;81:44–52.

  32. Wisdom JP, Ford JH II, Hayes RA, Edmundson E, Hoffman K, McCarty D. Addiction treatment agencies’ use of data: a qualitative assessment. J Behav Health Serv Res. 2006;33(4):394–407.

  33. Ford JH II, Wise M, Wisdom JP. A peek inside the box: how information flows through substance abuse treatment agencies. J Technol Hum Serv. 2010;28(3):121–43.

  34. Crèvecoeur-MacPhail D, Bellows A, Rutkowski BA, Ransom L, Myers AC, Rawson RA. “I've been NIATxed”: participants' experience with process improvement. J Psychoactive Drugs. 2010;42(sup6):249–59.

  35. Gustafson DH, Quanbeck AR, Robinson JM, Ford JH 2nd, Pulvermacher A, French MT, et al. Which elements of improvement collaboratives are most effective? A cluster-randomized trial. Addiction. 2013;108(6):1145–57.

  36. McCarty D, Gustafson DH, Wisdom JP, Ford J, Choi D, Molfenter T, et al. The Network for the Improvement of Addiction Treatment (NIATx): enhancing access and retention. Drug Alcohol Depend. 2007;88(2-3):138–45.

  37. Fields D, Knudsen HK, Roman PM. Implementation of Network for the Improvement of Addiction Treatment (NIATx) processes in substance use disorder treatment centers. J Behav Health Serv Res. 2016;43(3):354–65.

  38. Kitson A, Harvey G. Facilitating an evidence-based innovation into practice. In: Implementing evidence-based practice in healthcare: a facilitation guide. 2015. p. 85.

  39. Harvey G, Kitson A. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implement Sci. 2016;11(1):33.

  40. Swindle T, Johnson SL, Whiteside-Mansell L, Curran GM. A mixed methods protocol for developing and testing implementation strategies for evidence-based obesity prevention in childcare: a cluster randomized hybrid type III trial. Implement Sci. 2017;12(1):90.

  41. Stetler CB, Damschroder LJ, Helfrich CD, Hagedorn HJ. A guide for applying a revised version of the PARIHS framework for implementation. Implement Sci. 2011;6(1):99.

  42. O’Grady MA, Lincourt P, Hussain S, Gilmer E, Neighbors CJ. An instrument for assessing progress in substance use disorder treatment: a pilot study of initial reliability and factor structure of the Treatment Progress Assessment-8. J Addict Dis. 2020;38(1):49–54.

  43. Jaspers MWM, Steen T, van den Bos C, Geenen M. The think aloud method: a guide to user interface design. Int J Med Inform. 2004;73(11):781–95.

  44. Willis G. Cognitive interviewing: a tool for improving questionnaire design. Thousand Oaks: Sage Publications; 2004.

  45. Gill J. Alcohol problems in employment: epidemiology and responses. Alcohol Alcohol. 1994;29(3):233–48.

  46. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38(1):4–23.

  47. Neighbors CJ, Yerneni R, O'Grady MA, Sun Y, Morgenstern J. Recurrent use of inpatient withdrawal management services: characteristics, service use, and cost among Medicaid clients. J Subst Abuse Treat. 2018;92:77–84.

  48. Shelley DR, Ogedegbe G, Anane S, Wu WY, Goldfeld K, Gold HT, et al. Testing the use of practice facilitation in a cluster randomized stepped-wedge design trial to improve adherence to cardiovascular disease prevention guidelines: HealthyHearts NYC. Implement Sci. 2016;11(1):88.

  49. Drabble L, Trocki KF, Salcedo B, Walker PC, Korcha RA. Conducting qualitative interviews by telephone: lessons learned from a study of alcohol use among sexual minority and heterosexual women. Qual Soc Work. 2016;15(1):118–33.

  50. Smith EM. Telephone interviewing in healthcare research: a summary of the evidence. Nurse Res. 2005;12(3):32–41.

  51. Ruetsch C, Tkacz J, Nadipelli VR, Brady BL, Ronquest N, Un H, et al. Heterogeneity of nonadherent buprenorphine patients: subgroup characteristics and outcomes. Am J Manag Care. 2017;23(6):e172–e9.

  52. Peterson AM, Nau DP, Cramer JA, Benner J, Gwadry-Sridhar F, Nichol M. A checklist for medication compliance and persistence studies using retrospective databases. Value Health. 2007;10(1):3–12.

  53. National Quality Forum. Continuity of pharmacotherapy for opioid use disorder [Internet]. National Quality Forum; 2018 [Available from:

  54. Center for the Application of Prevention Technologies. Using International Classification of Diseases (ICD) codes to assess opioid-related overdose deaths. Substance Abuse and Mental Health Services Administration; September 4, 2018.

  55. Rudd RA, Seth P, David F, Scholl L. Increases in drug and opioid-involved overdose deaths—United States, 2010-2015. MMWR Morb Mortal Wkly Rep. 2016;65(50-51):1445–52.

  56. Kabiri M, Chhatwal J, Donohue JM, Roberts MS, James AE, Dunn MA, et al. Long-term disease and economic outcomes of prior authorization criteria for hepatitis C treatment in Pennsylvania Medicaid. Healthcare. 2017;5(3):105–11.

  57. Isenhour CJ, Hariri SH, Hales CM, Vellozzi CJ. Hepatitis C antibody testing in a commercially insured population, 2005-2014. Am J Prev Med. 2017;52(5):625–31.

  58. McLellan AT, Kushner H, Metzger D, Peters R, Smith I, Grissom G, et al. The fifth edition of the Addiction Severity Index. J Subst Abuse Treat. 1992;9(3):199–213.

  59. Alterman AI, Cacciola JS, Ivey MA, Habing B, Lynch KG. Reliability and validity of the alcohol Short Index of Problems and a newly constructed drug Short Index of Problems. J Stud Alcohol Drugs. 2009;70(2):304–7.

  60. Ware JE Jr, Kosinski M, Keller SD. A 12-item Short-Form Health Survey: construction of scales and preliminary tests of reliability and validity. Med Care. 1996;34(3):220–33.

  61. Ling W, Farabee D, Liepa D, Wu LT. The Treatment Effectiveness Assessment (TEA): an efficient, patient-centered instrument for evaluating progress in recovery from addiction. Subst Abuse Rehabil. 2012;3(1):129–36.

  62. Drobes DJ, Thomas SE. Assessing craving for alcohol. Alcohol Res Health. 1999;23(3):179–86.

  63. Nutting PA, Crabtree BF, Stewart EE, Miller WL, Palmer RF, Stange KC, et al. Effect of facilitation on practice outcomes in the National Demonstration Project model of the patient-centered medical home. Ann Fam Med. 2010;8(Suppl 1):S33–44, S92.

  64. Solberg LI, Asche SE, Margolis KL, Whitebird RR. Measuring an organization's ability to manage change: the change process capability questionnaire and its use for improving depression care. Am J Med Qual. 2008;23(3):193–200.

  65. Friedmann PD, Wilson D, Knudsen HK, Ducharme LJ, Welsh WN, Frisman L, et al. Effect of an organizational linkage intervention on staff perceptions of medication-assisted treatment and referral intentions in community corrections. J Subst Abuse Treat. 2015;50:50–8.

  66. Green CA, McCarty D, Mertens J, Lynch FL, Hilde A, Firemark A, et al. A qualitative study of the adoption of buprenorphine for opioid addiction treatment. J Subst Abuse Treat. 2014;46(3):390–401.

  67. Gold MR, Siegel J, Russell LB, Weinstein MC, editors. Cost-effectiveness in health and medicine. New York: Oxford University Press; 1996.

  68. Drummond MF, O'Brien B, Stoddart GL, Torrance GW. Methods for the economic evaluation of health care programmes. New York: Oxford University Press; 1997.

  69. Muennig P. Designing and conducting cost-effectiveness analyses in medicine and health care. San Francisco: Jossey-Bass; 2002.

  70. Chase RB, Aquilano NJ, Jacobs RF. Production and operations management: manufacturing and services. 8th ed. Boston: Irwin McGraw-Hill; 1998.

  71. Finkler SA, Ward DM. Essentials of cost accounting for health care organizations. 2nd ed. Gaithersburg: Aspen Publishers; 1999.

  72. Horngren CT, Sundem GL, Stratton WO. Introduction to management accounting. 11th ed. Upper Saddle River: Prentice Hall; 1999.

  73. Horngren CT, Foster G, Datar SM. Cost accounting: a managerial emphasis. 10th ed. Upper Saddle River: Prentice Hall; 2000.

  74. Kaplan RS, Atkinson AA. Advanced management accounting. 3rd ed. Upper Saddle River: Prentice Hall; 1998.

  75. Hunt VD. Process mapping: how to reengineer your business process. New York: John Wiley and Sons; 1996.

  76. Coyle D, Lee KM. The problem of protocol driven costs in pharmacoeconomic analysis. Pharmacoeconomics. 1998;14(4):357–63.

  77. Little RJA, Rubin DB. Statistical analysis with missing data. New York: Wiley; 1987.

  78. Raghunathan TE. What do we do with missing data? Some options for analysis of incomplete data. Annu Rev Public Health. 2004;25:99–117.

  79. Robins JM, Rotnitzky A, Zhao LP. Estimation of regression coefficients when some regressors are not always observed. J Am Stat Assoc. 1994;89(427):846–66.

  80. Hussey MA, Hughes JP. Design and analysis of stepped wedge cluster randomized trials. Contemp Clin Trials. 2007;28(2):182–91.

  81. Baio G, Copas A, Ambler G, Hargreaves J, Beard E, Omar RZ. Sample size calculation for a stepped wedge trial. Trials. 2015;16:354.

  82. Hemming K, Lilford R, Girling AJ. Stepped-wedge cluster randomised controlled trials: a generic framework including parallel and multiple-level designs. Stat Med. 2015;34(2):181–96.

  83. Hemming K, Taljaard M. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach. J Clin Epidemiol. 2016;69:137–46.

  84. Hughes JP, Granston TS, Heagerty PJ. Current issues in the design and analysis of stepped wedge trials. Contemp Clin Trials. 2015;45(Pt A):55–60.

  85. Martin J, Taljaard M, Girling A, Hemming K. Systematic review finds major deficiencies in sample size methodology and reporting for stepped-wedge cluster randomised trials. BMJ Open. 2016;6(2):e010166.

  86. Cengiz D, Dube A, Lindner A, Zipperer B. The effect of minimum wages on low-wage jobs: evidence from the United States using a bunching estimator. Cambridge, MA: National Bureau of Economic Research; January 2019. Working Paper No. 25434.

  87. Goodman-Bacon A. Difference-in-differences with variation in treatment timing. Cambridge, MA: National Bureau of Economic Research; September 2018. Working Paper No. 25018.

  88. Creswell JW, Plano Clark VL, Gutmann ML, Hanson WE. Advanced mixed methods research designs. In: Tashakkori A, Teddlie C, editors. Handbook of mixed methods in social and behavioral research. Thousand Oaks: Sage; 2003.

  89. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

  90. Erlingsson C, Brysiewicz P. A hands-on guide to doing content analysis. Afr J Emerg Med. 2017;7(3):93–9.

  91. Singer JD, Willett JB. Applied longitudinal data analysis: modeling change and event occurrence. New York: Oxford University Press; 2003.

  92. TreeAge Pro Healthcare Module. 2006 ed. Williamstown, MA: TreeAge Software, Inc.; 2008.

  93. Manning WG, Basu A, Mullahy J. Generalized modeling approaches to risk adjustment of skewed outcomes data. J Health Econ. 2005;24(3):465–88.

  94. Mullahy J. Much ado about two: reconsidering retransformation and the two-part model in health econometrics. J Health Econ. 1998;17(3):247–81.

  95. Duan N. Smearing estimate: a nonparametric retransformation method. J Am Stat Assoc. 1983;78(383):605–10.

  96. Duan N, Manning WG, Morris CN, Newhouse JP. A comparison of alternative models for the demand for medical care. J Bus Econ Stat. 1983;1(2):115–26.

  97. Manning WG, Mullahy J. Estimating log models: to transform or not to transform? J Health Econ. 2001;20(4):461–94.

  98. Hemming K, Girling A. A menu-driven facility for power and detectable-difference calculations in stepped-wedge cluster-randomized trials. Stata J. 2014;14(2):363–80.

  99. Bright RA, Avorn J, Everitt DE. Medicaid data as a resource for epidemiologic studies: strengths and limitations. J Clin Epidemiol. 1989;42(10):937–45.

  100. Garnick DW, Hodgkin D, Horgan CM. Selecting data sources for substance abuse services research. J Subst Abuse Treat. 2002;22(1):11–22.

  101. Clark RE, Baxter JD, Aweh G, O'Connell E, Fisher WH, Barton BA. Risk factors for relapse and higher costs among Medicaid members with opioid dependence or abuse: opioid agonists, comorbidities, and treatment history. J Subst Abuse Treat. 2015;57:75–80.

  102. Ganguly R, Kotzan JA, Miller LS, Kennedy K, Martin BC. Prevalence, trends, and factors associated with antipsychotic polypharmacy among Medicaid-eligible schizophrenia patients, 1998-2000. J Clin Psychiatry. 2004;65(10):1377–88.

  103. Bachhuber MA, Mehta PK, Faherty LJ, Saloner B. Medicaid coverage of methadone maintenance and the use of opioid agonist therapy among pregnant women in specialty treatment. Med Care. 2017;55(12):985–90.

  104. Clark RE, Samnaliev M, McGovern MP. Impact of substance disorders on medical expenditures for Medicaid beneficiaries with behavioral health disorders. Psychiatr Serv. 2009;60(1):35–42.

  105. Gordon AJ, Lo-Ciganic WH, Cochran G, Gellad WF, Cathers T, Kelley D, et al. Patterns and quality of buprenorphine opioid agonist treatment in a large Medicaid program. J Addict Med. 2015;9(6):470–7.

  106. Neighbors CJ, Sun Y, Yerneni R, Tesiny E, Burke C, Bardsley L, et al. Medicaid care management: description of high-cost addictions treatment clients. J Subst Abuse Treat. 2013;45(3):280–6.

  107. Prost A, Binik A, Abubakar I, Roy A, De Allegri M, Mouchoux C, et al. Logistic, ethical, and political dimensions of stepped wedge trials: critical review and case studies. Trials. 2015;16:351.



This study is funded by the National Institute on Drug Abuse (R33DA049252: PIs Neighbors and Lincourt).

Author information

Authors and Affiliations



CN, PL, and MO conceptualized the study. MO, PL, BG, MM, SH, KG, and CN developed the study procedures, materials, and protocol. PL and BG facilitated partnerships with the substance use disorder clinics. MO drafted the manuscript. All authors have read, edited, and approved the final manuscript.

Corresponding author

Correspondence to Megan A. O’Grady.

Ethics declarations

Ethics approval and consent to participate

All participants in this study will undergo consenting procedures. This study was approved by the IRB at NYU Langone (# i20-00776) on 11/5/2020.

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

O’Grady, M.A., Lincourt, P., Greenfield, B. et al. A facilitation model for implementing quality improvement practices to enhance outpatient substance use disorder treatment outcomes: a stepped-wedge randomized controlled trial study protocol. Implementation Science 16, 5 (2021).


  • Received:

  • Accepted:

  • Published:

  • DOI:


  • External facilitation
  • Quality metrics
  • Opioid use disorder
  • Quality improvement
  • Implementation
  • Stepped-wedge trial