
Study protocol for a factorial-randomized controlled trial evaluating the implementation, costs, effectiveness, and sustainment of digital therapeutics for substance use disorder in primary care (DIGITS Trial)

Abstract

Background

Experts recommend that treatment for substance use disorder (SUD) be integrated into primary care. The Digital Therapeutics for Opioids and Other SUD (DIGITS) Trial tests strategies for implementing reSET® and reSET-O®, which are prescription digital therapeutics for SUD and opioid use disorder, respectively, that include the community reinforcement approach, contingency management, and fluency training to reinforce concept mastery. The purpose of this trial is to test whether two implementation strategies improve implementation success (Aim 1) and achieve better population-level cost-effectiveness (Aim 2) than a standard implementation approach.

Methods/Design

The DIGITS Trial is a hybrid type III cluster-randomized trial. It examines outcomes of implementation strategies, rather than studying clinical outcomes of a digital therapeutic. It includes 22 primary care clinics from a healthcare system in Washington State and patients with unhealthy substance use who visit clinics during an active implementation period (up to one year). Primary care clinics implemented reSET and reSET-O using a multifaceted implementation strategy previously used by clinical leaders to roll out smartphone apps (“standard implementation,” including discrete strategies such as clinician training and electronic health record tools). Clinics were randomized as 21 sites (two nearby clinics were paired) in a 2×2 factorial design to receive up to two added implementation strategies: (1) practice facilitation, and/or (2) health coaching. Outcome data are derived from electronic health records and logs of digital therapeutic usage. Aim 1’s primary outcomes include reach of the digital therapeutics to patients and fidelity of patients’ use of the digital therapeutics to clinical recommendations. Substance use and engagement in SUD care are additional outcomes. In Aim 2, population-level cost-effectiveness analysis will inform the economic benefit of the implementation strategies compared to standard implementation. Implementation is monitored using formative evaluation, and sustainment will be studied for up to one year using qualitative and quantitative research methods.

Discussion

The DIGITS Trial uses an experimental design to test whether implementation strategies increase and improve the delivery of digital therapeutics for SUDs when embedded in a large healthcare system. It will provide data on the potential benefits and cost-effectiveness of alternative implementation strategies. ClinicalTrials.gov Identifier: NCT05160233 (Submitted 12/3/2021). https://clinicaltrials.gov/ct2/show/NCT05160233

Contributions to the Literature

  • In this randomized controlled trial, the FDA-authorized reSET and reSET-O digital therapeutics are being implemented in primary care clinics

  • The study evaluates the extent to which health coaching and practice facilitation each improve the implementation over a standard implementation strategy

  • Sustainment will be studied, and formative evaluation is used to monitor adaptations and optimize the success of implementation

  • Findings regarding the population-level cost effectiveness of implementation strategies will further provide information to decision makers about the financial implications of this study’s implementation strategies

Background

Substance use disorders (SUD) are prevalent, undertreated, and costly to society [1,2,3,4,5]. Approximately 265 people a day died from drug overdose in the United States in 2021 [6]. However, only 8–14% of people with past-year SUD receive treatment [1, 2]. Most individuals with SUD prefer to receive treatment in primary care [7], and it is believed that providing SUD care in primary care would be less stigmatizing [8]. Consequently, to increase access to SUD treatment, the National Academies of Sciences, Engineering, and Medicine and others have called for increased integration of substance use care in primary care [9,10,11,12,13,14,15].

New ways to address SUD in primary care are needed. Health systems have taken steps to prevent and treat SUD in primary care through screening, brief intervention, referral to specialty care, and medication treatments that can be prescribed by primary care providers (PCPs) [13,14,15,16,17,18,19,20,21]. However, health systems have been unable to effectively provide treatment to the high volume of patients with SUD who visit primary care [22, 23]. Implementation of SUD treatments is often hindered by feasibility problems, lack of capacity, and discomfort in treating SUD in primary care [24,25,26]. For instance, traditional treatments such as cognitive behavioral therapy and contingency management are among the most proven psychosocial treatments for SUD [27,28,29,30,31], but may require extensive resources to implement and deliver [32, 33]. Buprenorphine, a life-saving treatment for opioid use disorder (OUD), can be prescribed in office-based settings, but most patients with OUD do not receive this medication [34,35,36]. There is a clear need to identify treatments that are both effective and feasible to implement in primary care.

The adoption of digital therapeutics in primary care is potentially one way to increase access to evidence-based treatments. Several digital therapeutics for SUD are supported by evidence for their efficacy or effectiveness [37,38,39]. Digital therapeutics may deliver intervention content such as assessments, treatment modules, and normed feedback to patients via websites or smartphone apps, often under the guidance of a clinician [40]. The use of digital therapeutics could potentially help overcome common barriers in primary care, while providing access to an effective treatment [41, 42]. For instance, studies have shown that digital therapeutics for SUD can produce beneficial effects while reducing the amount of time that clinicians need to spend with patients [39, 43]; that they are acceptable to patients [44, 45]; that they may improve clinical outcomes when delivered in real-world care [46,47,48,49]; and that they can be effective when added to usual care approaches that lack an evidence base [39, 43].

At least two logistical challenges must be addressed in implementation research to determine how to achieve a far-reaching, sustainable implementation of digital therapeutics in primary care. First, health systems must overcome barriers to getting clinicians to offer these treatments to patients. For instance, clinicians encounter difficulty with executing novel workflow processes [50,51,52,53,54], such as creating login accounts for patients and “teaching” them to use apps [40, 55, 56], which often impacts adoption of these treatments [52, 57, 58]. A second challenge is that patients often need human support to effectively engage in digital therapeutics [53, 54, 59,60,61,62]. Successful implementations must provide support to patients to help them engage in use of apps, without overburdening primary care teams [51, 53, 54, 59, 63]. Given the known time constraints and competing demands in primary care [25, 64], teams may find it infeasible to offer adequate support for engagement in digital therapeutics. It is unknown whether clinicians in primary care can add tasks that encourage digital therapeutic engagement to their already demanding workload, or whether they need dedicated staff to ensure engagement.

The DIGITS Trial

The Digital Therapeutics for Opioids and other SUDs trial (DIGITS Trial) seeks to identify how best to implement digital therapeutics for SUDs in primary care. The clinical intervention includes two 12-week, smartphone-based prescription digital therapeutics, reSET® and reSET-O®, made by Pear Therapeutics, which have been authorized by the United States Food and Drug Administration (FDA) for the treatment of SUD and OUD, respectively. reSET and reSET-O are commercial versions of a computerized cognitive-behavioral treatment, the Therapeutic Education System, which was shown to improve patient outcomes in four RCTs [39, 43, 65,66,67]. However, all prior RCTs were conducted in specialty addiction treatment settings, not in primary care.

The DIGITS Trial seeks to evaluate whether practice facilitation and health coaching can improve digital therapeutic deployment in primary care beyond a standard implementation strategy. The standard implementation strategy, which serves as a comparator, is based on a multifaceted implementation strategy previously used by clinical leaders at the participating healthcare system to implement app-based treatments for depression and anxiety. It includes discrete strategies such as clinician training and electronic health record (EHR) tools.

Practice facilitation is designed to overcome workflow challenges by supporting clinicians to tailor implementation to their local context [68,69,70]. Facilitation methods have been used to implement addiction interventions, and experts describe facilitation as one of the most successful implementation strategies [58, 71,72,73,74,75]. It is both a process and a set of strategies designed to build relationships, identify and overcome barriers, and help the clinical team with implementation. Meta-analysis has demonstrated that facilitation increases the odds that primary care practices adopt evidence-based care [76].

Health coaching employs a centralized mid-level provider, namely, a medical assistant (MA), to coach patients to engage in the digital therapeutic while reducing burden on primary care teams. The health coach encourages patient engagement and use of the apps, reinforces learning and skill practice, and encourages completion of healthcare visits with the primary care team. The strategy’s conceptual targets are informed by the literature on patient-mediated implementation strategies, such that health coaching is designed to inform and educate patients, activate them in healthcare, and promote collaboration between patients and healthcare teams [77]. Coaching is an effective strategy for engaging patients in digital therapeutics [53, 54, 59,60,61].

The DIGITS Trial is a parallel-group, factorial, cluster-randomized trial. Randomization is clustered at the clinic level because the implementation strategies were assigned to primary care clinics. The strong evidence for reSET and reSET-O from specialty care settings provides a sufficient basis to conduct a trial in primary care that uses a hybrid effectiveness-implementation design with a main focus on evaluating implementation outcomes and a secondary focus on evaluating population-level effectiveness and cost-effectiveness of implementation strategies (“hybrid type 3”) [78].

Specific Aims

The first aim is to estimate the effect of practice facilitation and health coaching in increasing the reach and fidelity [79, 80] of digital therapeutics. We hypothesize significantly higher reach among clinics randomized to practice facilitation (hypothesis 1) and significantly higher fidelity among clinics randomized to health coaching (hypothesis 2), compared to clinics that did not implement with these respective strategies.

The second aim is to compare the population-level cost-effectiveness (PLCEA) [81] of the implementation strategies in improving reach, fidelity, and substance use. This analysis will inform the economic value of the additional implementation strategies relative to the standard implementation strategy. PLCEA methods consider that real-world implementations of an evidence-based practice may yield different effectiveness or cost-effectiveness results than tightly controlled RCTs of the same intervention.

Other study objectives are to: 1) conduct a formative evaluation to provide feedback to the healthcare system, monitor implementation fidelity, and record adaptations, 2) evaluate additional secondary and other outcome measures, including sustainment of the implementation, to provide a comprehensive assessment of implementation success, and 3) evaluate patient-level moderators of reach.

Methods / Design

Setting

The trial is conducted in primary care clinics of Kaiser Permanente Washington (KPWA), a healthcare system and health insurance provider that serves a population of privately insured patients and those insured by Medicaid and Medicare. Additional file 1 contains details about the structure of primary care teams and the SUD services available. KPWA provides care in urban, rural, and suburban communities. Patient populations of the primary care clinics vary in diversity; clinics range from 13–53% Black, Indigenous, or Persons of Color (includes Black/African American, American Indian/Alaskan Native, Asian, Native Hawaiian/Pacific Islander, Hispanic or Latino). Annually, approximately 340,000 patients attend healthcare visits.

Overview of Trial Design

Primary care clinics are eligible if they have ≥1 clinician trained in reSET and reSET-O and have not previously piloted these digital therapeutics. To maximize the number of study clinics, clinics could become eligible if they met criteria from 12/9/2021 through 8/11/2022 (Fig. 1). Each clinic has an active implementation period of a minimum of 6 months to a maximum of 12 months, starting on the clinic’s randomization date. Table 1 displays a chart of study eligibility assessment, allocation to study arms, and assessment periods following SPIRIT guidelines [82].

Fig. 1 Timing of active implementation and sustainment periods at the study sites

Table 1 Timing of the DIGITS Trial’s assessment schedules, following SPIRIT 2013 guidelines (Standard Protocol Items: Recommendations for Interventional Trials)

Each clinic’s implementation approach is assigned using a 2×2 factorial design (see Randomization Procedures). This allows the study to estimate the main effects of the implementation strategies, compare their effectiveness, and evaluate their interactions. This design yields four distinct implementation approaches (Table 2).

Table 2 Random assignment of primary care clinics to two implementation strategies resulting in four implementation approaches (2×2 factorial design)

The study obtains all quantitative data for sample identification and outcome measurement from secondary data. The study is not proactively recruiting participants into the analytic sample; rather, outcomes will be analyzed among all patients who screen positive for unhealthy substance use in primary care, allowing for unbiased assessment of reach into the target population.

Ethical Considerations

The Kaiser Permanente Washington Institutional Review Board granted ethical approvals including providing waivers of consent and HIPAA authorization.

Data Sources

Quantitative Data Sources

Data for the study are drawn from multiple sources, including: (1) EHR data, (2) health plan insurance claims, and (3) reSET and reSET-O usage and patient self-report files [46, 83]. Additional data for the economic evaluation are described later (see Economic Costs).

EHR and claims databases include information generated by patient visits inside and outside of the study healthcare system, respectively. Data domains include demographic characteristics, diagnosis codes, procedure codes, medications, visit type and location, and provider type. EHR databases additionally include behavioral health screening data.

Randomization Procedures

Sites were randomized with equal probability to the four different implementation approaches by the study biostatistician using a computer-generated list of random numbers. Randomization employs permuted blocks of size 4 or 8 (randomly selected) to balance characteristics of clinics that become eligible for the study over time. The allocation sequence is concealed by the study biostatistician in a password-protected file until clinic eligibility is ascertained. A total of 22 clinics were randomized (Fig. 2). Two clinics were paired because of their geographical proximity and sharing of clinic staff, yielding 21 randomized sites.
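As an illustration of this allocation scheme, here is a minimal Python sketch of permuted-block randomization with block sizes of 4 or 8 chosen at random. It is not the trial’s actual randomization code; the seed and arm labels are assumptions for the example.

```python
import random

# The four implementation approaches from the 2x2 factorial design (Table 2).
ARMS = ["standard", "facilitation", "coaching", "facilitation+coaching"]

def allocation_sequence(n_sites: int, seed: int = 2021) -> list[str]:
    """Generate an allocation list using permuted blocks of size 4 or 8."""
    rng = random.Random(seed)           # fixed seed so the sequence is reproducible
    sequence: list[str] = []
    while len(sequence) < n_sites:
        block_size = rng.choice([4, 8])     # block size selected at random
        block = ARMS * (block_size // 4)    # equal allocation within each block
        rng.shuffle(block)                  # permute assignments within the block
        sequence.extend(block)
    return sequence[:n_sites]

print(allocation_sequence(21))  # one arm label per randomized site
```

Because each block contains every arm in equal proportion, arms stay balanced among clinics that become eligible at different times, which is the stated purpose of the permuted blocks.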

Fig. 2 CONSORT diagram of site enrollment and allocation to study arms

Conceptual Framework

This study uses the dynamic sustainability framework (DSF) to conceptualize the process of implementation and sustainment [84, 85]. In the DSF, three core constructs (the intervention, the practice setting, and the ecological system) are constantly in flux. Changes are intrinsic to implementation efforts; thus, implementation teams must seek to “maximize fit” by iteratively monitoring, recording, and adapting to changes over time to increase the likelihood of reach and sustainment [84]. A failure to maximize fit may lead to implementation failure and lack of sustainment. We chose an adaptive implementation strategy, practice facilitation, to attend to clinic-level barriers and facilitators that vary across clinics and over time [52, 86]. The DSF also guides our approach to formative evaluation and its continuous assessment of key DSF constructs (see Formative Evaluation).

Study Development and Piloting

From February to May 2021, researchers and healthcare system leaders partnered to conduct a quality improvement project to pilot the standard implementation strategy at two primary care clinics. Goals were to test and refine the clinical workflow for delivering reSET and reSET-O, and to evaluate and improve the acceptability and feasibility of the implementation strategy. Prior to piloting, the study required contracting with the vendor, healthcare system approval of the app’s use of monetary incentives, and an extensive information security risk evaluation. Initial workflow design was informed by two user-centered design studies conducted in partnership with clinical leaders [51, 87]. After the quality improvement pilot, practice facilitation and health coaching were added to the pilot clinics. Lessons learned led to modifications of EHR tools, training materials and processes, procedures for virtual visits, and economic analysis and formative evaluation data collection materials. The health coaching protocol was modified to proactively inform patients about the digital therapeutics. We also conducted secondary data analyses of health system data to refine the statistical analysis plan and the operationalization of study measures.

Clinical Intervention

reSET and reSET-O can be prescribed to patients who attend in-person or virtual healthcare visits. Table 3 describes the general procedures used by clinicians to offer these treatments. Clinicians assess patient eligibility for being offered reSET and reSET-O based on the FDA label for each product. Instead of collecting written consent, the EHR generates text to notify patients that researchers are using medical records to study offering the apps. Background information about the selection of reSET and reSET-O for this study is included in Additional file 2.

Table 3 General procedures for using reSET and reSET-O in primary care

Implementation Strategies

Standard Implementation

Upon study launch, the healthcare system led a 3-hour, two-part live videoconference training on delivering reSET and reSET-O. The live training was offered three times to accommodate clinician schedules. Clinicians who missed these trainings could complete a “self-study” digitally recorded training. The healthcare system identified that integrated mental health specialists embedded in clinical teams would be the main digital therapeutic prescribers. Within three months after randomization, PCPs and nurses began to express interest in prescribing and thus were provided with self-study training materials.

Trainings cover research evidence for reSET and reSET-O, a product demo and evidence review by Pear Therapeutics staff, a description of the implementation toolkit and procedures for prescribing, a question & answer period, and a walkthrough of EHR documentation templates and tools. The implementation toolkit included: (1) a job aid with major steps of digital therapeutic delivery; (2) patient pamphlets that describe the digital therapeutics and steps for getting started; (3) scripts to help clinicians introduce the digital therapeutics; and (4) EHR tools including documentation templates, patient instructions, and a description of risks/benefits (e.g., app security and privacy information). These were adapted from a toolkit previously used by the healthcare system. New materials for this study included (5) an EHR order set to prescribe the digital therapeutics (“standing order”), and (6) a “huddle card” to guide local reSET and reSET-O prescribers in marketing the apps to primary care teams during standing meetings. All materials are continuously revised as needed and hosted on an intranet website. Healthcare system leaders offer continued support to clinicians on an as-needed basis. Health system leaders review monthly text-based performance reports of clinician and patient digital therapeutic usage data.

Practice Facilitation

In the context of a supportive relationship, facilitation is centered around four components of an implementation facilitation manual [88]:

  1. Bolster education: Help the local clinic’s reSET and reSET-O prescriber learn how to market the therapeutic to patients and care teams and garner support for digital therapeutic use.

  2. Audit and provide feedback: Share progress on measurable performance goals of reach and fidelity for self-assessment and for individual performance ranking in comparison to other anonymized clinics to prompt change in practice.

  3. Support Plan-Do-Study-Act (PDSA) cycles: Support the clinic’s reSET and reSET-O prescriber in designing small tests of change to increase intervention reach and fidelity. This involves reviews of audit and feedback data followed by goal setting, problem solving, and adjustment of activities (e.g., workflows).

  4. Engage others in change: Invite additional implementation stakeholders (e.g., clinic leadership, PCPs, care team members) to participate in problem-solving and PDSA cycles.

Trained facilitators hold practice facilitation sessions over videoconference consistent with a virtual facilitation approach [88]. After randomization, the facilitator meets with the local reSET and reSET-O prescriber to establish a relationship, set expectations, and learn about the clinic context. The facilitator then meets with clinic leadership to set expectations and secure leadership endorsement of facilitation activities.

After the leadership meeting, the facilitator holds an implementation kickoff meeting with the local clinic reSET and reSET-O prescriber(s) (and health coach, if applicable), which is the first of 12 monthly facilitation meetings. Facilitators are available for ad-hoc support between meetings and keep a log of meeting length, attendance, and implementation activity completion. Meeting agendas are tailored in response to staff capacity and progress on the four facilitation components. If the clinic’s prescriber leaves the organization, the facilitator will suspend monthly meetings until another staff member is hired and/or trained, at which point meetings will resume with the new team member.

Health Coaching

The health coach is a credentialed MA employed by the healthcare system (not by researchers). The healthcare system provides a supervision structure, credentialing, and oversight, and the research grant funds the position. The MA completed training and certification as a Certified Wellness Health Coach through Real Balance Global Wellness Services [89], paid for by the study. A study research interventionist developed the manualized coaching protocol, trained the health coach to follow the manual, and provides ongoing technical assistance in approximately weekly calls. The MA conducts visits from an office at a central location via telephone and electronic messages through the EHR patient portal. Before the first session, the health coach notifies patients that researchers are studying the impact of health coaching.

Health coaching includes approximately weekly coaching contact with enrolled patients and outreach to those who are prescribed the therapeutic but do not initiate or engage with it. The health coach discusses digital therapeutic lesson content with the patient and encourages them to engage in the content. The coaching protocol guides the MA in handling patient concerns (e.g., substance use, safety concerns, technical issues), directing clinical concerns to a clinician. The health coach documents contact with patients in the medical record. In addition, the health coach conducts proactive outreach by sending messages to potentially eligible patients via the patient portal to notify them about the availability of the digital therapeutics. To facilitate these activities, the health coach monitors an EHR-based population management workbench and the app vendor’s web-based clinical dashboard.

Additional implementation strategy details

Additional file 3 follows reporting guidelines outlined by Proctor and colleagues to document the implementation strategies [90].

Quantitative Evaluation

Study Sample and Eligibility Criteria

Sites are the unit of analysis. The study uses an open-cohort design, identifying patients following randomization because the digital therapeutic is offered when patients have a clinical encounter. For the statistical analyses, patients will be attributed to the site where they had their first qualifying visit. Patient eligibility criteria for automatic inclusion are determined with EHR data. Patient inclusion criteria are: (1) had a primary care visit in a participating clinic from 2 weeks before the active implementation period through its end (for primary outcome analyses) or through the sustainment period (for sustainment analyses), (2) screened positive for substance use on the day of the visit or in the prior year, and (3) were 18 years of age or older at the time of the visit. Positive screens are indicated by patient self-report of daily cannabis use or any drug use in the past year on instruments described previously [91, 92]. Patients are excluded if they have previously requested to opt out of research studies.
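As an illustration only, the following Python sketch applies these inclusion and exclusion criteria to a visit-level extract of EHR data; all column names are hypothetical and not the study’s actual variable names:

```python
import pandas as pd

def eligible_patients(visits: pd.DataFrame,
                      period_start: pd.Timestamp,
                      period_end: pd.Timestamp) -> pd.DataFrame:
    """Apply the automatic inclusion/exclusion criteria to visit-level EHR rows."""
    # Visit from 2 weeks before the analysis period through its end.
    in_window = visits["visit_date"].between(
        period_start - pd.Timedelta(weeks=2), period_end
    )
    screened = visits["positive_screen"]      # positive on visit day or in prior year
    adult = visits["age_at_visit"] >= 18      # 18 years or older at the visit
    opted_in = ~visits["research_opt_out"]    # exclude patients who opted out
    return visits[in_window & screened & adult & opted_in]
```

In the trial itself, each patient would then be attributed to the site of their first qualifying visit.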

Outcome Measures

Table 4 outlines the study’s primary and secondary outcomes, other pre-specified outcomes, and other explanatory and sensitivity analysis measures. Outcomes are conceptualized via the RE-AIM framework (reach, effectiveness, adoption, implementation fidelity, maintenance/sustainment) and measured at the site-level [93, 94].

Table 4 Primary, secondary, and other outcomes in the DIGITS Trial

Reach and fidelity are primary outcomes. Reach is the site-level proportion of patients who initiate reSET or reSET-O. For this outcome to occur, a clinician must prescribe the digital therapeutic and the patient must complete at least one treatment module. Fidelity is the site-level mean number of weeks in which patients use the digital therapeutic as recommended. This includes completing a recommended 4 or more modules per week while under the care of a clinician [39, 95, 96].
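To make these definitions concrete, here is a minimal sketch of computing the two site-level primary outcomes from a patient-level table; the data and column names are hypothetical stand-ins for the trial’s EHR and app-usage extracts:

```python
import pandas as pd

# One row per eligible patient. 'initiated' means a clinician prescribed the
# digital therapeutic AND the patient completed at least one treatment module;
# 'fidelity_weeks' counts weeks with >=4 modules while under a clinician's care.
patients = pd.DataFrame({
    "site": ["A", "A", "A", "B", "B"],
    "initiated": [True, False, True, False, True],
    "fidelity_weeks": [5, 0, 2, 0, 8],
})

site_outcomes = patients.groupby("site").agg(
    reach=("initiated", "mean"),          # proportion of patients who initiate
    fidelity=("fidelity_weeks", "mean"),  # mean weeks of recommended use
)
print(site_outcomes)
```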

Patient engagement in SUD care, a secondary outcome, is operationalized as the mean number of months in which patients make at least one visit for SUD in any setting. Engagement is conceptualized as an effectiveness outcome. Economic costs are another secondary outcome operationalized using PLCEA methodology (see Economic Evaluation).

Other pre-specified outcome measures include an additional measure of reach quantifying the proportion of patients who are prescribed reSET or reSET-O by a clinician, irrespective of whether a patient initiates its use. An additional fidelity measure will estimate the number of weeks in which patients complete at least one module per week. Two effectiveness outcomes include substance use and abstinence, measured as the proportion of patients who are reached and reduce their substance use, or who use no substances, respectively.

Sustainment will be operationalized as the proportion of patients who are reached (same definition as above) during the sustainment period of the study.

Exploratory measures will be analyzed to comprehensively assess implementation and effectiveness. One adoption measure is operationalized as the proportion of healthcare providers prescribing reSET or reSET-O, overall and by provider type. Another adoption measure is the mean number of months in which providers access clinician dashboards. An additional reach variable will indicate the proportion of patients who download and unlock the digital therapeutics. Two additional fidelity variables indicate the mean number of weeks in which patients complete at least 4 modules per week regardless of whether they see a clinician, and the mean number of modules completed over the 12-week prescription. Additional substance use and abstinence measures will use self-report timeline follow-back data collected by the app every four days during the 12-week prescription [75]. We will also measure the proportion of patients who are reached by reSET-O and achieve abstinence from all substances, as evidenced by results of routine urine drug screens administered as part of clinical care (often administered among patients with OUD).

Statistical Analysis

Following procedures for factorial trials, we will analyze main effects on primary and secondary outcomes by examining the mean responses at one factor level and at a contrasting factor level, collapsed across all levels of the other factors (e.g., practice facilitation versus no practice facilitation) [97,98,99].

Primary Outcome Analyses

Analyses will follow intent-to-treat principles, with sites analyzed according to their assigned treatment group regardless of the amount of implementation strategy delivered. We will fit a linear regression model to estimate the main effect of each factor level (practice facilitation, health coach) [97, 99]. For the primary outcomes of reach and fidelity, we will apply linear regression to model the proportion of patients reached within a site and the site-specific mean number of weeks with fidelity, respectively. We will test hypotheses 1 and 2 by testing the appropriate contrast from the regression model and use the Holm procedure to control the familywise type 1 error rate of the two primary hypotheses at 0.05. In addition to the hypotheses above (see Specific Aims), we secondarily hypothesize that practice facilitation is superior to health coaching in increasing reach, and health coaching is superior to practice facilitation in increasing fidelity. We also will examine whether the two enhanced strategies together are superior to standard implementation by comparing clinics with both enhanced strategies to clinics with neither. We will also examine interaction effects between the two enhanced strategies.
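A minimal sketch of this analytic approach, fit on synthetic site-level data (the effect sizes, noise levels, and column names below are assumptions for illustration, not trial results):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(42)
n = 21  # randomized sites

# Synthetic site-level data standing in for the trial's real outcomes.
sites = pd.DataFrame({
    "facilitation": rng.integers(0, 2, n),  # 1 = practice facilitation arm
    "coach": rng.integers(0, 2, n),         # 1 = health coaching arm
})
sites["reach"] = 0.05 + 0.02 * sites["facilitation"] + rng.normal(0, 0.01, n)
sites["fidelity"] = 1.0 + 0.1 * sites["coach"] + rng.normal(0, 0.05, n)

# One linear model per primary outcome; each main effect is estimated
# collapsing over the levels of the other factor.
m_reach = smf.ols("reach ~ facilitation + coach", data=sites).fit()
m_fid = smf.ols("fidelity ~ facilitation + coach", data=sites).fit()

# Holm procedure controls familywise type 1 error at 0.05 across the two tests.
pvals = [m_reach.pvalues["facilitation"], m_fid.pvalues["coach"]]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(dict(zip(["reach|facilitation", "fidelity|coach"], p_adj)))
```

Interaction effects could be examined by replacing the formula with "reach ~ facilitation * coach".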

Secondary and Other Pre-Specified Outcome Analyses

We hypothesize significantly higher engagement among sites randomized to practice facilitation or health coaching. Analyses will follow the same general modeling approach as the primary outcomes. Secondary outcome analyses will apply a traditional two-sided type 1 error rate of 0.05; considering issues of multiple comparisons, findings will be interpreted with caution. See Economic Evaluation for a description of cost analyses.

Additional pre-specified reach and fidelity measures and other outcomes will follow the same analytic procedures as the primary outcomes (e.g., linear regression of the site-level measures).

Sustainment analyses will describe reach over time during the sustainment period graphically by study arm. If reach is greater than 5% during sustainment (overall or in any study arm), we will conduct secondary analyses allowing the intervention effect to vary over time. Specifically, we will subdivide the sustainment period into discrete time intervals (e.g., 4-month windows) and include interaction terms with intervention group; repeated measures over time within a clinic will be accounted for using a mixed-effects model with site-specific random intercepts.
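If these secondary analyses are needed, the model could resemble the following sketch, fit on a synthetic site-by-window panel (all values are fabricated for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic panel: 21 sites x three 4-month sustainment windows.
panel = pd.DataFrame({
    "site": np.repeat(np.arange(21), 3),
    "window": np.tile([1, 2, 3], 21),
    "facilitation": np.repeat(rng.integers(0, 2, 21), 3),
})
panel["reach"] = 0.06 + 0.02 * panel["facilitation"] + rng.normal(0, 0.01, len(panel))

# Site-specific random intercepts account for repeated measures within a clinic;
# the interaction term lets the intervention effect vary across time windows.
model = smf.mixedlm("reach ~ C(window) * facilitation",
                    data=panel, groups=panel["site"]).fit()
print(model.summary())
```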

Patient-level moderators of reach and fidelity

We will conduct exploratory patient-level analyses stratified by sex and by SUD type when estimating reach, fidelity, abstinence, and substance use reductions, allowing us to provide data about implementation effectiveness in specific subgroups.

Sensitivity Analyses

We will conduct several sensitivity analyses to examine the robustness of trial results. If, due to random chance, factor levels are imbalanced in any baseline characteristics (e.g., site size), we will perform sensitivity analyses where we include these characteristics in the regression models. Following reporting guidelines [100, 101], we will examine whether the proportion of screen-positive patients (the measure defining the study population) differs across arms. To assess for identification bias [102, 103], we plan to conduct sensitivity analyses in which we consider alternative population denominators: all patients with a SUD diagnosis, and all patients with visits regardless of whether they screened positive for substance use.

Statistical Power

Minimal detectable differences for fixed 80% power were estimated based on 27 clinics (the number of clinics available during the study pilot phase) and a two-sided type 1 error rate of 0.025 for the two primary outcomes of reach and fidelity (to control the familywise type 1 error at 0.05). We estimated that we will have >0.80 power to detect an increase of 2 percentage points in site-level reach among screen-positive patients in sites with versus without practice facilitation. We estimated that we will have >0.80 power to detect an increase of 0.088 in the site-level mean number of weeks of fidelity among screen-positive patients in sites with versus without a health coach. Power analysis assumptions and other details are in Additional file 4.

Economic evaluation

Aim 2 will use PLCEA to measure the cost of each implementation strategy relative to its gains in reach, fidelity, and engagement.

To operationalize PLCEA, we will measure the following ratio:

$$\frac{\textrm{Incremental population-level costs}}{\textrm{Incremental population-level effectiveness}}$$

where incremental population-level costs are the difference in costs between the populations in sites targeted by each implementation strategy. Population-level costs include implementation costs, direct intervention costs, and indirect healthcare costs. While the primary outcome analyses above test specific hypotheses regarding the main effects followed by exploratory tests of interactions, the PLCEA will estimate the difference in the outcomes of reach, fidelity, and engagement (see Outcome Measures) between the respective populations targeted by one of the enhanced implementation strategies and the standard strategy. This produces information about the incremental costs of each implementation approach while accounting for potential multiplicative effects on implementation costs [104]. Analyses will be performed from the perspective of a healthcare system and payer.
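As a worked illustration of the ratio, the sketch below computes an incremental cost-effectiveness value from hypothetical population-level totals (all numbers are invented for the example):

```python
def plcea_ratio(cost_strategy: float, cost_standard: float,
                effect_strategy: float, effect_standard: float) -> float:
    """Incremental population-level cost per unit of incremental effectiveness."""
    return (cost_strategy - cost_standard) / (effect_strategy - effect_standard)

# Hypothetical totals: an enhanced-strategy population vs. standard implementation,
# with effectiveness counted as the number of patients reached.
icer = plcea_ratio(cost_strategy=250_000, cost_standard=180_000,
                   effect_strategy=140, effect_standard=90)
print(f"Incremental cost per additional patient reached: ${icer:,.0f}")  # $1,400
```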

Implementation Costs

Implementation costs, which are collected for all trial arms, include the monetary value of time and resources incurred by the healthcare system to roll out the digital therapeutics. We use an activity-based approach to calculate the opportunity cost of time devoted to activities related to implementation [105]. This approach uses microcosting: the number of hours devoted to each activity is multiplied by the estimated wage rate of the implementation participant. Wage rates will be ascertained from Bureau of Labor Statistics data that capture the average wage for an implementation participant’s occupation in the geographical area of employment. Implementation costs also include the direct cost of implementation resources (e.g., printing cost of materials to support adoption). We will sum costs associated with activities and resources needed to execute all implementation strategies. Details about cost data collection are in Additional file 5.
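A minimal sketch of this microcosting arithmetic, using invented activities, hours, and wage rates (real analyses would draw wages from Bureau of Labor Statistics occupation data):

```python
# Hypothetical activity log for one site: (activity, hours, hourly wage rate).
activity_log = [
    ("clinician training", 3.0, 55.00),
    ("EHR tool configuration", 10.0, 48.50),
    ("facilitation meetings", 12.0, 42.00),
]
direct_resources = 320.00  # e.g., printing patient pamphlets

# Opportunity cost of time = hours devoted to each activity x wage rate.
time_costs = sum(hours * wage for _, hours, wage in activity_log)
implementation_cost = time_costs + direct_resources
print(f"Site implementation cost: ${implementation_cost:,.2f}")
```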

Operating Costs

Operating costs are dollars spent on regular operation of the digital therapeutics incurred by the healthcare system and patients. We include both direct operating and indirect overhead costs for completeness. Direct costs include digital therapeutic licensing fees. Indirect cost expenditures include infrastructure/technical resources and staff support such as information technology, office space, and capital equipment. We will estimate infrastructure costs from public data sources following methods used in prior implementation cost studies [106].

Direct Intervention Costs

Intervention costs include encounters with clinicians related to the initiation and continued use of reSET and reSET-O. We will identify patient visits with app prescribers and health coaches during the 12-week prescription period, using EHR data. We will count encounters on the day patients activate their prescription and in the 12 weeks afterward.

Using patient-level encounter records, we will calculate intervention costs using KPWA’s internal accounting, which measures the actual production costs incurred by the healthcare system in providing care to members. This model allocates costs to all encounters based on a general ledger for all services. Cost values include a direct care component (e.g., nurse salaries) and an overhead component (e.g., facilities). Costs excluded from allocation include those not directly related to delivering health services (e.g., insurance) and patient out-of-pocket costs.

Other Indirect Healthcare Costs

Digital therapeutics usage may impact patients’ use of services other than SUD care. Thus, we will sum costs for all outpatient encounters (including primary care, specialty mental health, specialty medical, ancillary, acute care) and dispensed medications expected in this population.

Qualitative Formative Evaluation of Implementation

Evaluation goals and methods

Formative evaluation is used to continuously identify and document modifications, adaptations, and implementation determinants (i.e., barriers and facilitators) as they pertain to each DSF construct. Evaluators communicate salient findings to the implementation team [70]. Evaluators also monitor for contamination across implementation strategy conditions and observe how implementation outcomes change in correspondence with changes in DSF constructs. The Framework for Reporting Adaptations and Modifications to Evidence-based Implementation Strategies (FRAME-IS) guides the tracking of modifications and adaptations, including the processes of and reasons for change [107, 108].

Formative evaluation analyses follow a qualitative rapid assessment process [109, 110]. Using templated forms (Additional file 6), evaluators take field notes at implementation meetings and review secondary data sources (e.g., meeting notes, communications, and implementation documents). Analysis procedures consist of structured data reduction to categorize and summarize field notes and secondary data sources according to predetermined domains [90, 107, 111], while also capturing emergent findings (e.g., decision points). Summary data are entered into a strategy-by-domain matrix to systematically extract themes, detect trends, and answer main evaluation questions (Additional file 7). Synthesized findings and recommended adaptations are regularly presented to the study team and healthcare system partners, who decide by consensus whether to execute adaptations.

Sustainment

Following the end of the active implementation period, funding and support for implementation strategies by the research grant are scaled back to study sustainment and institutionalization of the digital therapeutics. The grant will provide additional prescriptions after the active implementation period ends to allow time for the health system to decide whether to continue offering reSET and reSET-O, and to allow for the continued measurement of reach and fidelity and qualitative measures of sustainment.

Trial Status

No outcome data have been analyzed yet; the last site completes active implementation on 2/10/2023.

Discussion

This trial is designed to overcome several limitations of prior research on digital treatments for SUD. Real-world implementation studies in health systems may provide more actionable information to decision makers, such as health system leaders, than traditional effectiveness trials [81, 84]. For instance, other trials may be conducted within selected patient populations who consent to participate in studies of treatment for SUD and who may not be representative of typical patients. Moreover, other trials may rely on researchers, whose workloads and primary responsibilities are very different from those of clinical staff, to deliver digital treatments.

This study fills a gap in implementation science, where evidence is needed on the comparative effectiveness and cost-effectiveness of different implementation strategies for improving care. Findings regarding the population-level cost-effectiveness of implementation strategies will further provide information to decision makers about the financial implications of this study’s strategies. For instance, while an implementation strategy may result in greater effectiveness, it may not necessarily be more cost-effective. Other aspects of this study, such as the workflows created and the insights into sustainment ascertained through qualitative research methods, will help create a roadmap for other healthcare organizations wishing to care more effectively for patients with SUDs.

Strengths and Limitations

Significant strengths of this trial include that: 1) the health system has high universal screening rates for substance use, which assist with comprehensively identifying the targeted population (i.e., the reach denominator); 2) the study has access to a diverse set of data enabling detailed analysis of clinical and economic outcomes; and 3) the study is built on principles of preserving real-world conditions. However, the study design also presents limitations. Notably, the study is being conducted after the COVID-19 pandemic began; in the context of significant healthcare system staffing shortages, fewer clinics were recruited, and the intervention is delivered in clinics with reduced capacity. The substance use and abstinence measures collected from the EHR will be available only for an estimated ~70% of patients because of reliance on follow-up screening data. Findings may not generalize to systems that primarily serve uninsured populations.

Conclusion

This trial seeks to understand how to best engage primary care clinicians and patients to increase the reach and fidelity of digital therapeutics for SUD. As opposed to providing data on clinical and cost outcomes associated with a digital therapeutic, this study will provide health system leaders with data on the outcomes of using specific implementation strategies.

Availability of data and materials

Economic and formative data collection materials are provided as Additional Files. Complete details on the operationalization of measures using electronic health data are outlined in the Statistical Analysis Plan (available from authors). The trial’s Data Safety and Monitoring Plan and protocol amendments are available from the first author upon reasonable request. Additional study materials and data are available from the first author upon reasonable request. Individual-level data will not be released to avoid risk of participant re-identification.

Abbreviations

DIGITS Trial:

The Digital Therapeutics for Opioids and Other Substance Use Disorders Trial

DSF:

Dynamic sustainability framework

EHR:

Electronic health records

FDA:

United States Food and Drug Administration

FRAME-IS:

Framework for Reporting Adaptations and Modifications to Evidence-based Implementation Strategies

HIPAA:

Health Insurance Portability and Accountability Act

HIT-ACE:

Health Information Technologies-Academic and Commercial Evaluation

KPWA:

Kaiser Permanente Washington

MA:

Medical assistant

SUD:

Substance use disorder

OUD:

Opioid use disorder

PCP:

Primary care provider

PDSA:

Plan-Do-Study-Act

PLCEA:

Population-level cost effectiveness analysis

RCT:

Randomized controlled trial

References

  1. Grant BF, Goldstein RB, Saha TD, Chou SP, Jung J, Zhang H, et al. Epidemiology of DSM-5 alcohol use disorder: results from the National Epidemiologic Survey on Alcohol and Related Conditions III. JAMA Psychiatry. 2015;72:757–66.

  2. Grant BF, Saha TD, Ruan WJ, Goldstein RB, Chou SP, Jung J, et al. Epidemiology of DSM-5 drug use disorder: results from the National Epidemiologic Survey on Alcohol and Related Conditions-III. JAMA Psychiatry. 2016;73:39–47.

  3. Mokdad AH, Marks JS, Stroup DF, Gerberding JL. Actual causes of death in the United States, 2000. JAMA. 2004;291:1238–45.

  4. Degenhardt L, Hall W. Extent of illicit drug use and dependence, and their contribution to the global burden of disease. Lancet. 2012;379:55–70.

  5. Sacks JJ, Gonzales KR, Bouchery EE, Tomedi LE, Brewer RD. 2010 national and state costs of excessive alcohol consumption. Am J Prev Med. 2015;49:e73–9.

  6. Ahmad FB, Cisewski JA, Rossen LM, Sutton P. Provisional drug overdose death counts. National Center for Health Statistics; 2022. Available from: https://www.cdc.gov/nchs/nvss/vsrr/drug-overdose-data.htm.

  7. Barry CL, Epstein AJ, Fiellin DA, Fraenkel L, Busch SH. Estimating demand for primary care-based treatment for substance and alcohol use disorders. Addiction. 2016;111:1376–84.

  8. U.S. Department of Health and Human Services. Facing addiction in America: the surgeon general’s report on alcohol, drugs, and health [Internet]. Washington, DC: U.S. Department of Health & Human Services; 2016. Available from: https://addiction.surgeongeneral.gov/sites/default/files/surgeon-generals-report.pdf.

  9. Ghitza UE, Tai B. Challenges and opportunities for integrating preventive substance-use-care services in primary care through the Affordable Care Act. J Health Care Poor Underserved. 2014;25:36–45.

  10. Saitz R, Larson MJ, Labelle C, Richardson J, Samet JH. The case for chronic disease management for addiction. J Addict Med. 2008;2:55–65.

  11. Crowley RA, Kirschner N; Health and Public Policy Committee of the American College of Physicians. The integration of care for mental health, substance abuse, and other behavioral health conditions into primary care: executive summary of an American College of Physicians position paper. Ann Intern Med. 2015;163:298–9.

  12. Institute of Medicine. Improving the quality of health care for mental and substance-use conditions [Internet]. Washington, DC: National Academies Press; 2006. Available from: https://www.ncbi.nlm.nih.gov/books/NBK19830/.

  13. National Council for Behavioral Health. Implementing care for alcohol & other drug use in medical settings: an extension of SBIRT. 2021. Available from: https://www.thenationalcouncil.org/wp-content/uploads/2021/04/Implementing_Care_for_Alcohol_and_Other_Drug_Use_In_Medical_Settings_-_An_Extension_of_SBIRT.pdf.

  14. U.S. Preventive Services Task Force. Final recommendation statement: unhealthy alcohol use in adolescents and adults: screening and behavioral counseling interventions. 2018. Available from: https://health.gov/healthypeople/tools-action/browse-evidence-based-resources/unhealthy-alcohol-use-adolescents-and-adults-screening-and-behavioral-counseling-interventions.

  15. U.S. Preventive Services Task Force, Krist AH, Davidson KW, Mangione CM, Barry MJ, Cabana M, et al. Screening for unhealthy drug use: US Preventive Services Task Force recommendation statement. JAMA. 2020;323:2301–9.

  16. McCrady BS. Health-care reform provides an opportunity for evidence-based alcohol treatment in the USA: the National Institute for Health and Clinical Excellence (NICE) guideline as a model: editorial. Addiction. 2013;108:231–2.

  17. Jonas DE, Amick HR, Feltner C, Bobashev G, Thomas K, Wines R, et al. Pharmacotherapy for adults with alcohol use disorders in outpatient settings: a systematic review and meta-analysis. JAMA. 2014;311:1889–900.

  18. Watkins KE, Ober AJ, Lamp K, Lind M, Setodji C, Osilla KC, et al. Collaborative care for opioid and alcohol use disorders in primary care: the SUMMIT randomized clinical trial. JAMA Intern Med. 2017;177:1480–8.

  19. Babor TF, McRee BG, Kassebaum PA, Grimaldi PL, Ahmed K, Bray J. Screening, Brief Intervention, and Referral to Treatment (SBIRT): toward a public health approach to the management of substance abuse. Subst Abus. 2007;28:7–30.

  20. Post EP, Metzger M, Dumas P, Lehmann L. Integrating mental health into primary care within the Veterans Health Administration. Fam Syst Health. 2010;28:83–90.

  21. Campbell CI, Saxon AJ, Boudreau DM, Wartko PD, Bobb JF, Lee AK, et al. PRimary Care Opioid Use Disorders treatment (PROUD) trial protocol: a pragmatic, cluster-randomized implementation trial in primary care for opioid use disorder treatment. Addict Sci Clin Pract. 2021;16:9.

  22. Sayre M, Lapham GT, Lee AK, Oliver M, Bobb JF, Caldeiro RM, et al. Routine assessment of symptoms of substance use disorders in primary care: prevalence and severity of reported symptoms. J Gen Intern Med. 2020;35:1111–9.

  23. Lapham GT, Lee AK, Caldeiro RM, Glass JE, Carrell DS, Richards JE, et al. Prevalence of behavioral health conditions across frequency of cannabis use among adult primary care patients in Washington state. J Gen Intern Med. 2018;33:1833–5.

  24. Glass JE, Mowbray OP, Link BG, Kristjansson SD, Bucholz KK. Alcohol stigma and persistence of alcohol and other psychiatric disorders: a modified labeling theory approach. Drug Alcohol Depend. 2013;133:685–92.

  25. Chernof BA, Sherman SE, Lanto AB, Lee ML, Yano EM, Rubenstein LV. Health habit counseling amidst competing demands: effects of patient health habits and visit characteristics. Med Care. 1999;37:738–47.

  26. Jaen CR, Stange KC, Nutting PA. Competing demands of primary care: a model for the delivery of clinical preventive services. J Fam Pract. 1994;38:166–71.

  27. Prendergast M, Podus D, Finney J, Greenwell L, Roll J. Contingency management for treatment of substance use disorders: a meta-analysis. Addiction. 2006;101:1546–60.

  28. Gates PJ, Sabioni P, Copeland J, Le Foll B, Gowing L. Psychosocial interventions for cannabis use disorder. Cochrane Database Syst Rev. 2016;2016:CD005336.

  29. Dutra L, Stathopoulou G, Basden SL, Leyro TM, Powers MB, Otto MW. A meta-analytic review of psychosocial interventions for substance use disorders. Am J Psychiatry. 2008;165:179–87.

  30. Lussier JP, Heil SH, Mongeon JA, Badger GJ, Higgins ST. A meta-analysis of voucher-based reinforcement therapy for substance use disorders. Addiction. 2006;101:192–203.

  31. Magill M, Ray LA. Cognitive-behavioral treatment with adult alcohol and illicit drug users: a meta-analysis of randomized controlled trials. J Stud Alcohol Drugs. 2009;70:516–27.

  32. Carroll KM. Dissemination of evidence-based practices: how far we’ve come, and how much further we’ve got to go. Addiction. 2012;107:1031–3.

  33. Kiluk BD, Carroll KM. New developments in behavioral treatments for substance use disorders. Curr Psychiatry Rep. 2013;15:420.

  34. Lapham G, Boudreau DM, Johnson EA, Bobb JF, Matthews AG, McCormack J, et al. Prevalence and treatment of opioid use disorders among primary care patients in six health systems. Drug Alcohol Depend. 2020;207:107732.

  35. Boudreau DM, Lapham G, Johnson EA, Bobb JF, Matthews AG, McCormack J, et al. Documented opioid use disorder and its treatment in primary care patients across six U.S. health systems. J Subst Abuse Treat. 2020;112S:41–8.

  36. Hickman M, Steer C, Tilling K, Lim AG, Marsden J, Millar T, et al. The impact of buprenorphine and methadone on mortality: a primary care cohort study in the United Kingdom: buprenorphine vs methadone on mortality risk. Addiction. 2018;113:1461–76.

  37. Carroll KM, Ball SA, Martino S, Nich C, Babuscio TA, Nuro KF, et al. Computer-assisted delivery of cognitive-behavioral therapy for addiction: a randomized trial of CBT4CBT. Am J Psychiatry. 2008;165:881–8.

  38. Gustafson DH, McTavish FM, Chih MY, Atwood AK, Johnson RA, Boyle MG, et al. A smartphone application to support recovery from alcoholism: a randomized clinical trial. JAMA Psychiatry. 2014;71:566–72.

  39. Campbell AN, Nunes EV, Matthews AG, Stitzer M, Miele GM, Polsky D, et al. Internet-delivered treatment for substance abuse: a multisite randomized controlled trial. Am J Psychiatry. 2014;171:683–90.

  40. Hermes ED, Lyon AR, Schueller SM, Glass JE. Measuring the implementation of behavioral intervention technologies: recharacterization of established outcomes. J Med Internet Res. 2019;21:e11752.

  41. Storholm ED, Ober AJ, Hunter SB, Becker KM, Iyiewuare PO, Pham C, et al. Barriers to integrating the continuum of care for opioid and alcohol use disorders in primary care: a qualitative longitudinal study. J Subst Abuse Treat. 2017;83:45–54.

  42. McNeely J, Kumar PC, Rieckmann T, Sedlander E, Farkas S, Chollak C, et al. Barriers and facilitators affecting the implementation of substance use screening in primary care clinics: a qualitative study of patients, providers, and staff. Addict Sci Clin Pract. 2018;13:8.

  43. Marsch LA, Guarino H, Acosta M, Aponte-Melendez Y, Cleland C, Grabinski M, et al. Web-based behavioral treatment for substance use disorders as a partial replacement of standard methadone maintenance treatment. J Subst Abuse Treat. 2014;46:43–51.

  44. Campbell AN, Turrigiano E, Moore M, Miele GM, Rieckmann T, Hu MC, et al. Acceptability of a web-based community reinforcement approach for substance use disorders with treatment-seeking American Indians/Alaska Natives. Community Ment Health J. 2015;51:393–403.

  45. Campbell ANC, Montgomery L, Sanchez K, Pavlicova M, Hu M, Newville H, et al. Racial/ethnic subgroup differences in outcomes and acceptability of an Internet-delivered intervention for substance use disorders. J Ethn Subst Abuse. 2017;16:1–19.

  46. Maricich YA, Xiong X, Gerwien R, Kuo A, Velez F, Imbert B, et al. Real-world evidence for a prescription digital therapeutic to treat opioid use disorder. Curr Med Res Opin. 2021;37:175–83.

  47. Xiong X, Braun S, Stitzer M, Luderer H, Shafai G, Hare B, et al. Evaluation of real-world outcomes associated with use of a prescription digital therapeutic to treat substance use disorders. Am J Addict. 2022;32:ajad.13346.

  48. Shah N, Velez FF, Colman S, Kauffman L, Ruetsch C, Anastassopoulos K, et al. Real-world reductions in healthcare resource utilization over 6 months in patients with substance use disorders treated with a prescription digital therapeutic. Adv Ther. 2022;39:4146–56.

  49. Velez FF, Anastassopoulos KP, Colman S, Shah N, Kauffman L, Murphy SM, et al. Reduced healthcare resource utilization in patients with opioid use disorder in the 12 months after initiation of a prescription digital therapeutic. Adv Ther. 2022;39:4131–45.

  50. Graham AK, Greene CJ, Powell T, Lieponis P, Lunsford A, Peralta CD, et al. Lessons learned from service design of a trial of a digital mental health service: informing implementation in primary care clinics. Transl Behav Med. 2020;10:598–605.

  51. Glass JE, Matson TE, Lim C, Hartzler AL, Kimbel K, Lee AK, et al. Approaches for implementing app-based digital treatments for drug use disorders into primary care: a qualitative, user-centered design study of patient perspectives. J Med Internet Res. 2021;23:e25866.

  52. Mares ML, Gustafson DH, Glass JE, Quanbeck A, McDowell H, McTavish F, et al. Implementing an mHealth system for substance use disorders in primary care: a mixed methods study of clinicians’ initial expectations and first year experiences. BMC Med Inform Decis Mak. 2016;16:126.

  53. Knowles SE, Lovell K, Bower P, Gilbody S, Littlewood E, Lester H. Patient experience of computerised therapy for depression in primary care. BMJ Open. 2015;5:e008581.

  54. Gilbody S, Littlewood E, Hewitt C, Brierley G, Tharmanathan P, Araya R, et al. Computerised cognitive behaviour therapy (cCBT) as treatment for depression in primary care (REEACT trial): large scale pragmatic randomised controlled trial. BMJ. 2015;351:h5627.

  55. Mohr D, Cuijpers P, Lehman K. Supportive accountability: a model for providing human support to enhance adherence to eHealth interventions. J Med Internet Res. 2011;13:e30.

  56. Thies K, Anderson D, Cramer B. Lack of adoption of a mobile app to support patient self-management of diabetes and hypertension in a federally qualified health center: interview analysis of staff and patients in a failed randomized trial. JMIR Hum Factors. 2017;4:e24.

  57. Quanbeck A, Gustafson DH, Marsch LA, Chih MY, Kornfield R, McTavish F, et al. Implementing a mobile health system to integrate the treatment of addiction into primary care: a hybrid implementation-effectiveness study. J Med Internet Res. 2018;20:e37.

  58. Quanbeck AR, Gustafson DH, Marsch LA, McTavish F, Brown RT, Mares M-L, et al. Integrating addiction treatment into primary care using mobile health technology: protocol for an implementation research study. Implement Sci. 2014;9:65.

  59. Gilbody S, Brabyn S, Lovell K, Kessler D, Devlin T, Smith L, et al. Telephone-supported computerised cognitive-behavioural therapy: REEACT-2 large-scale pragmatic randomised controlled trial. Br J Psychiatry. 2017;210:362–7.

  60. Johansson R, Andersson G. Internet-based psychological treatments for depression. Expert Rev Neurother. 2012;12:861–9; quiz 870.

  61. Andersson G, Cuijpers P. Internet-based and other computerized psychological treatments for adult depression: a meta-analysis. Cogn Behav Ther. 2009;38:196–205.

  62. Park LS, Chih M-Y, Stephenson C, Schumacher N, Brown R, Gustafson D, et al. Testing an mHealth system for individuals with mild to moderate alcohol use disorders: protocol for a type 1 hybrid effectiveness-implementation trial. JMIR Res Protoc. 2022;11:e31109.

  63. Dedert EA, McDuffie JR, Stein R, McNiel JM, Kosinski AS, Freiermuth CE, et al. Electronic interventions for alcohol misuse and alcohol use disorders: a systematic review. Ann Intern Med. 2015;163:205.

  64. Lau AY, Piper K, Bokor D, Martin P, Lau VS, Coiera E. Challenges during implementation of a patient-facing mobile app for surgical rehabilitation: feasibility study. JMIR Hum Factors. 2017;4:e31.

    Article  Google Scholar 

  65. Bickel WK, Marsch LA, Buchhalter AR, Badger GJ. Computerized behavior therapy for opioid-dependent outpatients: a randomized controlled trial. Exp Clin Psychopharmacol. 2008;16:132–43.

    Article  CAS  Google Scholar 

  66. Christensen DR, Landes RD, Jackson L, Marsch LA, Mancino MJ, Chopra MP, et al. Adding an Internet-delivered treatment to an efficacious treatment package for opioid dependence. J Consult Clin Psychol. 2014;82:964–72.

    Article  Google Scholar 

  67. Maricich YA, Nunes EV, Campbell ANC, Botbyl JD, Luderer HF. Safety and efficacy of a digital therapeutic for substance use disorder: Secondary analysis of data from a NIDA clinical trials network study. Substance Abuse. 2022;43:937–42.

    Article  Google Scholar 

  68. Ritchie MJ, Dollar KM, Miller CJ, Oliver KA, Smith JL, Lindsay JA, et al. Using implementation facilitation to improve care in the Veterans Health Administration (Version 2). Quality Enhancement Research Initiative (QUERI) for Team-Based Behavioral Health: Veterans Health Administration; 2017.

    Google Scholar 

  69. Harvey G, Kitson A. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implement Sci. 2016;11:33.

    Article  Google Scholar 

  70. Stetler CB, Legro MW, Rycroft-Malone J, Bowman C, Curran G, Guihan M, et al. Role of “external facilitation” in implementation of research findings: a qualitative evaluation of facilitation experiences in the Veterans Health Administration. Implement Sci. 2006;1:23.

    Article  Google Scholar 

  71. Glass JE, Bobb JF, Lee AK, Richards JE, Lapham GT, Ludman E, et al. Study protocol: A cluster-randomized trial implementing Sustained Patient-centered Alcohol-related Care (SPARC trial). Implement Sci. 2018;13:108.

    Article  Google Scholar 

  72. Kottke TE, Solberg LI, Brekke ML, Conn SA, Maxwell P, Brekke MJ. A controlled trial to integrate smoking cessation advice into primary care practice: Doctors Helping Smokers. Round III. J Fam Pract. 1992;34:701–8.

    CAS  Google Scholar 

  73. Fortney J, Enderle M, McDougall S, Clothier J, Otero J, Altman L, et al. Implementation outcomes of evidence-based quality improvement for depression in VA community based outpatient clinics. Implement Sci. 2012;7:30.

    Article  Google Scholar 

  74. Cockburn J, Ruth D, Silagy C, Dobbin M, Reid Y, Scollo M, et al. Randomised trial of three approaches for marketing smoking cessation programmes to Australian general practitioners. BMJ. 1992;304:691–4.

    Article  CAS  Google Scholar 

  75. Gustafson DH, Quanbeck AR, Robinson JM, Ford JH, Pulvermacher A, French MT, et al. Which elements of improvement collaboratives are most effective? A cluster-randomized trial: NIATx 200 main outcomes. Addiction. 2013;108:1145–57.

    Article  Google Scholar 

  76. Baskerville NB, Liddy C, Hogg W. Systematic review and meta-analysis of practice facilitation within primary care settings. The Annals of Family Medicine. 2012;10:63–74.

    Article  Google Scholar 

  77. Gagliardi AR, Legare F, Brouwers MC, Webster F, Badley E, Straus S. Patient-mediated knowledge translation (PKT) interventions for clinical encounters: a systematic review. Implement Sci. 2016;11:26.

    Article  Google Scholar 

  78. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50:217–26.

    Article  Google Scholar 

  79. Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implementation Sci. 2007;2:40.

    Article  Google Scholar 

  80. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38:65–76.

    Article  Google Scholar 

  81. Fortney JC, Pyne JM, Burgess JF. Population-level cost-effectiveness of implementing evidence-based practices into routine care. Health Serv Res. 2014;49:1832–51.

    Google Scholar 

  82. Chan A-W, Tetzlaff JM, Altman DG, Laupacis A, Gøtzsche PC, Krleža-Jerić K, et al. SPIRIT 2013 Statement: Defining Standard Protocol Items for Clinical Trials. Ann Intern Med. 2013;158:200.

  83. Maricich YA, Gerwien R, Kuo A, Malone DC, Velez FF. Real-world use and clinical outcomes after 24 weeks of treatment with a prescription digital therapeutic for opioid use disorder. Hospital Practice. 2021;49:348–55.

    Article  Google Scholar 

  84. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.

    Article  Google Scholar 

  85. Nilsen P. Making sense of implementation theories, models and frameworks. Implementation Science. 2015;10:53.

    Article  Google Scholar 

  86. Helfrich CD, Li Y-F, Sharp ND, Sales AE. Organizational readiness to change assessment (ORCA): development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implementation Science. 2009;4:38.

    Article  Google Scholar 

  87. Glass JE, Tiffany B, Matson TE, Lim C, Gundersen G, Kimbel K, et al. Approaches for implementing digital interventions for alcohol use disorders in primary care: A qualitative, user-centered design study. Implementation Research and Practice. 2022;3:263348952211352.

    Article  Google Scholar 

  88. Ritchie MJ, Dollar KM, Miller CJ, Smith, JL, Oliver KA, Kim B, et al. Using implementation facilitation to improve healthcare (Version 3). Veterans Health Administration, Behavioral Health Quality Enhancement Research Initiative (QUERI); 2020. Available from: https://www.queri.research.va.gov/tools/Facilitation-Manual.pdf.

  89. Real Balance Global Wellness Services Inc. 2022. Available from: https://realbalance.com/.

  90. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.

    Article  Google Scholar 

  91. Richards JE, Bobb JF, Lee AK, Lapham GT, Williams EC, Glass JE, et al. Integration of screening, assessment, and treatment for cannabis and other drug use disorders in primary care: An evaluation in three pilot sites. Drug Alcohol Depend. 2019;201:134–41.

    Article  Google Scholar 

  92. Matson TE, Lapham GT, Bobb JF, Johnson E, Richards JE, Lee AK, et al. Cannabis use, other drug use, and risk of subsequent acute care in primary care patients. Drug Alcohol Depend. 2020;108227:108227.

    Article  Google Scholar 

  93. Glasgow RE, Phillips SM, Sanchez MA. Implementation science approaches for integrating eHealth research into practice and policy. Int J Med Inform. 2014;83:e1–11.

    Article  Google Scholar 

  94. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–7.

    Article  CAS  Google Scholar 

  95. Pear Therapeutics. reSET-O Digital Mobile Therapy: Clinician Directions for Use. Boston; 2018. Report No.: #CDFU-0002, Rev A

  96. Pear Therapeutics. reSET Digital Mobile Therapy: Clinician Directions for Use. Boston; 2018. Report No.: #CDFU-0001, Rev D

  97. Kugler KC, Trail JB, Dziak JJ, Collins LM. Effect coding versus dummy coding in analysis of data from factorial experiments. University Park, PA: The Methodology Center, Pennsylvania State University; 2012.

    Google Scholar 

  98. Wu CJ, Hamada MS. Experiments: planning, analysis, and optimization, vol. 552. Wiley; 2011.

  99. Collins LM, Dziak JJ, Kugler KC, Trail JB. Factorial experiments: efficient tools for evaluation of intervention components. Am J Prev Med. 2014;47:498–504.

    Article  Google Scholar 

  100. Eldridge S, Campbell M, Campbell M, Drahota-Towns A, Giraudeau B, Higgins J et al. Revised Cochrane risk of bias tool for randomized trials (RoB 2.0): additional considerations for cluster-randomized trials. 2016.

  101. Eldridge S, Kerry S, Torgerson DJ. Bias in identifying and recruiting participants in cluster randomised trials: what can be done? BMJ. 2009;339:b4006.

    Article  Google Scholar 

  102. Bobb JF, Qiu H, Matthews AG, McCormack J, Bradley KA. Addressing identification bias in the design and analysis of cluster-randomized pragmatic trials: a case study. Trials. 2020;21:1–12.

    Article  Google Scholar 

  103. Bobb J, Cook A, Shortreed S, Glass J, Vollmer W. Experimental designs and randomization schemes: designing to avoid identification bias. In: Rethinking Clinical Trials: A Living Textbook of Pragmatic Clinical Trials [Internet]. Bethesda, MD: NIH Health Care Systems Research Collaboratory; 2019. Available from: http://rethinkingclinicaltrials.org/chapters/design/experimental-designs-randomization-schemes-top/designing-to-avoid-identification-bias/.

    Google Scholar 

  104. Dakin H, Gray A. Economic evaluation of factorial randomised controlled trials: challenges, methods and recommendations: Factorial trials: Methods for economic evaluation. Statist Med. 2017;36:2814–30.

    Article  Google Scholar 

  105. Baker JJ. Activity-based Costing and Activity-based Management for Health Care: Jones & Bartlett Learning; 1998.

    Google Scholar 

  106. Yeung K, Richards J, Goemer E, Lozano P, Lapham G, Williams E, et al. Costs of using evidence-based implementation strategies for behavioral health integration in a large primary care system. Health Serv Res. 2020;55:913–23.

    Article  Google Scholar 

  107. Miller CJ, Barnett ML, Baumann AA, Gutner CA, Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare. Implement Sci. 2021;16:36.

    Article  Google Scholar 

  108. Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implementation Sci. 2019;14:58.

    Article  Google Scholar 

  109. Hamilton AB. Qualitative methods in rapid turn around health services research. Department of Veterans Affairs; 2013. Available from: https://www.hsrd.research.va.gov/for_researchers/cyber_seminars/archives/780-notes.pdf.

  110. Hamilton AB, Finley EP. Qualitative methods in implementation research: An introduction. Psychiatry Research. 2019;280:112516.

    Article  Google Scholar 

  111. Bach-Mortensen AM, Lange BCL, Montgomery P. Barriers and facilitators to implementing evidence-based interventions among third sector organisations: a systematic review. Implementation Sci. 2018;13:103.

    Article  Google Scholar 


Acknowledgments

The authors are grateful to the patients and clinicians of Kaiser Permanente Washington. We thank Rebecca Parrish, MSW, LICSW, of Kaiser Permanente Washington for her important contributions to the study's design and her continued implementation support. We also thank the scientific members of the DIGITS Trial Steering Committee, Mark McGovern, Geoffrey Curran, and Sarah Becker, for their feedback on the study design and conduct. The authors thank the faculty and fellows of the National Institutes of Health Implementation Research Institute, the Summer Institute on Randomized Behavioral Clinical Trials, and the Mixed Methods Research Training Program for their feedback on early conceptualizations of this grant proposal.

Funding

Research reported in this publication was supported by the National Institute on Drug Abuse of the National Institutes of Health under Award Number R01DA047954. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The study funder had no role in the study design; the collection, management, analysis, or interpretation of data; the writing of the report; or the decision to submit the report for publication, and had no authority over any of these activities.

Author information


Contributions

Funding acquisition and the first full manuscript draft were completed by JEG. CND made substantial contributions to the first full manuscript draft. Study conceptualization and design were conducted by JEG, ESW, JCF, RMC, and KAB. Data acquisition is being carried out by JEG, CND, TB, JFB, LP, ESW, DK, JM, KS, DK, AGM, RT, and RMC. AI, DK, JFB, ESW, and JEG are conducting the data analysis. All authors will contribute to data analysis and interpretation. All authors were involved in editing and approving the manuscript.

Corresponding author

Correspondence to Joseph E. Glass.

Ethics declarations

Ethics approval and consent to participate

The Kaiser Permanente Washington Institutional Review Board granted ethical approval. The primary data collection for the economic evaluation was determined to be exempt from review. A waiver of documentation of consent was granted for health coaching. Waivers of consent were granted to randomize clinics and to access, use, and collect data for the formative evaluation, practice facilitation, and quantitative evaluation, because the study was deemed to pose no more than minimal risk and could not otherwise feasibly be conducted.

Consent for publication

Not applicable.

Competing interests

Pear Therapeutics, Inc. provided digital therapeutic prescriptions at no cost to Kaiser Permanente Washington during a quality improvement pilot study that preceded this implementation trial. During the current trial, the study's National Institutes of Health grant (R01DA047954) pays Pear Therapeutics for digital therapeutic prescriptions and other support. JEG is a coinvestigator on a Small Business Innovation Research award funded by the National Institute on Drug Abuse and awarded to Pear Therapeutics, Inc., which evaluates potential improvements to reSET-O (R44DA042652). Pear Therapeutics does not provide funding to the authors.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Description of primary care team structure and availability of substance use disorder care at the participating health system.

Additional file 2.

 Identification and Selection of the Digital Therapeutic.

Additional file 3.

 Specification and reporting of the DIGITS Trial implementation strategies.

Additional file 4.

Statistical power analysis details.

Additional file 5.

Additional Information about Implementation Costs Data Collection.

Additional file 6.

Formative Evaluation Field Notes Template. DIGITS Trial Meeting Field Notes – Active Implementation.

Additional file 7.

Formative evaluation questions.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Glass, J.E., Dorsey, C.N., Beatty, T. et al. Study protocol for a factorial-randomized controlled trial evaluating the implementation, costs, effectiveness, and sustainment of digital therapeutics for substance use disorder in primary care (DIGITS Trial). Implementation Sci 18, 3 (2023). https://doi.org/10.1186/s13012-022-01258-9

