
The Balanced Opioid Initiative: protocol for a clustered, sequential, multiple-assignment randomized trial to construct an adaptive implementation strategy to improve guideline-concordant opioid prescribing in primary care



Rates of opioid prescribing tripled in the USA between 1999 and 2015 and were associated with significant increases in opioid misuse and overdose death. Roughly half of all opioids are prescribed in primary care. Although clinical guidelines describe recommended opioid prescribing practices, implementing these guidelines in a way that balances effectiveness against risk remains a challenge. The literature offers little guidance about which implementation strategies work best in different clinical settings or how strategies could be tailored to optimize their effectiveness in different contexts. This study tests systems consultation, a blend of three implementation strategies: (1) educational/engagement meetings with audit and feedback reports, (2) practice facilitation, and (3) prescriber peer consulting. The study is designed to discover the most cost-effective sequence and combination of strategies for improving opioid prescribing practices in diverse primary care clinics.


The study is a hybrid type 3 clustered, sequential, multiple-assignment randomized trial (SMART) that randomizes clinics from two health systems at two points, months 3 and 9, of a 21-month intervention. Clinics are provided one of four sequences of implementation strategies: (1) educational/engagement meetings and audit and feedback alone (EM/AF), (2) EM/AF + practice facilitation (PF), (3) EM/AF + prescriber peer consulting (PPC), and (4) EM/AF + PF + PPC. The study’s primary outcome is morphine-milligram equivalent (MME) dose by prescribing clinicians within clinics. The study’s primary aim is to compare EM/AF + PF + PPC versus EM/AF alone on change in MME from month 3 to month 21. The secondary aim is to derive and compare cost estimates for each of the four sequences. The exploratory aim is to examine four tailoring variables that can be used to construct an adaptive implementation strategy to meet the needs of different primary care clinics.


Systems consultation is a practical blend of implementation strategies used in this case to improve opioid prescribing practices in primary care. The blend offers a range of strategies in sequences from minimally to substantially intensive. The results of this study promise to help us understand how to cost-effectively improve the implementation of evidence-based practices.

Trial registration

NCT04044521. Registered 05 August 2019.


Opioids are commonly prescribed in primary care to relieve chronic non-cancer pain. Although opioids are indicated for some patients, no scientifically rigorous studies with adequate periods of observation are available to optimally guide patient selection and monitoring practices [1]. The accompanying burdens have become clear and widespread. By 2017, drug overdose was the leading cause of accidental death in the USA. Although the volume of opioids prescribed in the USA declined each year from 2010 to 2015, about three times more opioids were prescribed per person in 2015 as in 1999, and prescribing rates still vary greatly, with the highest-prescribing counties prescribing six times more opioids per person than the lowest-prescribing counties [2]. About half of opioid prescriptions are written in primary care [1, 3]. Clinical guidelines for opioid prescribing in primary care have been advanced, most notably those issued by the US Centers for Disease Control and Prevention (CDC) in 2016 [1]. Clinical guidelines provide expert consensus around a few basic ideas: (1) Clinicians should discuss the risks and benefits of opioid therapy with patients by reviewing and signing formal treatment agreements before initiating the first opioid dose and throughout treatment [1]. (2) Clinicians should avoid prescribing opioids in doses higher than 90–100 morphine milligram equivalent (MME) daily, since evidence shows that patients with a dose of 100 MME or greater are 11 times more likely to die from overdose than patients taking doses less than 20 MME [4,5,6]. (3) Patients at increased risk for misuse (i.e., those with mental health or substance use disorders) are more likely to receive opioid prescriptions and higher doses; thus, screening for mental health and substance use disorders should be in place [4, 7,8,9]. (4) Opioid-benzodiazepine co-prescribing in any combination of doses should be avoided to reduce the risk of overdose [10]. (5) Monitoring via urine drug testing should be instituted to ensure appropriate use of opioid medications [11].

The CDC’s guideline and widespread media attention to the problem of opioid prescribing have produced gains. In July 2019, the CDC published provisional data showing a 5.1% decrease in drug overdose deaths (most of which involve opioids) in 2018 compared with 2017 [12]. Despite this positive news, in June 2019 the authors of the CDC guideline published a warning about misapplications of the guideline, such as reducing opioids used to treat cancer pain, rapidly tapering long-term opioid users, suddenly discontinuing opioid therapy, and dismissing patients taking opioids from practices [13]. In October 2019, the US Department of Health and Human Services published a guide warning clinicians against abruptly discontinuing or tapering patients from long-term opioid use [14, 15]. Clearly, guideline-concordant care is complicated and remains a challenge in 2020.

More broadly, the US healthcare system is notoriously slow to adopt established guidelines or other evidence-based practices (EBPs), regardless of the condition [16]. Lau et al. conducted a 2015 review [17] of 91 studies aimed at determining the effectiveness of strategies for the implementation of complex interventions in primary care settings. The most commonly used strategies were targeted at individual providers, generally demonstrating modest effects, with considerable variability in effectiveness between studies. The authors found little use of implementation strategies targeted at organizations (e.g., the clinic) or a wider context (e.g., health systems). Finally, the review found very limited data on the costs and cost-effectiveness of different implementation strategies. The authors concluded that the literature remains unclear about which implementation strategies should be used under what conditions, and that future research should study implementation strategies targeted at levels broader than individual providers.

The current study (R01DA047279)—The Balanced Opioid Initiative—builds on preliminary research conducted in a pilot study funded by a 3-year R34 clinical trial planning grant from the National Institute on Drug Abuse (R34-DA036720), the results of which were published in 2018 [18]. The mixed-methods pilot study tested the feasibility and acceptability of systems consultation, which is a theoretically and empirically grounded [18], multi-component implementation intervention [19, 20].

In the pilot, systems consultation consisted of a bundle of three discrete implementation strategies provided contemporaneously to four primary care clinics over the course of a 6-month period: (1) audit and feedback (henceforth AF, providing baseline performance feedback to clinics and ongoing information that points to progress and opportunities for improvement), (2) practice facilitation (PF, help in tailoring guidelines to specific clinical contexts and processes through workflow assessment and workflow changes), and (3) prescriber peer consulting (PPC, in which a respected physician expert in opioid management provides advice on how to improve clinical practice and address issues with more challenging cases). Four clinics received systems consultation, and four clinics served as controls; see [18] for details. During the 6-month intervention period, the systems consultation implementation team (consisting of a physician expert and a practice facilitator) met with clinic staff in monthly site visits, videoconferences, or teleconferences. The systems consultation strategy generally proved to be feasible and acceptable: Attendance by clinic staff at intervention meetings was 83%, and more than 80% of intervention participants agreed or strongly agreed with the statements, “I am more familiar with guidelines for safe opioid prescribing” and “My clinic’s workflow for opioid prescribing is easier.” In addition, systems consultation appeared to be effective: Compared with control clinics, intervention clinics reduced average morphine-milligram equivalent (MME) dose for patients on long-term opioid therapy by 19.7% over 12 months.

A qualitative formative evaluation conducted during the pilot study [21] yielded a number of observations suggesting modifications to the systems consultation intervention. We used participant observation, focus groups, interviews, and activity tracking to collect data used to follow implementation over time and explore what worked well and what required modification. Synthesizing these data and discussing them among research team members informed plans for the current study as follows.

(1) The absence of a dedicated informational session at the health system level—i.e., a meeting designed to inform and engage participants on a wide scale—was a missed opportunity to (a) galvanize common support and engage all primary care clinics and health system leaders around the importance of guideline-concordant prescribing and (b) introduce audit and feedback (a health-system-level strategy) to clinicians and frontline staff. The absence of such a meeting also meant that (c) initial clinic visits often involved repeatedly explaining the rationale for systems consultation, leading to greater cost to and time burden on the implementation team.

(2) Clinicians and frontline staff were better prepared to work on the activities that are part of PF (e.g., workflow changes) after receiving clinic-level performance data supplied by AF reports, which summarized baseline performance and provided a vehicle for ongoing progress monitoring.

(3) We developed a team-based implementation and engagement model using both a physician expert and a practice facilitator because it quickly became clear that assigning sole responsibility to the physician expert for advising, communicating, and coordinating with change teams (at the clinic) was overly burdensome and not scalable. We added a practice facilitator to support the physician expert at all monthly PF sessions and facilitate follow-up communications over the course of 6 months. However, it appeared that these roles (physician expert and facilitator) could feasibly be sequenced into separate implementation stages with different foci and different targets of action (i.e., focusing initially on support staff, and later, if needed, on both support staff and prescribers).

(4) In three of four pilot clinics, clinicians and frontline staff charged with implementing new practices often lacked the time and skills to conduct organizational change projects and benefitted from monthly follow-up; yet in one of the four clinics, a single site visit led by a respected physician peer consultant was all that was needed to enact change, lessening the value of subsequent follow-up engagements. This clinic gave us reason to believe that an effective educational and engagement meeting might be sufficient to effect positive change in some clinics, without the need for more intensive implementation support.

(5) Different clinics prioritized different focal problems that may be better addressed using different combinations of the implementation strategies. For example, regular urine drug screening can be addressed by PF; opioid-benzodiazepine co-prescribing can be addressed by PPC.

(6) Candidate determinants of the success of different implementation strategies were identified across multiple levels: the existence of a related health system policy (at the health system level), the overall size of the clinic panel and experience doing quality improvement work (at the clinic level), and the number of high-dose opioid patients and “inherited” opioid patients (at the prescriber level).

These insights suggested a set of modifications to systems consultation that informed the design of the current study. We added an educational/engagement meeting (EM, with the option of receiving continuing medical education credits) as a complementary health system strategy alongside AF. Our experience with the pilot suggested the need for a sequenced approach to the strategies within systems consultation, whereby potentially more intensive strategies are offered only after providing less intensive strategies. In the current study, we operationalize this idea by first providing EM/AF to all clinics in a first stage of implementation, then considering whether to augment EM/AF with PF in a second stage of implementation, and then considering whether to augment with PPC in a third stage of implementation. This approach—considering whether to progress from broad organizational support via EM/AF (the least intensive set of strategies) to support for individual prescribers via PPC (the most intensive)—aligns well with existing multi-level frameworks guiding implementation [19, 22] (see Fig. 1). Finally, our experience also suggested the need to consider a tailored approach to delivering the strategies within systems consultation [23, 24], whereby different clinics may be provided PF depending on their needs following EM/AF, and different clinics may be provided PPC depending on their needs following prior intervention (e.g., EM/AF followed by PF).

Fig. 1
figure 1

Theoretical and empirical framework. Sources: Ferlie and Shortell [22]; Powell [19]

This combination of a sequenced and tailored approach to systems consultation is known as an adaptive implementation strategy [25,26,27,28,29,30]. However, two gaps prevent us from evaluating or recommending such an approach to opioid prescribing in primary care using the results of our pilot alone. First, there is currently no empirical support for the effectiveness of the PF and/or PPC strategies (singly or jointly) following EM/AF. Second, there is no empirical support or guidance on whether or how—i.e., based on which measures—to tailor PF or PPC to the needs of different clinics. This study aims to address these important gaps in the literature.

Overall goal

The ultimate goal of this optimization study is a proposal for an adaptive implementation strategy, adaptive systems consultation, that provides clear guidance to implementation decision-makers about when and for which clinics to provide PF and PPC in the context of EM/AF for the purpose of improving opioid prescribing in primary care settings. We will do this by addressing the following specific aims. Taken together, these aims test the effectiveness of the PF and PPC strategies and seek to improve our understanding of how best to dynamically tailor PF and PPC to the needs of the primary care clinics.

Primary aim

The study’s primary aim is to compare (1) clinics offered the most intensive sequence of strategies (EM/AF + PF + PPC) vs. (2) clinics offered the least intensive strategy (EM/AF alone) on change in clinic-level, average morphine-milligram equivalent dose (the primary outcome) from the end of month 3 (the point at which a clinic may be randomized to PF) to the end of month 21 (end of intervention).

Secondary aim

The study will also estimate the cost of delivering four different sequences and combinations of strategies (EM/AF, EM/AF + PF, EM/AF + PPC, and EM/AF + PF + PPC), including the incremental cost-effectiveness of adding practice facilitation and prescriber peer consulting. Results will help decision-makers weigh the costs and effects of using different sequences of implementation strategies.

Exploratory analyses

The study will conduct exploratory analyses to understand contextual factors that influence the effectiveness of the different sequences of implementation strategies. We conjecture that two factors are useful to consider in deciding at the end of month 3 whether to add PF: (a) a clinic’s experience doing quality improvement and (b) clinic size. At the end of month 9, we posit that those two factors plus two others are useful to consider in deciding whether to add PPC: (c) whether PF was offered at the end of month 3 and (d) the number or percentage of high-dose opioid patients at the end of month 9. Put another way, these factors are candidate moderators of the effect of the implementation strategies. We had also identified the existence of a health system’s opioid prescribing policy as a candidate variable, but since both systems in the trial have such policies, the study offers no variation. Qualitative methods will be used to address two other important questions: (1) how the implementation strategies that make up systems consultation can be further specified, including adaptations made during the trial, and (2) whether a quantitative assessment can be developed to help decision-makers tailor systems consultation to different contexts. A tailoring assessment will be developed, based on a literature review and the findings of the pilot study [18, 21], to assess whether other system-, clinic-, and prescriber-level contextual factors also ought to be considered as tailoring variables in the decisions concerning whether to offer PF and PPC. We will then test the usefulness of these additional variables in the analysis of the trial.


Trial design

The trial is an unrestricted, 2 × 2, clustered, sequential, multiple-assignment randomized trial (SMART) [25,26,27] (see Fig. 2). Because our goal is to test the effectiveness of the components of an implementation strategy while gathering clinical data related to patient outcomes, the study can be viewed as a hybrid type 3 effectiveness-implementation trial [30]. All clinics will receive EM/AF to start. At the end of 3 months, half of the clinics will be randomly assigned to receive PF for 18 months. At the end of 9 months, a second randomization occurs in which half of the clinics will receive PPC for 12 months in addition to previously assigned strategies. Clinics will have equal probability of being assigned to one of the four implementation sequences represented by the boxes (A, B, C, and D) on the right side of Fig. 2.

Fig. 2
figure 2

Study design. EM: Educational/engagement meeting; AF: Audit with monthly feedback reports; R: Randomization point; PF: Practice facilitation; PPC: prescriber peer consulting


We are recruiting primary care clinics from two health systems in the Midwestern USA. One health system operates in northeastern Wisconsin and the southern part of the Upper Peninsula of Michigan, predominantly rural areas that have been particularly hard hit by prescription opioid misuse. The second health system operates in the Madison, Wisconsin, metropolitan area and the surrounding, predominantly rural region of south-central Wisconsin. Efforts to interest and involve clinics in the trial took place in the fall of 2019, and a run-in period started with the first educational/engagement meeting on February 13, 2020. The run-in period is planned to end and recruitment to close on May 13, 2020, when randomization takes place.

Clinic inclusion/exclusion criteria

Only primary care clinics (non-pediatric primary care, internal medicine, and family medicine) will be approached to participate. Clinics that explicitly prohibit initiating opioid therapy will be excluded (e.g., some clinics require that opioids be initiated by a specialty pain clinic). At baseline (before the start of EM/AF), a clinic will be considered ineligible if it already shows exemplary performance on key measures of guideline concordance and would thus receive no benefit from the implementation support we would provide. Specifically, we define a clinic as ineligible if it meets these criteria: (1) 80% or more of a clinic’s long-term opioid patients have treatment agreements and a urine drug screen in the last 12 months and (2) fewer than 10% of the clinic’s patients on long-term opioid therapy have doses above 90 MME. Figure 3 shows the participant flow.
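The ineligibility rule above can be expressed as a simple screening check. The sketch below is illustrative only: the function and argument names are assumptions, not part of the protocol; the thresholds (80% treatment-agreement and urine-drug-screen coverage, 10% of long-term opioid patients above 90 MME) come from the criteria above.

```python
def clinic_is_eligible(pct_with_agreement_and_uds: float,
                       pct_above_90_mme: float) -> bool:
    """Return True if the clinic should be enrolled (hypothetical sketch).

    A clinic is *ineligible* (already exemplary) when BOTH hold:
      - >= 80% of its long-term opioid patients have a treatment agreement
        and a urine drug screen in the last 12 months, and
      - < 10% of its long-term opioid patients have doses above 90 MME.
    """
    exemplary = (pct_with_agreement_and_uds >= 80.0
                 and pct_above_90_mme < 10.0)
    return not exemplary
```

A clinic meeting only one of the two exemplary-performance criteria would still be enrolled, since it could plausibly benefit from implementation support.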

Fig. 3
figure 3

Participant flow. EM, Educational/engagement meeting; AF, Audit with monthly feedback reports; PF, Practice facilitation; PPC, prescriber peer consulting

Prescriber inclusion/exclusion criteria

Prescribers must be primary care physicians or other providers with prescribing privileges (e.g., nurse practitioners, physician assistants). We will exclude “float” providers (temporary physicians who do not manage stable panels of patients).

Patient inclusion/exclusion criteria

Patients included in the calculation of the prescriber-level outcome will have an opioid prescription documented in the electronic health record in each of the three most recent consecutive months, indicating long-term opioid use. We will exclude patients from the calculation who have a cancer diagnosis or are receiving hospice care.
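As a rough sketch, the inclusion rule could be applied to EHR extracts as follows; the function name, data structures, and month encoding are assumptions for illustration, not the study's actual data pipeline.

```python
def is_long_term_opioid_patient(rx_months: set,
                                current_month: int,
                                has_cancer: bool,
                                in_hospice: bool) -> bool:
    """Hypothetical check for long-term opioid use.

    rx_months: months (as sequential integers) in which the patient had an
               opioid prescription documented in the EHR.
    A patient qualifies with a prescription in each of the three most
    recent consecutive months, unless excluded for a cancer diagnosis
    or hospice care.
    """
    if has_cancer or in_hospice:
        return False
    recent_three = {current_month, current_month - 1, current_month - 2}
    return recent_three.issubset(rx_months)
```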

Randomization and stratification

After 3 months of EM/AF, eligible clinics will be stratified by health system, average number of patients at the clinic prescribed opioids over the first 3 months (above or below the median), and average MME over the first 3 months (above or below the median); clinics will then be randomly assigned with equal probability to the PF or no PF arm within each of the eight strata. At 9 months, clinics will again be stratified by health system, average number of patients at the clinic prescribed opioids over the past 3 months (median cut), and average MME over the past 3 months (median cut); clinics within each of the resulting strata will be randomly assigned with equal probability to the PPC or no PPC arm. The project statistician will generate the random allocation sequence using a random number generator to perform block randomization with blocks of two and four. The study coordinator will enroll clinics and assign them to their randomized group. Consent will be obtained from prescribers and other clinic staff who participate in study activities.
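The stratification and permuted-block scheme described above can be sketched as follows. This is a simplified illustration, not the statistician's actual allocation procedure: the function, field names, and the way block sizes are drawn are all assumptions.

```python
import random
from collections import defaultdict

def assign_within_strata(clinics, seed=0):
    """Stratified permuted-block randomization (illustrative sketch).

    clinics: list of dicts with keys 'id', 'system', 'volume_hi' (above
    median opioid-patient volume), and 'mme_hi' (above median MME).
    Returns {clinic_id: 'PF' or 'no PF'}.
    """
    rng = random.Random(seed)

    # Group clinics into strata: system x volume median-cut x MME median-cut.
    strata = defaultdict(list)
    for c in clinics:
        strata[(c["system"], c["volume_hi"], c["mme_hi"])].append(c["id"])

    assignment = {}
    for members in strata.values():
        i = 0
        while i < len(members):
            block = rng.choice([2, 4])      # blocks of two and four
            ids = members[i:i + block]
            # Equal numbers of each arm within the block, then shuffle.
            arms = (["PF", "no PF"] * (len(ids) // 2 + 1))[:len(ids)]
            rng.shuffle(arms)
            assignment.update(zip(ids, arms))
            i += block
    return assignment
```

Because each completed block contains equal numbers of PF and no PF assignments, arm sizes stay balanced within every stratum as enrollment proceeds.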

Implementation strategies

Educational/engagement meetings

Educational/engagement meetings (EM) are conceived as a low-intensity, system-level implementation strategy. EM uses a broadcast model of communication, in which one or two experts impart information to many clinicians at once. Educational/engagement meetings will take place at the beginning of the study and then quarterly; a total of six will take place during the 21-month intervention period. The first will be a regionally hosted, in-person training session for each health system; attendees will have the option of participating via webinar. Before the first educational/engagement meeting, the implementation team will ask each clinic’s medical director to identify a change team leader to work with a group of three to seven clinic staff members on improving workflows related to opioid prescribing, and will ask the medical director and change team leader to identify the other members of the clinic’s change team. At a minimum, we will ask each clinic’s medical director, clinic manager, and change team leader to attend the educational/engagement meeting, but all change team members and others involved in clinic workflows related to opioid prescribing—prescribers (physicians, nurse practitioners, physician assistants), nurses, medical assistants, lab techs, and so on—will be invited. The educational/engagement meeting will be led by physicians from the University of Wisconsin-Madison with expertise in primary care and addiction medicine and extensive experience managing the care of long-term opioid patients. The session will be designed both to impart information and to elicit early engagement and enthusiasm from clinic staff.

As part of educational/engagement meetings, the implementation team will ask system leaders and the medical directors of participating clinics about the challenges they face in opioid prescribing for patients with chronic non-cancer pain. Their responses will be used to focus each presentation on the issues most salient to each system. In addition to covering the current status of opioid prescribing in the system, the sessions will explain how implementation efforts address key issues identified by participants and include time for questions and answers. Subsequent educational/engagement meetings will take place as webinars, be led by physician experts, and cover such topics as effective tapering, opioid rotation, and balancing goals for quality of life, pain, and opioid dose.

Audit and feedback

The AF implementation strategy involves sending system-generated performance feedback reports to participating clinicians. System- and clinic-level feedback reports will be introduced at the first educational/engagement meeting to initiate AF and referred to in all subsequent meetings. After the first meeting, a data coordinator at each clinic (a change team member asked to assume this role) will access and distribute monthly feedback reports to other change team members, prescribers, and the clinic’s medical director and manager.

The combination of EM and AF represents a blended, system-level implementation strategy. While system-level implementation strategies are relatively inexpensive and easy to scale, such strategies have limited evidence of effectiveness [17]. Nonetheless, simply learning and being convinced about what to do with respect to opioid prescribing—and having access to performance data that can be used to guide changes—may suffice to improve prescribing in some clinics. Moreover, the provision of subsequent strategies (e.g., PF and PPC) is expected to build synergistically on the foundational knowledge and reports generated by EM and AF, respectively.

Practice facilitation

Practice facilitation (PF) is a clinic-level strategy that targets clinic processes and workflows. In general, practice facilitation focuses on local customization and has a stronger evidence base than educational/engagement meetings and audit and feedback reports, which are usually not tailored to specific clinics [31]. Practice facilitation is also more labor intensive than educational/engagement meetings and audit and feedback reports. Solving workflow problems (e.g., streamlining processes around opioid prescription refills), which practice facilitation addresses, may be the key to improvement in some clinics.

In a clinic randomized to receive practice facilitation, an external change agent trained in practice facilitation (the facilitator) will visit the clinic in person and then follow up over the course of five monthly and four quarterly videoconferences or teleconferences to help the clinic improve processes related to opioid prescribing, such as (1) ensuring that treatment agreements are initiated and regularly updated and (2) integrating urine drug testing into clinic workflows. A change team consisting of a change team leader, a data coordinator responsible for distributing AF reports, at least one prescriber, and up to four other staff members will lead each clinic’s work on opioid prescribing. The facilitator will work with the change team to use systems engineering tools (e.g., walkthrough exercises, flowcharting, and the nominal group technique [32, 33]) to make changes in clinic workflows. The facilitator will reinforce the content of the EMs and guide teams in using their clinic- and prescriber-level AF reports to monitor progress toward goals.

Prescriber peer consulting

Prescriber peer consulting (PPC) is a prescriber-level strategy that aims to help prescribers manage their patients on long-term opioids by providing the opportunity to consult with a physician experienced in opioid management. PPC will be available to all prescribers at clinics randomized to receive this strategy; hence, it is conceptualized in this trial as a clinic-level strategy in terms of its delivery. Peer consultants will be physicians or pharmacists with relevant experience in opioid prescribing and addiction medicine nominated by health system leaders to help their peers manage patients on long-term opioid therapy (e.g., how to manage the tapering of opioid doses for long-term opioid patients with clinically indicated dose reductions). Participating prescribers in clinics randomized to PPC (including nurse practitioners and physician assistants) will receive up to four quarterly consulting sessions over 12 months. Consultations will be delivered via videoconference or teleconference. Prescribers may choose to include other staff at their clinic (e.g., RNs, MAs) in these consultations as well. Prescriber peer consulting is highly resource intensive, but our preliminary research suggests that physician-to-prescriber interaction may be the most effective way to change prescribing behavior.

Measures and outcomes

The study uses the RE-AIM model as an evaluation framework [34]. RE-AIM is a comprehensive evaluation framework that assesses five dimensions: Reach, Effectiveness, Adoption, Implementation, and Maintenance. Specific measures for each RE-AIM dimension are presented in Table 1. Evaluation data will come primarily from electronic health records (EHRs). Both health systems use Epic Systems’ EHR, which will facilitate data extraction. One system was the site of our pilot research, during which detailed specifications were developed that will be used to ensure consistent data definitions across both systems. The primary outcome is the prescriber-level average MME dose per day per opioid patient, calculated over a 3-month period.
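As an illustration of how the primary outcome might be constructed for one prescriber and one 3-month window, consider the sketch below. The input format (a mapping from patient to average daily MME over the window) and the function name are assumptions, not the study's actual extraction logic.

```python
def prescriber_mean_mme(daily_mme_by_patient: dict) -> float:
    """Average daily MME across a prescriber's long-term opioid patients
    for one 3-month window (hypothetical sketch).

    daily_mme_by_patient maps patient ID -> that patient's average daily
    MME over the window, already restricted to eligible long-term opioid
    patients (no cancer diagnosis, not in hospice).
    """
    if not daily_mme_by_patient:
        return 0.0
    return sum(daily_mme_by_patient.values()) / len(daily_mme_by_patient)
```

Repeating this calculation every 3 months yields the repeated measures that feed the three-level (measurements within prescribers within clinics) longitudinal analysis.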

Table 1 Outcome measures by RE-AIM category

Primary aim analyses

All clinics randomized at the end of month 3 will be included in the intent-to-treat sample for all aims. The primary research outcome, MME, will be available for all prescribers within all clinics that consent to be in the study (approximately 6 prescribers per clinic). Table 2 shows the sequences of implementation strategies that will be employed in the trial.

Table 2 Sequences of implementation strategies

For the primary aim, we will determine the effect of strategy sequence D (the most intensive sequence of strategies) vs. strategy sequence A (the least intensive strategy) on change in MME from intervention months 3 to 21. Strategy sequence D offers EM/AF during months 3–21, augments with PF during months 4–21, and then further augments with PPC during months 10–21. By contrast, strategy sequence A offers EM/AF but never offers PF or PPC. The primary analysis is a longitudinal (repeated-measures) comparison of mean change in MME between strategy sequences D and A. Time will be coded such that t = 0 denotes the end of month 3 of the intervention period (the initial randomization); in the following text, data collected prior to t = 0 are considered baseline data. The primary outcome (MME) is a continuous measure collected at intervention month 3 (t = 0, immediately prior to randomization) and every 3 months thereafter up to intervention month 21 (t = 18). Because the outcome is an average over 3 months, there are a total of 7 measurement occasions. This is a 3-level analysis: repeated measures of MME, within prescribers, within clinics.

A piecewise-linear model with a knot at intervention month 9 (t = 6, MME collected immediately before the second randomization) will be used to model the temporal trajectories over the course of intervention months 4–21. Equation 1 displays the planned longitudinal model we will use to model the mean MME over time and test the primary aim.

Equation 1 Longitudinal model for mean MME

$$ \eta'X + \gamma_0 + I_{(t \le 6)}\left(\gamma_1 t + \gamma_2 t A_1\right) + I_{(t > 6)}\left(6\gamma_1 + 6\gamma_2 A_1 + \gamma_3(t-6) + \gamma_4(t-6)A_1 + \gamma_5(t-6)A_2 + \gamma_6(t-6)A_1A_2\right) $$

X is a vector of mean-centered baseline covariates (clinic aggregate MME at t = 0 and a dummy indicator for health system), A1 is the indicator for the first randomization (PF = 1 vs. no PF = −1), and A2 is the indicator for the second randomization (PPC = 1 vs. no PPC = −1). The model specifies a linear trend from t = 0 to t = 6 for prescribers in PF and no-PF clinics, and a linear trend from t = 6 to t = 18 for each of the four sequences of strategies (A–D). We allow for changes in the mean trajectory (i.e., deflections) at intervention month 9 (t = 6), since this is the point at which clinician prescribers may begin receiving PPC. The coefficients are interpreted as follows:

  • γ0 is the mean outcome at intervention month 3 (t = 0), averaged across all four strategy sequences;
  • γ1 is the average per-month change in MME from month 3 (t = 0) to month 9 (t = 6), averaged across all four strategy sequences;
  • 2γ2 is the causal effect of PF vs. no PF on the per-month change in MME from month 3 (t = 0) to month 9 (t = 6);
  • γ3 is the average per-month change in MME from month 9 (t = 6) to month 21 (t = 18), averaged across all four strategy sequences;
  • 2γ4 is the main causal effect of PF vs. no PF on the per-month change in MME from month 9 (t = 6) to month 21 (t = 18), averaged over PPC vs. no PPC;
  • 2γ5 is the main causal effect of PPC vs. no PPC on the per-month change in MME from month 9 (t = 6) to month 21 (t = 18), averaged over PF vs. no PF;
  • γ6 is the interaction term quantifying whether and how PF and PPC work together to affect the change in MME from month 9 (t = 6) to month 21 (t = 18).

The planned statistical test associated with the primary aim (for which we power the study) is a test of the null hypothesis that 12γ2 + 24γ4 + 24γ5 = 0, that is, that there is no difference in change in MME from month 3 (t = 0) to month 21 (t = 18) between implementation sequence A and implementation sequence D. We will report estimates of each coefficient in the model with their corresponding 95% confidence intervals.
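This contrast follows directly from Eq. 1: under sequence D (A1 = A2 = 1) versus sequence A (A1 = A2 = −1), the difference in modeled change from t = 0 to t = 18 collapses to 12γ2 + 24γ4 + 24γ5. The short sketch below (with arbitrary placeholder coefficient values; the η′X covariate term is omitted because it cancels in change scores) checks this algebra numerically:

```python
# Sketch: evaluate the Eq. 1 mean trajectory and verify that the
# D-vs-A contrast on change in MME from t = 0 to t = 18 reduces to
# 12*g2 + 24*g4 + 24*g5. Coefficient values are arbitrary placeholders.

def mean_mme(t, a1, a2, g):
    """Mean MME at time t under Eq. 1 (the eta'X covariate term is
    omitted: it cancels within change scores)."""
    g0, g1, g2, g3, g4, g5, g6 = g
    if t <= 6:
        return g0 + g1 * t + g2 * t * a1
    return (g0 + 6 * g1 + 6 * g2 * a1
            + g3 * (t - 6) + g4 * (t - 6) * a1
            + g5 * (t - 6) * a2 + g6 * (t - 6) * a1 * a2)

g = (100.0, -1.0, -0.5, -0.8, -0.3, -0.4, 0.1)  # placeholder gammas

def change(a1, a2):
    return mean_mme(18, a1, a2, g) - mean_mme(0, a1, a2, g)

# Sequence D: PF and PPC (a1 = a2 = +1); sequence A: neither (a1 = a2 = -1).
diff = change(1, 1) - change(-1, -1)
print(diff)                              # equals 12*g2 + 24*g4 + 24*g5
print(12 * g[2] + 24 * g[4] + 24 * g[5])
```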

Since the covariates X and the primary outcome data are available and passively collected from the EHR, except in rare cases (e.g., clinician turnover, clinician retirement, or an error leading to data loss in the EHR), we expect to have little missing data.

Additional file 1 describes the planned analysis for the primary outcome in more detail.

Sample size and power

The total sample size for this study is based on the primary aim: a comparison of the average difference in change in MME from intervention month 3 (t = 0) to intervention month 21 (t = 18) between implementation sequence D vs. implementation sequence A. This is a comparison between two of the four groups embedded in the trial (see Table 2). The sample size calculation for this comparison is a straightforward adjustment of the calculation for a standard two-sample hypothesis test. The adjustment accounts for the clustering of prescribers within clinics through a variance inflation factor (VIF) of 1 + (m − 1)ρ, where m is the (average) number of prescribers per clinic and ρ is the intra-cluster correlation coefficient (ICC) for MME at month 21 (t = 18). Based on intervention clinics in the R34 pilot data, the ICC was estimated to be ρ = 0.14. Assuming an average of m = 6 prescribers per clinic (based on information from the two health systems that have agreed to participate), a Type I error rate of α = 5%, and ρ = 0.14, a minimum of 64 prescribers in each group (11 clinics per group) will provide at least 80% power to detect a moderate effect size of d = 2/3 between the two implementation sequences on change in MME. Because we have four groups in this trial, the minimum total study sample size is 256 clinician prescribers, corresponding to roughly N = 40–45 clinics (depending on prescriber count).

Based on pilot data showing a standard deviation of 35 for MME, an effect size of d = 2/3 corresponds to detecting an average difference of at least 23 in MME between the two implementation sequences after 21 months. This calculation is expected to be conservative because it does not account for within-prescriber correlation in MME, which is accounted for in the longitudinal analyses and could permit detection of smaller differences in MME.
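For illustration, the VIF-adjusted calculation can be reproduced with a standard normal-approximation formula. The sketch below uses only the Python standard library and is approximate: it yields 61 prescribers per group rather than the protocol's 64, a gap that may reflect additional corrections in the calculator the investigators used.

```python
# Sketch: VIF-adjusted two-sample sample size for the primary comparison.
# Normal-approximation formula; the protocol's exact calculator may differ.
import math
from statistics import NormalDist

alpha, power = 0.05, 0.80
d = 2 / 3                      # target standardized effect size
m, icc = 6, 0.14               # prescribers per clinic, ICC from pilot
vif = 1 + (m - 1) * icc        # variance inflation factor = 1.70

z_a = NormalDist().inv_cdf(1 - alpha / 2)
z_b = NormalDist().inv_cdf(power)
n_unadjusted = 2 * (z_a + z_b) ** 2 / d ** 2   # per group, ignoring clustering
n_per_group = math.ceil(n_unadjusted * vif)    # per group, with clustering

print(n_per_group)                 # 61 here; the protocol reports 64
print(math.ceil(n_per_group / m))  # 11 clinics per group
print(d * 35)                      # detectable MME difference given SD = 35
```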

Exploratory aim

Q-learning [35]—a generalization of moderated regression analysis to multiple stages of implementation—will be used to test the moderators and construct a candidate adaptive implementation strategy.
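Q-learning proceeds by backward induction: a stage-2 (second randomization) regression with candidate moderators is fit first, each unit's outcome is then replaced by its predicted value under the better stage-2 tactic, and a stage-1 regression is fit to these pseudo-outcomes. The toy sketch below illustrates that logic with a single hypothetical binary tailoring variable M; the outcome-generating rule and all effect sizes are invented, and because the toy design is balanced and noise-free, cell-mean contrasts stand in for the moderated regression fits:

```python
# Toy illustration of Q-learning's backward induction for a two-stage
# adaptive implementation strategy. M is a HYPOTHETICAL binary tailoring
# variable; the outcome rule and effects are invented for illustration.
# Outcome = reduction in MME (higher is better), noise-free.
from itertools import product
from statistics import mean

def outcome(m, a1, a2):
    # Invented truth: PF (a1) helps everywhere; PPC (a2) helps only when M = 1.
    return 10 + 2 * a1 + (-1 + 2 * m) * a2

data = [(m, a1, a2, outcome(m, a1, a2))
        for m, a1, a2 in product((0, 1), (-1, 1), (-1, 1))]

# Stage 2: within each level of the moderator, estimate the PPC (a2) effect
# from cell means (equivalent to moderated regression in this balanced,
# noise-free design) and pick the better stage-2 tactic.
best_a2 = {}
for m in (0, 1):
    eff = (mean(y for mm, _, a2, y in data if mm == m and a2 == 1)
           - mean(y for mm, _, a2, y in data if mm == m and a2 == -1)) / 2
    best_a2[m] = 1 if eff > 0 else -1

# Pseudo-outcomes: each unit's outcome under its best stage-2 tactic.
pseudo = [(a1, outcome(m, a1, best_a2[m])) for m, a1, _, _ in data]

# Stage 1: PF (a1) effect on the pseudo-outcomes.
stage1_eff = (mean(y for a1, y in pseudo if a1 == 1)
              - mean(y for a1, y in pseudo if a1 == -1)) / 2

print(best_a2)      # {0: -1, 1: 1}: offer PPC only when M = 1
print(stage1_eff)   # 2.0: PF effect after optimal stage-2 tailoring
```

The recovered rule ("augment with PPC only when M = 1") is the kind of tailored decision rule an adaptive implementation strategy encodes.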

Secondary aim

In the secondary aim, we will estimate the cost of delivering four different sequences and combinations of strategies (EM/AF, EM/AF + PF, EM/AF + PPC, and EM/AF + PF + PPC), including the incremental cost-effectiveness of adding facilitation and prescriber peer consulting. Results will help decision-makers weigh the costs and effects of using different sequences of implementation strategies.

In line with the pragmatism that underlies this research, we will employ an operational cost analysis based on tenets of engineering economics. Traditional health economic approaches incorporate concepts from welfare economics and take a societal perspective towards decision analysis [36, 37]. Engineering economic analysis tends to have a narrower scope. Whereas health economic evaluation provides information primarily for policymakers, engineering economic analysis produces information primarily for the organizational leaders who ultimately make decisions about the adoption of evidence-based practices in their organizations. We adopt the perspective of the healthcare system (rather than society at large) in considering the incremental costs and effects associated with ratcheting up the implementation strategy. This perspective deemphasizes some societal costs (e.g., patient travel time) and effects (e.g., crime related to addiction) that are often considered in traditional cost-effectiveness analysis [38]. However, the health system perspective aligns with the updated cost-effectiveness guidelines re-issued in 2016 [37], which acknowledge the importance of the healthcare perspective for pragmatic purposes. The health systems we will work with, like many health systems, are Accountable Care Organizations, which means that they are responsible for their patients' total cost of care. The pragmatic optimization approach featured in this aim was designed in close partnership with our research collaborators to model the considerations healthcare decision-makers told us they actually use when making decisions about adopting and sustaining evidence-based practices.

We developed an approach to costing the systems consultation strategy in our pilot research [18]. Detailed logs were kept of all contacts between members of the research team and the clinic change teams to estimate the number of hours spent delivering systems consultation. These estimates were multiplied by hourly wage rates for physician consultants and the facilitators. Costs for clinic incentives (continuing education credits, clinic stipends) and expenses associated with site visits were also included in the cost assessment. The total cost of delivering the entire systems consultation implementation strategy for 6 months (i.e., the full package of strategies corresponding to box D in Fig. 2) was estimated in the pilot research. The log-based costing approach is sufficiently fine-grained to construct detailed breakdowns of unit costs associated with each component (EM/AF, PF, and PPC) of the full systems consultation intervention [18].
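The log-based costing logic described above can be sketched as a simple aggregation of contact-log hours into per-component unit costs. All contacts, wage rates, and incentive amounts below are invented placeholders, not pilot-study figures:

```python
# Sketch of the log-based costing approach: aggregate contact-log hours
# into per-component costs. Hours, wage rates, and incentive amounts
# are invented placeholders, not pilot-study figures.
from collections import defaultdict

contact_log = [  # (component, role, hours) per logged contact
    ("EM/AF", "facilitator", 2.0),
    ("PF",    "facilitator", 1.5),
    ("PF",    "facilitator", 1.0),
    ("PPC",   "physician_consultant", 0.5),
]
hourly_wage = {"facilitator": 45.0, "physician_consultant": 150.0}
incentives = {"EM/AF": 500.0}     # e.g., CE credits, clinic stipends

cost = defaultdict(float)
for component, role, hours in contact_log:
    cost[component] += hours * hourly_wage[role]
for component, amount in incentives.items():
    cost[component] += amount

for component in ("EM/AF", "PF", "PPC"):
    print(component, cost[component])
# EM/AF 590.0, PF 112.5, PPC 75.0
```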

We will use incremental cost-effectiveness ratios (ICERs) to quantify the tradeoff between the additional cost and the additional effectiveness achieved by scaling up the intensity of implementation strategies. The primary ICER will be the incremental cost per unit reduction in MME. We will use a 21-month timeline for the cost analysis. Implementation costs will be organized using the Cost of Implementing New Strategies framework [38] and categorized under the “Implementation” domain of RE-AIM (see Table 1). Secondary ICERs will include incremental cost per unit change in prevalence of opioid/benzodiazepine co-prescribing, completion of treatment agreements, urine drug screens, and so on, as shown in Table 1. Cost-effectiveness acceptability curves will be generated (using Monte Carlo simulation techniques) for all primary and secondary ICERs to model uncertainty in our estimates of cost-effectiveness [39].
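A minimal sketch of the Monte Carlo approach to an ICER and its acceptability curve, with invented placeholder distributions for the incremental cost and effect of one sequence versus another (the real analysis would draw from estimated sampling uncertainty):

```python
# Sketch: ICER and a cost-effectiveness acceptability curve (CEAC) via
# Monte Carlo simulation. The cost/effect distributions are invented
# placeholders, not study estimates.
import random

rng = random.Random(42)
N = 10_000

def draw():
    # Incremental cost (USD) and incremental effect (MME reduction).
    d_cost = rng.gauss(12_000, 3_000)
    d_eff = rng.gauss(23, 8)
    return d_cost, d_eff

draws = [draw() for _ in range(N)]
icer = (sum(c for c, _ in draws) / N) / (sum(e for _, e in draws) / N)
print(round(icer, 1))   # mean incremental cost per unit MME reduction

# CEAC: probability the more intensive sequence is cost-effective at
# willingness-to-pay lam (USD per unit MME reduction): P(lam*dE - dC > 0).
for lam in (0, 250, 500, 1000, 2000):
    p = sum(lam * e - c > 0 for c, e in draws) / N
    print(lam, p)
```

The acceptability curve rises with willingness-to-pay, giving decision-makers the probability that the added intensity is worth its cost at each threshold.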

Trial status

Sites were identified and participation confirmed by January 31, 2020. Site training began with the first educational/engagement meetings, which were held in February and March 2020. The trial was temporarily suspended on March 25, 2020, because the coronavirus pandemic took priority in both health systems.


Although much is known about what to do at the patient level to more safely prescribe and monitor opioids for chronic, non-cancer pain, how to implement these practices among prescribers remains a challenge. The literature offers virtually no guidance on which implementation strategies are most effective in different clinical contexts. This study collects data that can be used to empirically develop a multi-level, adaptive, scalable implementation strategy called systems consultation, which is a blend of discrete implementation strategies. The goal of the study is to understand the optimal sequencing and combination of implementation strategies that specific types of clinics and prescribers need to adopt clinical guidelines for opioid prescribing. Specifically, we will test four different sequences and combinations of strategies (EM/AF, EM/AF + PF, EM/AF + PPC, and EM/AF + PF + PPC) to assess the effectiveness and cost of augmenting a systems-level, broadly based implementation strategy (EM/AF) with progressively more intensive strategies.

The proposed study uses a novel clustered SMART design, which allows us to compare the different sequences of strategies over time in real-world primary care clinics. The design allows us to document the value added to the least intensive strategy by more intensive strategies. The SMART also enables us to construct empirically an adaptive implementation strategy to meet the needs of different clinics. On the PRECIS-2 continuum of trials from explanatory to pragmatic, the study falls at the pragmatic end [40].

Despite the strengths of the study, it also has limitations. All eligible clinics in two health systems were invited to participate. The clinics that volunteered may be more motivated to change than those that did not, limiting the generalizability of the findings. The two health systems in which the study will be conducted serve relatively rural regions with less racial and ethnic diversity than is common in other parts of the USA. Study data will come from the electronic health record. Pragmatic data collection of this kind eases the burden on research participants but may offer comparatively limited understanding compared with data from validated instruments.


The design of the current study will enable us to learn how to tailor implementation strategies so that, ideally, clinics and prescribers receive exactly the implementation support they need, when they need it. Results of this pragmatic trial may generalize to other regions of the USA struggling with prescription opioids, and potentially to other significant public health issues. Results of the study, when combined with results from other trials also focusing on adaptive implementation strategies, will increase our knowledge of how to tailor implementation strategies to different settings, ultimately increasing the speed and efficiency of delivering evidence-based practices into health systems.

Availability of data and materials

The datasets used and/or analyzed during the current study will be available from the corresponding author on reasonable request.



Abbreviations

AF: Audit and feedback
EBP: Evidence-based practice
EM: Educational/engagement meeting
EHR: Electronic health record
MME: Morphine-milligram equivalent dose
PF: Practice facilitation
PPC: Prescriber peer consulting
RE-AIM: Reach, Effectiveness, Adoption, Implementation, and Maintenance
SMART: Sequential, multiple-assignment randomized trial


References

1. Dowell D, Haegerich TM, Chou R. CDC guideline for prescribing opioids for chronic pain — United States, 2016. MMWR Recomm Rep. 2016;65(RR-1):1–49. doi:10.15585/mmwr.rr6501e1.

2. Centers for Disease Control and Prevention (CDC). Vital signs: opioid prescribing. U.S. Department of Health and Human Services; 2017. Accessed 17 Oct 2019.

3. Daubresse M, Chang HY, Yu Y, Viswanathan S, Shah ND, Stafford RS, et al. Ambulatory diagnosis and treatment of nonmalignant pain in the United States, 2000-2010. Med Care. 2013;51:870–8.

4. Edlund MJ, Martin BC, Fan MY, Devries A, Braden JB, Sullivan MD. Risks for opioid abuse and dependence among recipients of chronic opioid therapy: results from the TROUP study. Drug Alcohol Depend. 2010;112:90–8.

5. Bohnert AS, Valenstein M, Bair MJ, Ganoczy D, McCarthy JF, Ilgen MA, et al. Association between opioid prescribing patterns and opioid overdose-related deaths. JAMA. 2011;305:1315–21.

6. Dunn KM, Saunders KW, Rutter CM, Banta-Green CJ, Merrill JO, Sullivan MD, et al. Opioid prescriptions for chronic pain and overdose: a cohort study. Ann Intern Med. 2010;152:85–92.

7. Edlund MJ, Martin BC, Devries A, Fan MY, Braden JB, Sullivan MD. Trends in use of opioids for chronic noncancer pain among individuals with mental health and substance use disorders: the TROUP study. Clin J Pain. 2010;26(1):1–8.

8. Weisner CM, Campbell CI, Ray GT, Saunders K, Merrill JO, Banta-Green C, et al. Trends in prescribed opioid therapy for non-cancer pain for individuals with prior substance use disorders. Pain. 2009;145:287–93.

9. Banta-Green CJ, Merrill JO, Doyle SR, Boudreau DM, Calsyn DA. Opioid use behaviors, mental health and pain: development of a typology of chronic pain patients. Drug Alcohol Depend. 2009;104:34–42.

10. Vankata HG, Gaurav C, Ram S, Talari G. Co-prescription of opioids and benzodiazepines. BMJ. 2017;356:j760.

11. Chou R, Fanciullo GJ, Fine PG, Adler JA, Ballantyne JC, Davies P, et al. Clinical guidelines for the use of chronic opioid therapy in chronic noncancer pain. J Pain. 2009;10:113–30.

12. National Center for Health Statistics, Centers for Disease Control and Prevention (CDC). Provisional drug overdose death counts. 2019. Accessed 19 Oct 2019.

13. Dowell D, Haegerich T, Chou R. No shortcuts to safer opioid prescribing. N Engl J Med. 2019;380:2285–7.

14. US Department of Health and Human Services. HHS guide for clinicians on the appropriate dosage reduction or discontinuation of long-term opioid analgesics. 2019. Accessed 19 Oct 2019.

15. Dowell D, Compton WM, Giroir BP. Patient-centered reduction or discontinuation of long-term opioid analgesics: the HHS guide for clinicians. JAMA. 2019.

16. Levine DM, Linder JA, Landon BE. The quality of outpatient care delivered to adults in the United States, 2002 to 2013. JAMA Intern Med. 2016;176:1778–90.

17. Lau R, Stevenson F, Ong BN, Dziedzic K, Treweek S, Eldridge S, et al. Achieving change in primary care: effectiveness of strategies for improving implementation of complex interventions: systematic review of reviews. BMJ Open. 2015;5:e009993.

18. Quanbeck A, Brown RT, Zgierska AE, Jacobson N, Robinson JM, Johnson RA, et al. A randomized matched-pairs study of feasibility, acceptability, and effectiveness of systems consultation: a novel implementation strategy for adopting clinical guidelines for opioid prescribing in primary care. Implement Sci. 2018;13:21.

19. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.

20. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.

21. Jacobson N, Johnson R, Deyo B, Alagoz E, Quanbeck A. Systems consultation for opioid prescribing in primary care: a qualitative study of adaptation. BMJ Qual Saf. 2019;28:397–404.

22. Ferlie EB, Shortell SM. Improving the quality of health care in the United Kingdom and the United States: a framework for change. Milbank Q. 2001;79:281–315.

23. Wensing M, Bosch M, Grol R. Developing and selecting interventions for translating knowledge to action. CMAJ. 2010;182:E85–8.

24. Baker R, Camosso-Stefinovic J, Gillies C, Shaw EJ, Cheater F, Flottorp S, et al. Tailored interventions to address determinants of practice. Cochrane Database Syst Rev. 2015;(4):CD005470.

25. Almirall D, Nahum-Shani I, Wang L, Kasari C. Experimental designs for research on adaptive interventions: singly and sequentially randomized trials. In: Optimization of behavioral, biobehavioral, and biomedical interventions. Cham: Springer; 2018. p. 89–120.

26. Almirall D, Chronis-Tuscano A. Adaptive interventions in child and adolescent mental health. J Clin Child Adolesc Psychol. 2016;45(4):383–95.

27. Kilbourne AM, Smith SS, Choi SY, Koschmann E, Liebrecht C, Rusch A, et al. Adaptive school-based implementation of CBT (ASIC): clustered-SMART for building an optimized adaptive implementation intervention to improve uptake of mental health interventions in schools. Implement Sci. 2018;13(1):119.

28. Kilbourne AM, Almirall D, Goodrich DE, Lai Z, Abraham KM, Nord KM, et al. Enhancing outreach for persons with serious mental illness: 12-month results from a cluster randomized trial of an adaptive implementation strategy. Implement Sci. 2014;9:163.

29. Kilbourne AM, Almirall D, Eisenberg D, Waxmonsky J, Goodrich DE, Fortney JC, et al. Protocol: Adaptive Implementation of Effective Programs Trial (ADEPT): cluster randomized SMART trial comparing a standard versus enhanced implementation intervention to improve outcomes of a mood disorders program. Implement Sci. 2014;9:132.

30. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50:217–26.

31. Alagoz E, Chih MY, Hitchcock M, Brown R, Quanbeck A. The use of external change agents to promote quality improvement and organizational change in healthcare organizations: a systematic review. BMC Health Serv Res. 2018;18:42.

32. Gustafson DH, Johnson KA, Capoccia V, Cotter F, Ford JH II, et al. The NIATx model: process improvement in behavioral health. Madison, WI: University of Wisconsin–Madison; 2011.

33. Delbecq AL, Van de Ven AH, Gustafson DH. Group techniques for program planning: a guide to nominal group and Delphi processes. Glenview, IL: Scott Foresman & Company; 1975.

34. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–7.

35. Nahum-Shani I, Qian M, Almirall D, Pelham WE, Gnagy B, Fabiano GA, et al. Q-learning: a data analysis method for constructing adaptive interventions. Psychol Methods. 2012;17:478–94.

36. Gold MR, Siegel JE, Russell L, Weinstein MC, editors. Cost-effectiveness in health and medicine: report of the panel on cost-effectiveness in health and medicine. New York: Oxford University Press; 1996.

37. Sanders GD, Neumann PJ, Basu A, Brock DW, Feeny D, Krahn M, et al. Recommendations for conduct, methodological practices, and reporting of cost-effectiveness analyses: second panel on cost-effectiveness in health and medicine. JAMA. 2016;316:1093–103.

38. Saldana L, Chamberlain P, Bradford WD, Campbell M, Landsverk J. The cost of implementing new strategies (COINS): a method for mapping implementation resources using the stages of implementation completion. Child Youth Serv Rev. 2014;39:177–82.

39. Fenwick E, Marshall DA, Levy AR, Nichol G. Using and interpreting cost-effectiveness acceptability curves: an example using data from a trial of management strategies for atrial fibrillation. BMC Health Serv Res. 2006;6:52.

40. Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015;350:h2147.



The authors acknowledge the support of the National Institutes of Health in conducting this study. The authors also thank the two health systems where the study is being conducted for their partnership; Maureen Smith and Lauren Bednarz for helping create and coordinate work on opioid metrics; the Wisconsin Collaborative for Healthcare Quality, specifically Matthew Gigot and Abbey Hargreaves, for providing quantitative data and creating new opioid metrics for this study; Joshua Delforge, Oliver Eng, and Yonghe Yan for creating feedback reports at the health systems; Deanne Boss, Denise Grossman, Mary Henningfield, Paul Hunter, Lisa Sampson, and Alice Yuroff at the Wisconsin Research and Education Network for help in developing the practice facilitation intervention; and Aleksandra Zgierska for her leadership during the pilot study and ongoing contributions to research on opioid prescribing.


This study is supported by the following grants from the National Institutes of Health: R01DA047279, R01AA024150, R01DA039901, and UL1TR002373. The opinions expressed herein are the views of the authors and do not necessarily reflect the official policy or position of any part of the NIH or the US Department of Health and Human Services.

Author information

Authors and Affiliations



AQ, DA, RTB, JKL, and NJ conceptualized this study. AQ, RAJ, DA, and NJ wrote the grant proposal. AQ, DA, NJ, and RAJ drafted the manuscript. All other authors reviewed and edited the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Andrew Quanbeck.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the University of Wisconsin-Madison’s Health Sciences Institutional Review Board, submission 2018-1276-CP008. Informed consent is being obtained from all relevant participants.

Consent for publication

Not applicable.

Competing interests

The other authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1: Supplementary appendix: Balanced Opioid Initiative trial design and primary analysis protocol.

Additional file 2: CONSORT checklist for cluster-randomized trials.

Additional file 3: StaRI checklist.

Additional file 4: Proof of ethics.

Additional file 5: Proof of funding.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Quanbeck, A., Almirall, D., Jacobson, N. et al. The Balanced Opioid Initiative: protocol for a clustered, sequential, multiple-assignment randomized trial to construct an adaptive implementation strategy to improve guideline-concordant opioid prescribing in primary care. Implementation Sci 15, 26 (2020).
