
A novel counterbalanced implementation study design: methodological description and application to implementation research



Implementation research is increasingly being recognised as important for optimising the outcomes of clinical practice. Frequently, the benefits of new evidence are not realised because traditional research methodologies are difficult to apply in implementation settings. Randomised controlled trials are not always practical for the implementation phase of knowledge transfer: differences in individual and organisational readiness for change, combined with small sample sizes, can lead to imbalances between intervention and control groups in factors that impede or facilitate change. Within-cluster repeated measure designs could control for variance between intervention and control groups by allowing the same clusters to receive a sequence of conditions. In implementation settings, however, such designs risk contaminating the intervention and control groups after the initial exposure to interventions. We propose a novel application of the counterbalanced design to implementation research, in which repeated measures are employed through crossover but contamination is averted by counterbalancing the different health contexts in which the implementation strategy is tested.


In a counterbalanced implementation study, the implementation strategy (independent variable) has two or more levels evaluated across an equivalent number of health contexts (e.g. community-acquired pneumonia and nutrition for critically ill patients) using the same outcome (dependent variable). This design limits each cluster to one distinct strategy related to one specific context; it therefore does not burden any cluster with more than one focussed implementation strategy for a particular outcome, while providing a ready-made control comparison over the same time period. The different levels of the independent variable can be delivered concurrently because each level uses a different health context within each cluster, avoiding treatment contamination from prior exposure to the intervention or control condition.


An example application of the counterbalanced implementation design is presented in a hypothetical study to demonstrate the comparison of ‘video-based’ and ‘written-based’ evidence summary research implementation strategies for changing clinical practice in community-acquired pneumonia and nutrition in critically ill patient health contexts.


A counterbalanced implementation study design provides a promising model for concurrently investigating the success of research implementation strategies across multiple health context areas such as community-acquired pneumonia and nutrition for critically ill patients.



The movement to translate research evidence into healthcare policy and practice is well established [1,2,3]. Delays in the uptake of research evidence can prolong the provision of ineffective, low-value, and even potentially harmful healthcare interventions [4, 5]. In response, governments, organisations, and health professionals are increasingly expected to ensure policy and practice are informed by high-quality, contemporaneous research. Implementation research has been promoted as one way to facilitate the translation of research into practice [6]. This developing field of research evaluates the success of strategies such as knowledge brokering [7, 8], algorithms [9], and multifaceted approaches [10, 11] for individual and organisational change. Health service researchers have increasingly recognised implementation research as a field of science [12], although the benefits of many implementation attempts remain unclear [7, 13].

If the translation of research into practice is to be improved, it is important to understand which implementation strategies are effective and cost-effective. One approach is to compare one group receiving a strategy against another receiving usual care control conditions. However, often there is not only interest in whether a strategy is effective, but also in whether one strategy is more or less effective than another. Some strategies with minimal success may still be efficient if they are also low-cost. Likewise, the cost of successful strategies must be considered when making implementation resource allocation decisions [14]. In addition, many healthcare interventions found to be efficacious in one health context fail to translate benefits to effective patient care outcomes when implemented across diverse contexts [15]. Therefore, comparative effectiveness study designs that compare multiple strategies across different contexts are required.

Parallel cluster randomised controlled trials are commonly used in implementation science. These designs are amenable to evaluating studies of implementation when practice change is desired on a service level rather than an individual level [16]. However, many potential difficulties exist in the application of traditional research methodologies such as parallel designs to studies of implementation success. Logistical challenges in recruitment, implementation strategy delivery, and ensuring adequate statistical power in analyses have been reported in randomised controlled trials that did not reach recruitment targets [17]. In addition, some organisations and individuals are more ready for change than others due to differences in organisational culture, previous history with change, and differences in individuals’ resistance to change [18, 19]. When these organisational and individual differences are combined with small sample sizes, imbalances can arise between intervention and control groups in factors that impede or facilitate change. Performing post hoc analyses and measuring additional factors such as ‘organisational readiness for change’ have historically been used to account for these issues [20,21,22]. However, doing so creates additional data collection burden and may still not adequately control for differences between organisations, as these measures have been found to involve conceptual ambiguities and limited evidence of reliability or validity [23]. It would be preferable to mitigate this potential source of bias and minimise data collection burden through a design-based solution [24].

One approach to minimise inter-group variation is to use within-cluster repeated measure designs. These differ from between-cluster designs by exposing each cluster to the strategies being evaluated and comparing effects within clusters rather than between them [25]. Crossover studies are a commonly used within-cluster design, in which each cluster receives a random sequence of strategies to counterbalance order effects. The limitation of crossover designs in this setting is that exposure to an implementation strategy can permanently contaminate study clusters, so that their characteristics are no longer comparable to those before exposure when the alternate condition is delivered. We propose a novel application of the counterbalanced design to implementation science, which allows the concurrent comparison of interventions while minimising inter-group differences and the risk of contamination. In this paper, we first discuss the rationale and implications of using such an approach. We then describe a hypothetical example application of this approach to compare the effectiveness of two implementation strategies involving a ‘video-based’ evidence summary and a ‘written-based’ evidence summary.


Common randomised study designs used in implementation research

The more traditional cluster randomised study design used in implementation research is a between-cluster parallel design. As parallel designs rely on comparing different clusters’ responses to study conditions over the same time period, there is a risk of imbalance between the study groups when small sample sizes are recruited. Alternatively, researchers may consider within-cluster designs, such as crossover and stepped-wedge studies. These designs randomly assign clusters to a sequence of study conditions over a series of time periods. However, in these designs there is a risk that exposure to one implementation strategy will contaminate the cluster, so that clusters are no longer comparable. Factorial designs allow clusters to experience all combinations of conditions, where there are two or more ‘factors’, each with different ‘levels’. One advantage is the ability to examine interaction effects between the factors. However, this design may also contaminate clusters after initial exposure to implementation strategies. In the proposed counterbalanced implementation study, units are randomised to alternative interventions, and the different levels are applied in different contexts to reduce the risk of contamination. Figure 1 contrasts the counterbalanced design with other commonly used designs in implementation research.

Fig. 1

Contrast in condition allocation between commonly used randomised study designs in implementation research

The counterbalanced implementation study methodological design

This design differs from conventional studies in that the implementation strategy (independent variable) can take on two or more levels, which are evaluated across an equivalent number of health contexts (e.g. inpatients, outpatients, medical, surgical) using the same outcome (dependent variable). This design limits each cluster to one distinct strategy related to one specific context; it therefore does not burden any cluster with more than one focussed implementation strategy for a particular outcome, while providing a ready-made control comparison over the same time period. Unlike conventional designs, the different levels of the implementation strategy (independent variable) do not need to be provided sequentially; they can be provided concurrently, thereby eliminating contamination and period effects. Each level of the implementation strategy (independent variable) being delivered uses a different health context within each individual cluster to avoid contamination. This approach is represented most simply by a study comparing two implementation strategies (i.e. one independent variable that has two levels):

R XA1|B1 XA2|B2 O
R XA2|B1 XA1|B2 O

Where R indicates the random assignment of units to conditions, O represents the unit outcome assessment, A is the implementation strategy, and B is the health context.

In this model, the XA1|B1 and XA2|B2 study group receives implementation strategy 1 (e.g. audit and feedback) in health context 1 (e.g. surgical inpatients) and strategy 2 (e.g. knowledge broker) in context 2 (e.g. medical outpatients), whereas the XA2|B1 and XA1|B2 study group receives implementation strategy 2 (e.g. knowledge broker) in health context 1 (e.g. surgical inpatients) and strategy 1 (e.g. audit and feedback) in context 2 (e.g. medical outpatients), see Fig. 2a for a three-dimensional representation.
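As an illustration, the random assignment step R above can be operationalised with block randomisation of clusters to the two counterbalanced sequences. The following Python sketch is a hypothetical illustration; the cluster names, block size, and seed are assumptions, not part of the design specification:

```python
import random

# The two counterbalanced sequences from the schema above:
# sequence 1 pairs strategy A1 with context B1 and A2 with B2;
# sequence 2 reverses the strategy-context pairing.
SEQUENCES = {
    1: {"B1": "A1", "B2": "A2"},
    2: {"B1": "A2", "B2": "A1"},
}

def randomise_clusters(cluster_ids, seed=None):
    """Block randomisation (block size 2) of clusters to the two sequences."""
    rng = random.Random(seed)
    allocation = {}
    for i in range(0, len(cluster_ids), 2):
        block = list(cluster_ids[i:i + 2])
        order = [1, 2]
        rng.shuffle(order)
        for cluster, sequence in zip(block, order):
            allocation[cluster] = SEQUENCES[sequence]
    return allocation

# Hypothetical cluster identifiers (e.g. hospital wards).
allocation = randomise_clusters([f"ward_{k}" for k in range(8)], seed=42)
```

Because each block contains one of each sequence, group sizes stay balanced however many clusters are recruited, and every cluster is exposed to both strategies while receiving each strategy in only one health context.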

Fig. 2

ac Counterbalanced implementation study model. Circle, participant group 1; square, participant group 2; triangle, participant group 3; star: participant group 4

The counterbalanced approach is not limited to two-level applications. Theoretically, a factorial approach could be used for any number of implementation strategy and health context combinations. However, there may be a risk of imbalance in characteristics between the clusters, particularly with low recruitment rates. Therefore, any number of implementation strategies can be examined; however, a new health context would ideally be added for each additional implementation strategy included. This process is illustrated with the three-level helical study model:

R XA1|B1 XA2|B2 XA3|B3 O
R XA2|B1 XA3|B2 XA1|B3 O
R XA3|B1 XA1|B2 XA2|B3 O

In this model, the XA1|B1, XA2|B2, and XA3|B3 study group receive implementation strategy 1 in health context 1, strategy 2 in context 2, and strategy 3 in context 3. The XA2|B1, XA3|B2, and XA1|B3 study group receives implementation strategy 2 in health context 1, strategy 3 in context 2, and strategy 1 in context 3, and similarly for the XA3|B1, XA1|B2, and XA2|B3 study group, see Fig. 2b for a three-dimensional representation. The described three-sequence trial has six potential choices of assignment. In order to reduce the potential risk of order effects for both concurrent and sequential study condition assignment, a randomly selected balanced design could be achieved by matched randomisation of the three different strategies and contexts with blocks. Figure 2c illustrates three dimensionally how a four-level study model can be applied. The advantage of this model is systematic replication without contamination.
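One way to generate such a balanced assignment with matched randomisation is to start from a cyclic Latin square and then randomly permute the strategy and context labels. The Python sketch below illustrates this under assumed label names; it is one possible construction, not a prescribed procedure:

```python
import random

def counterbalanced_square(n_levels, seed=None):
    """Build an n x n counterbalanced assignment.

    Rows are randomisation sequences, columns are health contexts, and
    entries are implementation strategies. Starting from a cyclic Latin
    square guarantees every strategy appears exactly once per sequence
    and once per context; shuffling the labels adds the matched
    randomisation of strategies and contexts."""
    rng = random.Random(seed)
    strategies = [f"A{i + 1}" for i in range(n_levels)]
    contexts = [f"B{j + 1}" for j in range(n_levels)]
    rng.shuffle(strategies)
    rng.shuffle(contexts)
    square = {}
    for row in range(n_levels):
        square[row + 1] = {
            contexts[col]: strategies[(row + col) % n_levels]
            for col in range(n_levels)
        }
    return square

# Three-level helical model, as in the schema above.
square = counterbalanced_square(3, seed=7)
```

By the Latin-square property, each sequence (row) contains every strategy exactly once and each health context (column) receives every strategy exactly once across the sequences, which is the counterbalancing requirement described above.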

We next illustrate a hypothetical counterbalanced implementation study example. This will demonstrate the pragmatic comparison of ‘video-based’ and ‘written-based’ evidence summary research implementation strategies for changing clinical practice in community-acquired pneumonia and nutrition in critically ill patient health contexts. For simplicity, a two-level counterbalanced implementation study is presented, with the effectiveness of two different implementation strategies examined in two different health contexts.


Hypothetical counterbalanced implementation study example

Two potential priority health contexts for the implementation of research evidence in practice are ‘management of community-acquired pneumonia’ and ‘nutrition in critically ill patients’. Community-acquired pneumonia is the second highest cause of mortality globally [26]. Evidence suggests that corticosteroid therapy may reduce mortality, need for mechanical ventilation, hospital length of stay, time to clinical stability, and severe complications [27,28,29]. Despite these findings, poor adherence to guidelines for the use of corticosteroids to treat community-acquired pneumonia has been reported [9, 30]. A second health context where current practice does not always align with the most contemporaneous research is nutrition in critically ill patients. Despite guidelines recommending the provision of early enteral nutrition for critically ill patients [31,32,33,34,35], many do not receive adequate nutritional support [34,35,36].

Two potential strategies which could be used to implement research into practice in the management of community-acquired pneumonia and nutrition in critically ill patient health contexts are ‘video-based’ research evidence summaries and ‘written-based’ research evidence summaries. Written-based evidence summaries have been a conventional method for evidence dissemination through scientific journal articles, abstracts, books, and editorials [13, 37]. There has been some examination of the effectiveness of written-based evidence summaries [38, 39]. However, this evidence is limited and does not include comparison with a non-written evidence summary approach. Similarly, video-based evidence summaries have been examined in some areas such as falls prevention education [40]. However, the results may not be applicable to other health contexts.

A counterbalanced implementation study could examine the success of written (A1) versus video (A2) evidence summaries for translating research into practice for community-acquired pneumonia (B1) and nutrition in critically ill patients (B2). This study model would be conducted as follows:

R XA1|B1 XA2|B2 O
R XA2|B1 XA1|B2 O

In this model, the XA1|B1 and XA2|B2 study group receives the written-based evidence summary implementation strategy (A1) in community-acquired pneumonia (B1) and the video-based evidence summary implementation strategy (A2) in nutrition for critically ill patient contexts (B2). The XA2|B1 and XA1|B2 study group would receive the opposite: the video-based evidence summary implementation strategy (A2) in the community-acquired pneumonia context (B1) and the written-based evidence summary implementation strategy (A1) in the nutrition for critically ill patient context (B2); see Fig. 3 for a three-dimensional representation.

Fig. 3

Two-level counterbalanced study model for written and video evidence summaries in community-acquired pneumonia and nutrition for critically ill patients

Measuring success in a counterbalanced implementation study

How to best measure implementation success is a developing concept in this field of science [7, 41]. Researchers must clearly define their aims before planning outcome measurement, as process measures may not necessarily translate to changed behaviour or improved patient outcomes, and vice versa. Potential outcomes can be conceptualised, as for an educational intervention, according to the four-level Kirkpatrick model hierarchy: (1) reaction, (2) learning, (3) behaviour, and (4) results [42]. In addition, a number of process measures and implementation-specific outcomes can be considered [41]. Arguably, results such as patient and health service outcomes should be the standard for judging the success of an implementation strategy. However, it is also important to understand the mechanisms leading to those changes and the sustainability of change.

Several logistical difficulties in the measurement of patient or organisation outcomes are posed in the counterbalanced implementation study paradigm. As multiple health contexts are considered, several different context-specific outcomes may need to be measured. In the hypothetical example, the time required to achieve 80% of the calculated energy intake goal may be an appropriate measure for nutrition in critically ill patients [32], whereas for community-acquired pneumonia, pneumonia-associated complications may be an appropriate measure [9]. Data would also need to be clustered for analysis, which potentially limits the statistical power if the implementation strategies were delivered at the ward or hospital level (Additional file 1). Another consideration is whether recruitment rates would reach an adequate threshold, so that the dose of the implementation strategy within each cluster is not sub-therapeutic. For example, a video- or written-based evidence summary aiming to improve corticosteroid use in the management of community-acquired pneumonia would need to recruit enough medical, nursing, and allied health staff on each ward to be able to change the outcomes at the aggregate ward cluster level. This problem could be offset by the smaller overall sample size required, as balanced within-cluster repeated measure study designs may require a smaller sample size than unbalanced designs to achieve adequate statistical power [43].

Implementation-specific outcomes and changes in attitudes, learning, and behaviour may provide a more pragmatic, adequate measure of success in a counterbalanced implementation study, if assumptions about flow-on effects from process measures to health outcome changes are avoided and setting-specific organisational factors are considered. Whichever level of outcome from the Kirkpatrick hierarchy is selected to be the primary outcome in a counterbalanced trial, it is important to consider using the same outcome either within or across health contexts depending on the focus of the study, as this may allow pooling of results across implementation studies. Where a health outcome is the chosen primary outcome measure, a generic scale (such as a generic health-related quality of life scale) could also be applied within or across health contexts to allow comparison of success across studies. It is worth noting that generic scales can be less responsive than disease-specific scales, which may have implications for the statistical power and sample size of the study [44].

Statistical analysis considerations

Analysis of a counterbalanced implementation study for the effectiveness of implementation strategies in each context should follow intention-to-treat conventions based on paired data, given participants act as their own controls [45]. Both superiority and non-inferiority analytical tests can be performed depending on the focus of the study and outcome measures of interest.

The overall effect of an implementation strategy can be derived using a single mixed effects generalised linear model (Additional file 2). In this statistical model, each cluster (unit of randomisation) is treated as a random effect (nesting where relevant), and each implementation strategy and (separately) context as a fixed effect (to account for the potential for differential exposure of implementation strategies to different context areas through non-permuted randomisation). Clusters are considered to be uncorrelated across health contexts. A two-level model is presented; however, multilevel approaches can be applied.
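To illustrate why the design identifies the strategy effect, the following Python simulation (all parameter values are hypothetical) generates cluster-level outcomes for the two-sequence design and uses within-cluster paired differences as a simpler stand-in for the full mixed model: differencing removes the cluster random effect, and averaging the differences across the two sequence groups cancels the context effect:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500                  # clusters per sequence group (hypothetical)
strategy_effect = 0.5    # true effect of strategy A1 relative to A2 (assumed)
context_effect = 0.3     # true effect of context B1 relative to B2 (assumed)

u1 = rng.normal(0.0, 1.0, n)   # cluster random effects, sequence group 1
u2 = rng.normal(0.0, 1.0, n)   # cluster random effects, sequence group 2

def noise():
    return rng.normal(0.0, 0.2, n)

# Group 1 receives A1 in context B1 and A2 in context B2; group 2 the reverse.
y_g1_b1 = strategy_effect + context_effect + u1 + noise()   # A1|B1
y_g1_b2 = u1 + noise()                                      # A2|B2
y_g2_b1 = context_effect + u2 + noise()                     # A2|B1
y_g2_b2 = strategy_effect + u2 + noise()                    # A1|B2

# Within-cluster differences (A1 minus A2) remove the cluster effect:
d1 = y_g1_b1 - y_g1_b2   # = strategy effect + context effect + noise
d2 = y_g2_b2 - y_g2_b1   # = strategy effect - context effect + noise

# Averaging across the two sequence groups cancels the context effect.
strategy_hat = 0.5 * (d1.mean() + d2.mean())
context_hat = 0.5 * (d1.mean() - d2.mean())
```

With these assumed parameters, the paired estimates recover the true strategy and context effects to within sampling error; in practice, a fitted mixed effects model would additionally supply standard errors that respect the clustering.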

Interaction effects also may need to be considered in a counterbalanced implementation study (Additional file 3). This is because an implementation strategy may be more effective in one health context than another. Including an interaction effect term in the statistical model can be used to further clarify the relationships between dependent and independent variables. The model presented in Additional file 3 includes the same statistical model described previously with the addition of an interaction effect term.

For the analysis of counterbalanced designs in implementation science, consideration of whether health context is treated as a fixed or random factor is needed. An implementation study could also be designed with the health context as a random factor, providing an efficient way of producing generalisable knowledge. Mixed effects modelling with multiple crossed random effects have been used to evaluate similar designs in social science research [46,47,48]. This research often counterbalances participant response to test items under different conditions or treatment response for different stimuli. In this setting, both participants and condition/stimuli are considered random factors, as they have been sampled from a larger population. The inference is that researchers can generalise the findings across participants and condition/stimuli.

Sample size and power calculations

For this study design, sample size estimations can be calculated in two steps. First, a power analysis for a two-sample paired-proportions test is conducted, assuming two independent samples, to determine the sample size required for a between-cluster study design. This calculation is then followed by an adjustment for the relative efficiency (RE) of within-cluster designs. To illustrate this two-stage approach, an example sample size and power calculation is presented.

For a study with three levels of context and implementation strategy, the sample size can be estimated as follows. In the first step, a power analysis for a two-sample paired-proportions test, examining a pairwise contrast between two levels of implementation strategy across all three contexts, indicates that 107 clusters (units of randomisation), generating 321 cluster-level outcome measurement observations, would provide 80% power at a 0.05 significance level, under the assumption that 60% of clusters in an intervention group align with the desired practice change compared with 40% in the control group. In the second step, an adjustment for the relative efficiency (RE) of repeated measures in a counterbalanced design is applied using the formula RE = 0.5[(1 − Pc − p)/(1 + (n − 1)p)], where n is the number of units within each cluster across periods, p is the intra-class correlation coefficient (ICC) between units in the same cluster at the same time point, and Pc is the inter-period correlation [49, 50]. With one unit within each cluster across three periods, an assumed ICC of 0.001 between units, and an inter-period correlation of 0.0442, the estimated relative efficiency of crossover in the counterbalanced design is 0.4685. Multiplying the between-cluster sample size by this relative efficiency (107 × 0.4685) yields the target study sample size of 50 clusters (units of randomisation) per group (150 clusters in total).
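The two-step arithmetic can be sketched in Python. Note that step 1 below uses the standard normal-approximation formula for two independent proportions rather than the paired-proportions test used in the worked example, so the intermediate cluster counts are illustrative only; the relative efficiency formula and its inputs follow the description in the text:

```python
import math

def n_two_proportions(p1, p2):
    """Per-group sample size for two independent proportions,
    normal approximation, two-sided alpha = 0.05, power = 0.80."""
    z_alpha = 1.959964   # z for alpha/2 = 0.025
    z_beta = 0.841621    # z for power = 0.80
    numerator = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(numerator / (p1 - p2) ** 2)

def relative_efficiency(n_units, icc, ipc):
    """RE = 0.5[(1 - Pc - p) / (1 + (n - 1)p)], as given in the text,
    where icc is p (within-period ICC) and ipc is Pc (inter-period)."""
    return 0.5 * (1 - ipc - icc) / (1 + (n_units - 1) * icc)

# Step 1: between-cluster sample size (60% vs 40% desired practice change).
n_between = n_two_proportions(0.60, 0.40)

# Step 2: adjust for the relative efficiency of the within-cluster design,
# using the ICC and inter-period correlation quoted in the text.
re = relative_efficiency(n_units=1, icc=0.001, ipc=0.0442)
n_counterbalanced = math.ceil(n_between * re)
```

The same two-step structure applies whichever power calculation is used at step 1: compute the between-cluster requirement first, then scale it by the relative efficiency of the repeated measures design.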

Planning a counterbalanced implementation trial design

Implementation studies can be designed to change practice at the organisational level (e.g. health services), departmental level (e.g. hospital wards), health professional level (e.g. medical staff or physiotherapists), or patient level (e.g. inpatient or outpatients). Selecting the unit level of analysis is dependent on the focus of the study, and researchers also need to consider the feasibility of recruitment, implementation strategy delivery, and statistical power. Wilson and colleagues reflected on a randomised controlled trial attempt which did not reach recruitment targets, reporting that recruiting at a departmental or divisional unit of allocation, rather than at the level of individual staff members, may provide more meaningful analysis of implementation success [17]. However, this approach has limitations in statistical power, which would need to be addressed.

Planning a counterbalanced implementation study involves the selection of both health contexts and implementation strategies. We recommend that researchers engage in a consultative process with potential participant organisations and individuals, which, in turn, may improve the recruitment rates and fidelity of interventions. This type of early engagement is important because it can help to ensure health contexts and implementation strategies are relevant to participating individuals and organisations. Consultation would identify the context areas of interest to patients, clinicians, managers, policy-makers, and organisations. A prioritisation process may then provide a mechanism for selecting the specific health contexts for implementation in a counterbalanced study. Limitations could be imposed on the number of helices (strategy/context combinations) in the study, or several different studies could be run. Health contexts should involve interventions, assessments, policies, programmes, or practices with good levels of evidence to support both effectiveness and cost-effectiveness prior to implementation.

We recommend the selection of implementation strategies for examination that involve the application of an evidence-based framework, model, or theory [51]. Evaluation of implementation strategies can be aimed at describing the process of translating research into practice, understanding the different variables that influence implementation outcomes, or examining strategy success, depending on the overall aims of the study [52]. Process models such as the knowledge to action framework can specify the stages to guide the process of translating research into practice [53]. The Promoting Action on Research Implementation in Health Services (PARIHS) framework [54], theory of diffusion [55], and the Capability, Opportunity, Motivation, and Behaviour (COM-B) model [56] are all examples of frameworks for understanding what influences implementation outcomes, whereas the Reach, Efficacy, Adoption, Implementation, and Maintenance (RE-AIM) model focuses on examining implementation strategy success [57]. Once an overall theory is established, implementation strategies can then be tailored to the trial organisation or individuals, as studies have shown tailored strategies may be more effective than generic strategies [58, 59].

It is worth noting the consideration of system levels when planning a counterbalanced study. By replicating a counterbalanced study across multiple systems (e.g. hospitals, healthcare organisations, public/private, primary/tertiary, states), more generalisable findings regarding implementation strategies applied in different contexts could be developed. Developing evidence in this way creates an opportunity to ensure external validity of findings, which would support the scale-up of implementation strategies [60]. Alternatively, it could be valuable to conduct a counterbalanced trial in a single system. Creating locally specific knowledge around the implementation process would re-direct the study focus to how strategies work within local organisational cultures. This would be useful for those interested in the internal validity of strategies and their applicability at a local site [61].


In this manuscript, we have described a novel approach to the evaluation of different implementation strategies across multiple health contexts. The counterbalanced implementation trial compares the same subjects’ responses to all interventions, effectively reducing the number of potential confounding covariates. This approach may improve the efficiency and precision of studies through smaller levels of variance, which can reduce the sample size required to identify a statistically significant change in outcomes [62]. In situations where implementation strategies cannot be evaluated concurrently, the potential for order effects needs to be considered. Homogeneity between implementation strategies or context areas would have to be addressed to avoid potential carryover or ‘learning’ effects, where the benefits of the implementation strategy in one context are carried over to the next implementation strategy in the next context. Some implementation strategies are ‘health context specific’, in that they may be successful in one area but not in others [63, 64]. Researchers should consider whether their implementation strategy is transferrable across health contexts, or whether a counterbalanced design would be better employed to compare the strategy in the same health context across different organisations or study sites, rather than across contexts.

The applicability of traditional study designs to implementation science has been questioned [65]. Randomised controlled trials are important for determining the efficacy of treatments in highly controlled environments to ensure internal validity [66]. However, it has been suggested that these designs may not be appropriate for the implementation phase of evaluation, due to potentially low levels of external validity [65]. Novel designs, such as counterbalanced, stepped-wedge [67], and adaptive trials [68], which incorporate the use of routinely collected health service data and consent waivers where ethically appropriate, may provide a logistically simple, low-cost pathway for the effectiveness and implementation phases of clinical research. Ideally, implementation evaluations would be conducted in ‘real-world’ settings, be appropriately statistically powered, and be designed to reduce potential confounders and risks of bias that could mislead study conclusions [9]. Implementation studies often focus on the processes to integrate evidence-informed decision-making in healthcare organisations, involving clinicians, managers, and policy-makers [7]. These populations are notoriously time-poor [69,70,71,72,73], which can lead to difficulties in enrolling participants in an implementation study [17]. Therefore, study designs that can increase exposure to different conditions while maximising statistical power are valuable for implementation researchers. The counterbalanced implementation study design provides a pathway for progressive upscaling of successful implementation strategies in different health contexts, allowing gradual refinement of strategies for certain contexts. Upon study conclusion, all participants or participant clusters will have been provided with each implementation strategy, which can be continued if it proves effective and cost-effective and is taken up by the study organisation.

Potential limitations of a counterbalanced design

Despite the advantages of conducting a counterbalanced implementation trial presented above, there are limitations to applying this study design in some circumstances. The main limitations, identified and described below, relate to feasibility and potential sources of bias.


Feasibility

A necessity of the counterbalanced implementation study is that recruited health services or individuals can be exposed to the target implementation strategies in each health context. Each individual participant or participating healthcare organisation would need to provide services in each of the health contexts selected for the study. For example, if two context areas were chosen, (1) reducing the prescription of antibiotics for upper respiratory tract infections, which have been shown to be overused [74], and (2) reducing the use of arthroscopic surgery for knee osteoarthritis, which has no benefit for undifferentiated osteoarthritis [75], participants or participating healthcare organisations would need to be involved in both prescribing antibiotics for upper respiratory tract infections and routinely delivering treatment for knee osteoarthritis. This may be difficult to achieve given the different disciplines and specialties involved.
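This eligibility requirement amounts to a simple set-coverage check at recruitment. The following sketch, with hypothetical site and service names, shows how candidate clusters could be screened so that only those delivering services in every study context are enrolled.

```python
def eligible_clusters(services_by_cluster, required_contexts):
    """Return the clusters that deliver services in every study context.

    A counterbalanced implementation study needs each cluster exposed to
    every health context; a cluster missing any context cannot be
    counterbalanced and should be excluded at recruitment.
    """
    required = set(required_contexts)
    return [cluster for cluster, services in services_by_cluster.items()
            if required <= set(services)]

# Hypothetical screening data for the two example contexts above.
sites = {
    "clinic_A": {"uri_antibiotics", "knee_osteoarthritis"},
    "clinic_B": {"uri_antibiotics"},  # no knee OA service: ineligible
}
eligible = eligible_clusters(sites, ["uri_antibiotics", "knee_osteoarthritis"])
```

Only `clinic_A` qualifies in this example, which illustrates the recruitment constraint: the eligible pool shrinks as more, or more specialised, health contexts are added to the design.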

The selection of health contexts and implementation strategies for evaluation may be restricted by organisational policies, individual preferences, and the resources available at the health service or study location. Identifying multiple health contexts and implementation strategies in collaboration with the organisations and individuals involved in a prospective counterbalanced implementation study may address this limitation. This collaborative approach may not differ greatly from conventional implementation settings, where practice change must be aligned with organisational policies, goals, and priorities. An approach that could account for these issues is ‘co-production’ or ‘co-design’ of implementation strategies [76, 77]. This concept involves the collaborative development of implementation strategies by both producer and user stakeholders [78] and may assist researchers and organisations in evaluating the implementation of programmes, practices, or policies.

Selection of outcomes in a counterbalanced implementation study also requires a consideration of feasibility, as the available measurements may differ between health contexts. We recommend patient or health service outcomes be used as standard measurements of implementation success where feasible. Feasibility should be determined before the study is conducted by considering the difficulty and cost of obtaining outcomes. Data such as hospital length of stay and rates of adverse events are routinely collected by health services and can therefore be considered examples of feasible outcome measures [9, 79, 80]. It must also be considered whether participant recruitment is likely to reach thresholds adequate to affect outcomes within clusters (e.g. wards, hospitals). Alternatively, changes in attitudes, knowledge, behaviour, or implementation process outcomes may be considered where outcomes are not based on routinely collected data (e.g. patient comprehension errors) or where recruitment is unlikely to alter outcomes at the cluster level.

Potential source of bias

Two potential sources of bias in a counterbalanced implementation study are outcome selection and participant selection bias. Outcomes should be carefully selected and interpreted on a grounded theoretical basis, and pre-specified in clinical trial registrations and published protocols, to avoid the selection of ‘convenient’ outcomes or selective reporting in published manuscripts. Participant selection bias can arise from low response rates during recruitment for counterbalanced implementation studies. Reporting should therefore state whether the participant response rate was low and describe the approaches used to address this potential risk of bias. In addition, any limitations on applying the results beyond the study sample should be clearly reported.

Future research

There are currently no reporting guidelines specific to counterbalanced randomised studies. Until consensus is established on minimum standards for the transparent reporting of counterbalanced trials, we recommend that reporting follow the CONsolidated Standards of Reporting Trials (CONSORT) Statement 2010 [45, 81]. In addition, if clusters are exposed to study conditions sequentially, any testing (or non-testing) for carryover effects of interventions across periods, and any washout periods, should be reported. Other manuscript requirements from the International Committee of Medical Journal Editors (ICMJE) should also be considered [82]. Future research should focus on consensus reporting guidelines and analyses of counterbalanced implementation trials to ensure quality of conduct and reporting.


Conclusions

The proposed counterbalanced implementation trial provides a potentially efficient and pragmatic study design for evaluating different implementation strategies across multiple contexts. This design extends the conventional trials used to evaluate implementation strategies. In the example application, comparing ‘video-’ versus ‘written-based’ research evidence summaries in community-acquired pneumonia and nutrition for critically ill patients, the counterbalanced implementation design would offer a potentially feasible and cost-effective means of evaluating and tailoring implementation strategies in a study setting. The improved study balance achieved through the repeated measures design may mean proportionally fewer participants are needed for adequate statistical power compared with potentially less balanced parallel approaches, allowing a ‘two for one’ evaluation of implementation strategies in high-priority implementation contexts. Further refinement and tailoring of strategies within studies may facilitate the scale-up of successful implementation strategies for translating research into practice across healthcare organisations.



Abbreviations

CONSORT: CONsolidated Standards of Reporting Trials

ICMJE: International Committee of Medical Journal Editors


References

1. Glasgow RE, Vinson C, Chambers D, Khoury MJ, Kaplan RM, Hunter C. National Institutes of Health approaches to dissemination and implementation science: current and future directions. Am J Public Health. 2012;102(7):1274–81.
2. Thornicroft G, Lempp H, Tansella M. The place of implementation science in the translational medicine continuum. Psychol Med. 2011;41(10):2015–21.
3. Eccles MP, Mittman BS. Welcome to implementation science. Implement Sci. 2006;1:1.
4. Elshaug AG, Watt AM, Mundy L, Willis CD. Over 150 potentially low-value health care practices: an Australian study. Med J Aust. 2012;197(10):556–60.
5. Haines T, Bowles K, Mitchell D, O’Brien L, Markham D, Plumb S, May K, Philip K, Haas R, Sarkies M. Impact of disinvestment from weekend allied health services across acute medical and surgical wards: 2 stepped-wedge cluster randomised controlled trials. PLoS Med. 2017;14(10):e1002412.
6. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. 2012;7:50.
7. Sarkies MN, Bowles K-A, Skinner EH, Haas R, Lane H, Haines TP. The effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare: a systematic review. Implement Sci. 2017;12(1):132.
8. Sarkies MN, White J, Morris ME, Taylor NF, Williams C, O’Brien L, Martin J, Bardoel A, Holland AE, Carey L, et al. Implementation of evidence-based weekend service recommendations for allied health managers: a cluster randomised controlled trial protocol. Implement Sci. 2018;13(1):60.
9. Skinner EH, Lloyd M, Janus E, Ong ML, Karahalios A, Haines TP, Kelly A-M, Shackell M, Karunajeewa H. The IMPROVE-GAP Trial aiming to improve evidence-based management of community-acquired pneumonia: study protocol for a stepped-wedge randomised controlled trial. Trials. 2018;19(1):88.
10. Riis A, Jensen CE, Bro F, Maindal HT, Petersen KD, Bendtsen MD, Jensen MB. A multifaceted implementation strategy versus passive implementation of low back pain guidelines in general practice: a cluster randomised controlled trial. Implement Sci. 2016;11(1):143.
11. McKenzie JE, French SD, O’Connor DA, Grimshaw JM, Mortimer D, Michie S, Francis J, Spike N, Schattner P, Kent PM, et al. IMPLEmenting a clinical practice guideline for acute low back pain evidence-based manageMENT in general practice (IMPLEMENT): cluster randomised controlled trial study protocol. Implement Sci. 2008;3:11.
12. Bammer G. Integration and implementation sciences: building a new specialization. Ecol Soc. 2003;10(2):95–107.
13. Bero LA, Grilli R, Grimshaw JM, Harvey E, Oxman AD, Thomson MA. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. BMJ. 1998;317(7156):465–8.
14. Reilly KL, Reeves P, Deeming S, Yoong SL, Wolfenden L, Nathan N, Wiggers J. Economic analysis of three interventions of different intensity in improving school implementation of a government healthy canteen policy in Australia: costs, incremental and relative cost effectiveness. BMC Public Health. 2018;18(1):378.
15. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.
16. Hemming K, Haines TP, Chilton PJ, Girling AJ, Lilford RJ. The stepped wedge cluster randomised trial: rationale, design, analysis, and reporting. BMJ. 2015;350.
17. Wilson MG, Grimshaw JM, Haynes RB, Hanna SE, Raina P, Gruen R, Ouimet M, Lavis JN. A process evaluation accompanying an attempted randomized controlled trial of an evidence service for health system policymakers. Health Res Policy Syst. 2015;13(1):78.
18. Ingersoll GL, Kirsch JC, Merk SE, Lightfoot J. Relationship of organizational culture and readiness for change to employee commitment to the organization. J Nurs Adm. 2000;30(1):11–20.
19. Backer TE. Assessing and enhancing readiness for change: implications for technology transfer. NIDA Res Monogr. 1995;155:21–41.
20. Hagedorn HJ, Heideman PW. The relationship between baseline organizational readiness to change assessment subscale scores and implementation of hepatitis prevention services in substance use disorders treatment clinics: a case study. Implement Sci. 2010;5(1):46.
21. Whitten P, Holtz B, Meyer E, Nazione S. Telehospice: reasons for slow adoption in home hospice care. J Telemed Telecare. 2009;15(4):187–90.
22. Bohman TM, Kulkarni S, Waters V, Spence RT, Murphy-Smith M, McQueen K. Assessing health care organizations’ ability to implement screening, brief intervention, and referral to treatment. J Addict Med. 2008;2(3):151–7.
23. Weiner BJ, Amick H, Lee S-YD. Conceptualization and measurement of organizational readiness for change: a review of the literature in health services research and other fields. Med Care Res Rev. 2008;65(4):379–436.
24. Pannucci CJ, Wilkins EG. Identifying and avoiding bias in research. Plast Reconstr Surg. 2010;126(2):619–25.
25. Charness G, Gneezy U, Kuhn MA. Experimental methods: between-subject and within-subject design. J Econ Behav Organ. 2012;81(1):1–8.
26. Soh S-E, Morris ME, Watts JJ, McGinley JL, Iansek R. Health-related quality of life in people with Parkinson. Aust Health Rev. 2016;40(6):613–8.
27. Siemieniuk RC, Meade MO, Alonso-Coello P, et al. Corticosteroid therapy for patients hospitalized with community-acquired pneumonia: a systematic review and meta-analysis. Ann Intern Med. 2015;163(7):519–28.
28. Marti C, Grosgurin O, Harbarth S, Combescure C, Abbas M, Rutschmann O, Perrier A, Garin N. Adjunctive corticotherapy for community acquired pneumonia: a systematic review and meta-analysis. PLoS One. 2015;10(12):e0144032.
29. Holland AE. Physiotherapy management of acute exacerbations of chronic obstructive pulmonary disease. J Phys. 2014;60(4):181–8.
30. Adler NR, Weber HM, Gunadasa I, Hughes AJ, Friedman ND. Adherence to therapeutic guidelines for patients with community-acquired pneumonia in Australian hospitals. Clin Med Insights Circ Respir Pulm Med. 2014;8:17–20.
31. Kreymann KG, Berger MM, Deutz NE, Hiesmayr M, Jolliet P, Kazandjiev G, Nitenberg G, van den Berghe G, Wernerman J, Ebner C, et al. ESPEN Guidelines on Enteral Nutrition: intensive care. Clin Nutr. 2006;25(2):210–23.
32. Martin CM, Doig GS, Heyland DK, Morrison T, Sibbald WJ. Multicentre, cluster-randomized clinical trial of algorithms for critical-care enteral and parenteral therapy (ACCEPT). Can Med Assoc J. 2004;170(2):197–204.
33. Heyland DK, Dhaliwal R, Drover JW, Gramlich L, Dodek P. Canadian clinical practice guidelines for nutrition support in mechanically ventilated, critically ill adult patients. J Parenter Enter Nutr. 2003;27(5):355–73.
34. Doig GS, Simpson F, Finfer S, Delaney A, Davies AR, Mitchell I, Dobb G. Effect of evidence-based feeding guidelines on mortality of critically ill adults: a cluster randomized controlled trial. JAMA. 2008;300(23):2731–41.
35. Doig GS, Heighes PT, Simpson F, Sweetman EA, Davies AR. Early enteral nutrition, provided within 24 h of injury or intensive care unit admission, significantly reduces mortality in critically ill patients: a meta-analysis of randomised controlled trials. Intensive Care Med. 2009;35(12):2018–27.
36. Heyland DK, Schroter-Noppe D, Drover JW, Jain M, Keefe L, Dhaliwal R, Day A. Nutrition support in the critical care setting: current practice in Canadian ICUs—opportunities for improvement? JPEN J Parenter Enteral Nutr. 2003;27(1):74–83.
37. Giguère A, Légaré F, Grimshaw J, Turcotte S, Fiander M, Grudniewicz A, et al. Printed educational materials: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;(10).
38. Dobbins M, Cockerill R, Barnsley J. Factors affecting the utilization of systematic reviews: a study of public health decision makers. Int J Technol Assess Health Care. 2001;17(2):203–14.
39. Beynon P, Chapoy C, Gaarder M, Masset E. What difference does a policy brief make? Institute of Development Studies and 3ie; 2012.
40. Hill AM, McPhail S, Hoffmann T, Hill K, Oliver D, Beer C, Brauer S, Haines TP. A randomized trial comparing digital video disc with written delivery of falls prevention education for older patients in hospital. J Am Geriatr Soc. 2009;57(8):1458–63.
41. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38(2):65–76.
42. Kirkpatrick DL. Evaluating training programs: a collection of articles from the Journal of the American Society for Training and Development. American Society for Training and Development; 1975.
43. Sibbald B, Roberts C. Understanding controlled trials: crossover trials. BMJ. 1998;316(7146):1719–20.
44. Wiebe S, Guyatt G, Weaver B, Matijevic S, Sidwell C. Comparative responsiveness of generic and specific quality-of-life instruments. J Clin Epidemiol. 2003;56(1):52–60.
45. Mills EJ, Chan A-W, Wu P, Vail A, Guyatt GH, Altman DG. Design, analysis, and presentation of crossover trials. Trials. 2009;10(1):27.
46. Quené H, van den Bergh H. Examples of mixed-effects modeling with crossed random effects and with binomial data. J Mem Lang. 2008;59(4):413–25.
47. Judd CM, Westfall J, Kenny DA. Treating stimuli as a random factor in social psychology: a new and comprehensive solution to a pervasive but largely ignored problem. J Pers Soc Psychol. 2012;103(1):54.
48. Westfall J, Kenny DA, Judd CM. Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli. J Exp Psychol Gen. 2014;143(5):2020.
49. Rutterford C, Copas A, Eldridge S. Methods for sample size determination in cluster randomized trials. Int J Epidemiol. 2015;44(3):1051–67.
50. Rietbergen C, Moerbeek M. The design of cluster randomized crossover trials. J Educ Behav Stat. 2011;36(4):472–90.
51. Slade SC, Philip K, Morris ME. Frameworks for embedding a research culture in allied health practice: a rapid review. Health Res Policy Syst. 2018;16(1):29.
52. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.
53. Wilson KM, Brady TJ, Lesesne C. An organizing framework for translation in public health: the knowledge to action framework. Prev Chronic Dis. 2011;8(2):A46.
54. Kitson A, Harvey G, McCormack B. Enabling the implementation of evidence based practice: a conceptual framework. BMJ Qual Saf. 1998;7(3):149–58.
55. Rogers EM. Diffusion of innovations. New York: The Free Press; 1995.
56. Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6:42.
57. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–7.
58. Dobbins M, Hanna SE, Ciliska D, Manske S, Cameron R, Mercer SL, O’Mara L, DeCorby K, Robeson P. A randomized controlled trial evaluating the impact of knowledge translation and exchange strategies. Implement Sci. 2009;4(1):1–16.
59. LaRocca R, Yost J, Dobbins M, Ciliska D, Butt M. The effectiveness of knowledge translation strategies used in public health: a systematic review. BMC Public Health. 2012;12:751.
60. Brown CH, Curran G, Palinkas LA, Aarons GA, Wells KB, Jones L, Collins LM, Duan N, Mittman BS, Wallace A. An overview of research and evaluation designs for dissemination and implementation. Annu Rev Public Health. 2017;38:1–22.
61. Duan N, Kravitz RL, Schmid CH, et al. J Clin Epidemiol. 2013;66(8, Supplement):S21–8.
62. Piantadosi S. Crossover designs. In: Clinical trials: a methodologic perspective. 2nd ed; 2005. p. 515–27.
63. Diwan VK, Wahlstrom R, Tomson G, Beermann B, Sterky G, Eriksson B. Effects of ‘group detailing’ on the prescribing of lipid-lowering drugs: a randomized controlled trial in Swedish primary care. J Clin Epidemiol. 1995;48(5):705–11.
64. Watson MC, Bond CM, Grimshaw JM, Mollison J, Ludbrook A, Walker AE. Educational strategies to promote evidence-based community pharmacy practice: a cluster randomized controlled trial (RCT). Fam Pract. 2002;19(5):529–36.
65. Schliep ME, Alonzo CN, Morris MA. Beyond RCTs: innovations in research design and methods to advance implementation science. Evid Based Commun Assess Interv. 2017;11(3–4):82–98.
66. Stolberg HO, Norman G, Trop I. Randomized controlled trials. Am J Roentgenol. 2004;183(6):1539–44.
67. Haines T, O’Brien L, McDermott F, Markham D, Mitchell D, Watterson D, Skinner E. A novel research design can aid disinvestment from existing health technologies with uncertain effectiveness, cost-effectiveness, and/or safety. J Clin Epidemiol. 2014;67(2):144–51.
68. Bhatt DL, Mehta C. Adaptive designs for clinical trials. N Engl J Med. 2016;375(1):65–74.
69. Cheater F, Baker R, Gillies C, Hearnshaw H, Flottorp S, Robertson N, et al. Tailored interventions to overcome identified barriers to change: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2005;(3).
70. Boström AM, Kajermo KN, Nordström G, Wallin L. Barriers to research utilization and research use among registered nurses working in the care of older people: does the BARRIERS scale discriminate between research users and non-research users on perceptions of barriers? Implement Sci. 2008;3:24.
71. Jette DU, Bacon K, Batty C, Carlson M, Ferland A, Hemingway RD, Hill JC, Ogilvie L, Volk D. Evidence-based practice: beliefs, attitudes, knowledge, and behaviors of physical therapists. Phys Ther. 2003;83(9):786–805.
72. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14(1):1–12.
73. Grol R, Wensing M. What drives change? Barriers to and incentives for achieving evidence-based practice. Med J Aust. 2004;180(6 Suppl):S57.
74. Schroeck JL, Ruh CA, Sellick JA, Ott MC, Mattappallil A, Mergenhagen KA. Factors associated with antibiotic misuse in outpatient treatment for upper respiratory tract infections. Antimicrob Agents Chemother. 2015;59(7):3848–52.
75. Laupattarakasem W, Laopaiboon M, Laupattarakasem P, Sumananont C. Arthroscopic debridement for knee osteoarthritis. Cochrane Database Syst Rev. 2008;(1).
76. Heaton J, Day J, Britten N. Collaborative research and the co-production of knowledge for practice: an illustrative case study. Implement Sci. 2015;11(1):20.
77. Haines KJ, Kelly P, Fitzgerald P, Skinner EH, Iwashyna TJ. The untapped potential of patient and family engagement in the organization of critical care. Crit Care Med. 2017;45(5):899–906.
78. Voorberg WH, Bekkers VJ, Tummers LG. A systematic review of co-creation and co-production: embarking on the social innovation journey. Public Manag Rev. 2015;17(9):1333–57.
79. Sarkies M, Bowles K-A, Skinner E, Mitchell D, Haas R, Ho M, Salter K, May K, Markham D, O’Brien L. Data collection methods in health services research: hospital length of stay and discharge destination. Appl Clin Inform. 2015;6(1):96.
80. Sarkies MN, Bowles K-A, Skinner EH, Haas R, Mitchell D, O'Brien L, et al. Do daily ward interviews improve measurement of hospital quality and safety indicators? A prospective observational study. J Eval Clin Pract. 2016;(5):792–8.
81. Campbell MK, Piaggio G, Elbourne DR, Altman DG. Consort 2010 statement: extension to cluster randomised trials. BMJ. 2012;345.
82. International Committee of Medical Journal Editors (ICMJE). Uniform requirements for manuscripts submitted to biomedical journals: writing and editing for biomedical publication. Haematologica. 2004;89(3):264.



Acknowledgements

We wish to thank Monash University, Monash Health, and the Victorian Department of Health and Human Services for providing support for this project. We thank the members of the Evidence Translation in Allied Health (EviTAH) project for their support of this project.


Funding

This work was funded by a partnership grant from the National Health and Medical Research Council (NHMRC) Australia (APP1114210) and the Victorian Department of Health and Human Services. The funding body had no role in the design, collection, analysis, and interpretation of data or in writing the manuscript.

Availability of data and materials

Not applicable

Author information




MS and TH were responsible for the overall conceptual development and methodological design. MS wrote the first draft version of the manuscript, with MS, TH, KAB, ES, JW, MM, CW, LOB, AB, JM, LC, and AH contributing to the final manuscript version. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Mitchell N. Sarkies.

Ethics declarations

Authors’ information

(1) Department of Physiotherapy, School of Primary and Allied Health Care, Faculty of Medicine, Nursing and Health Sciences, Monash University. (2) Allied Health Research Unit, Monash Health.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Terminology guide. (DOCX 46 kb)

Additional file 2:

Main analysis model. (DOCX 68 kb)

Additional file 3:

Interaction effects analysis model. (DOCX 79 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Sarkies, M.N., Skinner, E.H., Bowles, K. et al. A novel counterbalanced implementation study design: methodological description and application to implementation research. Implementation Sci 14, 45 (2019).



Keywords

  • Crossover
  • Counterbalanced
  • Method
  • Design
  • Research
  • Implementation
  • Context
  • Strategy
  • Randomised controlled trial
  • Study