
The relative value of Pre-Implementation stages for successful implementation of evidence-informed programs



Most implementations fail before the corresponding services are ever delivered. Measuring implementation process fidelity may reveal when and why these attempts fail. This knowledge is necessary to support the achievement of positive implementation milestones, such as delivering services to clients (program start-up) and competency in treatment delivery. The present study evaluates the extent to which implementation process fidelity at different implementation stages predicts achievement of those milestones.


Implementation process fidelity data—as measured by the Stages of Implementation Completion (SIC)—from 1287 implementing sites across 27 evidence-informed programs were examined in mixed effects regression models with sites nested within programs. Implementation process fidelity, as measured by the proportion of implementation activities completed during the three stages of the SIC Pre-Implementation phase and overall Pre-Implementation (Phase 1) and Implementation (Phase 2) proportion scores, was assessed as a predictor of sites achieving program start-up (i.e., delivering services) and competency in program delivery.


The predicted probability of start-up across all sites was low at 35% (95% CI [33%, 38%]). When considering the evidence-informed program being implemented, that probability was nearly twice as high (64%; 95% CI [42%, 82%]), and 57% of the total variance in program start-up was attributable to the program. Implementation process fidelity was positively and significantly associated with achievement of program start-up and competency. The magnitude of this relationship varied significantly across programs for Pre-Implementation Stage 1 (i.e., Engagement) only. Compared to other stages, completing more Pre-Implementation Stage 3 (Readiness Planning) activities resulted in the most rapid gains in probability of achieving program start-up. The predicted probability of achieving competency was very low unless sites had high scores in both Pre-Implementation and Implementation phases.


Strong implementation process fidelity—as measured by SIC Pre-Implementation and Implementation phase proportion scores—was associated with sites’ achievement of program start-up and competency in program delivery, with early implementation process fidelity being especially potent. These findings highlight the importance of a rigorous Pre-Implementation process.



Although enthusiasm is high for integrating evidence-informed programs (EIPs) into standard practice [1, 2], over 60% of implementation efforts fail before delivering services to clients [3]. To close this gap—and to benefit the individuals in need of evidence-informed services—strong implementation methods are essential [4].

The role of implementation process fidelity

The role of implementation process fidelity—fidelity to the complete process of implementing an evidence-based program—mirrors the role of intervention fidelity. Traditionally, evidence-informed programs are developed through a series of tests, beginning with efficacy (controlled circumstances), then effectiveness (realistic circumstances), and then implementation (real-world circumstances). More recently, hybrid effectiveness-implementation trials have enabled simultaneous testing, with the aim of making evidence-informed programs available more quickly [5, 6]. Across this range of tests, a core component is intervention fidelity—that is, delivery of the intervention in the manner intended. More specifically, intervention fidelity is defined as adherence to key treatment components, where those components are delivered with competence [7, 8]. If an intervention is delivered with poor fidelity—that is, if the active treatment components are never delivered or delivered poorly—the intervention may fail to produce its intended outcomes. Therefore, to promote the successful delivery of an intervention, measurement of intervention fidelity is essential.

In a similar way to intervention fidelity, it is possible to consider implementation process fidelity. Whereas intervention fidelity focuses on the actual delivery of the intervention (e.g., from provider to client/patient), implementation process fidelity begins earlier, focusing on the complete process of implementing the evidence-informed intervention program in the intended service delivery setting. Just as there are specific intervention activities involved in administering an intervention (e.g., drafting an avoidance hierarchy with clients), there are specific implementation activities involved in implementing an evidence-informed program (e.g., facilitating community partner meetings with referring agencies). Completion of those activities as intended can be thought of as implementation process fidelity. A strong implementation process is critical for achieving key implementation milestones [4], for example, successfully initiating service delivery to clients. Measurement of implementation process fidelity has the potential to inform implementation strategies and explain implementation outcomes. Such measurement may be a critical consideration for reducing the well-known gap between knowledge and practice [9].

According to the ubiquitous EPIS framework, the implementation process is characterized by four phases: Exploration, Preparation, Implementation, and Sustainment (EPIS; [10]). Not all implementing sites complete activities in every phase. For example, a site may attempt to offer services without any Preparation, or after offering services, they may cease monitoring the program and complete no activities in Sustainment. Such departures from the implementation process—that is, low implementation process fidelity—may impact the achievement of implementation milestones. Measuring implementation process fidelity allows for this to be tested with implementation milestones spanning the implementation process, such as program start-up and competency in program delivery. An implementing site has achieved the early indicator of implementation success, program start-up, when they first deliver the corresponding program’s services to clients/patients. A site has achieved competency in program delivery—a long-term indicator of implementation success—when they can demonstrate that they consistently deliver those services as intended, with the necessary infrastructure and support for sustainment. Measuring implementation process fidelity across the EPIS framework can identify what activities in what phases support these implementation outcomes, thus increasing the potential for successful implementations and broadening the reach of evidence-informed programs.

Pre-Implementation: a critical period

There is some evidence that implementation process fidelity during the initial phases of implementation—during Exploration and Preparation—may be particularly important for achieving later implementation milestones. In other words, eventual implementation success may depend, in part, on implementation process fidelity during the earliest parts of the implementation—i.e., Pre-Implementation [11]. This is supported by multiple studies, which have found that sites with higher implementation process fidelity early-on were more likely to make it to the point of delivering services to clients [12,13,14,15]. Additionally, in a descriptive study of 23 implementation attempts [16], sites that eventually became competent in delivering the intervention program were those that—early-on—had 100% implementation process fidelity. In contrast, of sites that did not reach competency in program delivery, the average level of implementation process fidelity during Pre-Implementation was only 47%. Other descriptive research has found that, among sites not succeeding with implementation, most discontinue during Pre-Implementation [3]. Finally, an implementation strategy targeting Pre-Implementation fidelity—social-cognitive pre-implementation enhancement strategies (SC-PIES), which targeted teachers’ motivation and perceptions of EIP uptake through training on the value of the intervention and commitment to its use—was associated with higher teacher fidelity and improved behavioral outcomes for youth whose teachers received the intervention [17]. Together, these findings suggest that Pre-Implementation fidelity may lay the groundwork for achieving later success. However, these studies have limitations. Most have focused on implementation attempts for only one or two EIPs, leaving questions about the consistency or variability of findings across a wider range of programs. Likewise, due to sample size constraints, many studies evaluating later implementation outcomes, such as achieving competency in program delivery, have been restricted to descriptive analyses. To address these limitations, a large sample of implementing sites across various EIPs is necessary.

The present study

A site’s early implementation process fidelity may support achievement of both proximal and distal implementation milestones. The Stages of Implementation Completion (SIC)—a widely used, web-based implementation process fidelity measurement system [18]—provides an opportunity to investigate this association across EIPs. The SIC measures both implementation process fidelity and important implementation milestones, such as program start-up and achievement of competency in program delivery. The unit of measurement—rather than being a practitioner or service recipient—is an individual site (e.g., a clinic or facility) that is attempting to implement an EIP. Consistent with the EPIS framework, the SIC divides the implementation process into multiple phases: Pre-Implementation (EPIS Exploration and Preparation phases), Implementation, and Sustainment.

The SIC originally was developed to compare implementation process fidelity across two sites during a head-to-head trial of competing multicomponent implementation strategies [12] and has been adapted or customized to track implementation process fidelity for more than 2,000 sites [19] and 70 evidence-informed programs (e.g., [20,21,22]). For example, Aalsma and colleagues [23] are utilizing the SIC to monitor the implementation of a multi-system, multi-component intervention to treat substance use disorders in youth involved in the juvenile justice system. In a very different context, Dubowitz and colleagues [24] are using the SIC to monitor their evaluation of two training strategies for the implementation of a child maltreatment preventive intervention delivered in primary care settings. The implementation process for each of these evidence-informed programs may present both similar and unique implementation challenges, and this is captured by the SIC.

The goal of the present study is to leverage the SIC database—the largest known repository of implementation process data from EIPs, including implementing sites and programs, their implementation process fidelity, and their achievement of program start-up and competency—to address the following questions:

  1. Across evidence-informed programs, how much variability is there in achievement of (a) program start-up and (b) competency in program delivery?

  2. To what extent is early implementation process fidelity associated with (a) program start-up and (b) competency in program delivery?



Universal Stages of Implementation Completion (SIC)

The Universal SIC is a standardized measure of the implementation process spanning three implementation phases: Pre-Implementation, Implementation, and Sustainment [19]. Phases are divided into subsidiary SIC stages. The Pre-Implementation phase, which is of particular importance for the present study, is divided into three stages: Engagement (Stage 1), Consideration of Feasibility (Stage 2), and Readiness Planning (Stage 3). The Universal SIC stages comprise 46 implementation activities identified as being “common” across a range of EIPs [18]. Examples of activities (across all stages and phases) are presented in Table 1. Some programs choose to tailor their SIC by adding, modifying, or removing activities based on their individual implementation processes; however, the universal activities are largely represented across all SICs. With this pool of universal activities, SIC data can be combined across programs using the standard Universal SIC, providing an extensive data repository.

Table 1 SIC stages and phases with example activities

Data entry and validation

Programs partner with the SIC team to track new implementations. These programs agree for their deidentified data to be included in implementation research. Program purveyors leverage naturally occurring contacts to monitor an implementing site’s progress. As each site advances through the implementation process, purveyors use the SIC website to report the date on which each activity was completed to a satisfactory standard. The SIC team works closely with purveyors to ensure data integrity. For example, possible data entry errors (e.g., program start-up precedes hiring of study staff) are identified by SIC staff during monthly validation checks and corrected if needed after discussion with purveyors.

Implementation process fidelity

The SIC yields two main scores: the duration of activity completion and the proportion of activities completed. The latter is the focus of the present investigation and reflects the proportion of activities completed in each phase (or stage). Activities are rated as “completed” based on an operationalized definition to ensure that partially or inaccurately completed activities are not endorsed. The proportion score is computed by dividing the number of completed activities by the number of possible activities for the corresponding phase (or stage). These proportion scores reflect the level of implementation process fidelity for the respective implementation phase (or stage). Past research has indicated strong reliability for proportion scores of the Pre-Implementation phase (Phase 1; 15 activities; Rasch reliability = 0.81; Cronbach-equivalent test reliability = 0.89; site-level reliability = 0.72; [25]) and Implementation phase (Phase 2; 23 activities; Rasch reliability = 0.79; Cronbach-equivalent test reliability = 0.93; site-level reliability = 0.69). Phase 1 and Phase 2 proportion scores were available for all participating sites. The SIC is most often scored by phase; however, to address the present research questions, proportion scores were also calculated for the subsidiary Pre-Implementation stages: Stage 1 (Engagement, 2 Universal activities), Stage 2 (Feasibility, 4 Universal activities), and Stage 3 (Readiness Planning, 9 Universal activities).
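The proportion-score computation described above can be sketched in a few lines. This is a minimal illustration, not the SIC team’s actual scoring code; the activity counts mirror the Universal SIC figures cited in the text, and the example values are hypothetical.

```python
def proportion_score(completed, possible):
    """Proportion of SIC activities completed in a phase or stage.

    completed: number of activities rated "completed" per the
               operationalized definition.
    possible:  total activities available for that phase or stage.
    """
    if possible <= 0:
        raise ValueError("phase/stage must contain at least one activity")
    return completed / possible

# Universal SIC activity counts referenced in the text
PRE_IMPLEMENTATION = 15   # Phase 1 (Stages 1-3 combined)
IMPLEMENTATION = 23       # Phase 2
STAGE_3_READINESS = 9     # Readiness Planning

# e.g., a hypothetical site completing 12 of 15 Pre-Implementation activities
print(proportion_score(12, PRE_IMPLEMENTATION))  # 0.8
```

The same function applies at either granularity: phase-level scores use the phase’s full activity count, while stage-level scores use only that stage’s activities.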

Implementation outcomes

The SIC also captures two key implementation outcomes: achievement of program start-up and achievement of competency in program delivery. Each outcome is dichotomous (0 = Not Achieved, 1 = Achieved). Program start-up is achieved when a site first delivers services to clients/patients. Of sites that achieve program start-up, some will achieve competency in program delivery. Sites achieve competency when they can demonstrate that they deliver services from the EIP consistently and as intended. A site may have missing data for program start-up or both outcomes if its implementation is ongoing and the status of the outcome (not achieved or achieved) is not yet known.

Sites and programs

The Universal SIC database included 1759 sites across 30 evidence-informed programs at the time of data retrieval (August 2020). Before analysis, some sites and EIPs were removed. One program was removed because its 207 sites completed all implementation activities in unison (i.e., there was no variability). Additionally, 92 sites were removed because they were “expansions” of established sites. Expansions reflect a need for increased service delivery capacity within an existing site, and as such, they do not reflect new implementation attempts. Finally, ongoing implementation attempts were eliminated (i.e., where start-up and competency status were unknown). The total sample of sites and programs varied across the two outcomes (i.e., achievement of program start-up and competency in program delivery) because for ongoing implementations, the start-up status is known prior to the competency status. The program start-up sample was 1287 sites across 27 evidence-informed programs, with the median program having 20 sites (Mean = 48, Min. = 1, Max. = 364). From this, n = 182 sites without a known status for achieving competency were removed. This resulted in n = 1105 sites across 19 evidence-informed programs, with the median program having 25 sites (Mean = 58, Min. = 2, Max. = 359). Table 2 provides a summary of the complete sample of 27 EIPs, including program focus, population, and setting.

Table 2 Program and site sample sizes and start-up rates summarized by key program characteristics across 27 different evidence-informed programs

Data analysis strategy

The two outcomes—achievement of program start-up and competency in program delivery—were dichotomous and modeled according to a Bernoulli distribution (logit link) with adaptive quadrature estimation. The data were structured with sites (level-1) nested within evidence-informed programs (level-2). Specifically, each site attempted implementation of only one EIP. Nesting was addressed using mixed-effects models [26] implemented in SuperMix Version 2.1 [27]. The competency outcome was available for only a modest number of EIPs (n = 19). However, simulation studies with as few as 10 upper-level units have indicated that level-1 regression coefficients (and associated SEs) were estimated accurately, though there was notable bias for upper-level variance components [28, 29].

For the program start-up outcome, two sets of models were estimated. The first evaluated the proportion score for the overall Pre-Implementation phase, and the second evaluated the proportion score for each individual Pre-Implementation stage. The latter included separate models for activities completed in Engagement (Stage 1), Consideration of Feasibility (Stage 2), and Readiness Planning (Stage 3). For each model, the focal proportion score was centered around its grand mean prior to entry, and the corresponding random effect specification was based on the likelihood ratio test. For the achievement of competency in program delivery outcome, which is later in the implementation process, the model was extended to include implementation process fidelity in the Implementation phase. Of note, completing any activity in the Implementation phase meant—by definition—having completed at least one activity in the prior Pre-Implementation phase. Because of this, it was not viable to enter a main effect for Implementation proportion. Instead, the model only included a main effect for Pre-Implementation proportion (as described for the program start-up outcome) and an interaction between Pre-Implementation and Implementation proportions. As above, each term was centered prior to entry, but because of the reduced number of EIPs with sites achieving the competency outcome, random effects were not evaluated for the predictors.
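The nested structure described above can be made concrete with a small simulation: sites (level-1) within programs (level-2), a program-level random intercept on the log-odds scale, and a grand-mean-centered fidelity predictor. All parameter values below are illustrative only; the authors fit these models in SuperMix, and this NumPy sketch is not their analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the paper's estimates)
n_programs, sites_per_program = 27, 48
beta0, beta1 = 0.6, 5.0      # intercept and Pre-Implementation slope (log-odds)
tau = 4.33                   # program-level intercept variance, latent scale

u = rng.normal(0.0, np.sqrt(tau), n_programs)     # program random intercepts
program = np.repeat(np.arange(n_programs), sites_per_program)
fidelity = rng.uniform(0, 1, program.size)        # Pre-Implementation proportion
x = fidelity - fidelity.mean()                    # grand-mean centering, as in the paper

eta = beta0 + u[program] + beta1 * x              # site-level linear predictor
p = 1 / (1 + np.exp(-eta))                        # logit link
startup = rng.binomial(1, p)                      # 0 = Not Achieved, 1 = Achieved

# Program-level start-up rates vary widely because of the random intercepts
rates = np.array([startup[program == j].mean() for j in range(n_programs)])
print(rates.min(), rates.max())
```

The spread between the minimum and maximum program-level rates illustrates why a single pooled start-up rate understates between-program heterogeneity, which is what the two-level models are designed to capture.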


Descriptive statistics

Table 3 presents descriptive statistics for the proportion of activities completed in Pre-Implementation—including the subsidiary Stages 1, 2, and 3—as well as the Implementation phase, organized by implementation outcome status (i.e., discontinued prior to start-up, achieved start-up, discontinued prior to competency, or achieved competency). Across the N = 1,287 sites with a known status for program start-up, the rate of start-up was 35% (N = 455). The average across evidence-informed programs was nearly twice as high at 62% (Median = 68%, Min. = 0%, Max. = 100%). The discrepancy between sites and programs reflects considerable variability across programs, both in the number of sites and the rates of achieving start-up. Table 2 provides the average start-up rates by EIP focus, population, and setting.

Table 3 Descriptive statistics for implementation process fidelity by implementation outcome status

Across the subset of N = 1,105 sites with a known status for competency, only 15% achieved competency (N = 162). When considering the average across 19 evidence-informed programs, the rate was comparable at 17% (Median = 5%, Min. = 0%, Max = 67%). Of the sites that did not achieve program start-up (n = 832 of 1287, 65%), 96% (n = 798) discontinued in the Pre-Implementation phase (i.e., Stages 1, 2 or 3), with 62% (n = 518) discontinuing in Stage 1, 18% (n = 149) in Stage 2, and 16% (n = 131) in Stage 3. Of the sites that did not achieve competency (n = 943 of 1105, 85%), 85% (n = 798) discontinued during Pre-Implementation. In contrast, of sites that completed the Pre-Implementation phase, 93% achieved program start-up and 53% achieved competency.

Prediction models

Program start-up and competency by program

To estimate the overall rate of program start-up and competency in program delivery across sites, an initial unconditional model was estimated that did not include programs. This was followed by an unconditional two-level mixed-effects regression model that nested sites (level-1) within programs (level-2). Across all sites, the overall predicted probability of achieving start-up was 35% (95% CI [33%, 38%]; β = -0.60, SE = 0.058, z = -10.41, p < 0.001). When considering the program being implemented, the average predicted probability of start-up was nearly twice as high at 64% (95% CI [42%, 82%], β = 0.59, SE = 0.47, z = 1.24, p = 0.215), and 57% of the total variance in start-up was attributable to the program (τπ = 4.33; with the remaining 43% attributable to sites). For achievement of competency, across all sites, the overall predicted probability was 15% (95% CI [13%, 17%]; β = -1.76, SE = 0.085, z = -20.71, p < 0.001). When considering the program being implemented, the average predicted probability of competency was lower at 8% (95% CI [3%, 20%], β = -2.41, SE = 0.09, z = -4.66, p < 0.001), and 51% of the total variance in competency was attributable to the program (τπ = 3.44; with the remaining 49% attributable to sites). The differing estimates—when only considering sites versus when also considering programs—are attributable to variability in the number of sites per program and the program-specific rates of start-up and competency.
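The probabilities and variance shares reported above follow directly from the logistic estimates: each predicted probability is the inverse logit of the corresponding intercept, and the program-level share of variance is the latent-scale intraclass correlation, τ / (τ + π²/3), where π²/3 is the fixed level-1 residual variance of the logistic distribution. A quick check against the reported values:

```python
import math

def invlogit(b):
    """Convert a log-odds estimate to a predicted probability."""
    return 1 / (1 + math.exp(-b))

def icc_logistic(tau):
    """Latent-scale ICC for a logistic mixed model (level-1 variance = pi^2/3)."""
    return tau / (tau + math.pi ** 2 / 3)

print(round(invlogit(-0.60), 2))     # 0.35  start-up, sites only
print(round(invlogit(0.59), 2))      # 0.64  start-up, accounting for program
print(round(icc_logistic(4.33), 2))  # 0.57  start-up variance due to program
print(round(invlogit(-1.76), 2))     # 0.15  competency, sites only
print(round(invlogit(-2.41), 2))     # 0.08  competency, accounting for program
print(round(icc_logistic(3.44), 2))  # 0.51  competency variance due to program
```

Each line reproduces a figure reported in the paragraph above, confirming the internal consistency of the reported intercepts, probabilities, and variance decompositions.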

Implementation process fidelity and program start-up

Phase proportion

The two-level unconditional model for program start-up was extended to include the proportion of activities completed in the Pre-Implementation phase (i.e., across Stages 1–3). Completion of Pre-Implementation activities was significantly and positively associated with achieving program start-up (p < 0.001; Table 4), and the effect of Pre-Implementation proportion did not vary significantly by program. As an example, for sites completing 75% versus 25% of Pre-Implementation activities, the predicted probability of program start-up would be 95% versus 2%.
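The 95% versus 2% contrast can be recovered from the logistic form of the model. The coefficients below are back-solved from those two reported probabilities purely for illustration, under an assumed grand mean of 0.50 for Pre-Implementation proportion; they are not the published Table 4 estimates.

```python
import math

def invlogit(eta):
    return 1 / (1 + math.exp(-eta))

# Hypothetical coefficients, back-solved from the reported 95% and 2%
# predicted probabilities (not the published Table 4 estimates)
GRAND_MEAN = 0.50           # assumed grand mean of Pre-Implementation proportion
b0, b1 = -0.474, 13.672     # intercept and slope on the log-odds scale

for completed in (0.25, 0.75):
    eta = b0 + b1 * (completed - GRAND_MEAN)   # grand-mean-centered predictor
    print(completed, round(invlogit(eta), 2))  # 0.25 -> 0.02, 0.75 -> 0.95
```

The steepness of the back-solved slope conveys how sharply the odds of start-up rise across the middle of the Pre-Implementation fidelity range.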

Table 4 Results of mixed-effects logistic regression models with SIC proportion scores by phase and stage predicting program start-up status

Stage proportion

Pre-Implementation proportion (i.e., phase proportion) was removed, and separate models were estimated to evaluate each stage-specific proportion score. In each case, stage proportion was significantly associated with achieving program start-up, with a higher proportion of completed activities associated with a higher log-odds of program start-up (ps < 0.015; Table 4).

Although the direction of the association was the same across Stage 1, Stage 2, and Stage 3, as illustrated in Fig. 1, the shape of that association varied by stage. With a low level of activity completion in a given stage, such as 10%, the predicted probability of achieving program start-up was low regardless of the stage: < 1% for activities completed in Stage 1, 2% for Stage 2, and 1% for Stage 3. In contrast, for moderate activity completion, such as 50%, the predicted probability of start-up varied by stage: 5% for activities completed in Stage 1, 35% for Stage 2, and 72% for Stage 3. For 100% activity completion, the probability of program start-up was high for activities completed in Stages 2 and 3, at 97% and > 99%, but somewhat lower for activities completed in Stage 1, at 79%.

Fig. 1
figure 1

The predicted probability of achieving program start-up based on the proportion of Universal SIC activities completed in a given stage of the Pre-Implementation phase. Note. For each of the three stage proportion scores, the predicted probabilities are based on a separate mixed-effects logistic regression model

It is important to note that these predicted probabilities reflect averages across all EIPs. However, with 57% of the outcome variance attributable to program, there was strong evidence of variability from program to program. Furthermore, for Stage 1 only (i.e., not Stages 2 or 3), the association between proportion and start-up varied significantly across programs. To illustrate this, Empirical Bayes residuals were used to calculate the specific probability of start-up for each program (the complete table of which is available upon request). In Stage 3, if a site completed all activities, the probability of start-up was ≥ 95% for all programs. Stage 2 was similar, with completion of all activities leading to a ≥ 84% probability of start-up for all but one program. For Stage 1, that probability decreased to 79%; however, for over one-fifth of programs—which represented nearly half of all sites—the probability of achieving program start-up was less than 40%. Stated differently, for a large number of sites, completing all activities in Stage 1 was not sufficient for a high probability of program start-up. Thus, across stages, the likelihood of program start-up increased as sites completed more implementation activities. However, for Stage 1, completion of all activities was associated with a somewhat lower likelihood of start-up compared to completion of all activities in Stages 2 or 3. Furthermore, only Stage 1 showed evidence of variability across programs in this association, with some programs being more likely to achieve program start-up at high Engagement activity completion than others.

Implementation process fidelity and competency in program delivery

The two-level unconditional model for competency in program delivery was extended to include the proportion of activities completed in the Pre-Implementation phase and the interaction between Pre-Implementation and Implementation proportion. Pre-Implementation proportion was significantly and positively associated with achieving competency (p < 0.001, see Table 4). The interaction between Pre-Implementation and Implementation proportion scores was also positive and significant (p < 0.001; see Table 4). This reflects a differential effect of Implementation proportion depending on the level of Pre-Implementation proportion. As illustrated in Fig. 2, when not considering the influence of Implementation fidelity (i.e., Implementation proportion = 0.00), the full range of Pre-Implementation fidelity (i.e., 0.00 versus 1.00) increased the probability of achieving competency from < 1% to 7%. Generally, as implementation process fidelity increased during Implementation, the probability of achieving competency increased above the level expected from Pre-Implementation alone. However, Pre-Implementation fidelity played an influential role. If it was low (e.g., Pre-Implementation = 0.10), the probability of achieving competency was low regardless of the level of Implementation fidelity—even if it was 1.00. In contrast, if Pre-Implementation fidelity was high (e.g., Pre-Implementation = 0.90), the probability of achieving competency depended on the level of Implementation fidelity. This ranged from 7%, if Implementation fidelity was low (i.e., 0.00), to 87%, if Implementation fidelity was high (i.e., 1.00). Thus, Implementation fidelity had a significant differential effect depending on the level of Pre-Implementation process fidelity.
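The shape of this interaction can be sketched from the model’s functional form. Because Implementation activity implies prior Pre-Implementation activity, the linear predictor carries a main effect for Pre-Implementation proportion plus a Pre-Implementation × Implementation product term, with no Implementation main effect. The coefficients below are purely illustrative, chosen only to mimic the qualitative pattern in Fig. 2 (they are not the published estimates), and for readability the sketch uses uncentered proportions, whereas the published models centered each term.

```python
import math

def invlogit(eta):
    return 1 / (1 + math.exp(-eta))

# Illustrative coefficients only (not the Table 4 estimates)
b0, b_pre, b_inter = -5.5, 3.0, 7.0

def p_competency(pre, impl):
    # Main effect for Pre-Implementation plus the Pre-Implementation x
    # Implementation interaction; no Implementation main effect
    return invlogit(b0 + b_pre * pre + b_inter * pre * impl)

for pre, impl in [(0.10, 1.00), (0.90, 0.00), (0.90, 1.00)]:
    print(pre, impl, round(p_competency(pre, impl), 2))
```

Under any such coefficients, the structure of the predictor reproduces the key qualitative result: with low Pre-Implementation fidelity, even perfect Implementation fidelity cannot raise the probability of competency, whereas with high Pre-Implementation fidelity, the probability rises steeply as Implementation fidelity increases.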

Fig. 2
figure 2

The predicted probability of achieving competency based on the proportion of Universal SIC activities completed in Phase 1: Pre-Implementation by Phase 2: Implementation Proportion. Note. This figure graphically represents Phase 1 and Phase 1 × Phase 2 proportion scores predicting site competency status


This study provides empirical evidence supporting the critical value of Pre-Implementation for the successful implementation of EIPs. Not only does Pre-Implementation support the start-up of a new program, but programs that start up after a poor Pre-Implementation process are unlikely to achieve competency in program delivery. Just as the measurement of intervention fidelity (the extent to which the intervention is delivered as intended) can promote positive treatment outcomes, positive implementation outcomes may be supported by measuring sites’ implementation process fidelity. A repository of data collected using the Universal SIC provided an unprecedented opportunity to examine the relationship between implementation process and outcomes across a range of programs, using a shared measure of implementation process fidelity.

Across over a thousand implementation attempts, most implementing sites—upwards of 85%—discontinued before achieving program start-up or competency in program delivery. Notably, different EIPs achieved those implementation milestones at different rates. Importantly, when considering the nesting of sites within evidence-informed programs—rather than sites alone—the average predicted probability of achieving start-up nearly doubled. This reflects the presence of programs with many implementing sites that have overall lower rates of achieving start-up. Despite considerable variation across programs, the completion of SIC-defined activities—that is, high implementation process fidelity—reliably predicted positive implementation outcomes for a range of EIPs. Specifically, across 27 programs and 1287 sites, Pre-Implementation fidelity was positively and significantly associated with achieving program start-up for newly adopting sites. In addition, across 1105 individual implementation attempts for 19 evidence-informed programs, SIC-derived Pre-Implementation and Implementation fidelity scores were positive and significant predictors of sites achieving competency in program delivery. Critically, engagement with the implementation process was only a potent predictor of positive implementation outcomes when sites reported high implementation process fidelity during the early Pre-Implementation phase.

Pre-Implementation: a critical period in the implementation process

When a site engages with the implementation process, the ultimate goal is for the desired services to reach the target population. Traditionally, implementers and those who fund them might prioritize the activities in the SIC-defined Implementation phase, jumping ahead to serving clients, and this might vary by setting. For instance, although not the focus of analyses, in the current sample EIPs implemented in Education or Healthcare settings averaged less time spent in Pre-Implementation (means = 5.9 and 4.2 months, respectively) than those in Justice or Child Welfare settings (means = 13.4 and 9.0 months, respectively). However, the present study demonstrates that sites with strong implementation process fidelity in the Implementation phase still have low probabilities of achieving competency in program delivery, which is necessary to sustain the program, if they had poor Pre-Implementation process fidelity (Fig. 2). Given that 85% of implementing sites discontinue before achieving competency, activities early in the implementation process appear to be critical for successful implementation. This might be particularly true for complex interventions; of the EIPs included in the current analyses, most represent programs implemented in system settings such as child welfare, education, criminal justice, healthcare, or combined settings (Table 2), where implementation activities necessitate involvement of multiple levels of leadership, providers, and community partners. To support positive implementation outcomes, it is necessary to (1) support sites’ engagement in early implementation activities and (2) understand the relationship between those outcomes and early implementation process. The SIC provides us with the tools to measure and evaluate such activity: proportion scores from Stages 1, 2, and 3 of the Pre-Implementation phase.

The importance of Pre-Implementation stages

Each of the three Pre-Implementation stages is positively, significantly, and independently associated with program start-up. Although the Pre-Implementation phase (spanning all three stages) typically is examined as a whole (e.g., [16, 21]) and was shown in the current study to be predictive of program start-up, the relative importance of each individual stage was remarkable. This provides further evidence that each of the three stages measures a different aspect of Pre-Implementation fidelity [25]. Additionally, and more importantly, by understanding the unique role of Engagement, Consideration of Feasibility, and Readiness Planning activities during Pre-Implementation, adopters can better understand both how to focus their implementation efforts and why maintaining a thorough implementation process matters. Thus, much as measures of intervention fidelity assess the delivery of an intervention in order to achieve positive clinical outcomes, the SIC, as a measure of implementation process fidelity, assesses the completion of activities that contribute to positive implementation outcomes.

The relative value of each Pre-Implementation stage is not surprising. Results showed that activities completed during the Readiness Planning stage were the most impactful in supporting program start-up. The importance of completing Readiness activities is well documented in the implementation science literature [30,31,32,33]. In fact, a number of standardized instruments exist for measuring organizational readiness to implement new programs (e.g., ORIC, ORC, ORCA; [33,34,35]), and many evidence-informed programs provide guidance or technical assistance to sites on how to conduct Readiness activities. On the other hand, even high levels of activity completion in the Engagement stage were associated with comparatively meager chances of achieving program start-up. Thus, supporting the field’s plea for implementation strategies that facilitate successful program start-up [36, 37], the present results illustrate that Engagement—informing a site about a program, engaging them in discussions about program fit, and providing them with program materials—is insufficient for successful adoption.

Of note, for sites that completed a large proportion of Engagement activities, the probability of achieving program start-up was surprisingly high (79%) compared with prior research, where this probability was estimated at 50% or less [15]. This discrepancy is largely attributable to the analytic approach. Prior work included fewer evidence-informed programs and treated each site as independent, making large programs disproportionately influential. In contrast, the present study nested sites within programs, so that each evidence-informed program contributed an equal amount of information. Indeed, in the current analyses, program-specific estimates were consistent with prior research: for half of all sites, the predicted probability of achieving program start-up at 100% completion of Engagement-stage activities was less than 40%. Regardless, the current outcome suggests an opportunity for sharing knowledge across programs to increase rates of successful engagement.
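The weighting argument above can be illustrated with toy numbers. In the sketch below, the programs, site counts, and outcomes are invented purely for illustration: treating all sites as one pool lets a single large, low-performing program dominate the average, while nesting (here approximated by averaging program-level rates) gives each program equal weight.

```python
# Hypothetical data: program -> per-site start-up outcomes (1 = started).
# Program C is large with a low start-up rate, so it dominates any
# site-weighted average.
programs = {
    "A": [1, 1, 1],              # small program, high start-up
    "B": [1, 0, 1, 1],           # small program, high start-up
    "C": [0] * 40 + [1] * 10,    # large program, low start-up
}

# Site-weighted: every site counts equally, regardless of program.
all_sites = [y for sites in programs.values() for y in sites]
site_weighted = sum(all_sites) / len(all_sites)

# Program-weighted: compute each program's rate, then average the rates.
program_weighted = sum(
    sum(sites) / len(sites) for sites in programs.values()
) / len(programs)

print(f"site-weighted start-up rate:    {site_weighted:.2f}")
print(f"program-weighted start-up rate: {program_weighted:.2f}")
```

With these invented numbers the site-weighted rate is roughly 0.28 while the program-weighted rate is 0.65, mirroring the direction (though not the exact magnitude) of the gap between the site-level and program-level estimates reported in this study.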

Discontinuing implementation

Importantly, analyses from this study highlight the high degree of discontinuation (62%) among sites attempting to implement a new EIP, with the large majority discontinuing during Pre-Implementation (96%). Yet, of those that do complete Pre-Implementation, nearly all (93%) achieve program start-up, providing further support for the value of completing Engagement, Consideration of Feasibility, and Readiness activities with quality when preparing sites for successful program start-up. This suggests that implementation interventions should be developed to support the completion of Pre-Implementation activities. Two findings from the present study are important for developing such interventions. First, Readiness activities offered the most robust contribution to program start-up (followed by Consideration of Feasibility, then Engagement). Second, the majority of sites discontinued during the Engagement stage. Because of this, interventions targeting the Engagement stage—such as supports for decision-making about adopting evidence-informed programs—could be particularly useful, even though, as noted above, Engagement alone is insufficient. A range of web-based tools provide sites with menus of evidence-informed programs for implementation (e.g., Administration for Community Living [38]; California Evidence-Based Clearinghouse for Child Welfare [39]; Evidence-Based Cancer Control Programs [40]; [41]). However, low-burden engagement interventions are less readily available. Such interventions could aid sites during early implementation by providing context-specific guidance. Methods such as multi-criteria decision analysis tools provide a promising avenue for assisting stakeholders in identifying the evidence-informed programs that are most likely to yield successful engagement [42].
Recent analyses suggest that, when developing such tools, an essential piece of information for decision-makers is whether the program provides support with the implementation process; decision-makers report wanting this information before deciding whether to complete engagement [43]. An operationalized implementation process, such as that provided by the SIC, could aid decision-makers in understanding which activities are necessary to implement the program successfully and in identifying which implementation supports a site needs to complete each activity.
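As a rough consistency check, the discontinuation figures above can be chained together; because the percentages are rounded, this is a back-of-envelope illustration only, not a reproduction of the fitted model's estimates.

```python
# Back-of-envelope consistency check using rounded figures from the text.
n_sites = 1287                    # sites in the start-up analysis
p_discontinue = 0.62              # discontinued before start-up
p_preimp_given_disc = 0.96        # of discontinuers, share lost in Pre-Implementation
p_startup_given_complete = 0.93   # of Pre-Implementation completers, share starting up

lost_in_preimp = n_sites * p_discontinue * p_preimp_given_disc
completed_preimp = n_sites - lost_in_preimp
started_up = completed_preimp * p_startup_given_complete

overall_rate = started_up / n_sites
print(f"implied overall start-up rate: {overall_rate:.0%}")
```

The implied overall start-up rate of roughly 38% is consistent with the 62% discontinuation rate reported above, which supports the internal coherence of the stage-specific figures.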


Limitations

The SIC was designed to measure the implementation process for behavioral (e.g., child externalizing problems) and physical healthcare (e.g., hypertension) programs; therefore, conclusions from the present study are constrained to these types of programs. An additional limitation is that the outcomes of interest, program start-up and competency, are only two of several possible outcomes of the implementation process. For instance, these analyses did not explore the influence of Pre-Implementation on long-term sustainment of the intervention. It is possible that early features of the implementation process influence distal implementation outcomes, and these are ripe avenues for future research (see [16]). Next, the present analyses evaluated each Pre-Implementation stage as a stand-alone predictor of program start-up. This was important for addressing the primary study aims but did not extend to inter-relationships in Pre-Implementation fidelity across Stages 1, 2, and 3. Additionally, proportion scores for the three Pre-Implementation stages did not have accompanying psychometric data because they were based on a modest number of activities. However, there is evidence that the Pre-Implementation phase—rather than individual stages—forms a distinct dimension, with sufficient activities for reliable measurement. The present focus on stage scores served the targeted research questions of this study. The results should be interpreted in the context of this limitation; it is important to note, however, that the stage-specific SIC activities reflect the population—rather than a small sub-sample—of universal implementation activities. Finally, the number of EIPs was modest for mixed-effects regression models: 27 for program start-up and 19 for competency in program delivery. Additionally, some programs had a small number of sites; for example, two programs had only one implementing site.
However, prior simulation studies [28, 29] have supported the accuracy of the model parameters for the primary research questions, though the program-level variance components should be interpreted with caution.


Conclusions

The present study leveraged the largest known repository of independent implementation attempts to investigate the association between implementation process fidelity and achievement of two implementation milestones: program start-up and competency in program delivery. There were three main conclusions. First, although rates of achieving implementation milestones varied significantly across EIPs, they were generally quite low. Second, the probability of achieving those milestones was significantly higher when a site maintained high implementation process fidelity during the Pre-Implementation stages. Finally, high implementation process fidelity during the Implementation phase was not sufficient to produce positive implementation outcomes—that is, a newly adopting site was more likely to achieve competent delivery of the desired program if it had high fidelity in both the Pre-Implementation and Implementation phases. Together, these findings indicate that Pre-Implementation is a critical period in the implementation process. Further research is underway by the investigative team and others in the field (e.g., [30,31,32,33]) to develop implementation interventions and support tools that improve Pre-Implementation fidelity. Furthermore, funders and allies of evidence-informed program adoption should accommodate the timelines and resources necessary for sites to conduct strong Pre-Implementation. In so doing, sites will be well positioned to achieve program start-up and competency in program delivery, increasing the number of evidence-informed programs provided to patients and communities in need.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.



Abbreviations

EIP: Evidence-informed program

SIC: Stages of Implementation Completion

EPIS: Exploration, Preparation, Implementation, and Sustainment


  1. Duff J, Cullen L, Hanrahan K, Steelman V. Determinants of an evidence-based practice environment: an interpretive description. Implement Sci Commun. 2020;1(1):85–94.

  2. Tucker S, McNett M, Melnyk BM, Hanrahan K, Hunter SC, Kim B, et al. Implementation science: application of evidence-based practice models to improve healthcare quality. Worldviews Evid Based Nurs. 2021;18(2):76–84.

  3. Wong DR, Schaper H, Saldana L. Rates of sustainment in the Universal Stages of Implementation Completion. Implement Sci Commun. 2022;3(1):2.

  4. Mihalic S, Irwin K, Elliott D, Fagan A, Hansen D. Blueprints for violence prevention. Center for the Study and Prevention of Violence, University of Colorado; 2004.

  5. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation Hybrid Designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26.

  6. Landes SJ, McBain SA, Curran GM. An introduction to effectiveness-implementation hybrid designs. Psychiatry Res. 2019;280:112513.

  7. Beidas RS, Maclean JC, Fishman J, Dorsey S, Schoenwald SK, Mandell DS, et al. A randomized trial to identify accurate and cost-effective fidelity measurement methods for cognitive-behavioral therapy: project FACTS study protocol. BMC Psychiatry. 2016;16(1):323.

  8. Mowbray CT, Holter MC, Teague GB, Bybee D. Fidelity criteria: development, measurement, and validation. Am J Eval. 2003;24(3):315–40.

  9. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38(2):65–76.

  10. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health Ment Health Serv Res. 2011;38(1):4–23.

  11. Goodrich DE, Miake-Lye I, Braganza MZ, Wawrin N, Kilbourne AM. The QUERI Roadmap for Implementation and Quality Improvement. Washington (DC): Department of Veterans Affairs (US); 2020. Available from:

  12. Brown CH, Chamberlain P, Saldana L, Padgett C, Wang W, Cruden G. Evaluation of two implementation strategies in 51 child county public service systems in two states: results of a cluster randomized head-to-head implementation trial. Implement Sci. 2014;9(1):134.

  13. Nadeem E, Saldana L, Chapman J, Schaper H. A mixed methods study of the stages of implementation for an evidence-based trauma intervention in schools. Behav Ther. 2018;49(4):509–24.

  14. Saldana L, Chamberlain P, Wang W, Brown CH. Predicting program start-up using the stages of implementation measure. Adm Policy Ment Health Ment Health Serv Res. 2012;39(6):419–25.

  15. Saldana L, Chapman J, Schaper H, Campbell M. Facilitating sustainable collaborative care programs in rural settings using the stages of implementation completion (SIC). Implement Sci. 2018;13(Suppl4):S150.

  16. Sterrett-Hong EM, Saldana L, Burek J, Schaper H, Karam E, Verbist AN, et al. An exploratory study of a training team-coordinated approach to implementation. Glob Implement Res Appl. 2021;1(1):17–29.

  17. Zhang Y, Cook CR, Azad GF, Larson M, Merle JL, Thayer J, et al. A pre-implementation enhancement strategy to increase the yield of training and consultation for school-based behavioral preventive practices: a triple-blind randomized controlled trial. Prev Sci. 2022. Cited 2023 Jan 11. Available from:

  18. Saldana L. The stages of implementation completion for evidence-based practice: protocol for a mixed methods study. Implement Sci. 2014;9(1):43.

  19. Stages of Implementation Completion. Oregon Social Learning Center. 2022. Available from:

  20. Saldana L, Bennett I, Powers D, Vredevoogd M, Grover T, Schaper H, et al. Scaling implementation of collaborative care for depression: adaptation of the Stages of Implementation Completion (SIC). Adm Policy Ment Health Ment Health Serv Res. 2020;47(2):188–96.

  21. Watson DP, Snow-Hill N, Saldana L, Walden AL, Staton M, Kong A, et al. A longitudinal mixed method approach for assessing implementation context and process factors: Comparison of three sites from a Housing First implementation strategy pilot. Implement Res Pract. 2020;1:1–13.

  22. Frank HE, Saldana L, Kendall PC, Schaper HA, Norris LA. Bringing evidence-based interventions into the schools: an examination of organizational factors and implementation outcomes. Child Youth Serv. 2022;43(1):28–52.

  23. Aalsma MC, Dir AL, Zapolski TCB, Hulvershorn LA, Monahan PO, Saldana L, et al. Implementing risk stratification to the treatment of adolescent substance use among youth involved in the juvenile justice system: protocol of a hybrid type I trial. Addict Sci Clin Pract. 2019;14(1):36.

  24. Dubowitz H, Saldana L, Magder LA, Palinkas LA, Landsverk JA, Belanger RL, et al. Protocol for comparing two training approaches for primary care professionals implementing the Safe Environment for Every Kid (SEEK) model. Implement Sci Commun. 2020;1(1):78.

  25. Saldana L, Chapman JE. Measurement properties of the Universal Stages of Implementation Completion (SIC): current performance and future directions. Manuscript in preparation; 2023.

  26. Raudenbush SW, Bryk AS. Hierarchical linear models: Applications and data analysis methods. 2nd ed. Sage Publications; 2002.

  27. Hedeker D, Gibbons R, du Toit M, Patterson D. SuperMix. Scientific Software International; 2016.

  28. Bell BA, Ferron JM, Kromrey JD. Cluster size in multilevel models: The impact of sparse data structures on point and interval estimates in two-level models. JSM Proc Sect Surv Res Methods. 2008:1122–9.

  29. Maas CJM, Hox JJ. Sufficient sample sizes for multilevel modeling. Methodology. 2005;1(3):86–92.

  30. Holt DT, Helfrich CD, Hall CG, Weiner BJ. Are you ready? How health professionals can comprehensively conceptualize readiness for change. J Gen Intern Med. 2010;25(S1):50–5.

  31. Livet M, Blanchard C, Richard C. Readiness as a precursor of early implementation outcomes: an exploratory study in specialty clinics. Implement Sci Commun. 2022;3(1):94.

  32. Scaccia JP, Cook BS, Lamont A, Wandersman A, Castellow J, Katz J, et al. A practical implementation science heuristic for organizational readiness: R = MC2. J Community Psychol. 2015;43(4):484–501.

  33. Weiner BJ. A theory of organizational readiness for change. Implement Sci. 2009;4(1):67.

  34. Helfrich CD, Li YF, Sharp ND, Sales AE. Organizational readiness to change assessment (ORCA): development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci. 2009;4(1):38.

  35. Shea CM, Jacobs SR, Esserman DA, Bruce K, Weiner BJ. Organizational readiness for implementing change: a psychometric assessment of a new measure. Implement Sci. 2014;9(1):7.

  36. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10(1):21.

  37. Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, et al. Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health. 2019;7:3.

  38. Administration for Community Living. U.S. Department of Health and Human Services. n.d. Available from:

  39. California Evidence-Based Clearinghouse for Child Welfare. California Department of Social Services: Office of Child Abuse Prevention. n.d. Available from:

  40. Evidence-Based Cancer Control Programs. National Cancer Institute. n.d. Available from:

  41. Interagency Working Group on Youth Programs. n.d. Available from:

  42. Thokala P, Devlin N, Marsh K, Baltussen R, Boysen M, Kalo Z, et al. Multiple criteria decision analysis for health care decision making—an introduction: report 1 of the ISPOR MCDA Emerging Good Practices Task Force. Value Health. 2016;19(1):1–13.

  43. Cruden G, Frerichs L, Powell BJ, Lanier P, Brown CH, Lich KH. Developing a multi-criteria decision analysis tool to support the adoption of evidence-based child maltreatment prevention programs. Prev Sci. 2020;21(8):1059–64.


The authors thank the participating evidence-informed programs and study sites for contributing their data to the SIC repository.

Work on this manuscript was completed while the author team worked at OSLC. JEC, HS, and LS have moved institutions to Chestnut Health Systems, with the contact information provided accordingly. ZMA has moved to a position with the State of Oregon. The Stages of Implementation Completion is trademarked and copyrighted by Oregon Social Learning Center. 


Funding for the study was provided by Grant R01DA044745-05 from the National Institute on Drug Abuse.

Author information

Authors and Affiliations



ZMA conducted the primary analyses with the assistance of JEC, sharing first authorship. LS developed research questions and conceptualized the overall study. JEC, HS, and ZMA designed the statistical formulation for data analysis. All authors wrote, read, and approved the final manuscript.

Corresponding author

Correspondence to Lisa Saldana.

Ethics declarations

Ethics approval and consent to participate

Although data collected for these analyses come from multiple trials and data collection efforts, all SIC website data and their use are overseen and protected by the Oregon Social Learning Center Institutional Review Board [see also protocol NCT03799302].

Consent for publication

Not applicable.

Competing interests

LS is the primary developer of the Stages of Implementation Completion (SIC) and holds a license agreement with OSLC to access, utilize, and grant permission to use the SIC repository and its associated data. As such, Dr. Saldana is not involved in the data collection, entry, management, manipulation, or analysis for this or other manuscripts.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit The Creative Commons Public Domain Dedication waiver ( applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Alley, Z.M., Chapman, J.E., Schaper, H. et al. The relative value of Pre-Implementation stages for successful implementation of evidence-informed programs. Implementation Sci 18, 30 (2023).
