Development and psychometric evaluation of the Implementation Support Competencies Assessment

Abstract

Background

Implementation support practitioners (ISPs) are professionals who support others to implement evidence-informed practices, programs, and policies in various service delivery settings to achieve population outcomes. Measuring the use of competencies by ISPs provides a unique opportunity to assess an understudied facet of implementation science—how the knowledge, attitudes, and skills used by ISPs affect sustainable change in complicated and complex service systems. This study describes the development and validation of a measure—the Implementation Support Competencies Assessment (ISCA)—that assesses implementation support competencies, with versatile applications across service contexts.

Methods

Recently developed practice guide materials included operationalizations of core competencies for ISPs across three domains: co-creation and engagement, ongoing improvement, and sustaining change. These operationalizations, in combination with recent empirical and conceptual work, provided an initial item pool and foundation on which to advance measurement development, largely from a confirmatory perspective (as opposed to exploratory). The measure was further refined through modified cognitive interviewing with three highly experienced ISPs and pilot-testing with 39 individuals enrolled in a university-based certificate program in implementation practice. To recruit a sample for validation analyses, we leveraged a listserv of nearly 4,000 individuals who have registered for or expressed interest in various events and trainings focused on implementation practice offered by an implementation science collaborative housed within a research-intensive university in the Southeast region of the United States. Our final analytic sample included 357 participants who self-identified as ISPs.

Results

Assessments of internal consistency reliability for each competency-specific item set yielded evidence of strong reliability. Results from confirmatory factor analyses provided evidence for the factorial and construct validity of all three domains and associated competencies in the ISCA.

Conclusions

The findings suggest that one’s possession of high levels of competence across each of the three competency domains is strongly associated with theorized outcomes that can promote successful and sustainable implementation efforts among those who receive implementation support from an ISP. The ISCA serves as a foundational tool for workforce development to formally measure and assess improvement in the skills that are required to tailor a package of implementation strategies situated in context.

Background

Implementation support practitioners (ISPs) are professionals who support others to implement evidence-informed practices, programs, and policies to achieve population outcomes [1,2,3,4,5]. Several influences have contributed to the increasing attention given to describing and understanding the role of ISPs in building implementation capacity. These factors include: 1) interest in building a competent workforce for supporting implementation and evidence use [1,2,3,4,5,6,7,8]; 2) recent publications describing the competencies needed to be effective in an implementation support role [1,2,3,4,5, 9]; 3) growing calls from the field of implementation science to address the emerging gap between implementation research and implementation practice—referred to as the paradoxical, ironic, or secondary gap [10,11,12]; and 4) emerging evidence that the use of multi-faceted implementation strategies to support innovations in health and social care has had limited effects on population outcomes [13].

Combined, these factors point to a need to understand how other aspects of implementation processes, beyond the use of specific implementation strategies, can contribute to improved implementation and population outcomes. Implementation support competencies, which can be conceptualized as mechanisms that enable professionals to provide high-quality implementation support and capacity-building, could be a promising focal point on this front; competencies represent an integration of an ISP’s values, knowledge, attitudes, and skills [14]. Measuring the use of competencies by ISPs provides a unique opportunity to assess an understudied facet of implementation science—how the values, knowledge, attitudes, and skills used by ISPs affect sustainable change in service systems.

ISPs rely on technical and relational skills to identify, tailor, and improve evidence-based implementation strategies in different service contexts to ensure high-quality, consistent implementation of evidence-informed practices. To understand how ISPs do this work, it is important to systematically gather data from ISPs on (a) the skills they use to support change and (b) how confident and competent they are in using these skills to build implementation capacity [15]. Previous research foregrounded the critical question of “what it takes” to build sustainable implementation outcomes that contribute to improved and more equitable outcomes for people and communities [15]. The identification and explication of competencies for ISPs represents progress in the field toward understanding “what it takes”; however, a gap has remained in how to measure these competencies well. This study describes the development and validation of a measure—the Implementation Support Competencies Assessment (ISCA) [16]—that assesses implementation support competencies, with versatile applications across service contexts.

The work of ISPs must account for the dynamic and highly relational nature of implementation, which involves integrating multiple stakeholder perspectives, identifying crucial barriers to implementation that are often invisible to observers, and assessing available resources to address challenges and enhance facilitators [1]. Developing a workforce that can provide implementation support will require the field of implementation science to look beyond theories, models, and frameworks, and to more deeply understand how to assess and cultivate the competencies required by professionals working to promote and sustain evidence use in human service systems. Studying implementation capacity-building approaches, such as those used by ISPs, that are situated within contexts and emphasize the relational support needed to build organizational capability for service change might be a promising method for understanding how we can achieve improved implementation and population outcomes [13]. The ISCA is a tool that could support these efforts, and the specific aims of the current study are (a) to test whether the items for each ISCA competency offer a consistent and accurate representation of that competency (i.e., reliability); (b) to confirm the hypothesized factor structure of the competencies and the domains within which they are nested (i.e., factorial validity); and (c) to assess whether the measures are significantly associated with hypothesized outcomes of implementation support (i.e., construct validity).

Methods

Measurement development process

Our process of developing the ISCA was informed by DeVellis [17], following a systematic and rigorous approach to measurement development. To begin, we leveraged recent scholarship that offers clear and rich descriptions of the constructs intended for measurement—the 15 core competencies posited to undergird effective implementation support [1,2,3,4,5]. Recently developed practice guide materials intended to inform the work of ISPs also include operationalizations, or core activities, for each core competency [18]. These operationalizations, in combination with the recent empirical and conceptual work noted above, provided an initial item pool (116 items across the 15 competencies) and a foundation on which to advance measurement development, largely from a confirmatory perspective (as opposed to an exploratory one).

Next, we sought to identify an optimal format for measurement. This process was informed by other extant competency measures and our desire to balance parsimony (low respondent burden) with informativeness. Ultimately, we selected an ordinal-level response-option set whereby individuals could self-report their level of perceived competence with respect to each item. Consistent with other existing competency self-assessments [19], we selected the following response-option set: 1 = not at all competent, 2 = slightly competent, 3 = moderately competent, 4 = very competent, and 5 = extremely competent. The research team then initiated a three-stage process for item review and refinement. The first stage involved members of the research team identifying opportunities to simplify and consolidate possible items in the item pool. This led to a slight reduction in items (now 113) and item simplification.

The second stage involved use of modified cognitive interviewing with three experienced ISPs. The three participants were invited to review the assessment items in preparation for their interview, and during their interview (about 60 minutes) they were asked the following questions for each competency item set: (a) how clear are the items for this competency? (b) how accessible do the items feel for potential users? (c) what changes, if any, would you recommend for these items? Feedback from respondents led to several minor edits, shifts in terminology (e.g., use of “partner” instead of “stakeholder”), and opportunities to further clarify language used in some items (e.g., defining “champions”). All potential item revisions were reviewed and accepted by two research team members with extensive implementation research and practice experience.

The third stage involved pilot-testing the assessment with a group of professionals who were enrolled in a university-based certificate program focused on cultivating ISP core competencies. Prior to the delivery of certificate program content, participants were asked to complete the ISCA. Following the completion of each competency-specific item set, participants were given the following open-ended prompts: (a) please identify any items that felt unclear or confusing; (b) please identify any language used in these items that was difficult to understand; and (c) please provide any other thoughts or insights you would like to share about these items. The assessment was completed by 39 individuals, enabling us to tentatively assess internal consistency reliability for each competency item set (Cronbach’s alpha values ranged from .70 to .94; McDonald’s omega values ranged from .70 to .95), as well as the distributional properties of item responses (results indicated the items were not significantly burdened by skewness or kurtosis). We were also able to leverage open-ended feedback to incorporate several minor item edits, which were again reviewed and approved by the same two members of the research team.
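For readers who wish to reproduce this kind of screening, the sketch below shows how internal consistency and distributional properties might be checked in Python; the synthetic data, sample sizes, and variable handling are illustrative assumptions and not the authors’ actual analysis code.

```python
# Minimal sketch of the pilot screening steps (assumed tools: numpy, scipy).
# Synthetic data stand in for the 39 pilot responses on a 6-item set.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(42)
pilot = rng.integers(1, 6, size=(39, 6)).astype(float)  # 1-5 ordinal responses

print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")
for j in range(pilot.shape[1]):
    skew = stats.skew(pilot[:, j])
    kurt = stats.kurtosis(pilot[:, j])  # excess kurtosis
    print(f"item {j + 1}: skewness = {skew:.2f}, kurtosis = {kurt:.2f}")
```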

Our next step was to prepare the assessment for validation analyses. In addition to the assessment items, we developed a set of items intended to measure two core constructs posited to be associated with the ISP core competencies [2]. One construct represented ISP gains, or the extent to which ISPs report receiving recognition, credibility, and respect from those who receive their implementation support. The second construct represented recipient benefits, or the extent to which ISPs perceive the recipients of their support experiencing increases in (a) relational capacities with the ISP, (b) implementation capability, (c) implementation opportunities, and (d) implementation motivation [2]. More details about the specific items used to measure these constructs and the ISCA are provided in the Final Measures subsection.

Data collection and sample

To recruit a sample for validation analyses, we leveraged a listserv of nearly 4,000 individuals who have registered for or expressed interest in various events and trainings focused on implementation practice offered by an implementation science collaborative housed within a research-intensive university in the Southeast region of the United States. A series of emails were sent to members of this listserv describing our efforts to validate the ISCA, with an invitation to participate. Voluntary responses (no participation incentives were offered) were collected between June and November 2023 using Qualtrics, a web-based survey platform. The survey included informed consent materials, items to collect information about respondent sociodemographic and professional characteristics, the ISCA items, and validation items. The median completion time for the survey was 22.7 minutes among the 357 participants in our final analytic sample.

Table 1 features an overview of participant characteristics. The majority of participants identified as women (84%), with 15% identifying as men, 1% identifying as gender nonconforming, and 1% preferring not to provide information about their gender identity (percentages are rounded, so the total can exceed 100%). Participants could select all racial and ethnic identities that applied to them; 76% identified as White, 11% identified as Black, 9% identified as Asian, 7% identified as Hispanic, 1% identified as Native American/American Indian/Alaska Native, 0.3% identified as Pacific Islander, 3% identified as other, and 2% preferred not to provide information about their racial/ethnic identity. Six continents of residence were represented among participants, with 78% of participants residing in North America, 7% in Europe, 6% in Australia, 4% in Asia, 4% in Africa, and 2% in South America. Thirty-eight percent indicated having more than 15 years of professional experience, 23% indicated having one to five years of experience, 22% indicated having six to ten years of experience, and the remaining 17% indicated having between 11 and 15 years of experience. The following service types were well represented among participants (more than one type could be indicated): public health (32%), health (31%), mental and behavioral health (26%), child welfare (22%), and K-12 education (18%), among others. The three most common work settings were non-profit organizations (36%), higher education (27%), and state government (20%; more than one setting could be indicated). See Table 1 for more details.

Table 1 Participant characteristics (N = 357)

Final measures

Implementation Support Competencies Assessment (ISCA)

Rooted in recent scholarship and foundational steps of measurement development described earlier, the ISCA included item sets (ranging from 5 to 15 items and totaling 113 items) intended to measure each of 15 core competencies posited to undergird effective implementation support, with competencies nested within one of three overarching domains: co-creation and engagement, ongoing improvement, and sustaining change. The co-creation and engagement domain included items designed to measure the following five competencies: co-learning (6 items), brokering (6 items), address power differentials (7 items), co-design (6 items), and tailoring support (7 items). See Appendix 1 for a list of all items associated with this domain. The ongoing improvement domain included items designed to measure the following six competencies: assess needs and assets (6 items); understand context (6 items); apply and integrate implementation frameworks, strategies, and approaches (5 items); facilitation (9 items); communication (6 items); and conduct improvement cycles (6 items). See Appendix 2 for a list of all items associated with this domain. The sustaining change domain included items designed to measure the following four competencies: grow and sustain relationships (11 items), develop teams (15 items), build capacity (8 items), and cultivate leaders and champions (9 items). See Appendix 3 for a list of all items associated with this domain. Information about internal consistency reliability for each item set is featured in the Results section as a key component of the psychometric evaluation of the ISCA.

When completing the ISCA, participants were instructed to reflect on their experiences supporting implementation in various settings, review each item, and assess their level of competence by selecting one of the following response options: not at all competent (1), slightly competent (2), moderately competent (3), very competent (4), or extremely competent (5). If participants did not have direct experience with a particular item, they were instructed to indicate how competent they would expect themselves to be if they were to conduct the activity described in the item.
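As an illustration of how responses might be summarized, the sketch below computes a simple mean score per competency item set. This scoring rule, the competency labels, and the example values are assumptions for illustration only and are not prescribed by the ISCA materials.

```python
# Illustrative only: summarizing 1-5 ISCA responses as per-competency means.
# The scoring rule and dictionary layout are assumptions, not part of the ISCA.
from statistics import mean

# Hypothetical responses on the 1-5 competence scale, keyed by competency
responses = {
    "co_learning": [3, 4, 4, 3, 2, 4],                      # 6 items
    "brokering": [4, 4, 3, 5, 4, 4],                        # 6 items
    "address_power_differentials": [3, 3, 4, 4, 3, 3, 4],   # 7 items
}

competency_scores = {name: mean(vals) for name, vals in responses.items()}
for name, score in competency_scores.items():
    print(f"{name}: {score:.2f}")
```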

Validation constructs

Consistent with the mechanisms of implementation support articulated by Albers et al. [2], we developed and refined multi-item scales intended to measure two constructs theorized to be byproducts of ISPs possessing proficiency across the 15 core competencies of implementation support provision. First, we developed three items intended to measure ISP gains, or the extent to which ISPs receive recognition, credibility, and respect from those who receive their implementation support. Specifically, participants were asked to indicate their level of agreement (ranging from 1 = Strongly Disagree to 5 = Strongly Agree) with the following three statements: “I have credibility among those who receive my implementation support,” “I am respected by those who receive my implementation support,” and “My expertise is recognized by those who receive my implementation support.”

We also developed ten items intended to measure recipient benefits, or the extent to which ISPs perceive the recipients of their support experiencing increases in (a) relational capacities with the ISP, (b) implementation capability, (c) implementation opportunities, and (d) implementation motivation [2]. Specifically, participants were asked to indicate their level of agreement (ranging from 1 = Strongly Disagree to 5 = Strongly Agree) with the following ten statements: “I am trusted by those who receive my implementation support;” “Those who receive my implementation support feel safe trying new things, making mistakes, and asking questions;” “Those who receive my implementation support increase their ability to address implementation challenges;” “Those who receive my implementation support gain competence in implementing evidence-informed interventions in their local settings;” “I provide opportunities for continued learning to those who receive my implementation support;” “I promote implementation friendly environments for those who receive my implementation support;” “Those who receive my implementation support strengthen commitment to their implementation work;” “Those who receive my implementation support feel empowered to engage in their implementation work;” “Those who receive my implementation support demonstrate accountability in their implementation work;” and “Those who receive my implementation support develop an interest in regularly reflecting on their own implementation work.” Information about internal consistency reliability for item sets related to the two validation constructs is featured in the Results section.

Data analysis

To generate evidence of the internal consistency reliability of competency-specific item sets, we estimated Cronbach’s alpha, McDonald’s omega, and Raykov’s rho coefficients for each of the 15 competencies [20, 21]. To generate evidence of the factorial and construct validity of the ISCA, we then employed confirmatory factor analysis (CFA) in Mplus 8.6 [22]. Consistent with our hypothesized model, we estimated three separate second-order CFA models, one for each of the three competency domains: co-creation and engagement, ongoing improvement, and sustaining change. The first CFA model specified the co-creation and engagement domain as a second-order latent factor with the following five competencies specified as first-order latent factors: co-learning, brokering, address power differentials, co-design, and tailoring support. The second CFA model focused on the ongoing improvement domain as a second-order latent factor with the following six competencies specified as first-order latent factors: assess needs and assets; understand context; apply and integrate implementation frameworks, strategies, and approaches; facilitation; communication; and conduct improvement cycles. The third CFA model focused on the sustaining change domain as a second-order latent factor with the following four competencies specified as first-order latent factors: grow and sustain relationships, develop teams, build capacity, and cultivate leaders and champions. In all three models, ISP gains and recipient benefits were regressed on the second-order domain factor, and the error terms for the validation constructs were allowed to covary.
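To make the hypothesized structure concrete, the sketch below expresses the first of these second-order models in lavaan-style syntax using the Python package semopy. This is an approximate, hypothetical translation: the reported analyses were conducted in Mplus 8.6, the indicator names are placeholders, and semopy’s default maximum likelihood estimator differs from the WLSMV estimator described below.

```python
# Hypothetical sketch of the Domain 1 second-order CFA (the actual analyses
# used Mplus 8.6). Indicator names (c1_1 ... c5_7, g1-g3, b1-b10) and the CSV
# file are placeholders, not the study's variable names.
import pandas as pd
import semopy

MODEL = """
co_learning =~ c1_1 + c1_2 + c1_3 + c1_4 + c1_5 + c1_6
brokering =~ c2_1 + c2_2 + c2_3 + c2_4 + c2_5 + c2_6
power_differentials =~ c3_1 + c3_2 + c3_3 + c3_4 + c3_5 + c3_6 + c3_7
co_design =~ c4_1 + c4_2 + c4_3 + c4_4 + c4_5 + c4_6
tailoring_support =~ c5_1 + c5_2 + c5_3 + c5_4 + c5_5 + c5_6 + c5_7
co_creation =~ co_learning + brokering + power_differentials + co_design + tailoring_support
isp_gains =~ g1 + g2 + g3
recipient_benefits =~ b1 + b2 + b3 + b4 + b5 + b6 + b7 + b8 + b9 + b10
isp_gains ~ co_creation
recipient_benefits ~ co_creation
isp_gains ~~ recipient_benefits
"""

data = pd.read_csv("isca_responses.csv")  # hypothetical input file
model = semopy.Model(MODEL)
model.fit(data)                  # semopy defaults to ML; the paper used WLSMV
print(semopy.calc_stats(model))  # CFI, TLI, RMSEA, and related fit indices
```

The final line of the model string mirrors the specification above in which the error terms of the two validation constructs were allowed to covary.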

For purposes of model identification and calibrating the latent-factor metrics, we fixed first- and second-order factor means to a value of 0 and variances to a value of 1. To accommodate the ordinal-level nature of the ISCA items (and items used to measure the validation constructs), we employed the means- and variance-adjusted weighted least squares (WLSMV) estimator and incorporated a polychoric correlation input matrix [23]. Some missing values were present in the data, generally reflecting a steady rate of attrition as participants progressed through the ISCA. Consequently, the analytic sample for each second-order factor model varied: the model for the co-creation and engagement domain included all 357 participants, the model for the ongoing improvement domain included 316 participants, and the model for the sustaining change domain included 296 participants. Within each model, pairwise deletion was used to handle missing data, which enables the flexible use of partial responses across model variables to estimate model parameters. Missing values were shown to meet the assumption of Missing Completely at Random (MCAR) per Little’s multivariate test of MCAR (χ²[94] = 83.47, p = 0.77), a condition under which pairwise deletion performs well [24, 25].
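The pairwise-deletion idea can be illustrated with an input correlation matrix computed from all available cases per variable pair, which is what pandas does by default. Pearson correlations are shown only for simplicity; the analysis itself used polychoric correlations, which pandas does not provide, and the input file is a hypothetical placeholder.

```python
# Illustration of pairwise deletion when building an input correlation matrix.
# pandas computes each pairwise correlation from the cases observed on that
# pair of variables; the reported models used a polychoric matrix instead.
import pandas as pd

df = pd.read_csv("isca_responses.csv")            # missing values coded as NaN
corr = df.corr(method="pearson", min_periods=30)  # require >= 30 overlapping cases
print(corr.round(2))
```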

To assess model fit, the following indices and associated values were prespecified as being indicative of good model fit: Comparative Fit Index (CFI) and Tucker-Lewis Index (TLI) values greater than 0.95, standardized root mean square residual (SRMR) values less than 0.08, and root mean square error of approximation (RMSEA) values less than or equal to 0.06 (including the upper bound of the 90% confidence interval) [26, 27]. Each factor-analytic model was over-identified and sufficiently powered to detect not-close model fit [28].
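These prespecified criteria can be collected into a simple decision rule, sketched below in Python. Treating reported two-decimal values such as CFI = 0.95 as satisfying the “greater than 0.95” criterion is our interpretive assumption, made so that the rule agrees with the rounded values reported in the Results section.

```python
# Decision rule encoding the prespecified fit criteria from this paragraph.
# Two-decimal reported values (e.g., CFI = 0.95) are treated as meeting the
# CFI/TLI cutoff, an interpretive assumption.
def indicates_good_fit(cfi: float, tli: float, srmr: float,
                       rmsea: float, rmsea_ci_upper: float) -> bool:
    return (cfi >= 0.95 and tli >= 0.95 and srmr < 0.08
            and rmsea <= 0.06 and rmsea_ci_upper <= 0.06)

# Domain 1 estimates reported in the Results section
print(indicates_good_fit(cfi=0.95, tli=0.95, srmr=0.05,
                         rmsea=0.052, rmsea_ci_upper=0.056))  # True
```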

Ethics approval

We submitted our study proposal (study #: 23-0958) to our university’s Office of Human Research Ethics, whereby our study was approved and determined to be exempt from further review.

Results

Internal consistency reliability

Assessments of internal consistency reliability for each competency-specific item set yielded evidence of strong reliability. Specifically, Cronbach’s alpha, McDonald’s omega, and Raykov’s rho ranged from 0.82 to 0.96 across competencies. Internal consistency reliability was also strong for the two validation constructs. For ISP gains, Cronbach’s alpha and Raykov’s rho were 0.86; McDonald’s omega was 0.87. For recipient benefits, Cronbach’s alpha and Raykov’s rho were 0.91; McDonald’s omega was 0.92. See Table 2 for a detailed overview of reliability estimates across competencies and validation constructs.

Table 2 Internal consistency reliability estimates

Factorial and construct validity

Co-creation and engagement domain

Figure 1 features the second-order factor model with co-creation and engagement specified as a second-order factor and the five corresponding competencies specified as first-order factors. ISP gains and recipient benefits were also regressed on the co-creation and engagement factor. This model yielded good model fit (χ²[937] = 1857.16, p < .001; CFI = 0.95; TLI = 0.95; SRMR = 0.05; RMSEA = 0.052 [upper bound of the 90% confidence interval: 0.056]). All first-order standardized factor loadings were statistically significant and ranged from 0.66 to 0.87. All second-order standardized factor loadings were statistically significant and ranged from 0.89 to 0.93. Per standardized regression coefficients, the co-creation and engagement domain also was significantly and positively associated with ISP gains (β = 0.62, p < .001; R2 = 0.38) and recipient benefits (β = 0.66, p < .001; R2 = 0.44).

Fig. 1

Second-Order Confirmatory Factor Analysis of Domain 1 and Construct Validation (Standardized Parameters). Note: Error terms for observed indicators and full measurement models for the two focal endogenous constructs are omitted to retain visual parsimony. All parameter estimates are standardized. ***p < .001. All first-order and second-order factor loadings are significant at the p < .001 level. ISP = implementation support practitioner

Ongoing improvement domain

Figure 2 features the second-order factor model with ongoing improvement specified as a second-order factor and the six corresponding competencies specified as first-order factors. ISP gains and recipient benefits were also regressed on the ongoing improvement factor. This model yielded good model fit overall (χ²[1215] = 2707.55, p < .001; CFI = 0.95; TLI = 0.95; SRMR = 0.06; RMSEA = 0.062 [upper bound of the 90% confidence interval: 0.065]), although the RMSEA value slightly exceeded the prespecified 0.06 threshold. All first-order standardized factor loadings were statistically significant and ranged from 0.68 to 0.96. All second-order standardized factor loadings were statistically significant and ranged from 0.80 to 0.95. Per standardized regression coefficients, the ongoing improvement domain also was significantly and positively associated with ISP gains (β = 0.61, p < .001; R2 = 0.37) and recipient benefits (β = 0.64, p < .001; R2 = 0.41).

Fig. 2

Second-Order Confirmatory Factor Analysis of Domain 2 and Construct Validation (Standardized Parameters). Note: Error terms for observed indicators and full measurement models for the two focal endogenous constructs are omitted to retain visual parsimony. All parameter estimates are standardized. ***p < .001. All first-order and second-order factor loadings are significant at the p < .001 level. FSA = frameworks, strategies, and approaches; ISP = implementation support practitioner

Sustaining change domain

Figure 3 features the second-order factor model with sustaining change specified as a second-order factor and the four corresponding competencies specified as first-order factors. ISP gains and recipient benefits were also regressed on the sustaining change factor. This model yielded good model fit (χ²[1477] = 2927.16, p < .001; CFI = 0.96; TLI = 0.96; SRMR = 0.05; RMSEA = 0.058 [upper bound of the 90% confidence interval: 0.061]). All first-order standardized factor loadings were statistically significant and ranged from 0.79 to 0.94. All second-order standardized factor loadings were statistically significant and ranged from 0.88 to 0.94. Per standardized regression coefficients, the sustaining change domain also was significantly and positively associated with ISP gains (β = 0.69, p < .001; R2 = 0.48) and recipient benefits (β = 0.75, p < .001; R2 = 0.57).

Fig. 3

Second-Order Confirmatory Factor Analysis of Domain 3 and Construct Validation (Standardized Parameters). Note: Error terms for observed indicators and full measurement models for the two focal endogenous constructs are omitted to retain visual parsimony. All parameter estimates are standardized. ***p < .001. All first-order and second-order factor loadings are significant at the p < .001 level. ISP = implementation support practitioner

Across all three models, standardized factor loadings associated with the validation constructs were statistically significant and ranged from 0.72 to 0.95. These details were omitted from the figures to preserve visual parsimony. Taken together, results from the three models provided evidence for the factorial and construct validity of all three domains and associated competencies in the ISCA. See Appendices 1, 2, and 3 for summaries of standardized factor loadings, item communalities (i.e., the proportion of item variance attributable to its corresponding latent factor), and item response frequencies. A correlation matrix of all study variables is available upon request.

Tests of alternative models

With respect to alternative models, we compared the second-order factor specification for each domain with models in which only first-order factors were specified (and allowed to correlate). We then assessed differences in model fit between the first-order and second-order factor specifications. Leveraging the guidelines provided by Cheung and Rensvold [29], we specifically assessed differences in CFI values to determine whether alternative models differed significantly. Decreases in CFI values of more than 0.01 units between an original model and an alternative model would indicate a significant worsening of model fit. For all three domains, the first-order specification and second-order specification did not differ significantly (i.e., changes in CFI did not exceed 0.003 units in any case; more details about these analyses are available upon request). When alternative models yield statistically negligible differences in model fit, it is good practice to favor the more parsimonious specification (i.e., the model with fewer parameter estimates and more degrees of freedom). Because second-order factor structures are more parsimonious than first-order factor structures (with all possible first-order factor correlations), we retained the second-order factor models as optimal.
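The comparison logic is straightforward to express in code. The sketch below implements the 0.01-unit ΔCFI rule from Cheung and Rensvold [29]; the numeric values passed in are illustrative, not the study’s exact estimates.

```python
# Cheung & Rensvold (2002) Delta-CFI rule: a drop of more than 0.01 in CFI
# indicates a significant worsening of fit. Values below are illustrative.
def fit_worsens_significantly(cfi_original: float, cfi_alternative: float,
                              threshold: float = 0.01) -> bool:
    return (cfi_original - cfi_alternative) > threshold

# The paper reports CFI changes never exceeding 0.003 across the three domains
print(fit_worsens_significantly(0.95, 0.947))  # False -> retain parsimonious model
```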

Response rates and evidence of acceptability

As noted earlier, response rates steadily declined as participants progressed through the ISCA. As reported in Table 2, the number of responses provided for the items associated with each competency decreased roughly linearly from a high of 357 for the first competency to a low of 290 for the fifteenth and final competency. The average attrition rate from competency to competency was 1.5%. Moreover, we did not observe any anomalous or unexpected levels of data missingness for any particular item.

Open-ended feedback from pilot-test participants also provided evidence of the acceptability of the ISCA. Pilot-test participants described the ISCA as thorough, clear, easy to understand, and applicable to their work. The ISCA also was described as a tool that could support self-reflection and guide professional development efforts. One pilot-test participant even stated that they “really enjoyed” completing the ISCA.

Discussion

The purpose of the current study was to psychometrically evaluate the ISCA, a promising assessment instrument intended to measure levels of competence across 15 core competencies posited to undergird the effective provision of implementation support in various service delivery settings. Our results offer evidence of the internal consistency reliability and factorial validity of the ISCA, including its three specific domains and associated competencies. The strength of relationships between each domain and the specified validation constructs—ISP gains and recipient benefits—also provide notable evidence of the construct validity of the ISCA. In alignment with the mechanisms of implementation support articulated by Albers and colleagues [2], our findings suggest that one’s possession of high levels of competence across each of the three competency domains is strongly associated with theorized outcomes that can promote successful and sustainable implementation efforts among those who receive implementation support from an ISP.

It is important to highlight that previous research undergirding the identification and operationalization of the ISCA competencies included an integrated systematic review of strategies used by ISPs and the skills needed to use these strategies in practice, along with survey research and interviews that centered the experiences of professionals providing implementation support in diverse service sectors and geographic regions [1,2,3,4,5]. This previous research on ISPs identified the high level of skill required by those providing implementation support, leading to questions about how to select, recruit, and build the capacity of these professionals.

The ISCA serves as a foundational tool for workforce development to measure and improve the skills that are required to both engage in the relational and complex processes involved in building implementation capacity and to tailor a package of implementation strategies situated in context. As we seek to understand how the strategies and skills used by ISPs bring about change in service systems, the ISCA can be used to answer key questions posed by Albers and colleagues [3] including (a) how these competencies can be built and maintained in service settings in ways that activate mechanisms for change, (b) how different skills may be needed in different settings and contexts, and (c) how the roles of ISPs can be supported to establish cultures of learning and evidence use.

As we seek to understand how ISPs activate mechanisms of change to support implementation and evidence use, the ISCA can be used to support self-assessments that identify areas of strength and professional development opportunities for growing the skills needed to build implementation capacity. Supervisors can use the ISCA to inform professional development and decisions around recruitment and hiring. Taken together, the ISCA can be used to further define the role of key actors in the field of implementation science who represent the implementation support system [30].

Future directions and limitations

The ISCA is foundational for future research on the role of implementation support and can be used for evidence-building related to implementation practice. For example, the ISCA can be used to assess whether trainings and other implementation support capacity-building activities promote gains in core competencies. The ISCA also can be used for basic implementation research including assessing the extent to which possession and use of particular competencies is associated with implementation progress across implementation stages [31] and long-term implementation outcomes in real-world settings [32, 33].

As we seek to understand the characteristics of effective implementation teams and champions, the ISCA also can be used to identify “clusters” of competencies that appear to bolster specific support roles in various implementation efforts. Moreover, research leveraging the ISCA might be well positioned to identify the presence of specific competency portfolios possessed by members of implementation teams, highlighting the potential for teams to assemble a group of ISPs who, when brought together, provide coverage of the various competencies that undergird effective implementation support. Research on this front seems promising, as it is unlikely that any single ISP would possess high levels of competence across all 15 competencies reflected in the ISCA.

The current study possesses some limitations that should shape interpretations and conclusions. First, although there was notable diversity in the analytic sample with respect to sociodemographic and professional characteristics, study findings likely generalize best to ISPs who reside in North America and identify as White women. Second, the total analytic sample size was insufficiently large to support multiple group comparison analyses (i.e., tests of measurement invariance), whereby the psychometric properties of the ISCA could be compared across meaningful subgroups (e.g., continent of residence, gender identity, racial/ethnic identity, service type, service setting). Future research should seek to recruit very large samples that would support such analyses, which could highlight whether the ISCA performs equivalently across various respondent characteristics. Third, as a self-assessment tool, the ISCA is potentially subject to the common biases inherent in self-report data. Moreover, study participants provided both their competency assessments and responses to the outcome measures used for validation purposes. Consequently, associations between the ISCA constructs and validation constructs could be inflated due to common method variance. Future research should endeavor to validate the ISCA using outcome measures collected from the recipients of implementation support. Indeed, we view the current study as a launching point for a larger body of work intended to robustly validate the ISCA.

Conclusions

This study brings together several years of theory development and research on the role of ISPs and the competencies that are needed for them to be successful in their role. To date, a psychometrically validated measure of implementation support competencies has not been available. Results from the current study showcase a promising, psychometrically robust assessment tool on this front—the Implementation Support Competencies Assessment (ISCA). As a whole, results from this study also provide compelling evidence of reliability and validity with respect to the implementation support competencies identified by Metz and colleagues. Using the ISCA can shed light on the black box of many current implementation studies that fail to show positive effects of specific implementation strategies on implementation outcomes [13]. The ISCA enables understanding of the level of competency with which implementation strategies are selected, tailored, and delivered, which may be as important as the specific strategy or package of strategies selected. At the very least, the ISCA can support efforts to understand the impact that competent (or less competent) implementation support has on the outcomes of a particular implementation effort.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Abbreviations

ISP:

Implementation Support Practitioner

ISCA:

Implementation Support Competencies Assessment

CFA:

Confirmatory Factor Analysis

CFI:

Comparative Fit Index

TLI:

Tucker-Lewis Index

RMSEA:

Root Mean Square Error of Approximation

WLSMV:

Weighted least squares, means- and variance-adjusted

MCAR:

Missing Completely at Random

References

  1. Albers B, Metz A, Burke K. Implementation support practitioners- A proposal for consolidating a diverse evidence base. BMC Health Serv Res. 2020;20(1):368.

  2. Albers B, Metz A, Burke K, Bührmann L, Bartley L, Driessen P, et al. The Mechanisms of Implementation Support - Findings from a Systematic Integrative Review. Res Soc Work Pract. 2022;32(3):259–80.

  3. Albers B, Metz A, Burke K, Bührmann L, Bartley L, Driessen P, et al. Implementation Support Skills: Findings From a Systematic Integrative Review. Res Soc Work Pract. 2021;31(2):147–70.

  4. Metz A, Albers B, Burke K, Bartley L, Louison L, Ward C, et al. Implementation Practice in Human Service Systems: Understanding the Principles and Competencies of Professionals Who Support Implementation. Hum Serv Organ Manag Leadersh Gov. 2021;45(3):238–59.

  5. Bührmann L, Driessen P, Metz A, Burke K, Bartley L, Varsi C, et al. Knowledge and attitudes of Implementation Support Practitioners—Findings from a systematic integrative review. Ochodo E, editor. PLoS One. 2022;17(5):e0267533.

  6. Moore JE, Rashid S, Park JS, Khan S, Straus SE. Longitudinal evaluation of a course to build core competencies in implementation practice. Implement Sci. 2018;13(1):1–13.

  7. Mosson R, Augustsson H, Bäck A, Åhström M, Von Thiele Schwarz U, Richter A, et al. Building implementation capacity (BIC): A longitudinal mixed methods evaluation of a team intervention. BMC Health Serv Res. 2019;19(1):1–12.

  8. Park JS, Moore JE, Sayal R, Holmes BJ, Scarrow G, Graham ID, et al. Evaluation of the “Foundations in Knowledge Translation” training initiative: preparing end users to practice KT. Implement Sci. 2018;13(1):63.

  9. Aldridge WA II, Roppolo RH, Brown J, Bumbarger BK, Boothroyd RI. Mechanisms of change in external implementation support: A conceptual model and case examples to guide research and practice. Implement Res Pract. 2023. https://doi.org/10.1177/26334895231179761.

  10. Westerlund A, Sundberg L, Nilsen P. Implementation of Implementation Science Knowledge: The Research-Practice Gap Paradox. Worldviews Evid Based Nurs. 2019;16(5):332–4.

  11. Juckett LA, Bunger AC, McNett MM, Robinson ML, Tucker SJ. Leveraging academic initiatives to advance implementation practice: a scoping review of capacity building interventions. Implement Sci. 2022;17(1):1–14.

  12. Jensen TM, Metz AJ, Disbennett ME, Farley AB. Developing a practice-driven research agenda in implementation science: Perspectives from experienced implementation support practitioners. Implement Res Pract. 2023;3:4.

  13. Boaz A, Baeza J, Fraser A, Persson E. ‘It depends’: what 86 systematic reviews tell us about what strategies to use to support the use of research in clinical practice. Implement Sci. 2024;19(1):1–30.

  14. Mazurek Melnyk B, Gallagher-Ford L, English Long L, Fineout-Overholt E. The establishment of evidence-based practice competencies for practicing registered nurses and advanced practice nurses in real-world clinical settings: Proficiencies to improve healthcare quality, reliability, patient outcomes, and costs. Worldviews Evid Based Nurs. 2014;11(1):5–15.

  15. Metz A, Jensen T, Farley A, Boaz A. Is implementation research out of step with implementation practice? Pathways to effective implementation support over the last decade. Implement Res Pract. 2022;3:263348952211055.

  16. Metz A, Albers B, Jensen T. Implementation Support Competencies Assessment (ISCA). 2024. https://doi.org/10.17605/OSF.IO/ZH82F.

  17. DeVellis R. Scale development: theory and applications. 3rd ed. Thousand Oaks, CA: Sage; 2012.

  18. Metz A, Louison L, Burke K, Albers B, Ward C. Implementation support practitioner profile: Guiding principles and core competencies for implementation practice. University of North Carolina at Chapel Hill; 2020.

  19. Wilson J, Ward C, Fetvadjiev VH, Bethel A. Measuring Cultural Competencies: The Development and Validation of a Revised Measure of Sociocultural Adaptation. J Cross Cult Psychol. 2017;48(10):1475–506.

  20. McNeish D. Thanks coefficient alpha, we’ll take it from here. Psychol Methods. 2018;23(3):412–33.

  21. Padilla MA, Divers J. A Comparison of Composite Reliability Estimators: Coefficient Omega Confidence Intervals in the Current Literature. Educ Psychol Meas. 2016;76(3):436–53.

  22. Muthén L, Muthén B. Mplus User’s Guide. 8th ed. Los Angeles, CA: Muthén & Muthén; 2017.

  23. Flora DB, Curran PJ. An Empirical Evaluation of Alternative Methods of Estimation for Confirmatory Factor Analysis With Ordinal Data. Psychol Methods. 2004;9(4):466–91.

  24. Shi D, Lee T, Fairchild AJ, Maydeu-Olivares A. Fitting Ordinal Factor Analysis Models With Missing Data: A Comparison Between Pairwise Deletion and Multiple Imputation. Educ Psychol Meas. 2020;80(1):41–66.

  25. Li C. Little’s test of missing completely at random. Stata J. 2013;13(4):795–809.

  26. Browne MW, Cudeck R. Alternative Ways of Assessing Model Fit. Sociol Methods Res. 1992;21(2):230–58.

  27. Hu LT, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct Equ Modeling. 1999;6(1):1–55.

  28. MacCallum RC, Browne MW, Sugawara HM. Power analysis and determination of sample size for covariance structure modeling. Psychol Methods. 1996;1(2):130–49.

  29. Cheung GW, Rensvold RB. Evaluating Goodness-of-Fit Indexes for Testing Measurement Invariance. Struct Equ Modeling. 2002;9(2):233–55.

  30. Wandersman A, Duffy J, Flaspohler P, Noonan R, Lubell K, Stillman L, et al. Bridging the Gap Between Prevention Research and Practice: The Interactive Systems Framework for Dissemination and Implementation. Am J Comm Psychol. 2008;41(3–4):171–81.

  31. McGuier EA, Kolko DJ, Stadnick NA, Brookman-Frazee L, Wolk CB, Yuan CT, et al. Advancing research on teams and team effectiveness in implementation science: An application of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Implement Res Pract. 2023;1:4.

  32. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76.

  33. Proctor EK, Bunger AC, Lengnick-Hall R, Gerke DR, Martin JK, Phillips RJ, et al. Ten years of implementation outcomes research: a scoping review. Implement Sci. 2023;18(1):1–19.


Acknowledgements

The authors wish to thank Amanda Farley for her support in preparation of the measure and data collection. The authors also wish to thank Mackensie Disbennett for her support in the process of reviewing and refining measurement items.

Funding

No external funding sources supported this study.

Author information

Contributions

The first author led analysis and drafted the methods and results sections and reviewed and edited all sections. The second author co-led conceptualization of items and drafted the background and discussion sections and reviewed and edited all sections. The third author co-led conceptualization of items and reviewed and edited all sections.

Corresponding author

Correspondence to Todd M. Jensen.

Ethics declarations

Ethics approval and consent to participate

The Institutional Review Board of the primary authors’ university reviewed this study and designated it as exempt (IRB #23-0958).

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.


Appendices

Appendix 1

Implementation Support Competencies Assessment (ISCA) (Metz, Albers, & Jensen, 2024) – Items for the Co-Creation and Engagement Domain, Standardized Factor Loadings, Item Communalities, and Item Response Frequencies

 

# | Competency and Item | FL | Com. | 1 | 2 | 3 | 4 | 5

Competency 1: Co-Learning
C1.1 | Obtain clear understanding of the system, organizational context, and culture in which implementation will take place | 0.78 | 0.61 | 6% | 33% | 47% | 14% | 0%
C1.2 | Create opportunities for new ideas to emerge | 0.73 | 0.53 | 0% | 9% | 28% | 47% | 16%
C1.3 | Build trust and respect for all perspectives involved in supporting implementation | 0.73 | 0.54 | 5% | 21% | 54% | 20% | 0%
C1.4 | Communicate and listen so that you can integrate different perspectives and types of knowledge | 0.69 | 0.48 | 2% | 20% | 52% | 27% | 0%
C1.5 | Provide interactive and educational trainings on implementation science | 0.66 | 0.44 | 7% | 23% | 28% | 28% | 14%
C1.6 | Tailor approaches to enhance implementation readiness at individual, organizational, and system levels | 0.84 | 0.70 | 2% | 16% | 37% | 33% | 13%

Competency 2: Brokering
C2.1 | Identify individuals or groups that should be involved in implementation and seek to understand why they were not yet included | 0.74 | 0.55 | 1% | 14% | 34% | 37% | 14%
C2.2 | Connect individuals or groups who have been disconnected in the system by serving as a relational resource | 0.72 | 0.51 | 4% | 17% | 36% | 34% | 10%
C2.3 | Develop and regularly convene implementation groups and teams with diverse partners | 0.80 | 0.64 | 2% | 13% | 29% | 41% | 16%
C2.4 | Connect people strategically in a variety of ways when there is a potential for mutual benefit | 0.79 | 0.62 | 1% | 12% | 32% | 41% | 15%
C2.5 | Support the use of evidence and data with implementation partners to support implementation | 0.77 | 0.59 | 1% | 7% | 26% | 39% | 28%
C2.6 | Promote opportunities for implementation partners to engage with each other in the use of evidence and data | 0.83 | 0.69 | 1% | 12% | 31% | 41% | 16%

Competency 3: Address Power Differentials
C3.1 | Put the experiences of end users (e.g., service recipients) at the center of decisions about implementation | 0.72 | 0.51 | 1% | 10% | 30% | 41% | 19%
C3.2 | Identify how different partners can influence implementation | 0.85 | 0.72 | 0% | 9% | 33% | 42% | 15%
C3.3 | Identify existing power structures in the implementation setting | 0.80 | 0.64 | 1% | 15% | 31% | 39% | 14%
C3.4 | Use facilitation techniques to honor all voices involved in implementation | 0.77 | 0.59 | 2% | 13% | 25% | 39% | 20%
C3.5 | Carefully attend to which partners hold the most and least power to influence implementation | 0.85 | 0.72 | 2% | 16% | 32% | 39% | 11%
C3.6 | Seek and gain buy-in from formal and informal leaders (e.g., champions, opinion leaders, or others potentially influencing the implementation because of their reputation or credibility) to include diverse expertise in team discussions | 0.84 | 0.70 | 2% | 13% | 35% | 37% | 13%
C3.7 | Support partners in developing an authentic and evolving shared understanding about implementation | 0.87 | 0.76 | 1% | 14% | 35% | 36% | 14%

Competency 4: Co-Design
C4.1 | Work with partners to build a strong fit between a selected intervention and its implementation | 0.85 | 0.72 | 3% | 8% | 34% | 39% | 16%
C4.2 | Support collaborative implementation planning involving all partners | 0.87 | 0.76 | 1% | 8% | 30% | 43% | 18%
C4.3 | To the extent possible, enable implementation partners to co-design any implementation tools, products, processes, governance structures, service models, strategies, and policies | 0.80 | 0.65 | 4% | 15% | 31% | 36% | 14%
C4.4 | Promote ongoing testing of implementation tools, products, and processes to improve them | 0.77 | 0.59 | 3% | 15% | 30% | 37% | 17%
C4.5 | Support the modification of specific implementation strategies based on local context | 0.85 | 0.71 | 2% | 12% | 27% | 40% | 19%
C4.6 | Facilitate activities that prioritize the needs of people who are intended to benefit from the intervention being implemented | 0.83 | 0.70 | 2% | 12% | 30% | 36% | 20%

Competency 5: Tailoring Support
C5.1 | Regularly assess the implementation support needs and assets of different partner groups | 0.84 | 0.71 | 2% | 15% | 41% | 29% | 13%
C5.2 | Facilitate agreement on the implementation supports that will be offered to different partner groups | 0.84 | 0.71 | 2% | 18% | 37% | 33% | 10%
C5.3 | Develop a plan for meetings and activities (virtual or onsite) based on the goals of implementation partners | 0.80 | 0.64 | 2% | 7% | 23% | 44% | 25%
C5.4 | Be responsive to “ad hoc”/“just in time” support needs of implementation partners | 0.76 | 0.58 | 1% | 8% | 25% | 44% | 23%
C5.5 | Regularly assess whether your level of support matches the needs, goals, and context of implementation | 0.78 | 0.60 | 1% | 15% | 35% | 36% | 13%
C5.6 | Work with partners to tailor implementation strategies to meet local needs and assets | 0.87 | 0.76 | 1% | 10% | 34% | 40% | 15%
C5.7 | Continuously promote the adaptability of implementation strategies used by partners | 0.86 | 0.75 | 1% | 14% | 33% | 37% | 16%

Note: FL = standardized factor loading; Com. = item communality. Response-option columns: 1 = not at all competent, 2 = slightly competent, 3 = moderately competent, 4 = very competent, and 5 = extremely competent. The full measure is available via the Open Science Framework repository: Metz, A., Albers, B., & Jensen, T. (2024). Implementation Support Competencies Assessment (ISCA). https://doi.org/10.17605/OSF.IO/ZH82F

Appendix 2

Implementation Support Competencies Assessment (ISCA) (Metz, Albers, & Jensen, 2024) – Items for the Ongoing Improvement Domain, Standardized Factor Loadings, Item Communalities, and Item Response Frequencies

 

# | Competency and Item | FL | Com. | 1 | 2 | 3 | 4 | 5

Competency 6: Assess Needs and Assets
C6.1 | Collaborate with partners to identify the needs and assets of different individuals and groups involved in implementation | 0.83 | 0.68 | 1% | 8% | 35% | 41% | 15%
C6.2 | Engage people with lived experience to discover needs and assets | 0.68 | 0.47 | 3% | 16% | 28% | 35% | 18%
C6.3 | Facilitate the identification of relevant resources to be used in implementation | 0.81 | 0.66 | 1% | 10% | 29% | 44% | 17%
C6.4 | Support implementation partners to understand each other’s perspectives on the need for change | 0.88 | 0.77 | 2% | 10% | 36% | 39% | 12%
C6.5 | Use a variety of data sources to highlight needs and assets related to implementation | 0.85 | 0.71 | 2% | 11% | 28% | 37% | 22%
C6.6 | Use data to explore the unique needs of specific populations (e.g., race, ethnicity, gender, socioeconomic status, geography, ability status) | 0.78 | 0.60 | 3% | 13% | 32% | 32% | 20%

Competency 7: Understand Context
C7.1 | Involve diverse partners from throughout the system to identify and understand the implications of implementation | 0.87 | 0.76 | 1% | 15% | 35% | 35% | 14%
C7.2 | Review available evidence to determine the relevance and fit of the proposed intervention to be implemented | 0.80 | 0.64 | 1% | 10% | 27% | 41% | 20%
C7.3 | Assess the fit of the proposed intervention with the values, needs, and resources of the service setting | 0.89 | 0.79 | 2% | 11% | 32% | 38% | 18%
C7.4 | Assess the fit of the proposed intervention with the current political, financial, and organizational contexts | 0.83 | 0.69 | 5% | 14% | 37% | 32% | 12%
C7.5 | Continuously identify and respond to changes in the systems which affect implementation | 0.88 | 0.77 | 3% | 13% | 34% | 37% | 13%
C7.6 | Identify and support actions that manage risks and assumptions for implementation | 0.88 | 0.78 | 5% | 16% | 40% | 30% | 8%

Competency 8: Apply and Integrate Implementation Frameworks, Strategies, and Approaches
C8.1 | Remain up to date on evidence developed through implementation research and practice | 0.91 | 0.82 | 4% | 15% | 31% | 35% | 16%
C8.2 | Remain up to date on knowledge about implementation frameworks, models, theories, and strategies | 0.90 | 0.81 | 3% | 18% | 33% | 32% | 14%
C8.3 | Educate partners about the best available evidence on implementation frameworks, strategies, and approaches that could be used to support implementation | 0.86 | 0.74 | 7% | 19% | 32% | 31% | 10%
C8.4 | Include all relevant partners in the selection, combination, and co-design of implementation strategies and approaches | 0.93 | 0.86 | 6% | 13% | 37% | 35% | 9%
C8.5 | In collaboration with partners, support the use of implementation frameworks, approaches, and strategies that are best suited for the specific service setting | 0.93 | 0.87 | 6% | 13% | 35% | 34% | 12%

Competency 9: Facilitation
C9.1 | Ensure that meetings and convenings to support implementation are welcoming and engaging for all participants | 0.75 | 0.57 | 1% | 7% | 20% | 51% | 21%
C9.2 | Support relevant partners in identifying barriers to implementation | 0.88 | 0.77 | 1% | 7% | 24% | 48% | 21%
C9.3 | Facilitate the identification of partners needed to develop and execute strategies for addressing barriers to implementation | 0.89 | 0.80 | 2% | 8% | 31% | 41% | 18%
C9.4 | Serve as a formal and informal facilitator as determined by an analysis of the implementation challenge and context | 0.86 | 0.74 | 4% | 12% | 23% | 41% | 20%
C9.5 | Support implementation partners to generate and prioritize ideas to address barriers to implementation | 0.87 | 0.76 | 1% | 9% | 27% | 44% | 19%
C9.6 | Support partners to evaluate alternatives, summarize key points, sort ideas, and exercise judgment in the face of simple challenges with easy solutions | 0.88 | 0.77 | 2% | 12% | 29% | 41% | 16%
C9.7 | Support partners to generate alternatives, facilitate open discussion, gather different points of view, and delay quick decision-making in the face of complex challenges with no easy solutions | 0.87 | 0.76 | 3% | 14% | 32% | 36% | 15%
C9.8 | Use facilitation methods (e.g., action planning, brainstorming, role playing, ranking, scenario development) that match the implementation challenge | 0.80 | 0.63 | 3% | 17% | 25% | 38% | 17%
C9.9 | Respond to emergent implementation challenges with flexibility and adaptability | 0.87 | 0.76 | 2% | 10% | 27% | 42% | 19%

Competency 10: Communication
C10.1 | Work with partners to develop communication protocols that facilitate engagement with each other | 0.95 | 0.90 | 4% | 17% | 32% | 35% | 12%
C10.2 | Work with partners to develop communication protocols that communicate and celebrate implementation progress | 0.94 | 0.89 | 4% | 14% | 35% | 35% | 12%
C10.3 | Work with partners to develop communication protocols that report barriers hindering implementation | 0.96 | 0.93 | 5% | 19% | 33% | 32% | 12%
C10.4 | Work with partners to develop communication protocols that periodically review past decisions to continually assess their appropriateness | 0.92 | 0.84 | 7% | 21% | 35% | 26% | 11%
C10.5 | Support the development of tailored communication protocols for different audiences | 0.88 | 0.77 | 3% | 18% | 31% | 34% | 14%
C10.6 | Encourage partners to regularly communicate with and gather feedback from individuals inside and outside the implementing system | 0.89 | 0.79 | 3% | 16% | 30% | 35% | 17%

Competency 11: Conduct Improvement Cycles
C11.1 | Facilitate the identification of relevant quantitative and qualitative data about implementation activities and outcomes | 0.88 | 0.77 | 3% | 12% | 37% | 26% | 22%
C11.2 | Support the development of processes and structures for the routine collection, analysis, and interpretation of implementation data | 0.89 | 0.79 | 3% | 17% | 32% | 31% | 18%
C11.3 | Ensure that different partners have access to relevant, valid, and reliable data to help guide implementation decision-making | 0.90 | 0.80 | 3% | 17% | 30% | 36% | 15%
C11.4 | Encourage the collection and use of data to explore the impact of implementation on different subgroups | 0.88 | 0.77 | 3% | 12% | 31% | 35% | 20%
C11.5 | Develop partners’ capacity to continuously use data for implementation decision-making through modeling, instruction, and coaching | 0.90 | 0.81 | 4% | 17% | 31% | 33% | 15%
C11.6 | Help create structures that ensure that crucial information about implementation and improvement is circulated among all partners | 0.95 | 0.89 | 5% | 19% | 33% | 30% | 14%

Note: FL = standardized factor loading; Com. = item communality. Response scale: 1 = not at all competent, 2 = slightly competent, 3 = moderately competent, 4 = very competent, and 5 = extremely competent. The full measure is available via the Open Science Framework (OSF) repository with the following reference: Metz, A., Albers, B., & Jensen, T. (2024). Implementation Support Competencies Assessment (ISCA). https://doi.org/10.17605/OSF.IO/ZH82F
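A note for readers cross-checking the FL and Com. columns: in a standardized factor solution with simple structure (each item loading on a single factor), an item’s communality equals the square of its standardized loading, and the values reported in these appendices are consistent with that relationship to within rounding. A worked check against item C6.1, assuming this simple-structure relationship:

```latex
% Communality (h^2) as the squared standardized loading (lambda)
% under a simple-structure CFA; worked check against item C6.1:
\[
  h^2_i = \lambda_i^2
  \qquad\Longrightarrow\qquad
  h^2_{\text{C6.1}} = 0.83^2 \approx 0.69
\]
% The table reports 0.68; the small discrepancy reflects rounding of
% the loading before squaring.
```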

Appendix 3

Implementation Support Competencies Assessment (ISCA) (Metz, Albers, & Jensen, 2024) – Items for the Sustaining Change Domain, Standardized Factor Loadings, Item Communalities, and Item Response Frequencies

 

(Columns 1–5 report response frequencies.)

| # | Competency and Item | FL | Com. | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Competency 12: Grow and Sustain Relationships | | | | | | | |
| C12.1 | Build trust with implementation partners by being transparent and accountable in all actions | 0.87 | 0.75 | 1% | 2% | 19% | 48% | 29% |
| C12.2 | Build relationships with implementation partners from all parts of the implementation setting | 0.87 | 0.76 | 1% | 5% | 25% | 46% | 23% |
| C12.3 | Continuously evaluate the strengths and weaknesses of your relationships with implementation partners | 0.86 | 0.74 | 2% | 14% | 31% | 39% | 14% |
| C12.4 | Seek and incorporate feedback from implementation partners about the strengths and weaknesses of your relationships with them | 0.80 | 0.64 | 3% | 21% | 26% | 36% | 15% |
| C12.5 | Facilitate open communication that enables difficult conversations with implementation partners, when needed, to regulate distress in your relationships with them | 0.81 | 0.66 | 3% | 12% | 33% | 36% | 16% |
| C12.6 | Demonstrate your competency to implementation partners | 0.87 | 0.76 | 2% | 9% | 32% | 43% | 15% |
| C12.7 | Enter the implementation setting with humility as a learner | 0.73 | 0.53 | 5% | 17% | 46% | 32% | 0% |
| C12.8 | Demonstrate commitment and persistence in the face of complex challenges | 0.86 | 0.74 | 0% | 4% | 17% | 47% | 32% |
| C12.9 | Encourage and enable implementation partners to share their perspectives openly and honestly | 0.87 | 0.75 | 1% | 4% | 17% | 51% | 27% |
| C12.10 | Normalize implementation challenges; ask questions; ask for support from implementation partners | 0.89 | 0.78 | 1% | 6% | 19% | 43% | 32% |
| C12.11 | Support implementation partners to understand each other’s perspective; highlight areas of shared understanding and common goals | 0.90 | 0.81 | 1% | 10% | 23% | 45% | 21% |
| | Competency 13: Develop Teams | | | | | | | |
| C13.1 | Guide efforts to assemble implementation teams | 0.83 | 0.70 | 2% | 11% | 36% | 37% | 15% |
| C13.2 | Facilitate the development of clear governance structures for implementation teams | 0.84 | 0.71 | 7% | 21% | 36% | 27% | 10% |
| C13.3 | Support teams to select, operationalize, tailor, and adapt interventions | 0.88 | 0.77 | 2% | 14% | 35% | 37% | 12% |
| C13.4 | Support teams to develop operational processes and resources for building staff competency | 0.87 | 0.76 | 3% | 16% | 37% | 32% | 13% |
| C13.5 | Support teams to identify, collect, analyze, and monitor meaningful data | 0.79 | 0.62 | 1% | 14% | 30% | 36% | 18% |
| C13.6 | Support teams to engage leadership, staff, and partners in using data for improvement | 0.85 | 0.72 | 2% | 14% | 28% | 38% | 18% |
| C13.7 | Support teams to build capacity for sustained implementation | 0.90 | 0.80 | 1% | 14% | 34% | 38% | 13% |
| C13.8 | Support teams to build cross-sector collaborations that are aligned with new ways of work | 0.80 | 0.64 | 3% | 21% | 34% | 31% | 11% |
| C13.9 | Support teams to develop effective team meeting processes, including the establishment of consistent meeting schedules and standing agendas | 0.78 | 0.61 | 1% | 12% | 31% | 37% | 19% |
| C13.10 | Ensure implementation teams have sufficient support from organizational leadership to promote successful implementation | 0.82 | 0.67 | 2% | 18% | 40% | 31% | 10% |
| C13.11 | Help to develop communication protocols that ensure relevant information about implementation is circulated among implementation teams and their members | 0.82 | 0.67 | 2% | 18% | 34% | 36% | 10% |
| C13.12 | Develop processes for ongoing assessment and improvement of implementation team functioning | 0.84 | 0.70 | 4% | 21% | 34% | 32% | 9% |
| C13.13 | Support implementation teams in providing opportunities for learning and professional development to its members | 0.85 | 0.73 | 2% | 17% | 32% | 36% | 13% |
| C13.14 | Work to enhance cohesion and trust among implementation team members | 0.84 | 0.71 | 1% | 12% | 31% | 40% | 16% |
| C13.15 | Help manage and resolve conflict among implementation team members | 0.81 | 0.65 | 4% | 19% | 35% | 34% | 9% |
| | Competency 14: Build Capacity | | | | | | | |
| C14.1 | At the outset of implementation, model with implementation partners the changes that will be implemented | 0.86 | 0.74 | 5% | 19% | 33% | 34% | 10% |
| C14.2 | Work with implementation partners to assess capacity for sustained implementation, including budget considerations | 0.83 | 0.69 | 4% | 23% | 38% | 25% | 10% |
| C14.3 | Facilitate implementation partners’ access to capacity-building training, modeling, or coaching for implementation | 0.84 | 0.70 | 3% | 16% | 35% | 32% | 13% |
| C14.4 | Model with implementation partners relevant knowledge, skills, behaviors, and practices | 0.91 | 0.82 | 4% | 11% | 29% | 41% | 15% |
| C14.5 | Coach implementation partners in their use of relevant knowledge, skills, behaviors, and practices | 0.89 | 0.80 | 5% | 14% | 32% | 34% | 16% |
| C14.6 | Help identify and shape organizational processes needed to build capacity for implementation | 0.86 | 0.74 | 3% | 19% | 34% | 32% | 13% |
| C14.7 | Support implementation partners in identifying and addressing future challenges or barriers to sustained implementation | 0.87 | 0.76 | 3% | 13% | 36% | 35% | 13% |
| C14.8 | Promote collaboration and new partnerships that will build the capacity of implementation partners | 0.84 | 0.70 | 3% | 11% | 38% | 34% | 14% |
| | Competency 15: Cultivate Leaders and Champions | | | | | | | |
| C15.1 | Identify existing leaders who can support implementation | 0.85 | 0.72 | 1% | 11% | 27% | 43% | 18% |
| C15.2 | Help build the capacity of leaders to lead implementation | 0.92 | 0.85 | 4% | 17% | 30% | 36% | 14% |
| C15.3 | Support partners in developing processes for regular coordination meetings with leaders related to implementation | 0.92 | 0.84 | 3% | 13% | 32% | 38% | 13% |
| C15.4 | Help identify and involve emerging leaders who can support implementation | 0.88 | 0.78 | 3% | 16% | 31% | 34% | 16% |
| C15.5 | Build the capacity of emerging leaders to support implementation | 0.92 | 0.84 | 4% | 16% | 32% | 34% | 14% |
| C15.6 | Support implementation partners in navigating any transitions in organizational leadership | 0.85 | 0.72 | 7% | 23% | 32% | 30% | 7% |
| C15.7 | Support implementation partners in identifying champions who can support implementation (champions are professionals or lay persons who volunteer or are appointed to enthusiastically promote and support the implementation of an innovation) | 0.91 | 0.83 | 3% | 15% | 29% | 39% | 15% |
| C15.8 | Support implementation partners in involving champions throughout the course of implementation | 0.94 | 0.89 | 3% | 15% | 28% | 41% | 13% |
| C15.9 | Support implementation partners in reviewing and strengthening champion roles | 0.93 | 0.86 | 6% | 17% | 29% | 38% | 10% |

Note: FL = standardized factor loading; Com. = item communality. Response scale: 1 = not at all competent, 2 = slightly competent, 3 = moderately competent, 4 = very competent, and 5 = extremely competent. The full measure is available via the Open Science Framework (OSF) repository with the following reference: Metz, A., Albers, B., & Jensen, T. (2024). Implementation Support Competencies Assessment (ISCA). https://doi.org/10.17605/OSF.IO/ZH82F
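For readers who want to reproduce the kind of internal-consistency estimates reported for each competency-specific item set, the sketch below computes Cronbach’s alpha from a respondents-by-items matrix of 1–5 ratings. This is a minimal illustration, not the authors’ analysis code; the sample data and variable names are hypothetical.

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of 1-5 ratings."""
    k = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)      # variance of each item
    total_variance = responses.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical example: six respondents rating the six Competency 6 items (C6.1-C6.6)
c6_responses = np.array([
    [4, 3, 4, 4, 5, 4],
    [3, 3, 3, 4, 4, 3],
    [5, 4, 4, 5, 5, 5],
    [2, 2, 3, 3, 2, 2],
    [4, 4, 4, 3, 4, 4],
    [3, 2, 3, 3, 3, 3],
])
print(f"alpha = {cronbach_alpha(c6_responses):.2f}")
```

Coefficient alpha is only one internal-consistency estimate; the same response matrix could equally feed a model-based reliability estimate (e.g., omega) obtained from a CFA package.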

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Jensen, T.M., Metz, A.J. & Albers, B. Development and psychometric evaluation of the Implementation Support Competencies Assessment. Implementation Sci 19, 58 (2024). https://doi.org/10.1186/s13012-024-01390-8

