
The implementation leadership scale (ILS): development of a brief measure of unit level implementation leadership



In healthcare and allied healthcare settings, leadership that supports effective implementation of evidence-based practices (EBPs) is a critical concern. However, there are no empirically validated measures to assess implementation leadership. This paper describes the development, factor structure, and initial reliability and convergent and discriminant validity of a very brief measure of implementation leadership: the Implementation Leadership Scale (ILS).


Participants were 459 mental health clinicians working in 93 different outpatient mental health programs in Southern California, USA. Initial item development was supported as part of two United States National Institutes of Health (NIH) studies focused on developing implementation leadership training and on implementation measure development. Clinician work group/team-level data were randomly assigned to be utilized for an exploratory factor analysis (n = 229; k = 46 teams) or for a confirmatory factor analysis (n = 230; k = 47 teams). The confirmatory factor analysis controlled for the multilevel, nested data structure. Reliability and validity analyses were then conducted with the full sample.


The exploratory factor analysis resulted in a 12-item scale with four subscales representing proactive leadership, knowledgeable leadership, supportive leadership, and perseverant leadership. Confirmatory factor analysis supported an a priori higher order factor structure with subscales contributing to a single higher order implementation leadership factor. The scale demonstrated excellent internal consistency reliability as well as convergent and discriminant validity.


The ILS is a brief and efficient measure of unit level leadership for EBP implementation. The availability of the ILS will allow researchers to assess strategic leadership for implementation in order to advance understanding of leadership as a predictor of organizational context for implementation. The ILS also holds promise as a tool for leader and organizational development to improve EBP implementation.

Background


The adoption, implementation, and sustainment of evidence-based practices (EBPs) are becoming increasingly important for health and allied healthcare organizations and providers, and widespread adoption of EBPs holds promise to improve quality of care and patient outcomes [1, 2]. Considerable resources are being allocated to increase the implementation of EBPs in community care settings with support for activities such as training service providers and increased staffing to support monitoring of implementation-related activities [3]. Although there are calls for increased attention to organizational context in EBP dissemination and implementation [4, 5], there are gaps in examining how organizational context affects EBP implementation. Most relevant for this research is the need for development of measures to assess organizational constructs likely to impact implementation process and outcomes. One organizational factor in need of greater attention is that of leadership for EBP implementation [6].

Leaders can positively or negatively impact the capacity to foster change and innovation [7–10] and therefore are instrumental in facilitating a positive climate for innovation and positive attitudes toward EBP during implementation [6, 11]. Although the role of leadership in EBP implementation is often discussed, it is rarely empirically examined. The limited empirical research in this area supports the presence of a relationship between general leadership ability and implementation of innovative practices [12], but focuses less on identifying specific behaviors that leaders may enact to facilitate EBP implementation. To stimulate and support additional empirical work in this area, there is a need for brief and efficient measures to assess specific actions leaders may engage in to influence the success of implementation efforts in their organizations or programs.

Both implementation and leadership theories emphasize the importance of leadership in supporting implementation of innovative practices such as EBP. For example, implementation scholars have asserted the importance of leadership in terms of obtaining funding, dispersing resources, and enforcing policies in support of implementation [13]. Research from the Collaboration for Leadership in Applied Health Research and Care has addressed the importance of leaders serving as clinical opinion leaders, managing implementation projects, fostering organizational learning climates, and obtaining senior management support [14]. Other research suggests that managers are responsible for interpreting research evidence, applying it to organizational contexts, and making research-informed implementation decisions [15]. Weiner’s organizational theory of innovation implementation suggests that leaders play a critical role in creating readiness for change, ensuring innovation-values fit, and developing plans, practices, structures, and strategies to support implementation [16].

There is also empirical evidence for the importance of leadership in predicting the success of implementation efforts. For example, transformational leadership (i.e., the degree to which a leader can inspire and motivate others) has been shown to predict employees’ reported use of an innovative practice being implemented in their organization [12, 17]. Consistent with transactional leadership (e.g., providing contingent rewards) [18] perceived support from one’s supervisor has been associated with employees’ participation in implementation [19]. Much of the empirical research on leadership and implementation has focused on identifying mechanisms through which leaders affect implementation. These include a positive organizational climate [20], supportive team climate [21], and positive work attitudes [22]. Research has also focused on the role of leaders in influencing employee attitudes toward EBP [11] and commitment to organizational change [23].

Although general leadership is held to play an important role in implementation, research in this area has not necessarily outlined specific behaviors that leaders may enact in order to strategically influence followers to support the larger goal of successful implementation. Insight into such behaviors can be garnered from existing literature demonstrating that strategically-focused leadership predicts the achievement of specific goals. For example, a recent meta-analysis confirmed the relative advantage of strategic leadership—compared to general leadership—for specific organizational change initiatives [24]. Recent organizational research in climate for customer service [25] and climate for safety [26, 27] has shown that strategically-focused leadership is a critical precursor to building a strategic climate, which subsequently predicts strategic outcomes such as increased customer satisfaction or decreased accidents, respectively.

Although more than 60 implementation strategies were identified in a recent review of the implementation literature [28], few focused on leadership as an implementation factor and none focused mainly on leader development to support EBP implementation. Of those identified, extant strategies involve the recruitment and training of leaders and involving leaders at different organizational levels [29–31]. Hence, we argue that leadership focused on a specific strategic imperative, such as adoption and use of EBP, can influence employee attitudes and behavior regarding the imperative. This is consistent with research demonstrating that leader and management support for implementation is a significant and strong predictor of positive implementation climate [32]. Thus, there is a need to identify those behaviors that leaders may enact to create a strategic EBP implementation climate in their teams and better facilitate the implementation and sustainment of EBP.

The goals of the present study were to develop a scale that focused on strategic leadership for EBP implementation and to examine its factor structure, reliability, and convergent and discriminant validity. We drew from strategic climate and leadership theory, implementation research and theory, implementation climate literature, and feedback from subject matter experts to develop items for the implementation leadership scale (ILS) to extend work on management support for implementation. In particular, we focused on leader behaviors related to organizational culture and climate embedding mechanisms that promote strategic climates [33]. In line with this literature, items were developed to assess the degree to which a leader is proactive with regard to EBP implementation, leader knowledge of EBP and implementation, leader support for EBP implementation, leader perseverance in the EBP implementation process, and leader attention to and role modeling effective EBP implementation. Through a process of exploratory factor analysis (EFA) followed by confirmatory factor analysis (CFA), we expected to find empirical support for the conceptual areas identified above. We also expected the final scale and subscales to demonstrate high internal consistency reliability. In regard to convergent validity, we expected that the derived leadership scale would have moderate to high correlations with other measures of leadership (i.e., transformational and transactional leadership). Finally, in regard to discriminant validity, we expected to find low to moderate correlations between the derived leadership scale and a measure of general organizational climate.
Methods


Item generation

Item generation and domain identification proceeded in three phases. First, as part of a study focused on developing an intervention to improve leadership for evidence-based practice implementation [18], the investigative team developed items based on review of literature relating leader behaviors to implementation and organizational climate and culture change [32, 33]. Second, items were reviewed for relevance and content by subject matter experts, including a mental health program leader, an EBP trainer and Community Development Team consultant from the California Institute for Mental Health, and four mental health program managers. Third, potential items were reviewed by the investigative team and program managers for face validity and content validity. Twenty-nine items were developed that represented five potential content domains of implementation leadership: proactive EBP leadership, leader knowledge of EBP, leader support for EBP, perseverance in the face of EBP implementation challenges, and attention and role modeling related to EBP implementation.
Participants


Participants were 459 mental health clinicians working in 93 different outpatient mental health programs in Southern California, USA. Of the 573 clinicians eligible to participate in this research, 459 participated (80.1% response rate). Participant mean age was 36.5 years (SD = 10.7; Range = 21 to 66) and the majority of respondents were female (79%). The racial/ethnic distribution of the sample was 54% Caucasian, 23.4% Hispanic, 6.7% African American, 5% Asian American, 0.5% American Indian, and 10% ‘other’. Participants had worked in the mental health services field for a mean of 8.5 years (SD = 7.7; Range = 1 week to 43 years), in child and/or adolescent mental health services for a mean of 7.5 years (SD = 7.6; Range = 1 week to 43 years), and in their present agency for 3.4 years (SD = 4.3; Range = 1 week to 28.1 years). Highest level of education consisted of 7% Ph.D./M.D. or equivalent, 68% master’s degree, 6.5% graduate work but no degree, 12.2% bachelor’s degree, 3% some college but no degree, and 0.7% no college. The primary discipline of the sample was 47% marriage and family therapy, 26% social work, 16% psychology, 3% child development, 2% human relations, 1% nursing, and 4.8% other (e.g., drug/alcohol counseling, probation, psychiatry).
Procedure


The study was approved by the appropriate Institutional Review Boards prior to clinician recruitment and informed consent was obtained prior to administering surveys. The research team first obtained permission from agency executive directors or their designees to recruit their clinicians for participation in the study. Clinicians were then contacted either via email or in-person for recruitment to the study. Data were collected using online surveys or in-person paper-and-pencil surveys.

For online surveys, each participant was e-mailed an invitation to participate including a unique username and password as well as a link to the web survey. Participants reviewed informed consent and after agreeing to participate were able to access the survey and proceed to the survey items. Once participants logged in to the online survey, they were able to answer questions and could pause and resume at any time. The online survey took approximately 30 to 40 minutes to complete and incentive vouchers ($15 USD) were sent by email after survey completion.

In-person data collection occurred for those teams in which in-person data collection was preferred or would be more efficient. Paper surveys were administered during meetings at each of the participating program locations. In most cases, the research team reserved one hour for data collection during a regular clinical work group or team meeting. Research staff obtained informed consent, handed out surveys to all eligible participants, checked the returned surveys for completeness, and then provided an incentive voucher to each participant. For participants not present at in-person meetings, paper surveys were provided and were returned to the research team in pre-paid envelopes.

Teams were identified in close collaboration with agency administrators. It was of utmost importance that team members shared a single direct supervisor to properly account for dependence in the data for variables pertaining to leadership. It was also important that participants completed the survey questions pertaining to leadership based on the proper supervisor as identified by the agency administrators. Participants completing the online survey selected their supervisor from a dropdown menu of supervisors within their agency at the beginning of the survey. The supervisor’s name was then automatically inserted into all questions regarding leadership in order to ensure clarity in the target of all leadership questions. The research team verified the identified leader with organization charts.

For in-person data collection, participants were given paper-and-pencil surveys with their supervisor’s name pre-printed on the front page of the survey and in sections pertaining to general leadership and implementation leadership. Participants were instructed to answer all leadership questions about the supervisor whose name was printed on their survey. In cases where a participant noted that they reported to a different supervisor, this was clarified and the survey was adjusted if deemed appropriate.
Measures


Implementation leadership scale (ILS)

Item development for the ILS is described above. All 29 ILS items were scored on a 0 (‘not at all’) to 4 (‘to a very great extent’) scale.

Multifactor leadership questionnaire (MLQ)

The MLQ [34] is one of the most widely researched measures of leadership in organizations. The MLQ includes the assessment of transformational leadership, which numerous studies have found to be associated with organizational performance and success (including attitudes toward EBP) [11], as well as transactional leadership. The MLQ has good psychometric properties including internal consistency reliability and concurrent and predictive validity. All items were scored on a 0 (‘not at all’) to 4 (‘frequently, if not always’) scale. Transformational leadership was measured with four subscales: idealized influence (α = 0.87, 8 items), inspirational motivation (α = 0.91, 4 items), intellectual stimulation (α = 0.90, 4 items), and individualized consideration (α = 0.90, 4 items). The MLQ also includes one subscale identified as best representing transactional leadership: contingent reward (α = 0.87, 4 items).

Organizational climate

The Organizational Climate Measure (OCM) [35] consists of 17 scales capturing the four domains of the competing values framework [36]: human relations, internal process, open systems, and rational goal. We utilized the autonomy (α = 0.67, 5 items) scale from the human relations domain, the formalization scale (α = 0.77, 5 items) from the internal process domain, and the efficiency (α = 0.80, 4 items) and performance feedback (α = 0.79, 5 items) scales of the rational goal domain [35] as measures for assessing discriminant validity of the ILS. All OCM items were scored on a 0 (‘definitely false’) to 3 (‘definitely true’) scale.

Statistical analyses

In order to determine whether the data represented a unit-level construct (in this case, clinical treatment work groups or teams), we examined intraclass correlations (ICCs) and the average agreement within group (a_wg) for each item. Agreement indices are used to assess the appropriate level of aggregation for nested data. Higher levels of agreement suggest that the higher level of aggregation is supported. This is relevant for the current study as clinicians were working within clinical work groups or teams led by a single supervisor.

Work group/team-level data were randomly assigned within organization to be utilized for either the EFA (n = 229; k = 46 teams) or the CFA (n = 230; k = 47 teams). Exploratory factor analysis was used to derive and evaluate the factor structure of the scale using IBM SPSS. Principal axis factoring was selected for factor extraction because it allows for consideration of both systematic and random error [37], and Promax oblique rotation was utilized for factor rotation as we assumed that derived factors would be correlated. Item inclusion or exclusion was based on an iterative process in which items with relatively low primary loadings (e.g., < 0.40) or high cross-loadings (e.g., > 0.30) were removed [37]. The number of factors to be retained was determined based on parallel analysis, factor loadings, and interpretability of the factor structure as indicated in the rotated solution. Parallel analysis is among the better methods for determining the number of factors based on simulation studies [38]. Parallel analysis was based on estimation of 1,000 random data matrices with values that correspond to the 95th percentile of the distribution of random data eigenvalues [39, 40]. The random values were then compared with the derived eigenvalues to determine the number of factors. Confirmatory factor analysis was conducted using Mplus [41] statistical software, adjusting for the nested data structure using maximum likelihood estimation with robust standard errors (MLR), which appropriately adjusts standard errors and chi-square values. Missing data were handled through full information maximum likelihood (FIML) estimation. Model fit was assessed using several empirically supported indices: the comparative fit index (CFI), the Tucker-Lewis index (TLI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). CFI and TLI values greater than 0.90, RMSEA values less than 0.10, and SRMR values less than 0.08 indicate acceptable model fit [41–44]. Type II error rates tend to be low when multiple fit indices are used in studies where sample sizes are large and non-normality is limited, as in the present study [45].
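To illustrate the parallel-analysis step described above, the following sketch (a minimal NumPy implementation, not the software used in the study; it assumes normally distributed random data) generates random data matrices of the same dimensions as the observed data, takes the 95th percentile of the random eigenvalue distributions, and counts the observed eigenvalues that exceed those thresholds:

```python
import numpy as np

def parallel_analysis(data, n_iter=1000, percentile=95, seed=0):
    """Horn's parallel analysis: retain factors whose observed correlation-matrix
    eigenvalues exceed the chosen percentile of eigenvalues from random data
    of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, largest first
    obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand_eig = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))
        rand_eig[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    # Positional 95th-percentile thresholds from the random data
    threshold = np.percentile(rand_eig, percentile, axis=0)
    return int(np.sum(obs_eig > threshold))
```

With a clean two-factor data set, the function returns 2; in practice the count should be weighed alongside loadings and interpretability, as the text notes.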

Reliability was assessed by examining Cronbach’s alpha internal consistency for each of the subscales and the total scale. Item analyses were also conducted, including an examination of inter-item correlations and alpha if item removed. Convergent and discriminant validity were assessed by computing Pearson Product Moment Correlations of ILS subscale and total scale scores with MLQ and OCM subscale scores.
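The reliability analyses described above can be sketched as follows. This is a generic NumPy implementation of Cronbach's alpha and alpha-if-item-deleted, shown for illustration only (the study used standard statistical software, not this code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def alpha_if_deleted(items):
    """Alpha recomputed with each item removed in turn (item analysis)."""
    items = np.asarray(items, dtype=float)
    return [cronbach_alpha(np.delete(items, j, axis=1)) for j in range(items.shape[1])]
```

If no element of `alpha_if_deleted` exceeds the full-scale alpha, removing items would not improve reliability, which is the criterion reported for the ILS subscales.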


Examination of distributions for all scale items indicated that data were generally normally distributed with no extreme skewness. Thus, we treated variables as continuous in our analyses. The presence of missing data was minimal. For example, among the 459 respondents, only 26 (6%) had any missing data. Of those with missing data, 17 of the 26 (65%) had missing information on only one item, two had two or three items missing (8%), and the remaining 7 (27%) had more than three items missing. For the EFA, we used bivariate (rather than listwise) deletion in order to minimize the number of excluded cases and used FIML estimation to address missing values in the CFA.

Aggregation analyses

We first examined the amount of dependency among observations within groups using intraclass correlations (ICC, type 1) [46]. As shown in Table 1, the ICCs indicated a moderate degree of dependency among service provider responses within the same team. Nevertheless, the true variance tends to be underestimated whenever ICCs take on non-zero values, an effect that is magnified with increasing average cluster size [47]. However, the average cluster size was relatively small in this study (mean = 6), mitigating this concern.
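A minimal sketch of the ICC(1) calculation, assuming the standard one-way random-effects ANOVA estimator with the average group size substituted for unequal groups (the paper cites [46] for its exact formulation, which this code does not claim to reproduce):

```python
import numpy as np

def icc1(values, groups):
    """ICC(1) from one-way ANOVA: (MSB - MSW) / (MSB + (k_bar - 1) * MSW),
    where k_bar is the average group size (a common simplification when
    group sizes are unequal)."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    n, K = len(values), len(labels)
    grand = values.mean()
    ssb = ssw = 0.0
    for g in labels:
        grp = values[groups == g]
        ssb += len(grp) * (grp.mean() - grand) ** 2   # between-group sum of squares
        ssw += ((grp - grp.mean()) ** 2).sum()        # within-group sum of squares
    msb = ssb / (K - 1)
    msw = ssw / (n - K)
    k_bar = n / K
    return (msb - msw) / (msb + (k_bar - 1) * msw)
```

When groups agree perfectly (zero within-group variance) the estimate is 1; values near 0 indicate negligible dependency among responses within teams.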

Table 1 Implementation leadership scale, subscale and item statistics

We next examined the average agreement within clinical work group for individual items and scales using a_wg(1) and a_wg(J), respectively [48–50]. a_wg ranges from −1 to 1, with a_wg(1) calculated as one minus twice the observed variance divided by the maximum variance possible given the observed mean, and a_wg(J) calculated as the mean of the item-level a_wg(1) values for a scale. These statistics have the advantage over r_wg [48, 49] of not being scale and sample size dependent, and of not assuming a uniform null distribution [48, 49]. Values of a_wg greater than 0.60 represent acceptable agreement and values of 0.80 and above represent strong agreement [48–50]. As shown in Table 1, considering the ICCs and a_wg values, ILS items and scales should be considered as representing unit-level (i.e., clinical work group or team) constructs in this study.
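The a_wg computation described above can be sketched as follows, using the Brown and Hauenstein formulation as we read it from the description in the text (a sketch, not the authors' code); the 0 to 4 scale bounds reflect the ILS response format:

```python
import numpy as np

def a_wg1(x, low=0, high=4):
    """Item-level agreement a_wg(1): 1 minus twice the observed variance
    divided by the maximum variance possible given the observed mean on a
    scale bounded by (low, high)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = x.mean()
    s2 = x.var(ddof=1)
    # Maximum possible sample variance for a mean of m on [low, high]
    max_var = ((high + low) * m - m ** 2 - high * low) * (n / (n - 1))
    return 1 - 2 * s2 / max_var

def a_wgJ(item_matrix, low=0, high=4):
    """Scale-level agreement: mean of the item-level a_wg(1) values."""
    cols = np.asarray(item_matrix, dtype=float).T
    return float(np.mean([a_wg1(col, low, high) for col in cols]))
```

Note that `max_var` is zero when the mean sits exactly at a scale endpoint, so the index is undefined there; perfect within-group agreement away from the endpoints yields a_wg(1) = 1.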

Exploratory factor analysis

An iterative approach was taken to conducting the factor analyses and item reduction. In the first iteration and consistent with our hypotheses, five factors were specified and all 29 items were included. The EFA results showed that no items met the factor loading criteria for a proposed fifth factor (i.e., no loadings > 0.40). That, coupled with the parallel analysis, suggested a four factor solution. Thus, we conducted the next EFA specifying four factors. The results suggested the removal of 15 items. Thirteen items were removed because of low primary factor loadings and/or high cross loadings, and two items were removed because of overlapping content with other items. Thus, 14 items were retained. The next EFA included 14 items and specified four factors. Based on those results, two additional items were removed due to statistical (i.e., lower relative factor loadings) and conceptual (i.e., item content less directly consistent with other items) criteria. The final EFA included 12 items with three items loading on each of four factors.
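The statistical item-retention criteria described above (primary loading of at least 0.40 and no cross-loading above 0.30) can be expressed as a simple filter. The loading matrix in the test below is hypothetical, for illustration only, not the study's actual loadings; the conceptual criteria (content overlap) obviously cannot be automated this way:

```python
import numpy as np

def retain_items(loadings, primary_min=0.40, cross_max=0.30):
    """Return a retention flag per item (row): the largest absolute loading
    must reach primary_min and the second-largest must not exceed cross_max."""
    L = np.abs(np.asarray(loadings, dtype=float))
    flags = []
    for row in L:
        ordered = np.sort(row)[::-1]  # loadings sorted largest first
        primary_ok = ordered[0] >= primary_min
        cross_ok = len(row) == 1 or ordered[1] <= cross_max
        flags.append(bool(primary_ok and cross_ok))
    return flags
```

In an iterative EFA, items flagged False would be dropped and the analysis re-run, mirroring the process that reduced the pool from 29 to 12 items.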

Table 1 displays the factor means, item means, ICCs, a_wg values, initial eigenvalues, variance accounted for by each factor, internal consistency reliabilities, and rotated factor loadings. Internal consistencies were high, ranging from 0.95 to 0.98. Item analyses indicated that inter-item correlations were high (range = 0.83 to 0.92) and that subscale alphas would not be improved by removing any items. As shown in Table 2, factor correlations ranged from 0.73 to 0.80, suggesting a higher order implementation leadership factor. The results of the CFA testing this higher order factor structure are provided in the next section. Subscale labels were created based on an examination of the items and factor loadings presented in Table 1. The first factor was labeled ‘Proactive Leadership’ as it indicated the degree to which the leader anticipates and addresses implementation challenges. Factor two addressed ‘Knowledgeable Leadership’, or the degree to which a leader has a deep understanding of EBP and implementation issues. Factor three was labeled ‘Supportive Leadership’ because it represented the leader’s support of clinicians’ adoption and use of EBP. Finally, factor four reflected ‘Perseverant Leadership’, or the degree to which the leader is consistent, unwavering, and responsive to EBP implementation challenges and issues.

Table 2 Implementation leadership scale factor intercorrelations

Confirmatory factor analysis

Confirmatory factor analysis was used with a sample independent of the EFA sample in order to evaluate the factor structure identified in the EFA above. In addition, because we proposed a higher-order factor model in which each subscale was considered an indicator of an overall implementation leadership latent construct, we evaluated the higher order model. We also controlled for the nested data structure (i.e., clinicians within clinical work groups or teams). The higher order factor model demonstrated excellent fit as indicated by multiple fit indicators (n = 230; χ2(50) = 117.255, p < 0.001; CFI = 0.973, TLI = 0.964; RMSEA = 0.076; SRMR = 0.034). Figure 1 displays the standardized factor loadings for the higher-order factor model. First-order factor loadings ranged from 0.90 to 0.97, second-order factor loadings ranged from 0.90 to 0.94, and all factor loadings were statistically significant (all p < 0.001).

Figure 1 Second-order confirmatory factor analysis factor loadings for the implementation leadership scale. Note: n = 230; all factor loadings are standardized and statistically significant, p < 0.001; χ2(50) = 117.255, p < 0.001; CFI = 0.973, TLI = 0.964; RMSEA = 0.076; SRMR = 0.034.
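The acceptable-fit thresholds stated in the analysis section (CFI and TLI above 0.90, RMSEA below 0.10, SRMR below 0.08) can be expressed as a simple check, shown here applied to the fit statistics reported for the higher-order model:

```python
def acceptable_fit(cfi, tli, rmsea, srmr):
    """Heuristic model-fit check using the cutoffs adopted in this study:
    CFI > 0.90, TLI > 0.90, RMSEA < 0.10, SRMR < 0.08."""
    return cfi > 0.90 and tli > 0.90 and rmsea < 0.10 and srmr < 0.08

# Fit statistics reported for the higher-order ILS model
model_ok = acceptable_fit(cfi=0.973, tli=0.964, rmsea=0.076, srmr=0.034)
```

These cutoffs are the ones the authors adopt from the cited fit-index literature; other sources recommend stricter values (e.g., RMSEA < 0.06), so the check should be treated as a convention, not a universal rule.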

Convergent validity

Table 3 shows that, as predicted, the ILS scale scores had moderate to high correlations with MLQ subscales representing transformational and transactional leadership. Correlations ranged from 0.62 to 0.75, indicating convergent validity. The magnitude of these correlations suggests that the ILS assesses leadership and that transformational leaders are likely to perform the behaviors needed for effective EBP implementation, but the correlations are not so high as to suggest that the MLQ and ILS measure identical constructs.

Table 3 Pearson product moment correlations of implementation leadership scale scores with multifactor leadership questionnaire [convergent validity] and organizational climate measure [discriminant validity] scores

Discriminant validity

Table 3 shows the results of the discriminant validity analyses. As predicted, the ILS scale scores had low to moderate correlations with OCM subscales representing aspects of general organizational climate. Correlations ranged from 0.050 to 0.406, supporting the discriminant validity of the ILS relative to general organizational climate.
Discussion


The current study describes the development of the first measure of strategic leadership for evidence-based practice implementation, the ILS. We used an iterative process to develop items representing implementation leadership and then used quantitative data reduction techniques to develop a brief measure that may be easily and efficiently used for research and applied purposes. Such brief measures are needed to improve the efficiency of services and implementation research [51].

Although we originally proposed five factors of implementation leadership, quantitative analyses supported a four-factor model. The identified factors correspond to four of the five subdomains originally conceived by the research team. The factors or subscales of the ILS represent Proactive Leadership, Supportive Leadership, Knowledgeable Leadership, and Perseverant Leadership. The factor that was not supported in these analyses concerned the events and practices to which leaders pay deliberate attention, as well as the extent to which a leader models effective EBP implementation. It may be that these behaviors are more akin to a strategic climate for EBP implementation and thus were less relevant to the core focus on leadership in the ILS. In addition, employees may not consciously recognize the specific targets of their leaders’ intentions. Conversely, it may be that the items that were developed did not sufficiently capture this aspect of leadership. Future studies should examine the degree to which leader attention and role modeling can be captured through the development of measures of organizational climate for EBP implementation.

The ILS demonstrated strong internal consistency reliability, convergent validity, and discriminant validity. Given that the ILS is very brief (i.e., 12 items), administration and use in health services and implementation studies can be very efficient with little respondent burden. It generally takes less than five minutes to complete scales of this length. The practicality of this brief scale is consistent with calls for measures that can be utilized in real-world settings where the efficiency of the research process is paramount [52].

This is the first scale development study for implementation leadership and thus represents the first few phases (i.e., qualitative item generation, exploratory factor analysis, confirmatory factor analysis, reliability assessment, validity assessment) of this line of research. However, the item and scale development was based on extant literature as well as investigator and practitioner knowledge and experience with leadership development and EBPs in community-based mental health service settings. Further research is needed to determine the utility of the measure for research and practice in this and other health and allied health care settings and contexts.

This study raises additional directions for future research. The factor analytic approach utilized here was highly rigorous. Not only did we randomize respondent data, but we randomly assigned data at the work group/team level to either the EFA or CFA analyses. Thus, there is no overlap in team membership across the two phases of this study. In addition, our examination of scale reliability and convergent and discriminant validity in this study confirmed expected relationships between the ILS and other constructs. For example, the moderate to high correlations with other leadership scales affirms that there is some overlap between implementation leadership and effective general leadership (i.e., transformational and transactional leadership) but that unique aspects of leadership are also being captured. On the other hand, we had only one other measure of leadership in the study and future research should examine the degree to which other conceptual approaches and measures of leadership are associated with the ILS and its subscales [53]. In addition, the low association with general organizational climate suggests that the ILS dimensions are distinct from common measures of general organizational climate. Future research should examine the association of ILS scales with other measures of organizational climate and strategic climate for EBP implementation.

The ILS may help to inform our understanding of the influences and effects of leadership focused on EBP implementation. The ILS could also be utilized as a measure to identify leaders or to identify areas to develop in existing leaders. This is in keeping with an implementation strategy recently developed and pilot tested by the authors focused specifically on leadership and organizational change for implementation (LOCI) [18]. The LOCI implementation strategy utilizes data to support leader development and cross-level congruence of leadership and organizational strategies that support a first-level leader in creating a positive EBP implementation climate and implementation effectiveness [32, 54]. Such strategies address calls for leadership and organizational change strategies to facilitate EBP implementation and sustainment [18].

The ILS is a brief tool that may be used in implementation research to assess the extent to which leaders support their staff in implementing EBP. The ILS and scoring instructions can be found in Additional files 1 and 2, or may be obtained from GAA. After establishing this baseline level of implementation leadership, researchers and/or organizations may apply this knowledge to identify areas for implementation leadership development. Because the scale comprises behaviorally focused items, results of the assessment may be used to guide leadership development. Thus, not only does this measure allow for assessment of implementation leadership, it has the potential to serve as a developmental tool to improve both leadership and EBP implementation success within organizations.
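As a hedged illustration of how such a short multi-subscale instrument is typically scored, the sketch below computes subscale means and an overall score as the mean of the subscale scores. The item names and the exact scoring rule are assumptions for illustration; the published scoring instructions in Additional file 2 are authoritative.

```python
def score_subscales(responses, subscales):
    """Score a brief multi-subscale measure as subscale means plus an
    overall mean of the subscale scores (a common convention; consult
    the instrument's own scoring instructions for the actual rule).

    responses: dict mapping item name -> numeric rating
    subscales: dict mapping subscale name -> list of its item names
    """
    scores = {
        name: sum(responses[item] for item in items) / len(items)
        for name, items in subscales.items()
    }
    # Overall score = unweighted mean of the subscale scores,
    # computed before the 'overall' key is added.
    scores["overall"] = sum(scores.values()) / len(scores)
    return scores
```

Averaging subscales rather than raw items gives each dimension equal weight in the overall score even if subscales had unequal item counts; either convention is defensible, which is why the published instructions should govern actual use.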


The current study builds on previous research by extending the general concept of leadership to a new construct: strategic leadership for EBP implementation. This study suggests that effective leaders of EBP implementation should be proactive, knowledgeable, supportive, and perseverant in the implementation process. The extent to which these newly identified aspects of EBP leadership can impact individual factors (e.g., employee behaviors), organizational factors (e.g., implementation climate), and implementation outcomes should be the subject of future studies [54]. More immediately, strategies for improving leadership knowledge, skills, abilities, and behaviors in order to promote strategic climates that will improve the efficiency of EBP implementation should be developed and tested. In addition, the extent to which leadership influences fidelity and adoption of EBPs should be examined to increase our understanding of the complex ways in which leadership may affect clinician behavior in healthcare organizations. Pursuing such a research agenda has the potential to improve the efficiency and effectiveness of implementation efforts and to improve the reach and public health impact of evidence-based treatments and practices.


  1. Hoagwood K: Family-based services in children’s mental health: a research review and synthesis. J Child Psychol Psyc. 2005, 46: 690-713. 10.1111/j.1469-7610.2005.01451.x.


  2. Aarons GA, Hurlburt M, Horwitz SM: Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Hlth. 2011, 38: 4-23. 10.1007/s10488-010-0327-7.


  3. Magnabosco JL: Innovations in mental health services implementation: a report on state-level data from the U. S. evidence-based practices project. Implement Sci. 2006, 1: 1-11. 10.1186/1748-5908-1-1.


  4. Chaffin M: Organizational culture and practice epistemologies. Clin Psychol-Sci Pr. 2006, 13: 90-93. 10.1111/j.1468-2850.2006.00009.x.


  5. Kessler ML, Gira E, Poertner J: Moving best practice to evidence-based practice in child welfare. Fam Soc. 2005, 86: 244-250. 10.1606/1044-3894.2459.


  6. Aarons GA, Sommerfeld DH: Leadership, innovation climate, and attitudes toward evidence-based practice during a statewide implementation. J Am Acad Child Psy. 2012, 51: 423-431. 10.1016/j.jaac.2012.01.018.


  7. Damanpour F, Schneider M: Phases of the adoption of innovation in organizations: effects of environment, organization and top managers. Brit J Manage. 2006, 17: 215-236. 10.1111/j.1467-8551.2006.00498.x.


  8. Jung DI, Chow C, Wu A: The role of transformational leadership in enhancing organizational innovation: hypotheses and some preliminary findings. Leadership Quart. 2003, 14: 525-544. 10.1016/S1048-9843(03)00050-X.


  9. Gumusluoglu L, Ilsev A: Transformational leadership, creativity, and organizational innovation. J Bus Res. 2009, 62: 461-473. 10.1016/j.jbusres.2007.07.032.


  10. Scott SG, Bruce RA: Determinants of innovative behavior: a path model of individual innovation in the workplace. Acad Manage J. 1994, 37: 580-607. 10.2307/256701.


  11. Aarons GA: Transformational and transactional leadership: association with attitudes toward evidence-based practice. Psychiatr Serv. 2006, 57: 1162-1169. 10.1176/


  12. Michaelis B, Stegmaier R, Sonntag K: Shedding light on followers’ innovation implementation behavior: the role of transformational leadership, commitment to change, and climate for initiative. J Manage Psychol. 2010, 25: 408-429. 10.1108/02683941011035304.


  13. Aarons GA, Horowitz JD, Dlugosz LR, Ehrhart MG: The role of organizational processes in dissemination and implementation research. Dissemination and Implementation Research in Health: Translating science to practice. Edited by: Brownson RC, Colditz GA, Proctor EK. 2012, New York, NY: Oxford University Press


  14. Harvey G, Fitzgerald L, Fielden S, McBride A, Waterman H, Bamford D, Kislov R, Boaden R: The NIHR collaboration for leadership in applied health research and care (CLAHRC) for Greater Manchester: combining empirical, theoretical and experiential evidence to design and evaluate a large-scale implementation strategy. Implement Sci. 2011, 6: 96-10.1186/1748-5908-6-96.


  15. Kyratsis Y, Ahmad R, Holmes A: Making sense of evidence in management decisions: the role of research-based knowledge on innovation adoption and implementation in healthcare. Study protocol. Implement Sci. 2012, 7: 22-10.1186/1748-5908-7-22.


  16. Weiner BJ: A theory of organizational readiness for change. Implement Sci. 2009, 4: 67-10.1186/1748-5908-4-67.


  17. Michaelis B, Stegmaier R, Sonntag K: Affective commitment to change and innovation implementation behavior: the role of charismatic leadership and employees’ trust in top management. J Change Manage. 2009, 9: 399-417. 10.1080/14697010903360608.


  18. Aarons GA, Ehrhart MG, Farahnak LR, Hurlburt M: Leadership and organizational change for implementation (LOCI): A mixed-method pilot study of a leadership and organization development intervention for evidence-based practice implementation. Manuscript in review.

  19. Sloan R, Gruman J: Participation in workplace health promotion programs: the contribution of health and organizational factors. Health Educ Behav. 1988, 15: 269-288. 10.1177/109019818801500303.


  20. Aarons GA, Sommerfeld DH, Willging CE: The soft underbelly of system change: the role of leadership and organizational climate in turnover during statewide behavioral health reform. Psychol Serv. 2011, 8: 269-281.


  21. Bain PG, Mann L, Pirola-Merlo A: The innovative imperative: the relationships between team climate, innovation, and performance in research and development teams. Small Group Res. 2001, 32: 55-73. 10.1177/104649640103200103.


  22. Kinjerski V, Skrypnek BJ: The promise of spirit at work: increasing job satisfaction and organizational commitment and reducing turnover and absenteeism in long-term care. J Gerontol Nurs. 2008, 34: 17-25. 10.3928/00989134-20081001-03.


  23. Hill N, Seo M, Kang J, Taylor M: Building employee commitment to change across organizational levels: the influence of hierarchical distance and direct managers’ transformational leadership. Organ Sci. 2012, 23: 758-777. 10.1287/orsc.1110.0662.


  24. Hong Y, Liao H, Hu J, Jiang K: Missing link in the service profit chain: a meta-analytic review of the antecedents, consequences, and moderators of service climate. J Appl Psychol. 2013, 98: 237-267.


  25. Schneider B, Ehrhart MG, Mayer DM, Saltz JL, Niles-Jolly K: Understanding organization-customer links in service settings. Acad Manage J. 2005, 48: 1017-1032. 10.5465/AMJ.2005.19573107.


  26. Barling J, Loughlin C, Kelloway EK: Development and test of a model linking safety-specific transformational leadership and occupational safety. J Appl Psychol. 2002, 87: 488-496.


  27. Zohar D: Modifying supervisory practices to improve subunit safety: a leadership-based intervention model. J Appl Psychol. 2002, 87: 156-163.


  28. Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, Glass JE, York JL: A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012, 69: 123-157. 10.1177/1077558711430690.


  29. Leeman J, Baernholdt M, Sandelowski M: Developing a theory-based taxonomy of methods for implementing change in practice. J Adv Nurs. 2007, 58: 191-200. 10.1111/j.1365-2648.2006.04207.x.


  30. Massoud M, Nielsen G, Nolan K, Nolan TW, Schall MW, Sevin C: A Framework for Spread: From Local Improvements to System-wide Change. 2006, Cambridge, MA: Institute for Healthcare Improvement


  31. Wensing M, Bosch MC, Grol R: Selecting, tailoring, and implementing knowledge translation interventions. Knowledge Translation in Health Care: Moving from Evidence to Practice. Edited by: Straus S, Tetroe J, Graham ID. 2009, Oxford, England: Wiley-Blackwell, 94-113.


  32. Klein KJ, Conn AB, Sorra JS: Implementing computerized technology: an organizational analysis. J Appl Psychol. 2001, 86: 811-824.


  33. Schein E: Organizational Culture and Leadership. 2010, San Francisco, CA: John Wiley and Sons


  34. Bass BM, Avolio BJ: MLQ: Multifactor leadership questionnaire (Technical Report). 1995, Binghamton University, NY: Center for Leadership Studies


  35. Patterson MG, West MA, Shackleton VJ, Dawson JF, Lawthom R, Maitlis S, Robinson DL, Wallace AM: Validating the organizational climate measure: links to managerial practices, productivity and innovation. J Organ Behav. 2005, 26: 379-408. 10.1002/job.312.


  36. Quinn R, Rohrbaugh J: A spatial model of effectiveness criteria: towards a competing values approach to organizational analysis. Manage Sci. 1983, 29: 363-377. 10.1287/mnsc.29.3.363.


  37. Fabrigar LR, Wegener DT, MacCallum RC, Strahan EJ: Evaluating the use of exploratory factor analysis in psychological research. Psychol Methods. 1999, 4: 272-299.


  38. Zwick WR, Velicer WF: Comparison of five rules for determining the number of components to retain. Psychol Bull. 1986, 99: 432-442.


  39. Patil VH, Singh SN, Mishra S, Donovan T: Efficient theory development and factor retention criteria: a case for abandoning the ‘Eigenvalue Greater Than One’ criterion. J Bus Res. 2008, 61 (2): 162-170. 10.1016/j.jbusres.2007.05.008.


  40. Horn JL: A rationale and test for the number of factors in factor analysis. Psychometrika. 1965, 30: 179-185. 10.1007/BF02289447.


  41. Dunn G, Everitt B, Pickles A: Modeling Covariances and Latent Variables Using EQS. 1993, London: Chapman & Hall/CRC Press


  42. Hu L-T, Bentler PM: Fit indices in covariance structure modeling: sensitivity to underparameterized model misspecification. Psychol Methods. 1998, 3: 424-453.


  43. Hu L-T, Bentler PM: Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling. 1999, 6: 1-55. 10.1080/10705519909540118.


  44. Kelloway EK: Using Lisrel for Structural Equation Modeling: A Researcher’s Guide. 1998, Thousand Oaks, CA: Sage


  45. Guo Q, Li F, Chen X, Wang W, Meng Q: Performance of fit indices in different conditions and selection of cut-off values. Acta Psychologica Sinica. 2008, 40: 109-118. 10.3724/SP.J.1041.2008.00109.


  46. Shrout PE, Fleiss JL: Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979, 86: 420-428.


  47. Cochran WG: Sampling Techniques. 1977, New York: Wiley and Sons, 3


  48. James LR, Demaree RG, Wolf G: Estimating within-group interrater reliability with and without response bias. J Appl Psychol. 1984, 69: 85-98.


  49. James LR, Demaree RG, Wolf G: rwg: an assessment of within-group interrater agreement. J Appl Psychol. 1993, 78: 306-309.


  50. Brown RD, Hauenstein NMA: Interrater agreement reconsidered: an alternative to the rwg indices. Organ Res Methods. 2005, 8: 165-184. 10.1177/1094428105275376.


  51. Lagomasino IT, Zatzick DF, Chambers DA: Efficiency in mental health practice and research. Gen Hosp Psychiat. 2010, 32: 477-483. 10.1016/j.genhosppsych.2010.06.005.


  52. Chambers DA, Wang PS, Insel TR: Maximizing efficiency and impact in effectiveness and services research. Gen Hosp Psychiat. 2010, 32: 453-455. 10.1016/j.genhosppsych.2010.07.011.


  53. Kouzes JM, Posner BZ: The Leadership Challenge. 2007, San Francisco, CA: Jossey-Bass


  54. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons GA, Bunger A, Griffey R, Hensley M: Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Hlth. 2011, 38: 65-76. 10.1007/s10488-010-0319-7.




Preparation of this paper was supported by National Institute of Mental Health grants R21MH082731 (PI: Aarons), R21MH098124 (PI: Ehrhart), R01MH072961 (PI: Aarons), P30MH074678 (PI: Landsverk), R25MH080916 (PI: Proctor), and by the Child and Adolescent Services Research Center (CASRC) and the Center for Organizational Research on Implementation and Leadership (CORIL). The authors thank the community-based organizations, clinicians, and supervisors that made this study possible.

The Implementation Leadership Scale (ILS) and scoring instructions are available from GAA at no cost or may be obtained as additional files accompanying this article.

Author information



Corresponding author

Correspondence to Gregory A Aarons.

Additional information

Competing interests

GAA is an Associate Editor of Implementation Science; all decisions on this paper were made by another editor. The authors declare that they have no other competing interests.

Authors’ contributions

GAA and MGE were study principal investigators and contributed to the theoretical background and conceptualization of the study, item development, study design, writing, data analysis, and editing. LRF contributed to the item development, study design, data collection, writing, and editing. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.


About this article


Cite this article

Aarons, G.A., Ehrhart, M.G. & Farahnak, L.R. The implementation leadership scale (ILS): development of a brief measure of unit level implementation leadership. Implementation Sci 9, 45 (2014).

