- Open Access
- Open Peer Review
The implementation leadership scale (ILS): development of a brief measure of unit level implementation leadership
© Aarons et al.; licensee BioMed Central Ltd. 2014
- Received: 31 July 2013
- Accepted: 26 March 2014
- Published: 14 April 2014
In healthcare and allied healthcare settings, leadership that supports effective implementation of evidence-based practices (EBPs) is a critical concern. However, there are no empirically validated measures to assess implementation leadership. This paper describes the development, factor structure, and initial reliability and convergent and discriminant validity of a very brief measure of implementation leadership: the Implementation Leadership Scale (ILS).
Participants were 459 mental health clinicians working in 93 different outpatient mental health programs in Southern California, USA. Initial item development was supported by two United States National Institutes of Health (NIH) studies focused on developing implementation leadership training and on implementation measure development. Clinician work group/team-level data were randomly assigned to be utilized for an exploratory factor analysis (n = 229; k = 46 teams) or for a confirmatory factor analysis (n = 230; k = 47 teams). The confirmatory factor analysis controlled for the multilevel, nested data structure. Reliability and validity analyses were then conducted with the full sample.
The exploratory factor analysis resulted in a 12-item scale with four subscales representing proactive leadership, knowledgeable leadership, supportive leadership, and perseverant leadership. Confirmatory factor analysis supported an a priori higher order factor structure with subscales contributing to a single higher order implementation leadership factor. The scale demonstrated excellent internal consistency reliability as well as convergent and discriminant validity.
The ILS is a brief and efficient measure of unit level leadership for EBP implementation. The availability of the ILS will allow researchers to assess strategic leadership for implementation in order to advance understanding of leadership as a predictor of organizational context for implementation. The ILS also holds promise as a tool for leader and organizational development to improve EBP implementation.
- Exploratory Factor Analysis
- Standardized Root Mean Square Residual
- Organizational Climate
- Transactional Leadership
- Implementation Climate
The adoption, implementation, and sustainment of evidence-based practices (EBPs) are becoming increasingly important for health and allied healthcare organizations and providers, and widespread adoption of EBPs holds promise to improve quality of care and patient outcomes [1, 2]. Considerable resources are being allocated to increase the implementation of EBPs in community care settings, with support for activities such as training service providers and increased staffing to support monitoring of implementation-related activities. Although there are calls for increased attention to organizational context in EBP dissemination and implementation [4, 5], there are gaps in examining how organizational context affects EBP implementation. Most relevant for this research is the need for development of measures to assess organizational constructs likely to impact implementation process and outcomes. One organizational factor in need of greater attention is leadership for EBP implementation.
Leaders can positively or negatively impact the capacity to foster change and innovation [7–10] and therefore are instrumental in facilitating a positive climate for innovation and positive attitudes toward EBP during implementation [6, 11]. Although the role of leadership in EBP implementation is often discussed, it is rarely empirically examined. The limited empirical research in this area supports a relationship between general leadership ability and implementation of innovative practices, but focuses less on identifying the specific behaviors that leaders may enact to facilitate EBP implementation. To stimulate and support additional empirical work in this area, there is a need for brief and efficient measures of the specific actions leaders may take to influence the success of implementation efforts in their organizations or programs.
Both implementation and leadership theories emphasize the importance of leadership in supporting implementation of innovative practices such as EBP. For example, implementation scholars have asserted the importance of leadership in terms of obtaining funding, dispersing resources, and enforcing policies in support of implementation. Research from the Collaboration for Leadership in Applied Health Research and Care has addressed the importance of leaders serving as clinical opinion leaders, managing implementation projects, fostering organizational learning climates, and obtaining senior management support. Other research suggests that managers are responsible for interpreting research evidence, applying it to organizational contexts, and making research-informed implementation decisions. Weiner’s organizational theory of innovation implementation suggests that leaders play a critical role in creating readiness for change, ensuring innovation-values fit, and developing plans, practices, structures, and strategies to support implementation.
There is also empirical evidence for the importance of leadership in predicting the success of implementation efforts. For example, transformational leadership (i.e., the degree to which a leader can inspire and motivate others) has been shown to predict employees’ reported use of an innovative practice being implemented in their organization [12, 17]. Consistent with transactional leadership (e.g., providing contingent rewards), perceived support from one’s supervisor has been associated with employees’ participation in implementation. Much of the empirical research on leadership and implementation has focused on identifying mechanisms through which leaders affect implementation. These include a positive organizational climate, supportive team climate, and positive work attitudes. Research has also focused on the role of leaders in influencing employee attitudes toward EBP and commitment to organizational change.
Although general leadership is held to play an important role in implementation, research in this area has not outlined the specific behaviors that leaders may enact in order to strategically influence followers to support the larger goal of successful implementation. Insight into such behaviors can be garnered from existing literature demonstrating that strategically focused leadership predicts the achievement of specific goals. For example, a recent meta-analysis confirmed the relative advantage of strategic leadership over general leadership for specific organizational change initiatives. Recent organizational research on climate for customer service and climate for safety [26, 27] has shown that strategically focused leadership is a critical precursor to building a strategic climate, which subsequently predicts strategic outcomes such as increased customer satisfaction or decreased accidents, respectively.
Although more than 60 implementation strategies were identified in a recent review of the implementation literature, few focused on leadership as an implementation factor and none focused mainly on leader development to support EBP implementation. Of those identified, extant strategies involve recruiting and training leaders and involving leaders at different organizational levels [29–31]. Hence, we argue that leadership focused on a specific strategic imperative, such as the adoption and use of EBP, can influence employee attitudes and behavior regarding that imperative. This is consistent with research demonstrating that leader and management support for implementation is a significant and strong predictor of positive implementation climate. Thus, there is a need to identify the behaviors that leaders may enact to create a strategic EBP implementation climate in their teams and better facilitate the implementation and sustainment of EBP.
The goals of the present study were to develop a scale focused on strategic leadership for EBP implementation and to examine its factor structure, reliability, and convergent and discriminant validity. We drew from strategic climate and leadership theory, implementation research and theory, the implementation climate literature, and feedback from subject matter experts to develop items for the implementation leadership scale (ILS), extending prior work on management support for implementation. In particular, we focused on leader behaviors related to the organizational culture and climate embedding mechanisms that promote strategic climates. In line with this literature, items were developed to assess the degree to which a leader is proactive with regard to EBP implementation, leader knowledge of EBP and implementation, leader support for EBP implementation, leader perseverance in the EBP implementation process, and leader attention to and role modeling of effective EBP implementation. Through a process of exploratory factor analysis (EFA) followed by confirmatory factor analysis (CFA), we expected to find empirical support for the conceptual areas identified above. We also expected the final scale and subscales to demonstrate high internal consistency reliability. In regard to convergent validity, we expected that the derived leadership scale would have moderate to high correlations with other measures of leadership (i.e., transformational and transactional leadership). Finally, in regard to discriminant validity, we expected to find low to moderate correlations between the derived leadership scale and a measure of general organizational climate.
Item generation and domain identification proceeded in three phases. First, as part of a study focused on developing an intervention to improve leadership for evidence-based practice implementation, the investigative team developed items based on a review of literature relating leader behaviors to implementation and to organizational climate and culture change [32, 33]. Second, items were reviewed for relevance and content by subject matter experts, including a mental health program leader, an EBP trainer and Community Development Team consultant from the California Institute for Mental Health, and four mental health program managers. Third, potential items were reviewed by the investigative team and program managers for face validity and content validity. Twenty-nine items were developed that represented five potential content domains of implementation leadership: proactive EBP leadership, leader knowledge of EBP, leader support for EBP, perseverance in the face of EBP implementation challenges, and attention and role modeling related to EBP implementation.
Participants were 459 mental health clinicians working in 93 different outpatient mental health programs in Southern California, USA. Of the 573 clinicians eligible to participate in this research, 459 participated (80.1% response rate). Participant mean age was 36.5 years (SD = 10.7; Range = 21 to 66) and the majority of respondents were female (79%). The racial/ethnic distribution of the sample was 54% Caucasian, 23.4% Hispanic, 6.7% African American, 5% Asian American, 0.5% American Indian, and 10% ‘other’. Participants had worked in the mental health services field for a mean of 8.5 years (SD = 7.7; Range = 1 week to 43 years), in child and/or adolescent mental health services for a mean of 7.5 years (SD = 7.6; Range = 1 week to 43 years), and in their present agency for 3.4 years (SD = 4.3; Range = 1 week to 28.1 years). Highest level of education consisted of 7% Ph.D./M.D. or equivalent, 68% master’s degree, 6.5% graduate work but no degree, 12.2% bachelor’s degree, 3% some college but no degree, and 0.7% no college. The primary discipline of the sample was 47% marriage and family therapy, 26% social work, 16% psychology, 3% child development, 2% human relations, 1% nursing, and 4.8% other (e.g., drug/alcohol counseling, probation, psychiatry).
The study was approved by the appropriate Institutional Review Boards prior to clinician recruitment and informed consent was obtained prior to administering surveys. The research team first obtained permission from agency executive directors or their designees to recruit their clinicians for participation in the study. Clinicians were then contacted either via email or in-person for recruitment to the study. Data were collected using online surveys or in-person paper-and-pencil surveys.
For online surveys, each participant was e-mailed an invitation to participate including a unique username and password as well as a link to the web survey. Participants reviewed informed consent and after agreeing to participate were able to access the survey and proceed to the survey items. Once participants logged in to the online survey, they were able to answer questions and could pause and resume at any time. The online survey took approximately 30 to 40 minutes to complete and incentive vouchers ($15 USD) were sent by email after survey completion.
In-person data collection occurred for those teams in which in-person data collection was preferred or would be more efficient. Paper surveys were administered during meetings at each of the participating program locations. In most cases, the research team reserved one hour for data collection during a regular clinical work group or team meeting. Research staff obtained informed consent, handed out surveys to all eligible participants, checked the returned surveys for completeness, and then provided an incentive voucher to each participant. For participants not present at in-person meetings, paper surveys were provided and were returned to the research team in pre-paid envelopes.
Teams were identified in close collaboration with agency administrators. It was of utmost importance that team members shared a single direct supervisor to properly account for dependence in the data for variables pertaining to leadership. It was also important that participants completed the survey questions pertaining to leadership based on the proper supervisor as identified by the agency administrators. Participants completing the online survey selected their supervisor from a dropdown menu of supervisors within their agency at the beginning of the survey. The supervisor’s name was then automatically inserted into all questions regarding leadership in order to ensure clarity in the target of all leadership questions. The research team verified the identified leader with organization charts.
For in-person data collection, participants were given paper-and-pencil surveys with their supervisor’s name pre-printed on the front page of the survey and in sections pertaining to general leadership and implementation leadership. Participants were instructed to answer all leadership questions about the supervisor whose name was printed on their survey. In cases where a participant noted that they reported to a different supervisor, this was clarified and the survey was adjusted if deemed appropriate.
Implementation leadership scale (ILS)
Item development for the ILS is described above. All 29 ILS items were scored on a 0 (‘not at all’) to 4 (‘to a very great extent’) scale.
Multifactor leadership questionnaire (MLQ)
The MLQ is one of the most widely researched measures of leadership in organizations. The MLQ includes the assessment of transformational leadership, which has been found in numerous studies to be associated with organizational performance and success (including attitudes toward EBP), as well as transactional leadership. The MLQ has good psychometric properties, including internal consistency reliability and concurrent and predictive validity. All items were scored on a 0 (‘not at all’) to 4 (‘frequently, if not always’) scale. Transformational leadership was measured with four subscales: idealized influence (α = 0.87, 8 items), inspirational motivation (α = 0.91, 4 items), intellectual stimulation (α = 0.90, 4 items), and individualized consideration (α = 0.90, 4 items). The MLQ also includes one subscale identified as best representing transactional leadership: contingent reward (α = 0.87, 4 items).
The Organizational Climate Measure (OCM) consists of 17 scales capturing the four domains of the competing values framework: human relations, internal process, open systems, and rational goal. We utilized the autonomy scale (α = 0.67, 5 items) from the human relations domain, the formalization scale (α = 0.77, 5 items) from the internal process domain, and the efficiency (α = 0.80, 4 items) and performance feedback (α = 0.79, 5 items) scales from the rational goal domain as measures for assessing discriminant validity of the ILS. All OCM items were scored on a 0 (‘definitely false’) to 3 (‘definitely true’) scale.
In order to determine whether the data represented a unit-level construct (in this case, clinical treatment work groups or teams), we examined intraclass correlations (ICCs) and the within-group agreement index (a_wg) for each item. Agreement indices are used to assess the appropriate level of aggregation for nested data; higher levels of agreement suggest that the higher level of aggregation is supported. This is relevant for the current study because clinicians were working within clinical work groups or teams led by a single supervisor.
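As an illustration of the aggregation check described above, ICC(1) can be computed from a one-way random-effects ANOVA decomposition. This is a sketch, not the study's code; the use of the average group size for k is an approximation for unbalanced teams:

```python
import numpy as np

def icc1(values, groups):
    """ICC(1) from a one-way random-effects ANOVA:
    (MSB - MSW) / (MSB + (k - 1) * MSW), where k is the
    average group size (an approximation for unbalanced data)."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    n, g = len(values), len(labels)
    grand_mean = values.mean()
    ssb = ssw = 0.0
    for lab in labels:
        grp = values[groups == lab]
        ssb += len(grp) * (grp.mean() - grand_mean) ** 2  # between-group sum of squares
        ssw += ((grp - grp.mean()) ** 2).sum()            # within-group sum of squares
    msb = ssb / (g - 1)
    msw = ssw / (n - g)
    k = n / g
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical item ratings from three four-clinician teams
ratings = np.array([3, 3, 4, 3,  1, 2, 1, 1,  4, 4, 3, 4])
teams   = np.array([1, 1, 1, 1,  2, 2, 2, 2,  3, 3, 3, 3])
print(round(icc1(ratings, teams), 2))  # high value: team membership explains most variance
```

A value near zero would indicate that team membership explains little of the rating variance, arguing against aggregation.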
Work group/team-level data were randomly assigned within organization to be utilized for either the EFA (n = 229; k = 46 teams) or the CFA (n = 230; k = 47 teams). Exploratory factor analysis was used to derive and evaluate the factor structure of the scale using IBM SPSS. Principal axis factoring was selected for factor extraction because it allows for consideration of both systematic and random error, and Promax oblique rotation was utilized for factor rotation because we assumed that the derived factors would be correlated. Item inclusion or exclusion was based on an iterative process in which items with relatively low primary loadings (e.g., < 0.40) or high cross-loadings (e.g., > 0.30) were removed. The number of factors to be retained was determined based on parallel analysis, factor loadings, and interpretability of the factor structure as indicated in the rotated solution. Parallel analysis is among the better methods for determining the number of factors based on simulation studies. Parallel analysis was based on estimation of 1,000 random data matrices with values that correspond to the 95th percentile of the distribution of random data eigenvalues [39, 40]. The random eigenvalues were then compared with the derived eigenvalues to determine the number of factors. Confirmatory factor analysis was conducted using Mplus statistical software, adjusting for the nested data structure using maximum likelihood estimation with robust standard errors (MLR), which appropriately adjusts standard errors and chi-square values. Missing data were handled through full information maximum likelihood (FIML) estimation. Model fit was assessed using several empirically supported indices: the comparative fit index (CFI), the Tucker-Lewis index (TLI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). CFI and TLI values greater than 0.90, RMSEA values less than 0.10, and SRMR values less than 0.08 indicate acceptable model fit [41–44]. Type II error rates tend to be low when multiple fit indices are used in studies where sample sizes are large and non-normality is limited, as in the present study.
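The parallel-analysis procedure described above (1,000 random matrices, 95th-percentile eigenvalue thresholds) can be sketched as follows. This is an illustration with simulated data, not the study's analysis; the four-block structure below is a hypothetical stand-in for the real item responses:

```python
import numpy as np

def parallel_analysis(data, n_iter=1000, percentile=95, seed=0):
    """Horn's parallel analysis: retain factors whose observed correlation-matrix
    eigenvalues exceed the chosen percentile of eigenvalues from random data
    of the same dimensions."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eigs = np.empty((n_iter, p))
    for i in range(n_iter):
        rand = rng.standard_normal((n, p))
        rand_eigs[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    thresholds = np.percentile(rand_eigs, percentile, axis=0)
    return int(np.sum(obs_eigs > thresholds))

# Simulated responses: 229 respondents, 12 items forming four correlated
# three-item blocks (mimicking the study's EFA sample dimensions)
rng = np.random.default_rng(1)
factors = rng.standard_normal((229, 4))
loadings = np.zeros((4, 12))
for f in range(4):
    loadings[f, f * 3:(f + 1) * 3] = 0.8
items = factors @ loadings + 0.5 * rng.standard_normal((229, 12))
print(parallel_analysis(items))  # recovers the four simulated factors
```

The count returned is the number of eigenvalue ranks at which the observed data outperform chance, which is the retention criterion the Methods describe.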
Reliability was assessed by examining Cronbach’s alpha internal consistency for each of the subscales and for the total scale. Item analyses were also conducted, including an examination of inter-item correlations and alpha if an item were removed. Convergent and discriminant validity were assessed by computing Pearson product-moment correlations of the ILS subscale and total scale scores with the MLQ and OCM subscale scores.
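Cronbach's alpha can be computed directly from item and total-score variances; a minimal sketch with hypothetical ratings (illustrative only, not the study's data or code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array:
    (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 0-4 ratings of one three-item subscale from five clinicians
subscale = np.array([
    [4, 4, 3],
    [3, 3, 3],
    [1, 2, 1],
    [2, 2, 2],
    [4, 3, 4],
])
print(round(cronbach_alpha(subscale), 2))  # high alpha: items co-vary strongly
```

Alpha rises toward 1 as the items co-vary; the "alpha if item removed" check in the item analyses simply recomputes this statistic with each column dropped in turn.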
Examination of distributions for all scale items indicated that data were generally normally distributed with no extreme skewness; thus, we treated variables as continuous in our analyses. Missing data were minimal: among the 459 respondents, only 26 (6%) had any missing data. Of those with missing data, 17 of the 26 (65%) were missing only one item, two (8%) were missing two or three items, and the remaining seven (27%) were missing more than three items. For the EFA, we used pairwise (rather than listwise) deletion in order to minimize the number of excluded cases, and we used FIML estimation to address missing values in the CFA.
Implementation leadership scale: subscales, items, and EFA factor loadings

1. Proactive leadership
   - Established clear standards for implementation of EBP
   - Developed a plan to facilitate EBP implementation
   - Removed obstacles to implementation of EBP
2. Knowledgeable leadership
   - Knows what he/she is talking about when it comes to EBP
   - Is knowledgeable about EBP
   - Is able to answer staff questions about EBP
3. Supportive leadership
   - Supports employee efforts to use EBP
   - Supports employee efforts to learn more about EBP
   - Recognizes and appreciates employee efforts
4. Perseverant leadership
   - Perseveres through the ups and downs of implementing EBP
   - Carries on through the challenges of implementing EBP
   - Reacts to critical issues regarding implementation of EBP

Implementation leadership scale total
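Assuming subscale and total scores are computed as item means (the official scoring instructions accompany the article as additional files; mean-based scoring here is an assumption for illustration), scoring one respondent might look like this:

```python
import numpy as np

# Hypothetical ILS responses from one clinician (0-4 scale), ordered as the
# four three-item subscales listed above (assumed item order)
responses = np.array([3, 4, 3,   4, 4, 3,   2, 3, 3,   4, 3, 4])

subscales = ["Proactive", "Knowledgeable", "Supportive", "Perseverant"]
subscale_scores = responses.reshape(4, 3).mean(axis=1)  # mean of each 3-item block
total_score = subscale_scores.mean()                    # mean of the four subscales

for name, score in zip(subscales, subscale_scores):
    print(f"{name}: {score:.2f}")
print(f"ILS total: {total_score:.2f}")
```

With three items per subscale, subscale and total scores stay on the original 0-4 response metric, which keeps leader feedback easy to interpret.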
We next examined the average agreement within clinical work group for individual items and scales using a_wg(1) and a_wg(J), respectively [48–50]. a_wg ranges from −1 to 1; a_wg(1) is calculated as one minus the quotient of two times the observed variance divided by the maximum possible variance, and a_wg(J) is the sum of the a_wg(1) values for a scale’s items divided by the number of items. These statistics have the advantage over r_wg [48, 49] of not being scale and sample size dependent and of not assuming a uniform distribution [48, 49]. Values of a_wg greater than 0.60 represent acceptable agreement, and values of 0.80 and above represent strong agreement [48–50]. As shown in Table 1, considering the ICCs and a_wg values, ILS items and scales should be considered as representing unit-level (i.e., clinical work group or team) constructs in this study.
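The a_wg computation described above can be sketched as follows, for the ILS's 0-4 response scale. This is an illustrative implementation, not the study's code; the maximum-variance term assumes the Brown and Hauenstein form, where the maximum is reached when responses split between the two scale endpoints given the observed group mean:

```python
import numpy as np

def a_wg_1(x, low=0.0, high=4.0):
    """a_wg(1) for a single item: 1 minus twice the observed variance divided
    by the maximum variance attainable given the group mean (assumed to be the
    endpoints-only split, with an n/(n-1) sample correction)."""
    x = np.asarray(x, dtype=float)
    n, m = len(x), x.mean()
    obs_var = x.var(ddof=1)
    max_var = (high - m) * (m - low) * n / (n - 1)  # undefined if the mean sits at an endpoint
    return 1.0 - 2.0 * obs_var / max_var

def a_wg_j(items, low=0.0, high=4.0):
    """a_wg(J): mean of the item-level a_wg(1) values for a J-item scale."""
    items = np.asarray(items, dtype=float)
    return float(np.mean([a_wg_1(col, low, high) for col in items.T]))

team_item = np.array([3, 3, 4, 3, 3])   # high agreement within a team
split_item = np.array([0, 0, 4, 4])     # maximal disagreement
print(round(a_wg_1(team_item), 2), a_wg_1(split_item))
```

The two example vectors show the bounds the text describes: near-identical ratings yield a value close to 1, while a team split between the scale endpoints yields −1.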
Exploratory factor analysis
An iterative approach was taken to conducting the factor analyses and item reduction. In the first iteration, and consistent with our hypotheses, five factors were specified and all 29 items were included. The EFA results showed that no items met the factor loading criteria for a proposed fifth factor (i.e., no loadings > 0.40). That, coupled with the parallel analysis, suggested a four-factor solution. Thus, we conducted the next EFA specifying four factors. The results suggested the removal of 15 items: 13 items were removed because of low primary factor loadings and/or high cross-loadings, and two items were removed because of overlapping content with other items. Thus, 14 items were retained. The next EFA included these 14 items and again specified four factors. Based on those results, two additional items were removed on statistical (i.e., lower relative factor loadings) and conceptual (i.e., item content less directly consistent with other items) grounds. The final EFA included 12 items, with three items loading on each of four factors.
Implementation leadership scale factor intercorrelations: 1. Proactive leadership; 2. Knowledgeable leadership; 3. Supportive leadership; 4. Perseverant leadership
Confirmatory factor analysis
Pearson product-moment correlations of implementation leadership scale scores with multifactor leadership questionnaire (convergent validity) and organizational climate measure (discriminant validity) scores

Implementation leadership scales
Table 3 shows the results of the discriminant validity analyses. As predicted, the ILS scale scores had low correlations with the OCM subscales representing aspects of general organizational climate. Correlations ranged from 0.050 to 0.406, indicating strong support for the discriminant validity of the ILS relative to general organizational climate.
The current study describes the development of the first measure of strategic leadership for evidence-based practice implementation, the ILS. We used an iterative process to develop items representing implementation leadership and then used quantitative data reduction techniques to develop a brief measure that may be easily and efficiently used for research and applied purposes. Such brief measures are needed to improve the efficiency of services and implementation research.
Although we originally proposed five factors of implementation leadership, the quantitative analyses supported a four-factor model. The identified factors correspond to four of the five subdomains originally conceived by the research team: Proactive Leadership, Knowledgeable Leadership, Supportive Leadership, and Perseverant Leadership. The factor that was not supported in these analyses concerned the events and practices that leaders pay deliberate attention to, as well as the extent to which a leader models effective EBP implementation. It may be that these behaviors are more akin to a strategic climate for EBP implementation and thus were less relevant to the core focus on leadership in the ILS. In addition, employees may not consciously recognize the specific targets of their leaders’ intentions. Conversely, it may be that the items that were developed did not sufficiently capture this aspect of leadership. Future studies should examine the degree to which leader attention and role modeling can be captured through the development of measures of organizational climate for EBP implementation.
The ILS demonstrated strong internal consistency reliability, convergent validity, and discriminant validity. Given that the ILS is very brief (i.e., 12 items), administration and use in health services and implementation studies can be very efficient, with little respondent burden: it generally takes less than five minutes to complete scales of this length. The practicality of this brief scale is consistent with calls for measures that can be utilized in real-world settings where the efficiency of the research process is paramount.
This is the first scale development study for implementation leadership and thus represents the first few phases (i.e., qualitative item generation, exploratory factor analysis, confirmatory factor analysis, reliability assessment, validity assessment) of this line of research. However, the item and scale development was based on extant literature as well as investigator and practitioner knowledge and experience with leadership development and EBPs in community-based mental health service settings. Further research is needed to determine the utility of the measure for research and practice in this and other health and allied health care settings and contexts.
This study raises additional directions for future research. The factor analytic approach utilized here was highly rigorous: not only did we randomize respondent data, but we randomly assigned data at the work group/team level to either the EFA or the CFA, so there is no overlap in team membership across the two phases of this study. In addition, our examination of scale reliability and convergent and discriminant validity confirmed expected relationships between the ILS and other constructs. For example, the moderate to high correlations with other leadership scales affirm that there is some overlap between implementation leadership and effective general leadership (i.e., transformational and transactional leadership) but that unique aspects of leadership are also being captured. On the other hand, we had only one other measure of leadership in the study, and future research should examine the degree to which other conceptual approaches and measures of leadership are associated with the ILS and its subscales. In addition, the low association with general organizational climate suggests that the ILS dimensions are distinct from common measures of general organizational climate. Future research should examine the association of the ILS scales with other measures of organizational climate and of strategic climate for EBP implementation.
The ILS may help to inform our understanding of the influences and effects of leadership focused on EBP implementation. The ILS could also be utilized as a measure to identify leaders or to identify areas to develop in existing leaders. This is in keeping with an implementation strategy recently developed and pilot tested by the authors that focuses specifically on leadership and organizational change for implementation (LOCI). The LOCI implementation strategy utilizes data to support leader development and cross-level congruence of leadership and organizational strategies that support a first-level leader in creating a positive EBP implementation climate and implementation effectiveness [32, 54]. Such strategies address calls for leadership and organizational change strategies to facilitate EBP implementation and sustainment.
The ILS is a brief tool that may be used in implementation research to assess the extent to which leaders support their staff in implementing EBP. The ILS and scoring instructions can be found in Additional files 1 and 2, or may be obtained from GAA. After establishing this baseline level of implementation leadership, researchers and/or organizations may apply this knowledge to identify areas for implementation leadership development. Because the scale consists of behaviorally focused items, results of the assessment may be used to guide leadership development. Thus, not only does this measure allow for assessment of implementation leadership, it also has the potential to serve as a developmental tool to improve both leadership and EBP implementation success within organizations.
The current study builds on previous research by extending the general concept of leadership to a new construct: strategic leadership for EBP implementation. This study suggests that effective leaders of EBP implementation should be proactive, knowledgeable, supportive, and perseverant in the implementation process. The extent to which these newly identified aspects of EBP leadership can impact individual factors (e.g., employee behaviors), organizational factors (e.g., implementation climate), and implementation outcomes should be the subject of future studies. More immediately, strategies for improving leadership knowledge, skills, abilities, and behaviors in order to promote strategic climates that will improve the efficiency of EBP implementation should be developed and tested. In addition, the extent to which leadership influences fidelity and adoption of EBPs should be examined to increase our understanding of the complex ways in which leadership may affect clinician behavior in healthcare organizations. Pursuing such a research agenda has the potential to improve the efficiency and effectiveness of implementation efforts and to improve the reach and public health impact of evidence-based treatments and practices.
Preparation of this paper was supported by National Institute of Mental Health grants R21MH082731 (PI: Aarons), R21MH098124 (PI: Ehrhart), R01MH072961 (PI: Aarons), P30MH074678 (PI: Landsverk), R25MH080916 (PI: Proctor), and by the Child and Adolescent Services Research Center (CASRC) and the Center for Organizational Research on Implementation and Leadership (CORIL). The authors thank the community-based organizations, clinicians, and supervisors that made this study possible.
The Implementation Leadership Scale (ILS) and scoring instructions are available from GAA at no cost or may be obtained as additional files accompanying this article.
- Hoagwood K: Family-based services in children’s mental health: a research review and synthesis. J Child Psychol Psyc. 2005, 46: 690-713. 10.1111/j.1469-7610.2005.01451.x.
- Aarons GA, Hurlburt M, Horwitz SM: Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Hlth. 2011, 38: 4-23. 10.1007/s10488-010-0327-7.
- Magnabosco JL: Innovations in mental health services implementation: a report on state-level data from the U.S. evidence-based practices project. Implement Sci. 2006, 1: 1-11. 10.1186/1748-5908-1-1.
- Chaffin M: Organizational culture and practice epistemologies. Clin Psychol-Sci Pr. 2006, 13: 90-93. 10.1111/j.1468-2850.2006.00009.x.
- Kessler ML, Gira E, Poertner J: Moving best practice to evidence-based practice in child welfare. Fam Soc. 2005, 86: 244-250. 10.1606/1044-3894.2459.
- Aarons GA, Sommerfeld DH: Leadership, innovation climate, and attitudes toward evidence-based practice during a statewide implementation. J Am Acad Child Psy. 2012, 51: 423-431. 10.1016/j.jaac.2012.01.018.
- Damanpour F, Schneider M: Phases of the adoption of innovation in organizations: effects of environment, organization and top managers. Brit J Manage. 2006, 17: 215-236. 10.1111/j.1467-8551.2006.00498.x.
- Jung DI, Chow C, Wu A: The role of transformational leadership in enhancing organizational innovation: hypotheses and some preliminary findings. Leadership Quart. 2003, 14: 525-544. 10.1016/S1048-9843(03)00050-X.
- Gumusluoglu L, Ilsev A: Transformational leadership, creativity, and organizational innovation. J Bus Res. 2009, 62: 461-473. 10.1016/j.jbusres.2007.07.032.
- Scott SG, Bruce RA: Determinants of innovative behavior: a path model of individual innovation in the workplace. Acad Manage J. 1994, 37: 580-607. 10.2307/256701.
- Aarons GA: Transformational and transactional leadership: association with attitudes toward evidence-based practice. Psychiatr Serv. 2006, 57: 1162-1169. 10.1176/appi.ps.57.8.1162.
- Michaelis B, Stegmaier R, Sonntag K: Shedding light on followers’ innovation implementation behavior: the role of transformational leadership, commitment to change, and climate for initiative. J Manage Psychol. 2010, 25: 408-429. 10.1108/02683941011035304.
- Aarons GA, Horowitz JD, Dlugosz LR, Ehrhart MG: The role of organizational processes in dissemination and implementation research. Dissemination and Implementation Research in Health: Translating Science to Practice. Edited by: Brownson RC, Colditz GA, Proctor EK. 2012, New York, NY: Oxford University Press.
- Harvey G, Fitzgerald L, Fielden S, McBride A, Waterman H, Bamford D, Kislov R, Boaden R: The NIHR collaboration for leadership in applied health research and care (CLAHRC) for Greater Manchester: combining empirical, theoretical and experiential evidence to design and evaluate a large-scale implementation strategy. Implement Sci. 2011, 6: 96. 10.1186/1748-5908-6-96.
- Kyratsis Y, Ahmad R, Holmes A: Making sense of evidence in management decisions: the role of research-based knowledge on innovation adoption and implementation in healthcare. Study protocol. Implement Sci. 2012, 7: 22. 10.1186/1748-5908-7-22.
- Weiner BJ: A theory of organizational readiness for change. Implement Sci. 2009, 4: 67. 10.1186/1748-5908-4-67.
- Michaelis B, Stegmaier R, Sonntag K: Affective commitment to change and innovation implementation behavior: the role of charismatic leadership and employees’ trust in top management. J Change Manage. 2009, 9: 399-417. 10.1080/14697010903360608.
- Aarons GA, Ehrhart MG, Farahnak LR, Hurlburt M: Leadership and organizational change for implementation (LOCI): a mixed-method pilot study of a leadership and organization development intervention for evidence-based practice implementation. In review.
- Sloan R, Gruman J: Participation in workplace health promotion programs: the contribution of health and organizational factors. Health Educ Behav. 1988, 15: 269-288. 10.1177/109019818801500303.
- Aarons GA, Sommerfeld DH, Willging CE: The soft underbelly of system change: the role of leadership and organizational climate in turnover during statewide behavioral health reform. Psychol Serv. 2011, 8: 269-281.
- Bain PG, Mann L, Pirola-Merlo A: The innovative imperative: the relationships between team climate, innovation, and performance in research and development teams. Small Group Res. 2001, 32: 55-73. 10.1177/104649640103200103.
- Kinjerski V, Skrypnek BJ: The promise of spirit at work: increasing job satisfaction and organizational commitment and reducing turnover and absenteeism in long-term care. J Gerontol Nurs. 2008, 34: 17-25. 10.3928/00989134-20081001-03.
- Hill N, Seo M, Kang J, Taylor M: Building employee commitment to change across organizational levels: the influence of hierarchical distance and direct managers’ transformational leadership. Organ Sci. 2012, 23: 758-777. 10.1287/orsc.1110.0662.
- Hong Y, Liao H, Hu J, Jiang K: Missing link in the service profit chain: a meta-analytic review of the antecedents, consequences, and moderators of service climate. J Appl Psychol. 2013, 98: 237-267.
- Schneider B, Ehrhart MG, Mayer DM, Saltz JL, Niles-Jolly K: Understanding organization-customer links in service settings. Acad Manage J. 2005, 48: 1017-1032. 10.5465/AMJ.2005.19573107.
- Barling J, Loughlin C, Kelloway EK: Development and test of a model linking safety-specific transformational leadership and occupational safety. J Appl Psychol. 2002, 87: 488-496.
- Zohar D: Modifying supervisory practices to improve subunit safety: a leadership-based intervention model. J Appl Psychol. 2002, 87: 156-163.
- Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, Glass JE, York JL: A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012, 69: 123-157. 10.1177/1077558711430690.
- Leeman J, Baernholdt M, Sandelowski M: Developing a theory-based taxonomy of methods for implementing change in practice. J Adv Nurs. 2007, 58: 191-200. 10.1111/j.1365-2648.2006.04207.x.
- Massoud M, Nielsen G, Nolan K, Nolan TW, Schall MW, Sevin C: A Framework for Spread: From Local Improvements to System-wide Change. 2006, Cambridge, MA: Institute for Healthcare Improvement.
- Wensing M, Bosch MC, Grol R: Selecting, tailoring, and implementing knowledge translation interventions. Knowledge Translation in Health Care: Moving from Evidence to Practice. Edited by: Straus S, Tetroe J, Graham ID. 2009, Oxford, England: Wiley-Blackwell, 94-113.
- Klein KJ, Conn AB, Sorra JS: Implementing computerized technology: an organizational analysis. J Appl Psychol. 2001, 86: 811-824.
- Schein E: Organizational Culture and Leadership. 2010, San Francisco, CA: John Wiley and Sons.
- Bass BM, Avolio BJ: MLQ: Multifactor Leadership Questionnaire (Technical Report). 1995, Binghamton University, NY: Center for Leadership Studies.
- Patterson MG, West MA, Shackleton VJ, Dawson JF, Lawthom R, Maitlis S, Robinson DL, Wallace AM: Validating the organizational climate measure: links to managerial practices, productivity and innovation. J Organ Behav. 2005, 26: 379-408. 10.1002/job.312.
- Quinn R, Rohrbaugh J: A spatial model of effectiveness criteria: towards a competing values approach to organizational analysis. Manage Sci. 1983, 29: 363-377. 10.1287/mnsc.29.3.363.
- Fabrigar LR, Wegener DT, MacCallum RC, Strahan EJ: Evaluating the use of exploratory factor analysis in psychological research. Psychol Methods. 1999, 4: 272-299.
- Zwick WR, Velicer WF: Comparison of five rules for determining the number of components to retain. Psychol Bull. 1986, 99: 432-442.
- Patil VH, Singh SN, Mishra S, Donovan T: Efficient theory development and factor retention criteria: a case for abandoning the ‘Eigenvalue Greater Than One’ criterion. J Bus Res. 2008, 61: 162-170. 10.1016/j.jbusres.2007.05.008.
- Horn JL: A rationale and test for the number of factors in factor analysis. Psychometrika. 1965, 30: 179-185. 10.1007/BF02289447.
- Dunn G, Everitt B, Pickles A: Modeling Covariances and Latent Variables Using EQS. 1993, London: Chapman & Hall/CRC Press.
- Hu L-T, Bentler PM: Fit indices in covariance structure modeling: sensitivity to underparameterized model misspecification. Psychol Methods. 1998, 3: 424-453.
- Hu L-T, Bentler PM: Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling. 1999, 6: 1-55. 10.1080/10705519909540118.
- Kelloway EK: Using LISREL for Structural Equation Modeling: A Researcher’s Guide. 1998, Thousand Oaks, CA: Sage.
- Guo Q, Li F, Chen X, Wang W, Meng Q: Performance of fit indices in different conditions and selection of cut-off values. Acta Psychologica Sinica. 2008, 40: 109-118. 10.3724/SP.J.1041.2008.00109.
- Shrout PE, Fleiss JL: Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979, 86: 420-428.
- Cochran WG: Sampling Techniques. 1977, New York: Wiley and Sons, 3rd edition.
- James LR, Demaree RG, Wolf G: Estimating within-group interrater reliability with and without response bias. J Appl Psychol. 1984, 69: 85-98.
- James LR, Demaree RG, Wolf G: rwg: an assessment of within-group interrater agreement. J Appl Psychol. 1993, 78: 306-309.
- Brown RD, Hauenstein NMA: Interrater agreement reconsidered: an alternative to the rwg indices. Organ Res Methods. 2005, 8: 165-184. 10.1177/1094428105275376.
- Lagomasino IT, Zatzick DF, Chambers DA: Efficiency in mental health practice and research. Gen Hosp Psychiat. 2010, 32: 477-483. 10.1016/j.genhosppsych.2010.06.005.
- Chambers DA, Wang PS, Insel TR: Maximizing efficiency and impact in effectiveness and services research. Gen Hosp Psychiat. 2010, 32: 453-455. 10.1016/j.genhosppsych.2010.07.011.
- Kouzes JM, Posner BZ: The Leadership Challenge. 2007, San Francisco, CA: Jossey-Bass.
- Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons GA, Bunger A, Griffey R, Hensley M: Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Hlth. 2011, 38: 65-76. 10.1007/s10488-010-0319-7.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.