Measuring organizational and individual factors thought to influence the success of quality improvement in primary care: a systematic review of instruments
Implementation Science volume 7, Article number: 121 (2012)
Continuous quality improvement (CQI) methods are widely used in healthcare; however, the effectiveness of the methods is variable, and evidence about the extent to which contextual and other factors modify effects is limited. Investigating the relationship between these factors and CQI outcomes poses challenges for those evaluating CQI, among the most complex of which relate to the measurement of modifying factors. We aimed to provide guidance to support the selection of measurement instruments by systematically collating, categorising, and reviewing quantitative self-report instruments.
Data sources: We searched MEDLINE, PsycINFO, and Health and Psychosocial Instruments; reference lists of systematic reviews; and citations and references of the main reports of instruments.
Study selection: The scope of the review was determined by a conceptual framework developed to capture factors relevant to evaluating CQI in primary care (the InQuIRe framework). Papers reporting development or use of an instrument measuring a construct encompassed by the framework were included. Data extracted included instrument purpose; theoretical basis, constructs measured and definitions; and development methods and assessment of measurement properties.
Analysis and synthesis: We used qualitative analysis of instrument content and our initial framework to develop a taxonomy for summarising and comparing instruments. Instrument content was categorised using the taxonomy, illustrating coverage of the InQuIRe framework. Methods of development and evidence of measurement properties were reviewed for instruments with potential for use in primary care.
We identified 186 potentially relevant instruments, 152 of which were analysed to develop the taxonomy. Eighty-four instruments measured constructs relevant to primary care, with content measuring CQI implementation and use (19 instruments), organizational context (51 instruments), and individual factors (21 instruments). Forty-one instruments were included for full review. Development methods were often pragmatic, rather than systematic and theory-based, and evidence supporting measurement properties was limited.
Many instruments are available for evaluating CQI, but most require further use and testing to establish their measurement properties. Further development and use of these measures in evaluations should increase the contribution made by individual studies to our understanding of CQI and enhance our ability to synthesise evidence for informing policy and practice.
Continuous quality improvement (CQI) approaches are prominent among strategies to improve healthcare quality. Underpinned by a philosophy that emphasises widespread engagement in improving the systems used to deliver care, CQI teams use measurement and problem solving to identify sources of variation in care processes and test potential improvements. The use of iterative testing (plan-do-study-act cycles) by QI teams to design and implement an evidence-based model of depression care is one example [1, 2]. CQI methods have been used as the main strategy in organisation-wide quality improvement (QI) efforts [3–5], as a tool for implementing specific models of care [1, 6], and as the model for practice change in QI collaboratives. Investment in CQI-related education reflects the increasing emphasis on these methods, with inclusion of QI cycles as modules in continuing medical education curricula and incorporation of CQI principles as core competencies for graduate medical education.
Despite this widespread emphasis on CQI, research is yet to provide clear guidance for policy and practice on how to implement and optimise the methods in healthcare settings. Evidence of important effects and the factors that modify effects in different contexts remains limited [3, 4, 10–12]. This is particularly the case for primary care, where far less research has been conducted on CQI than in hospital settings [12, 13]. Recent calls to address gaps in knowledge have focused on the need for methodological work to underpin evaluations of QI interventions [14–16]. Priority areas include theory development to explain how CQI works and why it may work in some contexts and not others, and the identification of valid and reliable measures to enable theories to be tested.
The extent to which specific contextual factors influence the use of CQI methods and outcomes in different settings is not well understood [4, 11, 17, 18]. Measuring organizational context in CQI evaluations is key to understanding the conditions for success, and for identifying factors that could be targeted by CQI implementation strategies to enhance uptake and effectiveness [3, 11, 17]. In intervention studies, measuring these factors as intermediate outcomes permits investigation of the mechanisms by which CQI works. Measuring the extent to which CQI methods are used in practice is uncommon in evaluative studies, but provides important data for interpreting effects. Complex interventions such as CQI are not easily replicated or implemented in a way that ensures intervention components are used as intended. Moreover, adaptation to fit the local context may be necessary [17, 20]. Measures that capture the implementation and use of CQI interventions are required to assess whether observed effects (or the absence thereof) can be attributed to the intervention. These measures of intervention fidelity also permit assessment of the extent to which individual intervention components contribute to effects and whether changes to the intervention have an important influence on effects [17, 20, 21].
Investigating the relationship between context, use of CQI, and outcomes poses practical and methodological challenges for researchers. These challenges include determining which factors to measure and selecting suitable measurement instruments from a large and complex literature. Variability in how contextual factors have been defined and measured adds to these challenges and limits the potential to compare and synthesise findings across studies.
In this paper, we report a systematic review of instruments measuring organizational, process, and individual-level factors thought to influence the success of CQI. This review is part of a larger project that aims to aid the evaluation of CQI in primary care by providing guidance on factors to include in evaluations and the measurement of these factors. The project includes a measurement review (reported in two parts; this paper and a companion review focussing on team-level measures) and development of a conceptual framework, the Informing Quality Improvement Research (InQuIRe) in primary care framework. Our initial framework is included in this paper to illustrate the scope of the measurement review and as the basis for assessing the coverage of available instruments. Our analysis of instruments is used to integrate new factors and concepts into the framework. These refinements are reported as taxonomies in the measurement review papers. The development and content of the final InQuIRe framework will be reported in full in a separate publication.
The specific objectives of the measurement review reported in this paper are to: identify measures of organizational, CQI process, and individual factors thought to modify the effect of CQI; determine how the factors measured have been conceptualised in studies of QI and practice change; develop a taxonomy for categorising instruments based on our initial framework and new concepts arising from the measurement review; use the taxonomy to categorise and compare the content of instruments, enabling assessment of the coverage of instruments for evaluating CQI in primary care; and appraise the methods of development and testing of existing instruments, and summarise evidence of their validity, reliability, and feasibility for measurement in primary care settings.
Scope of the review—the InQuIRe framework
Figure 1 depicts the first version of our InQuIRe framework, which we used to set the scope of this measurement review. Development of the InQuIRe framework was prompted by the absence of an integrated model of CQI theory for informing the design of evaluations in primary care. The version of InQuIRe presented in Figure 1 reflects our initial synthesis of CQI theory, models, and frameworks. It aims to capture the breadth of factors that could be measured when evaluating CQI in primary care settings.
The starting point for our synthesis was the landmark papers that spurred the adoption of CQI in healthcare (e.g., [22–29]). From these sources, we identified recurrent themes about the core components of CQI and how it was expected to work. We used snowballing methods to uncover the main bodies of research (including reviews) and prevailing theory on CQI in healthcare. This literature focussed on large organizational settings (e.g., [10, 30–34]), with few models for primary care and limited consideration of team-level factors in CQI theory (exceptions include [1, 35, 36]). We therefore extended our search to identify more general models or theories of QI, practice change, and innovation relevant to primary care (examples in primary care are Cohen’s model for practice change based on complexity theory, Orzano’s model of knowledge management, and Rhydderch’s analysis of organizational change theory; in other settings [40–42]), and review articles on teamwork theory (e.g., [43–47]). Factors salient to CQI in primary care were collated and grouped thematically to identify content for our framework. O’Brien and Shortell’s model of organizational capability for QI and Solberg’s conceptual framework for practice improvement were among the few models that integrated findings across CQI studies to describe relationships between context and outcomes. We used these models as the initial basis for our framework, integrating findings from our thematic analysis of other sources.
To structure our framework, we adopted the inputs-process-outputs (IPO) model that is widely used in research on teams. Although it simplifies the relationship between variables, the IPO model depicts variables in a way that supports the design and interpretation of longitudinal studies. Reporting available instruments using this structure illustrates how the instruments included in this review could be incorporated in an evaluation of the effects of CQI. Contextual factors thought to influence CQI process and outcomes are presented as antecedents of organizational readiness for change. Organizational readiness, defined here as collective capability and motivation for an imminent change, is hypothesised to mediate the effects of contextual factors on CQI process and outcomes. This is consistent with the view that organizational readiness should be delineated from other contextual factors that make an organisation receptive to change, but which do not reflect an organisation’s readiness to engage in a specific change [41, 49, 50]. Contextual factors that are potentially modifiable are depicted in the framework as both antecedents and proximal outcomes. These factors may be modified by participation in the CQI process itself or by methods used to implement CQI (e.g., improving motivation for CQI by using opinion leaders). In turn, proximal outcomes may mediate the effect of CQI process on more distal outcomes (e.g., structural changes to the process of care, and provider adherence to these changes). Our concept of CQI process focuses on the use of CQI methods most salient to primary care settings. These methods are reflected in Weiner’s operational definition of CQI: ‘use of cross-functional teams to identify and solve quality problems, use of scientific methods and statistical tools by these teams to monitor and analyse work processes, and use of process-management tools …’.
This review focuses on instruments relevant to three domains of the InQuIRe framework (shaded in white and numbered one to three as follows). Broadly, these cover: (1) CQI implementation and use (i.e., measures of the process used to implement CQI and the fidelity with which CQI methods are used); (2) organizational context (e.g., technical capability for CQI and organizational culture); and (3) individual level factors (e.g., knowledge and beliefs about CQI). Figure 2 illustrates terms used throughout the review, with an example from the taxonomy (see Additional file 1 for a glossary of these and other terms).
Methods for the review of measurement instruments are not well established. Figure 3 summarises the stages of this review. Searching and screening (stage one) followed general principles for the conduct of systematic reviews, while data analysis and synthesis methods (stages two to four) were developed to address the objectives of this review. The methods used at each stage of the review are described below.
Stage one: searching and initial screening
Data sources and search methods
To identify papers reporting potentially relevant instruments, we searched MEDLINE (from 1950 through December 2010), PsycINFO (from 1967 through December 2008), and Health and Psychosocial Instruments (HaPI) (from 1985 through September 2008) using controlled vocabulary (thesaurus terms and subject headings) and free-text terms for quality improvement and practice change. Scoping searches were used to test terms (e.g., to test retrieval of known reports of instruments) and gauge the likely yield of references. Further details about the scoping searches and the final set of search terms are reported in Additional file 2. Searches were limited to articles published in English.
Reports of potentially relevant instruments were also identified from systematic reviews retrieved by the database searches and from other sources. These included systematic reviews of measurement instruments and systematic reviews of QI studies (e.g., reviews of observational studies measuring factors thought to influence QI outcomes).
Snowballing techniques were used to trace the development and use of instruments and to identify related conceptual papers. We identified the main publication(s) reporting initial development of instruments, screened the reference lists of these studies, and conducted citation searches in the ISI Web of Science citation databases or Scopus for more recent publications. Snowballing searches were limited to the subset of instruments included in stage four of the review.
Selection of studies for initial inclusion in the review
Titles and abstracts were screened to identify studies for inclusion in the review. Clearly irrelevant papers were excluded and the full text of potentially relevant studies was retrieved and screened for inclusion by one author (SB). Criteria for the selection of studies included in stage two are reported in Figure 3.
Stage two: development of taxonomy for categorising instruments
One review author (SB) extracted data from included studies for all three stages. To ensure consistent interpretation of the data extraction guidance and consistency of data extraction, a research assistant extracted data from a subsample of included studies (18 instruments, comprising 10% of the instruments included in stage two and 25% of the instruments included in stage four). Data extracted at stage two are summarised in Table 1. These data included information on the purpose and format of the instrument, and data to facilitate analysis and categorisation of the content of each instrument (constructs measured, construct definitions, theoretical basis of the instrument).
Methods for developing the taxonomy were based on the framework approach for qualitative data analysis. This approach combines deductive methods (commencing with concepts and themes from an initial framework) with inductive methods (based on themes that emerge from the data). The first version of the InQuIRe framework (Figure 1) was the starting point for the taxonomy, providing its initial structure and content. Content analysis of the instruments included in stage two was used to identify factors that were missing from our initial framework (and hence the taxonomy) (e.g., we added commitment, goals, and motivation as organisation-level constructs, when in our initial framework they were included only at the individual level) and to determine how factors had been conceptualised. The initial taxonomy was revised to incorporate new factors and prevailing concepts (e.g., we separated dimensions of climate that were prevalent in instruments specific to QI (e.g., emphasis on process improvement) from more general dimensions of climate (e.g., cooperation)). Using this approach enabled us to ensure the taxonomy provided a comprehensive representation of relevant factors.
Instruments confirmed as relevant to one or more of the three domains of our framework were included for content analysis. At this stage, we were aiming to capture the breadth of constructs relevant to evaluating CQI. Hence, we included all measures of potentially relevant constructs irrespective of whether item content was suitable for primary care. The content of each instrument (items, subscales), and associated construct definitions, was compared with the initial taxonomy. Instrument content that matched constructs in the taxonomy was summarised using existing labels. The taxonomy was expanded to include missing constructs and new concepts, initially using the labels and descriptions reported by the instrument developers.
To ensure the taxonomy was consistent with the broader literature, we reviewed definitions extracted from review articles and conceptual papers identified from the search. We also searched for and used additional sources to define constructs when included studies did not provide a definition, a limited number of studies contributed to the definition, or the definition provided appeared inconsistent with the initial framework or with that in other included studies. Following analysis of all instruments and supplementary sources, related constructs were grouped in the taxonomy (as illustrated in Figure 2). Overlapping constructs were then collapsed, distinct constructs were assigned a label that reflected the QI literature, and the dimensions of constructs were specified to create the final taxonomy.
Stage three: categorisation of instrument content
Criteria for the selection of the subset of instruments included in stage three are reported in Figure 3. Categorisation of instrument content was primarily based on the final set of items reported in the main report(s) for each instrument. Construct definitions and labels assigned to scales guided but did not dictate categorisation, because labels were highly varied and often not a good indicator of instrument content (e.g., authors used the following construct labels for very similar measures of QI climate: organizational culture that supports QI, organizational commitment to QI, QI implementation, degree of CQI maturity, quality management orientation, and continuous improvement capability). Instrument content was summarised in separate tables for each of the content domains from the InQuIRe framework: (1) CQI implementation and use, (2) organizational context, and (3) individual level factors.
Stage four: assessment of measurement properties
Criteria for the selection of instruments included in stage four are reported in Figure 3.
We extracted information about the development of the instrument and the assessment of its measurement properties from the main and secondary reports. Secondary reports were restricted to studies of greatest relevance to the review, focussing on studies of CQI, QI, or change in primary care. Table 2 summarises the data extracted at stage four. Extracted data were summarised and tabulated, providing a brief description of the methods and findings of the assessments of most relevance to this review.
Appraisal of evidence supporting measurement properties
Studies included in stage four were appraised using the COSMIN (COnsensus-based Standards for the selection of health status Measurement Instruments) checklist. The COSMIN checklist focuses on the appraisal of the methods used during instrument development and testing, not on the measurement properties of the instrument itself. The COSMIN criteria were intended for studies reporting instruments for the measurement of patient-reported outcomes; however, we were unable to identify equivalent appraisal criteria for organizational measures. The checklist has strong evidence for its content validity, having been derived from a systematic review and an international consensus process to determine its content, terminology, and definitions [62, 64]. The terminology and definitions in the COSMIN checklist closely match those adopted by the Joint Committee on Standards for Educational and Psychological Testing, indicating their relevance to measures other than health outcomes. Based on guidance from the organizational science and psychology literature [66–68], and other reviews of organizational measures (e.g., [40, 63, 69]), we added a domain to address issues associated with the measurement of collective constructs (level of analysis) and a criterion to the content validity domain. The appraisal criteria are reported in Additional file 3.
Most instruments had undergone limited testing of their measurement properties and, where properties had been tested, there was often limited reporting of the information required to complete the checklist. Because of the sparse data, for each instrument we tabulated a summary of the extent of evidence available for each property and a description of the instrument’s development and testing. We used appraisal data to provide an overall summary of the methods used to develop and test the measurement properties of instruments included in the review.
Summary of initial screening process for the review
Figure 4 summarises the flow of studies and instruments through the review. A total of 551 articles were included for full-text review. This included eight systematic reviews of instruments (two on readiness for change [40, 69], three on organizational culture [63, 70, 71], and one each on quality improvement implementation, organizational assessment, and organizational learning), and five systematic reviews of observational or effectiveness studies [12, 75–78]. Ninety-one articles were identified from the systematic reviews of instruments, and 60 articles were identified from the other reviews.
Of the 313 papers included for the first stage of data extraction, the majority reported studies in healthcare settings (n = 225), 83 of which were in primary care. Of the included papers, 62 had development of an instrument as their primary aim. Observational designs were most commonly reported in the other papers (n = 196), encompassing simple descriptive studies through to testing of theoretical models. Experimental designs were reported in 25 papers, of which five were randomised trials (four in primary care), one was a stepped-wedge time series design, and the balance were before-after designs. The remaining papers were conceptual, including qualitative studies and descriptive papers.
Identification of unique instruments
Individual papers reported between one and four potentially relevant instruments, collectively providing 352 unique reports of the development or use of an instrument. One hundred and eighty-six unique, potentially relevant instruments were identified, 34 of which were excluded following initial analysis of the content of all instruments (constructs measured, items), resulting in 152 instruments for review. Reasons for exclusion are reported in Figure 4. Identification of the main and secondary reports required direct comparison of items with previously identified instruments, because the same instrument was attributed to different sources or to no source reference, different names were used for the same instrument, and changes were made to content without being reported. Most instruments were unnamed or had multiple names reported in the literature. We therefore used the first author’s name and year from the index paper to name the instrument (reflected in text and tables, e.g., Solberg 2008). The index paper for the instrument was typically the first of the main reports.
Development of taxonomy and categorisation of instrument content
In stage two, the content of 152 instruments was analysed to inform development of the taxonomy. The breadth of constructs measured and the diversity of items used to operationalise constructs were large. Of the 152 instruments, 28 were initially categorised as measuring use or implementation of CQI; 101 measured attributes of organizational context (31 context for CQI or total quality management (TQM), 46 context for any change or improvement, 17 organizational culture, 7 generic context); 23 measured organizational or individual readiness for change; and 25 measured individual level factors (some instruments covered more than one domain, hence the total sums to >152). The taxonomy incorporates additional constructs identified during the analysis, makes explicit the dimensions within constructs, and includes some changes to terminology to reflect existing instruments.
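As a simple consistency check on the counts above, the domain totals can be tallied to make the overlap explicit (a sketch using only figures reported in this review; the shortened category names are informal abbreviations, not taxonomy labels):

```python
# Stage-two domain counts for the 152 instruments analysed. The totals sum to
# more than 152 because some instruments covered more than one domain.
domain_counts = {
    "CQI use or implementation": 28,
    "organizational context": 101,
    "readiness for change": 23,
    "individual factors": 25,
}

# Organizational context is itself reported as four sub-categories.
context_subcounts = [31, 46, 17, 7]  # CQI/TQM, any change, culture, generic
assert sum(context_subcounts) == domain_counts["organizational context"]

total_assignments = sum(domain_counts.values())
overlap = total_assignments - 152  # domain assignments beyond one per instrument
print(total_assignments, overlap)  # 177 25
```

The sub-category figures reconcile exactly with the organizational context total, and the 25 extra assignments quantify how often instruments spanned more than one domain.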
At stage three, 84 instruments were categorised using the taxonomy. We included instruments confirmed as measuring a relevant construct with item wording suitable for primary care (41 instruments). These included instruments requiring minor rewording (e.g., ‘hospital’ to ‘practice’). For constructs not adequately covered by suitable instruments, we included instruments with potential for adaptation (43 instruments). Sixty-eight instruments were excluded from stage three because the instrument content was unsuitable for evaluating QI in primary care (n = 48) or the authors reported only a subset of items from the instrument (e.g., example questions) (n = 20). Instruments judged as unsuitable were those with content intended for a specific context of use (e.g., Snyder-Halpern’s instrument measuring readiness for nursing research programs; Lin’s instrument measuring climate for implementing the chronic care model), with content intended for large, differentiated settings (e.g., Saraph’s instrument measuring quality management factors), or with content adequately covered by more suitable instruments (e.g., Chen’s instrument measuring generalised self-efficacy was excluded because we identified multiple instruments measuring self-efficacy for CQI (categorised as beliefs about capability)).
The categorisation of instruments is presented in Additional file 4: Tables S3–S6. Each table covers a separate domain of the InQuIRe framework. Instruments are grouped by setting to illustrate whether they have been used in primary care or whether their use has been limited to other settings. The tables enable comparison across instruments and give an overall picture of coverage of the framework. Instruments vary, however, in how comprehensively they measure individual constructs: some provide comprehensive measures (e.g., Oudejans 2011 includes 36 items in 5 scales measuring capacity for learning), while others include only one or two items (e.g., Apekey 2011 includes 3 items measuring capacity for learning). Instruments also vary considerably in item wording, influencing their suitability for different purposes. For example, some instruments ask about prior experience of change (i.e., retrospective measurement) while others refer to an imminent change (i.e., prospective measurement). In the next section, we include a brief description of each domain of the InQuIRe framework and highlight instruments that provide good coverage of specific constructs. Used in conjunction with the results tables, this information can help guide the selection of instruments.
Content and coverage of domains of the InQuIRe framework
1) CQI implementation and use
Additional file 4: Table S3 reports the final taxonomy and categorisation of instrument content for the CQI implementation and use domain (boxes numbered ‘1’ in InQuIRe framework).
Description of the CQI implementation and use domain
This domain covers the process used to implement CQI (e.g., training in the use of plan-do-study-act (PDSA) cycles, facilitation to help teams apply QI tools, influencing acceptability of CQI as a method for change), organisation-wide use of CQI methods (e.g., process improvement, use of teams for QI), and the use of CQI methods by QI teams (e.g., planning and testing changes on a small scale, as done in PDSA cycles). We adopted Weiner’s operational definition of CQI methods: ‘use of cross-functional teams to identify and solve quality problems, use of scientific methods and statistical tools by these teams to monitor and analyse work processes, and use of process-management tools …’. Instruments that focussed on organizational policies or practices used to support CQI (e.g., leadership practices) were categorised under organizational context (boxes numbered ‘2’ in the InQuIRe framework). We view these instruments as measures of climate for QI rather than measures of the use of CQI methods. This is in line with prevailing definitions of climate as ‘the policies, practices, and procedures as well as the behaviours that get rewarded, supported, and expected in a work setting’. It is also consistent with recent attempts to identify an operational definition for CQI interventions, which focused on CQI methods such as the use of data to design changes.
Instrument content was categorised as CQI implementation process, organisation-wide use of CQI methods, and use of CQI methods by QI teams. Our concept of the CQI implementation process extends our initial framework by drawing on the analysis of instrument content and review articles (key reviews were [87, 88]). Organisation-wide use of CQI methods covers indicators of the use of CQI methods across an organisation [52, 89]. The use of CQI methods by QI teams encompasses the main components of CQI depicted in our initial framework (e.g., setting aims, structured problem solving, data collection and analysis, use of QI tools) [51, 90–92].
Measures of the CQI implementation process
Duckers 2008 and Schouten 2010 included items measuring methods used to implement CQI, with Schouten 2010 providing a more comprehensive measure of training, facilitation, and opinion leader support. Three instruments measured processes used to implement change, but these were not specific to CQI (Gustafson 2003, Helfrich 2009, and Øvretveit 2004). These instruments were primarily included in the review as measures of organizational context; however, their content is relevant to measuring the process used to implement CQI, and the theoretical basis of these instruments is strong.
Measures of the use of CQI methods – organizational and team level
Barsness 1993 was the most widely used indicator of organisation-wide use of CQI methods (e.g., [97, 98]), and the only instrument suitable for smaller healthcare settings. Of the instruments included as measures of the use of CQI methods at team level, most involved dichotomous responses indicating whether methods were used or not, or ratings of the frequency of use of CQI methods (e.g., Solberg 1998, Lemieux-Charles 2002, Apekey 2011). We did not identify any comprehensive self-report instruments for measuring the fidelity with which CQI methods are used, such as measures of the intensity of use of CQI methods. Alemni 2001 was the most comprehensive measure of fidelity, but included response formats that would require modification for use as a quantitative scale. Two instruments developed for QI collaboratives measured the use of CQI methods (Duckers 2008 and Schouten 2010), of which Schouten 2010 was the most comprehensive.
2) Organizational context
Additional file 4: Tables S4 and S5 report the final taxonomy and categorisation of instrument content for the organizational context domain (boxes numbered ‘2’ in the InQuIRe framework).
Description of the organizational context domain
We included instruments in this domain if they measured perceptions of organizational: capability; commitment, goals and motivation; climate for QI or change; generic climate; culture; leadership for QI; resources, supporting systems and structure; and readiness for change. Our concept of each of these categories is reflected in the taxonomy in Additional file 4: Table S4. Capability and commitment reflect perceptions of the collective expertise and motivation to undertake CQI [18, 30, 48]. We distinguish organizational climate (defined in the previous section) from culture, the latter reflecting ‘… core values and underlying ideologies and assumptions …’ in an organisation. In the taxonomy, we delineate dimensions of climate for QI and change reflecting models of QI (e.g., [18, 30, 37, 48]). We adopt a broad definition of leadership for QI, ‘any person, group or organisation exercising influence’, including formal and informal leaders at all levels.
Measures of capability and commitment
Organisation-level measures of capability and commitment to using CQI methods were uncommon, despite their potential importance as indicators of organizational readiness for CQI. A number of instruments were labelled as measures of organizational commitment to CQI; however, these instruments focussed on practices and policies that reflected management commitment rather than the collective commitment of staff within the organisation. Although not specific to CQI, several instruments designed for primary care included items measuring capability for change and commitment to change (e.g., Bobiak 2003; Ohman-Strickland 2007). Other relevant instruments included those measuring organizational capacity for learning, a dimension of capability (for examples in primary care, see Rushmer 2007 and Sylvester 2003).
Measures of climate, culture and leadership for QI
A large number of instruments measured aspects of organizational climate for QI or change, some developed specifically for primary care (e.g., Bobiak 2003 and Ohman-Strickland 2007, both measures of change capacity). Parker 1999 and Shortell 2000 were the most comprehensive instruments developed specifically for QI (rather than change in general) and include scales measuring leadership for QI. The content of these instruments reflects the structure and processes of large organisations, and no equivalent instruments were found for primary care. Instruments described by their developers as measuring culture (e.g., Kralewski 2005), organizational learning (e.g., Marchionni 2008), and readiness for change (e.g., Helfrich 2009) often overlapped considerably in content with instruments measuring QI climate (e.g., Meurer 2002). Culture and climate are related constructs, which was reflected in similar instrument content. Instruments explicitly identified by developers as measuring culture are identified as such in Additional file 4: Table S4 (indicated by E rather than X). The wording of items in instruments measuring culture focused on values rather than policies and practices. However, most content from instruments measuring culture was categorised under generic climate because the dimensions were the same (e.g., Taveira 2003 and Zeitz 1997).
Measures of organizational readiness for change
We categorised instrument content as measuring organizational readiness for change when items or the item context referred to an imminent change (e.g., Gustafson 2003, Helfrich 2009, and Øvretveit 2004). Content designed to elicit views on change in general was included under other categories of organizational context (e.g., Lehman 2002 and Levesque 2001). Instruments that were explicitly identified by developers as measuring readiness for change are identified as such in Additional file 4: Table S4 (indicated by E rather than X).
3) Individual level factors
Additional file 4: Table S6 reports the final taxonomy and categorisation of instrument content for the individual level factors domain (boxes numbered ‘3’ in the InQuIRe framework).
Description of the individual level factors domain
Instruments were included in this domain if they measured individual: capability and empowerment for QI and change; commitment, goals, and motivation; and readiness for change. Our final taxonomy reflects frameworks for understanding individual level factors thought to influence behaviour change [113, 114], and the results of our content analysis (key sources include [115–118]). We focused on individual capabilities and beliefs hypothesised to directly impact on collective capacity for CQI.
Measures of individual level factors
Within each of the three categories, we identified instruments that referred to CQI and others that referred more generally to QI and change. We focussed on CQI-specific measures (e.g., Hill 2001; Coyle-Shapiro 2003; Geboers 2001b) or measures for which there were few organisation-level equivalents. The latter included measures of commitment to change (e.g., Fedor 2006, Herscovitch 2002a and 2002b) and readiness to change (e.g., Armenakis 2007, Holt 2007). We identified multiple measures of perceived CQI capability (e.g., Calomeni 1999, Ogrinc 2004, Solberg 1998) and knowledge ‘tests’ (e.g., Gould 2002). Overall, there were few comprehensive, theory-based measures of CQI-specific constructs. Good examples of theory-based measures were instruments measuring empowerment for QI (Irvine 1999) and motivation to use CQI methods (Lin 2000).
Instrument characteristics, development and measurement properties
In stage four, we reviewed the development and measurement properties of 41 instruments with use or the potential for use in primary care. Additional file 5: Tables S7, S8, and S9 report the main characteristics of each instrument. Each table covers a separate content domain, and the order of instruments matches that used in Additional file 4: Tables S3, S4, S5, and S6. The purpose for which the instrument was first developed and the dimensions as described by the developers are summarised. The number of items, response scale, and modified versions of the instrument are reported, together with examples of use relevant to evaluation of CQI.
Additional file 4: Table S10 gives an overview of the development and testing of measurement properties for each instrument, indicating the extent of evidence reported in the main report(s) and any other studies in relevant contexts (as referenced in Additional file 6: Tables S11, S12, and S13). The development and testing of each instrument is described in Additional file 6: Tables S11, S12, and S13.
Although most papers provided some description of the instrument content and theoretical basis, constructs were rarely defined explicitly and reference to theory was scant. Reports of instruments arising from the healthcare and psychological literatures were notably different in this respect, the latter tending to provide comprehensive operational definitions that reflected related research and theory (e.g., Armenakis 2007 and Spreitzer 1995). Formal assessments of content validity (e.g., using an expert consensus process) were uncommon (examples of comprehensive assessments include Ohman-Strickland 2007, Kralewski 2005, and Holt 2007). For most instruments, evidence of construct validity (e.g., through hypothesis testing, analysis of the instrument’s structure, or both) was derived from one or two studies, and no evidence of construct validity was found for seven of the 41 instruments. Only one study used methods based on item response theory to assess construct validity and refine the instrument (Bobiak 2009).
Most studies reported Cronbach’s alpha (a measure of internal consistency, or the ‘relatedness’ between items) for the scale or, where relevant, subscales; however, this was commonly done without first checking that the scale was unidimensional (e.g., using factor analysis to confirm that the items form a single scale and can therefore be expected to be related) [127, 128]. Very few studies reported other assessments of reliability, thus providing limited evidence of the extent to which scores reflect a true measure of the construct rather than measurement error.
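For readers less familiar with the statistic, Cronbach’s alpha is straightforward to compute from an item-level score matrix. The sketch below applies the standard formula; the function name and Likert-scale data are invented for illustration and are not drawn from any reviewed instrument:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondents' item scores
    (each inner list holds one respondent's answers to k items)."""
    k = len(items[0])
    item_vars = [variance(col) for col in zip(*items)]   # per-item sample variances
    total_var = variance([sum(row) for row in items])    # variance of the summed scale
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point Likert responses: 6 respondents, 4 items
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 5, 5],
    [1, 2, 1, 2],
]
print(round(cronbach_alpha(responses), 2))  # high alpha for these strongly related illustrative items
```

Because alpha increases with the number of items and with any shared variance among them, a high value alone does not demonstrate unidimensionality, which is why a prior factor-analytic check is recommended.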
Consideration of conceptual and analytical issues associated with measuring collective constructs (e.g., organizational climate) was limited. Few authors discussed whether they intended to measure shared views (i.e., consensus is a pre-requisite for valid measurement of the construct), the diversity of views (i.e., the extent of variation within a group is of interest), or a simple average. Consequently, it was difficult to assess if items were appropriately worded to measure the intended construct and whether subsequent analyses were consistent with the way the construct was interpreted.
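Where shared views are the intended construct, agreement and reliability statistics such as ICC(1) are commonly used to justify aggregating individual responses to the group level. A minimal sketch with hypothetical data (the function name and example values are ours):

```python
from statistics import mean

def icc1(groups):
    """ICC(1) from a one-way ANOVA: the proportion of variance in
    individual ratings attributable to group (e.g., practice) membership."""
    n_total = sum(len(g) for g in groups)
    k = n_total / len(groups)                              # mean group size
    grand = mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (len(groups) - 1)
    ms_within = ss_within / (n_total - len(groups))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical climate ratings (1-5 scale) from staff in three practices
practices = [[4, 4, 5, 4], [2, 3, 2, 2], [3, 4, 3, 3]]
print(round(icc1(practices), 2))
```

A value near zero would indicate that practice membership explains little of the variance in ratings, making a group-level mean difficult to defend as a measure of a shared view.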
Very few studies reported the potential for floor and ceiling effects, which may influence both the instrument’s reliability and its ability to detect change in a construct. None of the studies provided any guidance on what constitutes an important or meaningful change in scores on the instrument. Information about the acceptability of the instrument to potential respondents and the feasibility of measurement was provided for fewer than one quarter of instruments, with most basing assessments on response rate only. Reporting of missing items, assessment of whether items were missing at random or due to other factors, and the potential for response bias [128, 129] were addressed in only a handful of studies.
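Screening for floor and ceiling effects requires only the proportion of respondents at each scale extreme; a commonly cited rule of thumb flags proportions above roughly 15%. A hypothetical illustration (the function name and data are ours):

```python
def floor_ceiling(scores, minimum, maximum):
    """Proportion of respondents scoring at each extreme of the scale."""
    n = len(scores)
    return {
        "floor": sum(s == minimum for s in scores) / n,
        "ceiling": sum(s == maximum for s in scores) / n,
    }

# Hypothetical total scores on a 10-50 scale
scores = [50, 48, 50, 50, 37, 50, 44, 50, 29, 50]
print(floor_ceiling(scores, minimum=10, maximum=50))
```

In this invented example, 60% of respondents score at the ceiling, leaving little room for the instrument to register improvement after a CQI intervention.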
This review aimed to provide guidance for researchers seeking to measure factors thought to modify the effect of CQI in primary care settings. These factors include contextual factors at the organizational and individual level, and the implementation and use of CQI. We found many potentially relevant instruments, some reflecting pragmatic attempts to measure these factors and others the product of a systematic, theory-based instrument development process. Distinguishing the two was difficult, and the large number of factors measured, together with highly varied labelling and definition, makes the process of selecting appropriate instruments complex. Limited evidence of the measurement properties of most instruments and inconsistent findings across studies increase the complexity. We discuss these findings in more depth, focussing first on the three content domains covered by the review. We then discuss overarching considerations for researchers seeking to measure these factors and explore opportunities to strengthen the measurement methods on which CQI evaluations depend.
Measurement of CQI implementation and use
There were few self-report instruments designed to measure the implementation and use of CQI (fidelity) and most of those identified had undergone limited assessment of their measurement properties. Such measures provide important explanatory data about whether outcomes can be attributed to the intervention and the extent to which individual intervention components contribute to effects [19–21]. They can also provide guidance for implementing CQI in practice. However, there are challenges with developing these instruments.
First, there is limited consensus in the literature on what defines a CQI intervention and its components, and large variability in the content of CQI interventions across studies. In part, this is attributed to the evolution and local adaptation of CQI interventions. However, it also reflects differences in how CQI interventions are conceptualised. For this review, we adopted a definition that encompasses a set of QI methods relevant to teams in any setting, irrespective of size and structure. If we are to develop measures of CQI and accumulate evidence on its effectiveness, then it is essential to agree on the components that comprise the ‘core’ of CQI interventions and to further recent attempts to develop operational definitions of these components.
Second, measures of the use of CQI interventions need to address non-content related dimensions of intervention fidelity. Frameworks for specifying and defining these dimensions exist for health behaviour change interventions. The dimensions covered by these frameworks include intervention intensity (e.g., duration, frequency), quality of delivery, and adherence to protocols [131, 132]. In public health, frameworks such as RE-AIM include assessment of intervention reach (target population participation), implementation (quality and consistency of intervention delivery), and maintenance (use of the intervention in routine practice). These frameworks are broadly relevant, but most assume interventions are ‘delivered’ to a ‘recipient’ by an ‘interventionist’, which does not reflect how CQI interventions are used. For QI interventions, assessing ‘intensity,’ ‘dose,’ ‘depth,’ and ‘breadth’ has been recommended [134, 135]. Improved measurement will require agreement on, and definition of, the dimensions of fidelity most relevant to CQI.
Finally, the validity of self-report instruments measuring the fidelity of use of CQI methods needs to be assessed against a criterion, or gold standard, measure of actual behaviour. Measures that involve direct observation of CQI teams and expert evaluation of the CQI process are likely to be best for this purpose [131, 136], and examples of their use exist. In other contexts, behavioural observation scales (typically, scoring of the frequency of behaviours on a Likert scale) and behaviourally anchored rating scales (rating of behaviour against descriptions of desirable and undesirable behaviours) have been used to facilitate rating of teamwork behaviours by observers. While these methods are not feasible for large-scale evaluation, direct observation of CQI teams could be used to inform the development of self-report instruments and to assess their validity.
Measurement of organizational context
The wide range of potentially relevant instruments included in the review illustrates the scope of possible measures of organizational context (Additional file 4: Table S4). A positive development is the emergence of instruments for measuring context in small healthcare settings (e.g., [103, 106]), although instruments developed for large organizational settings still dominate the literature. Some have content and item wording that reflect the structure or processes of large organisations; however, there are a number of instruments suitable for small healthcare settings and others that could be adapted. A good example is the primary care organizational assessment instrument that was adapted from a well-established, theoretically sound instrument developed for the intensive care unit. Testing is required to ensure the suitability of instruments in new settings. For example, evidence is accumulating that the widely used competing values instrument for measuring organizational culture may not be suitable for discriminating culture types in primary care [140–144].
Measurement of individual level factors
A number of CQI-specific instruments measured individual level factors; among them were several instruments that had a strong theoretical basis and prior use in CQI evaluations (see Additional file 4: Table S6). The relationship between individual level factors and the outcomes of a group level process like CQI is complex [67, 145]. For example, although individual capability and motivation to participate in CQI may influence outcomes, it is unclear how these individual level factors translate to overall CQI team capability and motivation, a factor that may be more likely to predict the performance of the CQI team. Team members often combine diverse skills and knowledge; conceptually, this collective capability is not equivalent to the average of individual members’ capability. This underscores the importance of ensuring item wording and methods of analysis reflect the conceptualisation of the construct and, in turn, the level at which inferences are to be made [67, 146]. Particular care may be required when using and interpreting individual level measures in relation to a collective process such as CQI.
Key considerations for researchers
The review findings highlighted two areas in particular that need careful consideration: the implications of using existing instruments versus developing new ones; and ensuring constructs and associated measures are clearly specified.
Implications of using existing instruments versus developing new ones
Given the recent emphasis on measuring context in evaluations of quality improvement, and the concomitant proliferation of new measurement instruments, the value of using existing instruments needs to be emphasised. Streiner and Norman caution against researchers’ tendency to ‘dismiss existing scales too lightly, and embark on the development of a new instrument with an unjustifiably optimistic and naïve expectation that they can do better’. By using existing instruments, researchers capitalise on prior theoretical work and testing that ensures an instrument measures the intended construct and is able to detect changes in the construct or differences between groups. The use of existing instruments can therefore strengthen the findings of individual studies and help accumulate evidence about the instrument itself. The latter is important because many instruments in this review have very little evidence supporting their measurement properties, and less still in contexts relevant to the evaluation of CQI in primary care. Investing in new instruments rather than testing existing scales fragments efforts to develop a suite of well-validated measures that could serve as a core set of measures for QI evaluation.
Using existing instruments also increases the potential for new studies to contribute to accumulated knowledge. Many of the ‘theories’ about the influence of specific contextual factors on the use and outcomes of CQI come from the findings of one or two studies, or have not been tested [12, 134]. In organizational psychology, meta-analysis is widely used to investigate the association between contextual factors and outcomes, largely with the aim of testing theories using data from multiple studies. Using existing scales with good evidence of validity and reliability would enhance our ability to synthesise the findings across studies to investigate theories about the relationship between context, use of CQI and outcomes.
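The kind of synthesis described above typically rests on inverse-variance pooling of comparable effect estimates. As a simplified, hypothetical illustration (a fixed-effect model with invented estimates; real syntheses would usually fit a random-effects model in established software):

```python
from math import sqrt

def pool_fixed_effect(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]       # precision of each study
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return pooled, sqrt(1.0 / sum(weights))

# Hypothetical effect estimates (e.g., Fisher z-transformed correlations
# between a climate scale and CQI uptake) from three studies
pooled, se = pool_fixed_effect([0.30, 0.18, 0.42], [0.10, 0.15, 0.12])
print(round(pooled, 3), round(se, 3))
```

Such pooling is only meaningful when studies use instruments measuring the same construct, which is precisely why reuse of validated scales matters.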
An initiative that may provide a model for addressing these issues is the Patient-Reported Outcomes Measurement Information System (PROMIS). PROMIS is a coordinated effort to improve the measurement of patient-reported outcomes through the development of a framework defining essential measurement domains, the systematic identification and mapping of existing measures to the framework, and the use of items derived from these measures to develop a bank of highly reliable, valid scales. An equivalent resource in quality improvement could lead to substantive gains.
Ensuring constructs and associated measures are clearly specified
Our attempts to identify relevant instruments underscored the importance of clarity and consistency in the way factors are defined and measured. Consistent labelling of instruments measuring similar constructs aids the indexing of studies, increasing the likelihood that researchers and decision makers will be able to retrieve and compare findings of related studies. Using well-established construct definitions as the basis for instrument development helps ensure that instruments aiming to measure the same construct will have conceptually similar content. This is particularly important for comparison across studies and synthesis because it reduces the chance that readers will erroneously compare findings across studies that appear to be measuring the same construct but are in fact measuring something quite different. Such comparisons have the potential to dilute findings when comparing or pooling across multiple studies and, in areas where there is little comparable evidence, may lead to false associations between a construct and the outcomes of QI.
To address these issues, future research should build on existing theoretical work and place greater emphasis on providing clear concept labels and definitions that reference or extend those in existing research. This is an important but substantial task because of the large body of theoretical and empirical research in psychology and social sciences underpinning many constructs. However, developing a ‘common language’ for contextual factors and intervention components may reveal that there is much less heterogeneity across studies than the literature suggests, and hence, much more potential to synthesise existing research [148, 149].
In developing the taxonomy presented in this review, we aimed to reflect prevailing labelling and conceptualisations of factors that may affect the success of CQI. The starting point for the structure and content of the taxonomy was the initial version of our InQuIRe framework. Our analysis led to elaboration of many of the constructs in our initial framework, some refinement to the categorisation of constructs within domains, but no changes to the overall structure of domains. The resulting taxonomy (Additional file 4: Tables S3, S4, S5, and S6) provides a guide to the factors that could be included in evaluations of CQI in primary care. The refinements and construct definitions derived from our measurement review (reported in this paper and the companion paper on team measures) will be incorporated in the final version of the InQuIRe framework (to be reported separately).
Strengths and limitations
To our knowledge, this is the first attempt to collate and categorise the wide range of instruments relevant to measuring factors thought to influence the success of CQI. We used a broad systematic search and an inclusive approach when screening studies for potentially relevant instruments; however, we cannot rule out that some instruments were missed. We limited the review to instruments whose development and measurement properties were reported in peer-reviewed publications (i.e., not books, theses, or proprietary instruments), reasoning that these instruments were readily available to researchers.
The taxonomy we developed draws upon the wide range of instruments identified and allows comparison of instruments using a ‘common language.’ The process of developing and applying the taxonomy revealed the complexity of comparing existing instruments and the consequent value of taxonomies for helping QI researchers make sense of heterogeneity. Because this is the first application of the taxonomy and categorisation of instruments, refinement of the taxonomy is likely. A single author developed the taxonomy and categorised instruments with input from the other authors. Given the subjectivity inherent in this type of analysis, alternative categorisations of instruments are possible.
Although not a limitation of the review itself, there is comparatively little research on the measurement of organizational factors in healthcare. This increases the complexity of selecting and reviewing instruments because current evidence on the measurement properties of relevant instruments is limited and heterogeneous. Rising interest in this area means the number of studies will increase; however, the heterogeneity is likely to remain because of the diversity of study designs that contribute evidence of an instrument’s measurement properties. Interpreting and synthesising this evidence is complex. Guidance on appraising the methods used in these studies, interpreting their findings, and synthesising findings across studies would aid both the selection and the systematic review of instruments.
Investigating the factors thought to modify the effects of CQI poses practical and methodological challenges for researchers, among the most complex of which relate to measurement. In this review, we aimed to provide guidance to support decisions around the selection of instruments suitable for measuring potential modifying factors. For researchers and those evaluating CQI in practice, this guidance should lessen the burden of locating relevant measures and may enhance the contribution their research makes by increasing the quality of measurement and the potential to synthesise findings across studies. Methodological guidance on measurement underpins our ability to generate better evidence to support policy and practice. While reviews such as this one can make a contribution, identification of a core set of measures for QI could ensure important factors are measured, improve the quality of measurement, and support the accumulation and synthesis of evidence. Ultimately, a coordinated effort to improve measurement, akin to the Patient-Reported Outcomes Measurement Information System, may be required to produce the substantive gains in knowledge needed to inform policy and practice.
Rubenstein LV, Parker LE, Meredith LS, Altschuler A, DePillis E, Hernandez J, Gordon NP: Understanding team-based quality improvement for depression in primary care. Health Serv Res. 2002, 37: 1009-1029. 10.1034/j.1600-0560.2002.63.x.
Rubenstein LV, Meredith LS, Parker LE, Gordon NP, Hickey SC, Oken C, Lee ML: Impacts of evidence-based quality improvement on depression in primary care. J Gen Intern Med. 2006, 21: 1027-1035. 10.1111/j.1525-1497.2006.00549.x.
Øvretveit J: What are the best strategies for ensuring quality in hospitals?. 2003, Copenhagen: World Health Organisation, Health Evidence Network synthesis report
Shortell SM, Bennett CL, Byck GR: Assessing the impact of continuous quality improvement on clinical practice: What it will take to accelerate progress. Milbank Q. 1998, 76: 593-624. 10.1111/1468-0009.00107.
Ferlie E, Shortell SM: Improving the quality of health care in the United Kingdom and United States: a framework for change. Milbank Q. 2001, 79: 281-315. 10.1111/1468-0009.00206.
Solberg LI, Kottke TE, Brekke ML: Will primary care clinics organize themselves to improve the delivery of preventive services? A randomized controlled trial. Prev Med. 1998, 27: 623-631. 10.1006/pmed.1998.0337.
Kilo CM: A framework for collaborative improvement: lessons from the Institute for Healthcare Improvement’s Breakthrough Series. Qual Manag Health Care. 1998, 6: 1-13.
Royal Australian College of General Practitioners: Quality Improvement & Continuing Professional Development Program. 2011, Melbourne, Victoria, Australia: Royal Australian College of General Practitioners, [http://qicpd.racgp.org.au/program/overview/quality-improvement].
Windish DM, Reed DA, Boonyasai RT, Chakraborti C, Bass EB: Methodological rigor of quality improvement curricula for physician trainees: a systematic review and recommendations for change. Acad Med. 2009, 84: 1677-1692. 10.1097/ACM.0b013e3181bfa080.
Powell A, Rushmer R, Davies H: Effective quality improvement: TQM and CQI approaches. Br J Healthc Manag. 2009, 15: 114-120.
Schouten LMT, Hulscher MEJL, van Everdingen JJE, Huijsman R, Grol RPTM: Evidence for the impact of quality improvement collaboratives: systematic review. BMJ. 2008, 336: 1491-1494. 10.1136/bmj.39570.749884.BE.
Kaplan HC, Brady PW, Dritz MC, Hooper DK, Linam WM, Froehle CM, Margolis P: The influence of context on quality improvement success in health care: a systematic review of the literature. Milbank Q. 2010, 88: 500-559. 10.1111/j.1468-0009.2010.00611.x.
Alexander JA, Hearld LR: What can we learn from quality improvement research? A critical review of research methods. Med Care Res Rev. 2009, 66: 235-271. 10.1177/1077558708330424.
Robert Wood Johnson Foundation: Improving the Science of Continuous Quality Improvement Program and Evaluation. 2012, Princeton, NJ: Robert Wood Johnson Foundation, http://www.rwjf.org/en/research-publications.html
Batalden P, Davidoff F, Marshall M, Bibby J, Pink C: So what? Now what? Exploring, understanding and using the epistemologies that inform the improvement of healthcare. BMJ Qual Saf. 2011, 20: i99-i105. 10.1136/bmjqs.2011.051698.
Eccles MP, Armstrong D, Baker R, Cleary K, Davies H, Davies S, Glasziou P, Ilott I, Kinmonth AL, Leng G: An implementation research agenda. Implement Sci. 2009, 4: 18. 10.1186/1748-5908-4-18.
Øvretveit J: Understanding the conditions for improvement: research to discover which context influences affect improvement success. BMJ Qual Saf. 2011, 20: i18-i23. 10.1136/bmjqs.2010.045955.
Kaplan H, Provost L, Froehle C, Margolis P: The Model for Understanding Success in Quality (MUSIQ): building a theory of context in healthcare quality improvement. BMJ Qual Saf. 2011, 21: 13-20.
Michie S, Fixsen D, Grimshaw JM, Eccles MP: Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci. 2009, 4: 40. 10.1186/1748-5908-4-40.
Lukas CV, Meterko MM, Mohr D, Seibert MN, Parlier R, Levesque O, Petzel RA: Implementation of a clinical innovation: the case of advanced clinic access in the Department of Veterans Affairs. J Ambul Care Manage. 2008, 31: 94-108.
Shepperd S, Lewin S, Straus S, Clarke M, Eccles MP, Fitzpatrick R, Wong G, Sheikh A: Can we systematically review studies that evaluate complex interventions?. PLoS Med. 2009, 6: e1000086. 10.1371/journal.pmed.1000086.
Berwick D: Continuous improvement as an ideal in health care. N Engl J Med. 1989, 320: 53-56. 10.1056/NEJM198901053200110.
Laffel G, Blumenthal D: The case for using industrial quality management science in health care organizations. JAMA. 1989, 262: 2869-2873. 10.1001/jama.1989.03430200113036.
Berwick D, Godfrey BA, Roessner J: Curing health care: new strategies for quality improvement: a report on the National Demonstration Project on Quality Improvement in Health Care. 1990, San Francisco: Jossey-Bass, 1
Batalden PB: The continual improvement of health care. Am J Med Qual. 1993, 8: 29-31. 10.1177/0885713X9300800201.
Batalden PB, Stoltz PK: A framework for the continual improvement of health care: building and applying professional and improvement knowledge to test changes in daily work. Jt Comm J Qual Improv. 1993, 19: 424-447. discussion 448–452
Blumenthal D, Kilo CM: A report card on continuous quality improvement. Milbank Q. 1998, 76: 625-648. 10.1111/1468-0009.00108.
Kaluzny AD, McLaughlin CP, Jaeger BJ: TQM as a managerial innovation: research issues and implications. Health Serv Manage Res. 1993, 6: 78-88.
Kaluzny AD, McLaughlin CP, Kibbe DC: Continuous quality improvement in the clinical setting: enhancing adoption. Qual Manag Health Care. 1992, 1: 37-44.
O’Brien JL, Shortell SM, Hughes EF, Foster RW, Carman JM, Boerstler H, O’Connor EJ: An integrative model for organization-wide quality improvement: lessons from the field. Qual Manag Health Care. 1995, 3: 19-30.
Boaden R, Harvey G, Moxham C, Proudlove N: Quality Improvement: theory and practice in healthcare. 2008, Coventry: NHS Institute for Innovation and Improvement/Manchester Business School
Gustafson DH, Hundt AS: Findings of innovation research applied to quality management principles for health care. Health Care Manage Rev. 1995, 20: 16-33.
McLaughlin CP, Kaluzny AD: Continuous quality improvement in health care: theory, implementation and applications. 2006, Gaithersburg, Md: Aspen Publishers Inc, 3
Powell A, Rushmer R, Davies H: Effective quality improvement: Some necessary conditions. Br J Healthc Manag. 2009, 15: 62-68.
Lemieux-Charles L, Murray M, Baker G, Barnsley J, Tasa K, Ibrahim S: The effects of quality improvement practices on team effectiveness: a mediational model. J Organ Behav. 2002, 23: 533-553. 10.1002/job.154.
Shortell SM, Marsteller JA, Lin M, Pearson ML, Wu SY, Mendel P, Cretin S, Rosen M: The role of perceived team effectiveness in improving chronic illness care. Med Care. 2004, 42: 1040-1048. 10.1097/00005650-200411000-00002.
Cohen D, McDaniel RR, Crabtree BF, Ruhe MC, Weyer SM, Tallia A, Miller WL, Goodwin MA, Nutting P, Solberg LI: A practice change model for quality improvement in primary care practice. J Healthc Manag. 2004, 49: 155-168. discussion 169–170
Orzano AJ, McInerney CR, Scharf D, Tallia AF, Crabtree BF: A knowledge management model: Implications for enhancing quality in health care. J Am Soc Inf Sci Technol. 2008, 59: 489-505. 10.1002/asi.20763.
Rhydderch M, Elwyn G, Marshall M, Grol R: Organizational change theory and the use of indicators in general practice. Qual Saf Health Care. 2004, 13: 213-217. 10.1136/qshc.2003.006536.
Weiner BJ, Amick H, Lee S-YD: Conceptualization and measurement of organizational readiness for change: a review of the literature in health services research and other fields. Med Care Res Rev. 2008, 65: 379-436. 10.1177/1077558708317802.
Greenhalgh T, Robert G, MacFarlane F, Bate P, Kyriakidou O: Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004, 82: 581-629. 10.1111/j.0887-378X.2004.00325.x.
Gustafson DH, Sainfort F, Eichler M, Adams L, Bisognano M, Steudel H: Developing and testing a model to predict outcomes of organizational change. Health Serv Res. 2003, 38: 751-776. 10.1111/1475-6773.00143.
Ilgen DR, Hollenbeck JR, Johnson M, Jundt D: Teams in organizations: from input-process-output models to IMOI models. Annu Rev Psychol. 2005, 56: 517-543. 10.1146/annurev.psych.56.091103.070250.
Kozlowski SWJ, Ilgen DR: Enhancing the effectiveness of work groups and teams. Psychol Sci Public Interest. 2006, 7: 77-124.
Heinemann GD, Zeiss AM: Team performance in health care: assessment and development. 2002, New York, NY: Kluwer Academic/Plenum Publishers
Mathieu J, Maynard MT, Rapp T, Gilson L: Team effectiveness 1997–2007: a review of recent advancements and a glimpse into the future. J Manage. 2008, 34: 410-476.
Poulton BC, West MA: The determinants of effectiveness in primary health care teams. J Interprof Care. 1999, 13: 7-18. 10.3109/13561829909025531.
Solberg LI: Improving medical practice: a conceptual framework. Ann Fam Med. 2007, 5: 251-256. 10.1370/afm.666.
Holt DT, Helfrich CD, Hall CG, Weiner BJ: Are you ready? How health professionals can comprehensively conceptualize readiness for change. J Gen Intern Med. 2010, 25 (Suppl 1): 50-55.
Weiner BJ: A theory of organizational readiness for change. Implement Sci. 2009, 4: 67-10.1186/1748-5908-4-67.
Geboers H, Grol R, van den Bosch W, van den Hoogen H, Mokkink H, van Montfort P, Oltheten H: A model for continuous quality improvement in small scale practices. Qual Health Care. 1999, 8: 43-48. 10.1136/qshc.8.1.43.
Weiner BJ, Alexander JA, Shortell SM, Baker LC, Becker M, Geppert JJ: Quality improvement implementation and hospital performance on quality indicators. Health Serv Res. 2006, 41: 307-334. 10.1111/j.1475-6773.2005.00483.x.
Mokkink LB, Terwee CB, Stratford PW, Alonso J, Patrick DL, Riphagen I, Knol DL, Bouter LM, de Vet HC: Evaluation of the methodological quality of systematic reviews of health status measurement instruments. Qual Life Res. 2009, 18: 313-333. 10.1007/s11136-009-9451-9.
Falagas ME, Pitsouni EI, Malietzis GA, Pappas G: Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses. FASEB J. 2008, 22: 338-342.
Pope C, Mays N: Qualitative research in health care. 2006, Oxford, UK; Malden, Massachusetts: Blackwell Publishing/BMJ Books, 3
Solberg LI, Brekke ML, Kottke TE, Steel RP: Continuous quality improvement in primary care: what’s happening?. Med Care. 1998, 36: 625-635. 10.1097/00005650-199805000-00003.
Shortell SM, Jones RH, Rademaker AW, Gillies RR, Dranove DS, Hughes EF, Budetti PP, Reynolds KS, Huang CF: Assessing the impact of total quality management and organizational culture on multiple outcomes of care for coronary artery bypass graft surgery patients. Med Care. 2000, 38: 207-217. 10.1097/00005650-200002000-00010.
Parker VA, Wubbenhorst WH, Young GJ, Desai KR, Charns MP: Implementing quality improvement in hospitals: the role of leadership and culture. Am J Med Qual. 1999, 14: 64-69. 10.1177/106286069901400109.
Nowinski CJ, Becker SM, Reynolds KS, Beaumont JL, Caprini CA, Hahn EA, Peres A, Arnold BJ: The impact of converting to an electronic health record on organizational culture and quality improvement. Int J Med Inf. 2007, 76 (Suppl 1): S174-S183.
Rondeau KV, Wagar TH: Implementing CQI while reducing the work force: how does it influence hospital performance?. Healthc Manage Forum. 2004, 17: 22-29. 10.1016/S0840-4704(10)60324-9.
Dabhilkar M, Bengtsson L, Bessant J: Convergence or national specificity? Testing the CI maturity model across multiple countries. Creat Innov Man. 2007, 16: 348-362. 10.1111/j.1467-8691.2007.00449.x.
Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, Bouter LM, de Vet HCW: The COSMIN study reached international consensus on taxonomy, terminology, and definitions of measurement properties for health-related patient-reported outcomes. J Clin Epidemiol. 2010, 63: 737-745. 10.1016/j.jclinepi.2010.02.006.
Mannion R, Davies H, Scott T, Jung T, Bower P, Whalley D, McNally R: Measuring and assessing organizational culture in the NHS (OC1). 2008, London: National Co-ordinating Centre for National Institute for Health Research Service Delivery and Organisation Programme (NCCSDO)
Mokkink LB, Terwee CB, Knol DL, Stratford PW, Alonso J, Patrick DL, Bouter LM, de Vet HC: Protocol of the COSMIN study: COnsensus-based Standards for the selection of health Measurement INstruments. BMC Med Res Methodol. 2006, 6: 2-10.1186/1471-2288-6-2.
Joint Committee on Standards for Educational and Psychological Testing (U.S.): Standards for educational and psychological testing. American Educational Research Association, American Psychological Association, National Council on Measurement in Education. 1999, Washington, DC: American Educational Research Association
Hinkin T: A brief tutorial on the development of measures for use in survey questionnaires. Organ Res Methods. 1998, 1: 104-121. 10.1177/109442819800100106.
Klein KJ, Kozlowski SWJ: From micro to meso: critical steps in conceptualizing and conducting multilevel research. Organ Res Methods. 2000, 3: 211-236. 10.1177/109442810033001.
Malhotra MK, Grover V: An assessment of survey research in POM: from constructs to theory. J Oper Manag. 1998, 16: 407-425. 10.1016/S0272-6963(98)00021-7.
Holt DT, Armenakis AA, Harris SG, Feild HS: Toward a comprehensive definition of readiness for change: a review of research and instrumentation. Research in organizational change and development. Volume 16. Edited by: Pasmore WA, Woodman RW. 2007, Bingley, UK: Emerald Group Publishing Limited, 289-336.
Scott T, Mannion R, Davies H, Marshall M: The quantitative measurement of organizational culture in health care: a review of the available instruments. Health Serv Res. 2003, 38: 923-945. 10.1111/1475-6773.00154.
Gershon RR, Stone PW, Bakken S, Larson E: Measurement of organizational culture and climate in healthcare. J Nurs Adm. 2004, 34: 33-40. 10.1097/00005110-200401000-00008.
Counte MA, Meurer S: Issues in the assessment of continuous quality improvement implementation in health care organizations. Int J Qual Health Care. 2001, 13: 197-207. 10.1093/intqhc/13.3.197.
Rhydderch M, Edwards A, Elwyn G, Marshall M, Engels Y, Van den Hombergh P, Grol R: Organizational assessment in general practice: a systematic review and implications for quality improvement. J Eval Clin Pract. 2005, 11: 366-378. 10.1111/j.1365-2753.2005.00544.x.
French B, Thomas L, Baker P, Burton C, Pennington L, Roddam H: What can management theories offer evidence-based practice? A comparative analysis of measurement tools for organizational context. Implement Sci. 2009, 4: 28-10.1186/1748-5908-4-28.
Boonyasai RT, Windish DM, Chakraborti C, Feldman LS, Rubin HR, Bass EB: Effectiveness of teaching quality improvement to clinicians: a systematic review. JAMA. 2007, 298: 1023-1037. 10.1001/jama.298.9.1023.
Molina-Azorín JF, Tarí JJ, Claver-Cortés E, López-Gamero MD: Quality management, environmental management and firm performance: a review of empirical studies and issues of integration. Int J Manag Rev. 2009, 11: 197-222. 10.1111/j.1468-2370.2008.00238.x.
Minkman M, Ahaus K, Huijsman R: Performance improvement based on integrated quality management models: what evidence do we have? A systematic literature review. Int J Qual Health Care. 2007, 19: 90-104. 10.1093/intqhc/mzl071.
Wardhani V, Utarini A, van Dijk JP, Post D, Groothoff JW: Determinants of quality management systems implementation in hospitals. Health Policy. 2009, 89: 239-251. 10.1016/j.healthpol.2008.06.008.
Snyder-Halpern R: Measuring organizational readiness for nursing research programs. West J Nurs Res. 1998, 20: 223-237. 10.1177/019394599802000207.
Lin MK, Marsteller JA, Shortell SM, Mendel P, Pearson M, Rosen M, Wu SY: Motivation to change chronic illness care: results from a national evaluation of quality improvement collaboratives. Health Care Manage Rev. 2005, 30: 139-156. 10.1097/00004010-200504000-00008.
Saraph JV, Benson PG, Schroeder RG: An instrument for measuring the critical factors of quality management. Decision Sci. 1989, 20: 810-829. 10.1111/j.1540-5915.1989.tb01421.x.
Chen G, Gully SM, Eden D: Validation of a new general self-efficacy scale. Organ Res Methods. 2001, 4: 62-83. 10.1177/109442810141004.
Oudejans SC, Schippers GM, Schramade MH, Koeter MW, Van den Brink W: Measuring the learning capacity of organisations: development and factor analysis of the Questionnaire for Learning Organizations. Qual Saf Health Care. 2011, 20: 4-
Apekey TA, McSorley G, Tilling M, Siriwardena AN: Room for improvement? Leadership, innovation culture and uptake of quality improvement methods in general practice. J Eval Clin Pract. 2011, 17: 311-318. 10.1111/j.1365-2753.2010.01447.x.
Schneider B, Ehrhart MG, Macey WH: Perspectives on organizational climate and culture. APA handbook of industrial and organizational psychology. Volume 1. Edited by: Zedeck S. 2011, Washington, DC: American Psychological Association
O’Neill S, Hempel S, Lim Y, Danz M, Foy R, Suttorp M, Shekelle P, Rubenstein L: Identifying continuous quality improvement publications: what makes an improvement intervention ’CQI’?. BMJ Qual Saf. 2011, 20: 1011-1019. 10.1136/bmjqs.2010.050880.
Nagykaldi Z, Mold JW, Aspy CB: Practice facilitators: a review of the literature. Fam Med. 2005, 37: 581-588.
Thompson GN, Estabrooks CA, Degner LF: Clarifying the concepts in knowledge transfer: a literature review. J Adv Nurs. 2006, 53: 691-701. 10.1111/j.1365-2648.2006.03775.x.
Barsness ZI, Shortell SM, Gillies RR, Hughes EF, O’Brien JL: The quality march. National survey profiles quality improvement activities. Hosp Health Netw. 1993, 67: 52-55.
Institute for Healthcare Improvement: Improvement methods. 2012, Cambridge, MA: Institute for Healthcare Improvement, http://www.ihi.org
Langley G, Nolan K, Nolan T, Norman C, Provost L: The improvement guide: a practical approach to enhancing organizational performance. 1996, San Francisco: Jossey-Bass Publishers
Vos L, Duckers M, Wagner C, van Merode G: Applying the quality improvement collaborative method to process redesign: a multiple case study. Implement Sci. 2010, 5: 19-10.1186/1748-5908-5-19.
Duckers MLA, Wagner C, Groenewegen PP: Developing and testing an instrument to measure the presence of conditions for successful implementation of quality improvement collaboratives. BMC Health Serv Res. 2008, 8: 172-10.1186/1472-6963-8-172.
Schouten L, Grol R, Hulscher M: Factors influencing success in quality improvement collaboratives: development and psychometric testing of an instrument. Implement Sci. 2010, 5: 84-10.1186/1748-5908-5-84.
Helfrich CD, Li YF, Sharp ND, Sales AE: Organizational readiness to change assessment (ORCA): Development of an instrument based on the Promoting Action on Research in Health Services (PARiHS) framework. Implement Sci. 2009, 4: 38-10.1186/1748-5908-4-38.
Ovretveit J: Change achievement success indicators (CASI). 2004, Stockholm, Sweden: Karolinska Institute Medical Management, http://www.ihi.org/knowledge/Pages/Tools/ChangeAchievementSuccessIndicatorCASI.aspx.
Alexander JA, Lichtenstein R, Jinnett K, D’Aunno TA, Ullman E: The effects of treatment team diversity and size on assessments of team functioning. Hosp Health Serv Adm. 1996, 41: 37-53.
Shortell SM, O’Brien JL, Carman JM, Foster RW, Hughes EF, Boerstler H: Assessing the impact of continuous quality improvement/total quality management: Concept versus implementation. Health Serv Res. 1995, 30: 377-401.
Alemi F, Safaie FK, Neuhauser D: A survey of 92 quality improvement projects. Jt Comm J Qual Improv. 2001, 27: 619-632.
Ostroff C, Kinicki A, Tamkins M: Organizational culture and climate. Handbook of Psychology, Volume 12, Industrial and Organizational Psychology. Edited by: Borman WC, Ilgen DR, Klimoski RJ. 2002, Hoboken, NJ: John Wiley and Sons, Inc, 565-593.
Ovretveit J: Leading improvement effectively: review of research. 2009, London: The Health Foundation
Bobiak SN, Zyzanski SJ, Ruhe MC, Carter CA, Ragan B, Flocke SA, Litaker D, Stange KC: Measuring practice capacity for change: a tool for guiding quality improvement in primary care settings. Qual Manag Health Care. 2009, 18: 278-284. 10.1136/qshc.2008.028720.
Ohman-Strickland PA, John Orzano A, Nutting PA, Perry Dickinson W, Scott-Cawiezell J, Hahn K, Gibel M, Crabtree BF: Measuring organizational attributes of primary care practices: development of a new instrument. Health Serv Res. 2007, 42: 1257-1273. 10.1111/j.1475-6773.2006.00644.x.
Rushmer RK, Kelly D, Lough M, Wilkinson JE, Greig GJ, Davies HT: The Learning Practice Inventory: diagnosing and developing Learning Practices in the UK. J Eval Clin Pract. 2007, 13: 206-211. 10.1111/j.1365-2753.2006.00673.x.
Sylvester S: Measuring the learning practice: diagnosing the culture in general practice. Qual Prim Care. 2003, 11: 29-40.
Kralewski J, Dowd BE, Kaissi A, Curoe A, Rockwood T: Measuring the culture of medical group practices. Health Care Manage Rev. 2005, 30: 184-193. 10.1097/00004010-200507000-00002.
Marchionni C, Ritchie J: Organizational factors that support the implementation of a nursing best practice guideline. J Nurs Manag. 2008, 16: 266-274. 10.1111/j.1365-2834.2007.00775.x.
Meurer SJ, Rubio DM, Counte MA, Burroughs T: Development of a healthcare quality improvement measurement tool: results of a content validity study. Hosp Top. 2002, 80: 7-13.
Taveira AD, James CA, Karsh BT, Sainfort F: Quality management and the work environment: an empirical investigation in a public sector organization. Appl Ergon. 2003, 34: 281-291. 10.1016/S0003-6870(03)00054-1.
Zeitz G, Johannesson R, Ritchie JE: An employee survey measuring total quality management practices and culture: development and validation. Group Organ Manage. 1997, 22: 414-431. 10.1177/1059601197224002.
Lehman WE, Greener JM, Simpson DD: Assessing organizational readiness for change. J Subst Abuse Treat. 2002, 22: 197-209. 10.1016/S0740-5472(02)00233-7.
Levesque DA, Prochaska JM, Prochaska JO, Dewart SR, Hamby LS, Weeks WB: Organizational stages and processes of change for continuous quality improvement in health care. Consulting Psychol J. 2001, 53: 139-153.
Ashford AJ: Behavioural change in professional practice: supporting the development of effective implementation strategies. 1998, Newcastle upon Tyne: Centre for Health Services Research, University of Newcastle upon Tyne
Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A, on behalf of the ‘Psychological Theory’ Group: Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005, 14: 26-33. 10.1136/qshc.2004.011155.
Herscovitch L, Meyer JP: Commitment to organizational change: extension of a three-component model. J Appl Psychol. 2002, 87: 474-487.
Holt DT, Armenakis AA, Field HS, Harris SG: Readiness for organizational change. J Appl Behav Sci. 2007, 43: 232-255. 10.1177/0021886306295295.
Spreitzer GM: Psychological empowerment in the workplace: dimensions, measurement and validation. Acad Manage J. 1995, 38: 1442-1465. 10.2307/256865.
Armenakis AA, Bernerth JB, Pitts JP, Walker HJ: Organizational change recipients’ beliefs scale: development of an assessment instrument. J Appl Behav Sci. 2007, 43: 481-505. 10.1177/0021886307303654.
Hill A, Gwadry-Sridhar F, Armstrong T, Sibbald WJ: Development of the continuous quality improvement questionnaire (CQIQ). J Crit Care. 2001, 16: 150-160. 10.1053/jcrc.2001.30165.
Coyle-Shapiro JAM, Morrow PC: The role of individual differences in employee adoption of TQM orientation. J Vocat Behav. 2003, 62: 320-340. 10.1016/S0001-8791(02)00041-6.
Geboers H, Mokkink H, van Montfort P, van den Hoogen H, van den Bosch W, Grol R: Continuous quality improvement in small general medical practices: the attitudes of general practitioners and other practice staff. Int J Qual Health Care. 2001, 13: 391-397. 10.1093/intqhc/13.5.391.
Fedor DB, Caldwell S, Herold DM: The effects of organizational changes on employee commitment: a multilevel investigation. Pers Psychol. 2006, 59: 1-29. 10.1111/j.1744-6570.2006.00852.x.
Calomeni CA, Solberg LI, Conn SA: Nurses on quality improvement teams: how do they benefit?. J Nurs Care Qual. 1999, 13: 75-90.
Ogrinc G, Headrick LA, Morrison LJ, Foster T: Teaching and assessing resident competence in practice-based learning and improvement. J Gen Intern Med. 2004, 19: 496-500. 10.1111/j.1525-1497.2004.30102.x.
Gould BE, Grey MR, Huntington CG, Gruman C, Rosen JH, Storey E, Abrahamson L, Conaty AM, Curry L, Ferreira M: Improving patient care outcomes by teaching quality improvement to medical students in community-based practices. Acad Med. 2002, 77: 1011-1018. 10.1097/00001888-200210000-00014.
Irvine D, Leatt P, Evans MG, Baker RG: Measurement of staff empowerment within health service organizations. J Nurs Meas. 1999, 7: 79-96.
Mokkink L, Terwee C, Knol D, Stratford P, Alonso J, Patrick D, Bouter L, De Vet H: The COSMIN checklist for evaluating the methodological quality of studies on measurement properties: a clarification of its content. BMC Med Res Methodol. 2010, 10: 22-10.1186/1471-2288-10-22.
Streiner DL, Norman GR: Health measurement scales: a practical guide to their development and use. 2003, Oxford; New York: Oxford University Press, 3
Podsakoff P, MacKenzie S, Lee J: Common method biases in behavioural research: a critical review of the literature and recommended remedies. J Appl Psychol. 2003, 88: 879-903.
Walshe K: Pseudoinnovation: the development and spread of healthcare quality improvement methodologies. Int J Qual Health Care. 2009, 21: 153-159. 10.1093/intqhc/mzp012.
Bellg AJ, Borrelli B, Resnick B, Hecht J, Minicucci DS, Ory M, Ogedegbe G, Orwig D, Ernst D, Czajkowski S: Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004, 23: 443-451.
Gearing RE, El-Bassel N, Ghesquiere A, Baldwin S, Gillies J, Ngeow E: Major ingredients of fidelity: a review and scientific guide to improving quality of intervention research implementation. Clin Psychol Rev. 2011, 31: 79-88. 10.1016/j.cpr.2010.09.007.
Glasgow RE, McKay HG, Piette JD, Reynolds KD: The RE-AIM framework for evaluating interventions: what can it tell us about approaches to chronic illness management?. Patient Educ Couns. 2001, 44: 119-127. 10.1016/S0738-3991(00)00186-5.
Øvretveit J, Gustafson D: Evaluation of quality improvement programmes. Qual Saf Health Care. 2002, 11: 270-275. 10.1136/qhc.11.3.270.
Ogrinc G, Mooney SE, Estrada C, Foster T, Goldmann D, Hall LW, Huizinga MM, Liu SK, Mills P, Neily J: The SQUIRE (Standards for QUality Improvement Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration. Qual Saf Health Care. 2008, 17 (Suppl 1): i13-i32. 10.1136/qshc.2008.029058.
Hrisos S, Eccles MP, Francis JJ, Dickinson HO, Kaner EF, Beyer F, Johnston M: Are there valid proxy measures of clinical behaviour? A systematic review. Implement Sci. 2009, 4: 37-10.1186/1748-5908-4-37.
Kendall DL, Salas E: Measuring team performance: review of current methods and consideration of future needs. The Science and Simulation of Human Performance. Edited by: Ness JW, Tepe V, Ritzer DR. 2004, Bingley, UK: Emerald Group Publishing Limited, 307-326. [Salas E (Series Editor): Advances in Human Performance and Cognitive Engineering Research, vol 5.]
Hall CB, Tennen H, Wakefield DB, Brazil K, Cloutier MM: Organizational assessment in paediatric primary care: development and initial validation of the primary care organizational questionnaire. Health Serv Manage Res. 2006, 19: 207-214. 10.1258/095148406778951457.
Shortell SM, Rousseau DM, Gillies RR, Devers KJ, Simons TL: Organizational assessment in intensive care units (ICUs): construct development, reliability, and validity of the ICU nurse-physician questionnaire. Med Care. 1991, 29: 709-726. 10.1097/00005650-199108000-00004.
Hann M, Bower P, Campbell S, Marshall M, Reeves D: The association between culture, climate and quality of care in primary health care teams. Fam Pract. 2007, 24: 323-329. 10.1093/fampra/cmm020.
Hung DY, Rundall TG, Tallia AF, Cohen DJ, Halpin HA, Crabtree BF: Rethinking prevention in primary care: applying the chronic care model to address health risk behaviors. Milbank Q. 2007, 85: 69-91. 10.1111/j.1468-0009.2007.00477.x.
Zazzali JL, Alexander JA, Shortell SM, Burns LR: Organizational Culture and Physician Satisfaction with Dimensions of Group Practice. Health Serv Res. 2007, 42: 1150-1176. 10.1111/j.1475-6773.2006.00648.x.
Bosch M, Dijkstra R, Wensing M, van der Weijden T, Grol R: Organizational culture, team climate and diabetes care in small office-based practices. BMC Health Serv Res. 2008, 8: 180-10.1186/1472-6963-8-180.
Brazil K, Wakefield DB, Cloutier MM, Tennen H, Hall CB: Organizational culture predicts job satisfaction and perceived clinical effectiveness in pediatric primary care practices. Health Care Manage Rev. 2010, 35: 365-371. 10.1097/HMR.0b013e3181edd957.
Eccles M, Hrisos S, Francis J, Steen N, Bosch M, Johnston M: Can the collective intentions of individual professionals within healthcare teams predict the team’s performance: developing methods and theory. Implement Sci. 2009, 4: 24-10.1186/1748-5908-4-24.
Klein KJ, Conn AB, Smith DB, Sorra JS: Is everyone in agreement? An exploration of within-group agreement in employee perceptions of the work environment. J Appl Psychol. 2001, 86: 3-16.
Cella D, Yount S, Rothrock N, Gershon R, Cook K, Reeve B, Ader D, Fries JF, Bruce B, Rose M, on behalf of the PROMIS Cooperative Group: The Patient-Reported Outcomes Measurement Information System (PROMIS): progress of an NIH Roadmap Cooperative Group during its first two years. Med Care. 2007, 45 (1): S3-S11. 10.1097/01.mlr.0000258615.42478.55.
Gardner B, Whittington C, McAteer J, Eccles MP, Michie S: Using theory to synthesise evidence from behaviour change interventions: the example of audit and feedback. Soc Sci Med. 2010, 70: 1618-1625. 10.1016/j.socscimed.2010.01.039.
Leeman J, Baernholdt M, Sandelowski M: Developing a theory-based taxonomy of methods for implementing change in practice. J Adv Nurs. 2007, 58: 191-200. 10.1111/j.1365-2648.2006.04207.x.
This project was funded by a Monash University Faculty of Medicine Strategic Grant and SB was supported by a Monash University Departmental doctoral scholarship. We are grateful to Matthew Page for providing research assistance in piloting the data extraction methods, Katherine Beringer for retrieving papers for inclusion in the review, and Joanne McKenzie for statistical advice regarding the analysis of multi-level measures. Finally, we gratefully acknowledge the helpful comments of the reviewers Cara Lewis and Cameo Borntrager.
Heather Buchan is a member of the Implementation Science Editorial Board. The authors have no other competing interests.
SB and SG conceived the study with input from HB. SB designed the review, conducted the searching, screening, data extraction and analysis. SG, HB, and MB provided input on the design, provided comment on the analysis and the presentation of results. SB drafted the manuscript and made subsequent revisions. All authors provided critical review of the manuscript. All authors read and approved the final manuscript.
Electronic supplementary material
Additional file 4: Tables reporting the content of instruments (Tables S3-S6) and overview of development and assessment of measurement properties (Table S10). (PDF 300 KB)
Additional file 5: Tables summarising the characteristics of instruments included for review of measurement properties (Tables S7-S9). (PDF 267 KB)
Additional file 6: Tables summarising the development and measurement properties of instruments included in Stage 4 of the review (Tables S11–S13). (PDF 268 KB)
Brennan, S.E., Bosch, M., Buchan, H. et al. Measuring organizational and individual factors thought to influence the success of quality improvement in primary care: a systematic review of instruments. Implementation Sci 7, 121 (2012). https://doi.org/10.1186/1748-5908-7-121