Association of intervention outcomes with practice capacity for change: Subgroup analysis from a group randomized trial
© Litaker et al; licensee BioMed Central Ltd. 2008
Received: 30 March 2007
Accepted: 16 May 2008
Published: 16 May 2008
The relationship between health care practices' capacity for change and the results and sustainability of interventions to improve health care delivery is unclear.
In the setting of an intervention to increase preventive service delivery (PSD), we assessed practice capacity for change by rating motivation to change and instrumental ability to change on a one to four scale. After combining these ratings into a single score, random effects models tested its association with change in PSD rates from baseline to immediately after intervention completion and 12 months later.
Our measure of practices' capacity for change varied widely at baseline (range 2–8; mean 4.8 ± 1.6). Practices with greater capacity for change delivered preventive services to eligible patients at higher rates after completion of the intervention (2.7% per unit increase in the combined effort score, p < 0.001). This relationship persisted for 12 months after the intervention ended (3.1%, p < 0.001).
Greater capacity for change is associated with a higher probability that a practice will attain and sustain desired outcomes. Future work to refine measures of this practice characteristic may be useful in planning and implementing interventions that result in sustained, evidence-based improvements in health care delivery.
Systematic reviews and meta-analyses demonstrate that many interventions to improve health care quality yield inconsistent results when evaluated through clinical trials [1, 2]. One potential explanation for this is that the design of standardized interventions tends to overlook contextual factors that influence the implementation of new procedures in real world settings [3–8]. Exploring these factors further and testing their association with changes in health care delivery may therefore provide insights that foster more rapid uptake of evidence-based care into routine use.
Over the past two decades, work conducted in a broad range of settings has provided several ways to conceptualize influences on the implementation process [7, 9–13]. One descriptive framework focusing on primary care practices' ability to adopt and implement new approaches to health care delivery may be particularly valuable, given that these practices represent a venue through which a majority of Americans receive ambulatory care [15–18]. This framework, developed by Cohen et al., highlights the potential role of several practice characteristics: the individual and aggregate motivations of practice members; the resources that they identify within and outside the practice that are both accessible and important in supporting change efforts (including previous experience in using new tools or adopting new procedures); the external forces or factors that shape or influence change options; and practice members' perception of options and opportunities for change.
Two factors – motivations and resources for change – are central components of several frameworks for implementation [9, 12, 13], in addition to the one described by Cohen et al. While some have suggested that practice motivation or inertia may be important to clinical guideline implementation, motivation appears to be necessary but not sufficient for change to occur. That is, confidence to act and an ability to implement change must also be present [9, 12]. Other work lends support to this view: interventions that provide instrumental assistance during the implementation phase can be effective in fostering change once motivation exists or is developed. Thus, it is important that studies examining the association of practice capacity for change with implementation use measures that represent both components and assess their potential interaction.
Despite considerable effort to characterize organizational capacity for change at the conceptual level, only a handful of studies have developed operational measures for this construct and established an association with implementation outcomes. Of these, the majority focus on intention to act rather than actual behaviors. To assess the association of capacity for change with demonstrable improvements in evidence-based health care delivery, we used data from the Study to Enhance Prevention by Understanding Practices (STEP-UP). This paper tests the hypotheses that greater practice capacity for change would be associated with greater change in the STEP-UP study outcome, preventive service delivery (PSD), from baseline to the end of the active intervention period and that these improvements would be sustained during follow up, when no intervention was being offered.
The design, methods, and findings from STEP-UP have been described in detail previously [21, 22]. In brief, this group randomized clinical trial to improve preventive service delivery randomly assigned 79 community-based primary care practices in northeast Ohio to a control or intervention group. Intervention practices were assessed by a research nurse facilitator over 1–3 days to gain an understanding of practice roles and routine procedures. The intervention, incorporating information from this assessment, involved creation of a practice-individualized plan for change using a menu of tools (e.g., chart stickers, flow sheets, reminder cards) and approaches (e.g., personnel roles, delivery of preventive care during illness visits) to enhance preventive health care. The study outcome, delivery of preventive services recommended by the U.S. Preventive Services Task Force, was determined through review of a cross-sectional sample of medical records for patients seen on a randomly selected day within two weeks of study baseline (month 0), month 6, month 12 (end of the intervention), and at follow up visits at months 18 and 24. PSD rates were calculated at each of these time points at each practice for each category of recommended services (screening, immunizations, and behavioral counseling). These three rates were then combined into a single global rate of PSD for each time point. Thirty-nine practices were randomly assigned to the intervention; the 37 practices participating in follow up for the full 24 months represent the sample for this study.
Nearly all previous studies assessing organizational capacity use a quantitative approach that relies upon participant surveys [9, 25–30]. While reflective of the experience or perspectives of those working in the practice, this approach is often limited by low response rates and may miss practice features that are not directly assessed by the items administered. To capture more fully the practice characteristics representing the conceptual domains of motivations and resources, we used a qualitative strategy based on direct observation of the practice by research team members. This process followed several steps. First, each member of the team (comprising two nurse practice change facilitators, three research nurses involved in on-site medical record review, the epidemiologist data analyst, and the physician/epidemiologist principal investigator) read extensive ethnographic field notes generated by the facilitator and research nurses from an assessment used to develop the practice-individualized intervention [20, 31]. Each team member individually rated two aspects of the practice: the amount of effort needed to motivate practice staff to undertake the intervention (an inverse measure of the practice's a priori internal motivation to change), and the amount of instrumental assistance a practice needed to implement tools and approaches designed to increase PSD (an inverse measure of the practice's innate ability to change). Both ratings were expressed using a four-point scale. To facilitate model interpretation, scoring was reversed such that a value of one represented a practice in which substantial efforts were required to motivate the practice or to assist them in performing the instrumental tasks of the intervention (i.e., low intrinsic motivation or low ability to change); a value of four reflected the need for very little effort in either motivating the practice or in assisting it (i.e., high intrinsic motivation or high ability to change).
The research team then met and shared their individual ratings. Discrepancies were resolved by discussing the practice from the diverse points of view of the team members. When necessary, original data were consulted to identify confirming or contradictory evidence for disparate ratings. As a final step, numeric ratings were added to form a single score representing the combined effort needed to motivate or to assist each practice in implementing the intervention.
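The rating-and-summing procedure described above can be sketched in code as a concrete illustration (a minimal sketch; the practice names and consensus ratings below are invented for illustration and are not study data):

```python
# Illustrative sketch of the combined capacity-for-change score.
# Each practice receives two consensus ratings on a reversed 1-4 scale:
#   motivation: 1 = substantial effort needed to motivate the practice
#               (low intrinsic motivation) ... 4 = very little effort needed
#   ability:    1 = substantial instrumental assistance needed (low ability
#               to change) ... 4 = very little assistance needed
# Hypothetical consensus ratings for three practices (not study data):
consensus = {
    "practice_A": {"motivation": 4, "ability": 3},
    "practice_B": {"motivation": 2, "ability": 2},
    "practice_C": {"motivation": 1, "ability": 1},
}

def combined_score(ratings):
    """Sum the two reversed ratings into a single 2-8 capacity score."""
    score = ratings["motivation"] + ratings["ability"]
    assert 2 <= score <= 8, "two 1-4 ratings must sum to a value in 2-8"
    return score

scores = {name: combined_score(r) for name, r in consensus.items()}
print(scores)  # practice_A -> 7, practice_B -> 4, practice_C -> 2
```

Because each component ranges from one to four, the combined score is bounded between two and eight, matching the score range reported in the results.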
To provide a preliminary test of the relationship between absolute change in PSD and the capacity for change score, we compared mean PSD values for practices in the highest and lowest tertiles of our score using Student's t-test. We then assessed this association more thoroughly using data from all study outcome assessments, made every six months, in models that accounted for repeated measures at each practice. Separate models were developed to assess the association between the combined practice capacity for change score and change in the practice rate of PSD from baseline to month 12 and from baseline to month 24. A post-hoc analysis assessed the association of an interaction term (the product of both ratings) with the outcome at both time points. A two-tailed p value < 0.05 served as the threshold for statistical significance. SPSS version 13 and HLM version 6.03 were used to perform the analyses. The University Hospitals of Cleveland Institutional Review Board approved this study, which was conducted in accordance with the Declaration of Helsinki principles.
For the group as a whole, change in PSD from baseline to completion of the intervention period (month 12) varied widely, with absolute change ranging from -1% to 21% (mean 7.6% ± 5.5); at month 24, absolute change in PSD rates ranged from -9% to 26% (mean 6.9% ± 7.0). Regarding practices' capacity for change, the full range of scores (2–8) was used, with the average score falling in the mid-range (mean 4.8 ± 1.6).
Using multiple assessments of outcomes at each intervention practice, random effects models demonstrated a 2.7% increment in PSD rates at month 12 (completion of the active intervention) for each unit increase in the practice capacity for change score (p < 0.001). This finding was similar at month 24 (3.1%; p < 0.001). To explore differences in PSD rates related to the components of the combined score, a supplemental analysis demonstrated a strong association with instrumental change capacity (3.2%, p = 0.002); a weaker association with motivation to change approached significance (2.1%, p = 0.09). A significant interaction was also observed: each one-point increase in the product of the two ratings (indicative of decreasing research team effort to motivate and to assist in instrumental tasks) was associated with an increase from baseline in PSD rates of 1.1% and 1.3% at months 12 and 24, respectively (both p values < 0.001).
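As a worked example of what these coefficients imply, the following sketch (our arithmetic only; the linear extrapolation across the full 2–8 score range is illustrative, not a study result) computes the predicted difference in PSD improvement between practices at the extremes of the capacity score:

```python
# Predicted difference in PSD improvement between two practices, using the
# reported random-effects coefficients: +2.7 percentage points per score
# unit at month 12 and +3.1 per unit at month 24.
COEF_MONTH_12 = 2.7  # percentage points of PSD change per unit of score
COEF_MONTH_24 = 3.1

def predicted_psd_difference(score_high, score_low, coef):
    """Difference in predicted PSD change between two capacity scores."""
    return (score_high - score_low) * coef

# A practice at the top of the score range (8) vs. one at the bottom (2):
diff_12 = predicted_psd_difference(8, 2, COEF_MONTH_12)  # 16.2 points
diff_24 = predicted_psd_difference(8, 2, COEF_MONTH_24)  # 18.6 points
print(f"Month 12: {diff_12:.1f} pct-pt; Month 24: {diff_24:.1f} pct-pt")
```

Under this reading, the six-unit spread in capacity scores corresponds to a predicted difference of roughly 16 percentage points in PSD improvement at the end of the intervention, and slightly more a year later.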
These results yield two insights with potential value for implementation research. First, using qualitative estimates generated by our research team, we observe significant variation among practices in the level of effort required to motivate practices to undertake change and to assist them in implementing tools and approaches to enhance preventive service delivery. We also demonstrate that variation in our estimates of practices' capacity for change correlates with differences in outcomes both at the end of an intervention and for at least 12 months thereafter. Taken together, these findings suggest that these simple measures of capacity for change have utility in predicting intervention adoption, implementation, and maintenance and that variation in capacity for change may potentially explain inconsistent results of efficacious practice-based interventions applied outside controlled trial settings.
It is not surprising that variation exists in practices' capacity for change. Previous work, for example, highlights the rich differences that characterize the health system for primary care and the many factors that contribute to its evolution in individual practice settings [10, 31–34]. Staff with particular skills, interests, and personal motivations, for example, enter and leave practices regularly, while new challenges within the larger health care system and in society continually emerge and dissipate. Acknowledging these differences across practices may be useful for implementing efficacious interventions into real world practice settings in a variety of ways. In some cases, researchers seeking to enhance their success in improving health care delivery have begun to perform initial practice assessments and to use insights from this process to guide the development of tailored interventions [20, 31, 34–36]. An assessment of practice capacity for change may also be useful in promoting greater efficiency or equity in the deployment of an intervention, depending on the goals of the research team. Practice assessments, for example, may allow for the targeting of limited resources to practices with the greatest capacity for change. If resources are less limited, it may be possible to reduce practice-level disparities in performance by targeting greatest efforts toward those with the lowest capacity.
Given the nature of this analysis, we cannot establish a causal link between practice capacity for change and implementation outcomes. Our findings, however, provide justification for future replication studies as well as those that develop and test interventions to enhance both motivational and instrumental change capacity. Rationale for future developmental work in this area is further supported by evidence of a post hoc association between preventive service delivery and an interaction between these two factors sustained over time. Recent studies in commercial business settings now inform our understanding of ways in which motivation to change might be enhanced and new work patterns might be more readily adopted and implemented [37–40]. One strategy, for example, emphasizes organizational self-reflection to first identify and later leverage existing strengths (e.g., resources, personal motivations, and relationships) to build motivation within the group to undertake a project with shared meaning. In contrast to traditional quality improvement efforts, participants begin with a positive focus on what might be, rather than one that seeks to eliminate problems or to reduce gaps. Although its effectiveness in health care settings is currently under investigation, a recent report describes efforts to apply the self-reflective or appreciative approach to improve health care delivery in primary care. Caution is advisable, however, in undertaking efforts to assess and modify motivations and abilities within a practice for the sake of greater implementation effectiveness, especially because the contribution of these features relative to that of other factors included in various conceptual models of organizational capacity for change is unclear. Previous comparative case studies of practices in STEP-UP show the possibility for surprises and missed opportunities: some practices undertake little change despite appearing highly motivated and capable of change, while others make large changes despite low capacity.
Our results should be interpreted within the context of several limitations. In the absence of a control group, we cannot exclude the possibility, for example, that unmeasured practice-level factors may have confounded the associations observed. Also, the sample of family medicine practices used may not have been representative of this diverse primary care specialty or of other primary care specialists (e.g., general internists, pediatricians) located elsewhere. Finally, we acknowledge that the qualitative estimates we used may have failed to capture important dimensions of practice capacity for change. Previous studies, for example, underscore the complexity of this construct [9, 12–14]. Future work that develops these measures further is needed to enable a clearer understanding of the meaning and contribution of practice capacity for change to the adoption and routine delivery of evidence-based care.
Greater practice capacity for change is associated with greater success in implementing and maintaining improvements in health care delivery. Efforts to acknowledge and address this practice characteristic may lead to greater intervention effectiveness and speed the dissemination of evidence-based care into community-based primary care settings.
This work was supported by grants from the National Cancer Institute (2R01 CA80862, 4R01 CA80862 & R25T-CA111898), the American Academy of Family Physicians for the Center for Research in Family Practice and Primary Care, and the VA Health Services Research and Development Service (IIR 06-091). The authors are grateful to practice members participating in STEP-UP, whose enthusiasm inspired this manuscript. The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs.
- Stone EG, Morton SC, Hulscher ME, Maglione MA, Roth EA, Grimshaw JM, Mittman BS, Rubenstein LV, Rubenstein LZ, Shekelle PG: Interventions that increase use of adult immunization and cancer screening services: A meta-analysis. Ann Intern Med. 2002, 136: 641-651.
- Davis DA, Thomson MA, Oxman AD, Haynes RB: Changing physician performance. JAMA. 1995, 274: 700-705. 10.1001/jama.274.9.700.
- Dobrow MJ, Goel V, Lemieux-Charles L, Black NA: The impact of context on evidence utilization: A framework for expert groups developing health policy recommendations. Soc Sci Med. 2006, 63: 1811-1824. 10.1016/j.socscimed.2006.04.020.
- Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F: Implementation Research: A Synthesis of the Literature. The National Implementation Research Network. 2005, Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, 1-125.
- Glasgow RE, Lichtenstein E, Marcus AC: Why don't we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. Am J Public Health. 2003, 93: 1261-1267.
- McCormack B, Kitson A, Harvey G, Rycroft-Malone J, Titchen A, Seers K: Getting evidence into practice: The meaning of 'context'. J Adv Nurs. 2002, 38: 94-104. 10.1046/j.1365-2648.2002.02150.x.
- Rycroft-Malone J, Kitson A, Harvey G, McCormack B, Seers K, Titchen A, Estabrooks C: Ingredients for change: Revisiting a conceptual framework. Qual Saf Health Care. 2002, 11: 174-180. 10.1136/qhc.11.2.174.
- Stetler CB, Ritchie J, Rycroft-Malone J, Schultz A, Charns M: Improving quality of care through routine, successful implementation of evidence-based practice at the bedside: An organizational case study protocol using the Pettigrew and Whipp model of strategic change. Implement Sci. 2007, 2: 3. http://www.implementationscience.com/content/2/1/3
- Klein KJ, Conn AB, Sorra JS: Implementing computerized technology: An organizational analysis. J Appl Psychol. 2001, 86: 811-824. 10.1037/0021-9010.86.5.811.
- Litaker D, Tomolo A, Liberatore V, Stange KC, Aron DC: Using complexity theory to build interventions that improve health care delivery in primary care. J Gen Intern Med. 2006, 21 (Suppl 2): S30-34.
- Miller WL, Crabtree BF, McDaniel R, Stange KC: Understanding change in primary care practice using complexity theory. J Fam Pract. 1998, 46: 369-376.
- Simpson DD: A conceptual framework for transferring research to practice. J Subst Abuse Treat. 2002, 22: 171-182. 10.1016/S0740-5472(02)00231-3.
- Solberg LI: Improving medical practice: A conceptual framework. Ann Fam Med. 2007, 5: 251-256. 10.1370/afm.666.
- Cohen D, McDaniel R, Crabtree BF, Ruhe MC, Weyer SM, Tallia A, Miller WL, Goodwin MA, Nutting P, Solberg LI, Zyzanski SJ, Jaen CR, Gilchrist V, Stange KC: A practice change model for quality improvement in primary care practice. J Healthc Manag. 2004, 49: 155-169.
- Benson V, Marano MA: Current estimates from the National Health Interview Survey. Vital Health Stat 10. 1994, Hyattsville, MD: National Center for Health Statistics, (198).
- Ries P: Physician contacts by sociodemographic and health characteristics, United States, 1982-1983. Vital Health Stat. 1987, Washington, DC: Government Printing Office.
- Schoenborn CA, Adams FP, Schiller JS: Summary health statistics for the U.S. population: National Health Interview Survey, 2000. Vital Health Stat 10. 2003, (214): 1-83.
- Cherry DK, Burt CW, Woodwell DA: National Ambulatory Medical Care Survey: 2001 summary. Vital and Health Statistics of the National Center for Health Statistics. 2003, 337.
- Cabana MD, Rand CS, Powe NR, Wu AW, Wilson MH, Abboud PAC, Rubin HR: Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999, 282: 1458-1465. 10.1001/jama.282.15.1458.
- Ruhe MC, Weyer SM, Stange KC, Zronek S: Facilitating practice change: Lessons from the STEP-UP clinical trial. Am J Prev Med. 2005, 40: 728-734.
- Goodwin MA, Zyzanski SJ, Zronek S, Ruhe MC, Weyer SM, Konrad N, Esola D, Stange KC: A clinical trial of tailored office systems for preventive service delivery: The Study to Enhance Prevention by Understanding Practice (STEP-UP). Am J Prev Med. 2001, 21 (1): 20-28. 10.1016/S0749-3797(01)00310-5.
- Stange KC, Goodwin MA, Zyzanski SJ, Dietrich AJ: Sustainability of a practice-individualized preventive service delivery intervention. Am J Prev Med. 2003, 25: 296-300. 10.1016/S0749-3797(03)00219-8.
- U.S. Preventive Services Task Force: Guide to Clinical Preventive Services: Report of the U.S. Preventive Services Task Force. 2nd edition. 1996, Alexandria, VA: International Medical Publishers.
- Stange KC, Flocke SA, Goodwin MA, Kelly RB, Zyzanski SJ: Direct observation of preventive service delivery in community family practice. Prev Med. 2000, 31 (2 Pt 1): 167-176. 10.1006/pmed.2000.0700.
- Courtney KO, Joe GW, Rowan-Szal GA, Simpson DD: Using organizational assessment as a tool for program change. J Subst Abuse Treat. 2007, 33: 131-137. 10.1016/j.jsat.2006.12.024.
- Fuller BE, Rieckmann T, Nunes EV, Miller M, Arfken C, Edmundson E, McCarty D: Organizational readiness for change and opinions toward treatment innovations. J Subst Abuse Treat. 2007, 33: 183-192. 10.1016/j.jsat.2006.12.026.
- Lehman WE, Greener JM, Simpson DD: Assessing organizational readiness for change. J Subst Abuse Treat. 2002, 22: 197-209. 10.1016/S0740-5472(02)00233-7.
- Rampazzo L, De Angeli M, Serpelloni G, Simpson DD, Flynn PM: Italian survey of Organizational Functioning and Readiness for Change: A cross-cultural transfer of treatment assessment strategies. Eur Addict Res. 2006, 12: 176-181. 10.1159/000094419.
- Saldana L, Chapman JE, Henggeler SW, Rowland MD: The Organizational Readiness for Change scale in adolescent programs: Criterion validity. J Subst Abuse Treat. 2007, 33: 159-169. 10.1016/j.jsat.2006.12.029.
- Stockdale SE, Mendel P, Jones L, Arroyo W, Gilmore J: Assessing organizational readiness and change in community intervention research: Framework for participatory evaluation. Ethn Dis. 2006, 16 (1 Suppl 1): S136-145.
- Crabtree BF, Miller WL, Stange KC: Understanding practice from the ground up. J Fam Pract. 2001, 50 (10): 881-887.
- McIlvain H, Crabtree BF, Medder J, Miller WL, McDaniel R, Stange KC, Solberg LI: Using "practice genograms" to understand and describe practice configurations. Fam Med. 1998, 30: 490-496.
- Miller WL, McDaniel RR, Crabtree BF, Stange KC: Practice jazz: Understanding variation in family practices using complexity science. J Fam Pract. 2001, 50 (10): 872-880.
- Stange KC: "One size doesn't fit all." Multimethod research yields new insights into interventions to improve preventive service delivery in family practice. J Fam Pract. 1996, 43 (4): 358-360.
- Jaen CR, McIlvain H, Pol L, Phillips RL, Flocke SA, Crabtree BF: Tailoring tobacco counseling to the competing demands in the clinical encounter. J Fam Pract. 2001, 50: 859-863.
- Cheater F, Baker R, Hearnshaw H, Robertson N, Hicks N, Oxman A, Flottorp S: Tailored interventions to overcome identified barriers to change: Effects on professional practice and health care outcomes. The Cochrane Library. 2003, The Cochrane Collaboration.
- Attwood M, Pedler M, Pritchard S, Wilkinson D: Leading Change: A Guide to Whole Systems Working. 2003, Bristol, UK: The Policy Press.
- Barrett F, Fry R: Appreciative Inquiry: A Positive Approach to Building Cooperative Capacity. 2005, Chagrin Falls, OH: Taos Institute Publications.
- Cooperrider DL, Sorensen PF Jr, Yeager TF, Whitney D: Appreciative Inquiry: An Emerging Direction for Organization Development. 2001, Champaign, IL: Stipes Publishing.
- Wensing M, Wollersheim H, Grol R: Organizational interventions to implement improvements in patient care: A structured review of reviews. Implement Sci. 2006, 1: 2.
- Carter C, Ruhe M, Weyer SM, Litaker D, Fry R, Stange KC: An Appreciative Inquiry Approach to Transformative Change in Health Care Settings. Qual Manag Health Care. 2007, 16 (3): 194-204.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.