Data for improvement and clinical excellence: report of an interrupted time series trial of feedback in long-term care
Implementation Science volume 9, Article number: 161 (2014)
There is considerable evidence about the effectiveness of audit coupled with feedback for provider behavior change, although few feedback interventions have been conducted in long-term care settings. The primary purpose of the Data for Improvement and Clinical Excellence-Long-Term Care (DICE-LTC) project was to assess the effects of a feedback intervention delivered to all direct care providers on resident outcomes. Our objective in this report is to assess the effect of feedback reporting on rates of pain assessment, depression screening, and falls over time.
The intervention consisted of monthly feedback reports, delivered over 13 months to all direct care providers, facility and unit administrators, and support staff in nine LTC units across four facilities. Data for the feedback reports came from the Resident Assessment Instrument Minimum Data Set (RAI) version 2.0, a standardized instrument mandated in LTC facilities throughout Alberta. The primary evaluation used an interrupted time series design with a comparison group (units not included in the feedback intervention) and a comparison condition (pressure ulcers). We used segmented regression analysis to assess the effect of the feedback intervention.
The primary outcome of the study, falls, showed little change over the period of the intervention, except for a small increase in the rate of falls during the intervention period. The only outcome that improved during the intervention period was the proportion of residents with high pain scores, which decreased at the beginning of the intervention. The proportion of residents with high depression scores appeared to worsen during the intervention.
Maintaining all nine units in the study for its 13-month duration was a positive outcome. The feedback reports alone, without any other intervention, did not achieve the desired reduction in the proportion of falls or elevated depression scores. The survey on intention to change pain assessment practice, conducted shortly after most of the feedback distribution cycles, may have acted as a co-intervention supporting the reduction in pain scores. Because the underlying data are mandated, feedback reports could be processed and delivered at relatively low cost and could be added to other intervention approaches to support implementation of evidence-based practices.
Long-term care (LTC) settings, also described as nursing homes or long-term care facilities, exist in some form in most developed countries. In Canada and the United States, institutional long-term care facilities are usually stand-alone facilities or sometimes part of a hospital. They provide ongoing, continuing care to physically frail, often cognitively impaired, residents who live there, for the most part, until their death. Many facilities provide both rehabilitation and long-term care. The focus of our project was on LTC units, not those providing post-acute care or other rehabilitation services. LTC units usually operate under resource constraints and rely heavily on non-professional staff to provide the majority of care to physically frail and typically cognitively impaired older adults. In these settings, resources for quality improvement are difficult to obtain, and trials of quality improvement interventions are relatively rare compared with other health-care settings.
There is a growing evidence base for care in LTC settings, including evidence-based approaches to pain management, falls prevention, and other important topics. However, getting evidence into practice is not well understood, and relatively little evidence exists to date on effective interventions for implementing evidence-based practices and changing provider behavior in these settings. Many interventions have been rigorously tested across multiple settings and conditions, and some evidence exists for the effectiveness of specific interventions in implementing evidence-based practice through changing provider behavior. One of these is audit combined with feedback reports (referred to as feedback interventions). Feedback interventions have demonstrated modest effects in promoting desired behavior change among health-care providers across settings and provider types. To date, few feedback interventions have been conducted in LTC settings. Those that have been conducted have included only a subset of the providers who give care to residents, typically the most educated professionals. A major barrier to using feedback interventions in LTC has been the lack of data on care processes and outcomes, but this has been addressed in many jurisdictions by the adoption of a Minimum Data Set that assesses resident care processes and outcomes.
The probable mechanism by which feedback interventions have their main effect is in providing people with information about their own performance. The result, particularly among people who have not previously received data-based feedback on their performance, may be increased motivation to change behavior. Feedback reports have often been used in conjunction with other interventions. Our intention in conducting this study was to keep the intervention simple and relatively low cost by using feedback reports alone, without other interventions. The low cost of the type of feedback intervention we studied is supported by the widespread availability of data on resident status and outcomes, which means feedback reports can be constructed without significant additional audit costs. In this paper, we report the summative resident outcomes of a feedback intervention delivered to all staff in four LTC facilities. Our primary hypothesis was that a consistent, long-duration, resident-focused feedback intervention would improve resident outcomes compared with settings that did not receive the feedback intervention and compared with an outcome not included in the feedback report.
We used an interrupted time series design as the overall intervention evaluation approach, with comparisons, to assess the effect of monthly feedback reports in nine LTC units in four facilities. We conducted a concurrent process evaluation to assess staff response to the intervention; we report those results in a separate paper, currently under review. The project received ethics approval from the Health Research Ethics Board, Committee B, at the University of Alberta and operational approval from the two LTC organizations participating in the study. The full protocol for the entire Data for Improvement and Clinical Excellence (DICE) project was published in two protocol papers. We provide brief descriptions of key elements in this paper, in which we report on the DICE-LTC (Long-Term Care) component of the full project.
As we describe in our protocol paper for a social network sub-study within DICE-LTC, we worked from an underlying conceptual model built on the Theory of Planned Behavior, in which we posited that feedback reports would work through multiple paths (attitudes and social norms in particular) to influence intention to change behavior. We were unable to measure provider behavior directly in this study, but we collected longitudinal measures of intention to change behavior, reported in the process evaluation paper. We expected an effect at the level of the LTC nursing unit, a geographic sub-unit of the entire facility, which is relatively self-contained with respect to most of the frontline, day-to-day, and shift-to-shift direct care providers, most of whom are health-care aides (HCAs). The geographic boundaries of the nursing unit are quite permeable, and other types of providers, notably those in the allied health professions including occupational, physical, and recreational therapists, social workers, and others, traverse unit boundaries. However, in our study, geographic nursing units all had a single, clearly identified unit manager and were understood by staff in all four facilities as important and relatively independent components of the organization. Our intervention involved handing feedback reports to individual providers of all kinds, as there were few opportunities to deliver reports to groups of staff. We did, however, expect that providers would talk to each other about the reports, and we asked them questions to elicit how much they discussed the reports, as well as some information about the purposes of those discussions, described in the process evaluation paper.
Settings and sample
The project was conducted in nine LTC nursing units in four continuing care facilities in Edmonton, Alberta, Canada. The four facilities were part of two organizations providing a range of continuing care services in the community. All four facilities had implemented the Resident Assessment Instrument Minimum Data Set (RAI) version 2.0 (http://www.interrai.org), although two facilities had implemented RAI 2.0 between 3 and 5 years prior to the beginning of the DICE-LTC feedback intervention, while the other two had only implemented this system between 6 and 12 months prior to the project beginning. We initially hypothesized that maturity of RAI assessments might be an important factor in the uptake and effectiveness of the feedback reports.
The feedback reports were developed during a pilot study conducted in two nursing homes in the Edmonton area in late 2007 and early 2008. We used data from the RAI 2.0 to measure resident-level outcomes, and this served as the data source for the feedback reports. The RAI 2.0 assessment tool covers a wide range of process and outcome data at the individual resident level, and assessments are updated quarterly for each resident. We reported on measures of pain frequency and intensity, risk for and occurrence of falls, and depression prevalence, all aggregated to the unit level. These three areas were among the top eight domains identified as priorities through the pilot project and were agreed upon by senior leadership in both participating organizations. Data were extracted at the resident level from vendor servers every month and stripped of personal identifiers, except for the unit on which each resident lived, before being sent to the research team.
Following definitions used by the Canadian Institute for Health Information, we derived the pain scale from two items on the RAI 2.0 assessment, one measuring pain frequency and the other measuring pain intensity. The scale is scored 0 for no pain, 1 for pain less than daily, 2 for daily pain of moderate intensity, and 3 for daily severe pain. The depression rating scale is scored on a 0 to 14 range and is derived from seven RAI 2.0 items, from making negative statements to crying and tearfulness. A score of 3 or more indicates a possible depressive disorder, and such residents should be further evaluated. Falls were defined as any fall occurring in the 31 to 180 days prior to the current assessment, and falls risk was defined as a combination of requiring assistance for locomotion, problems with balance, dizziness or vertigo, and unsteady gait. The assessments were completed by staff in each facility following the comprehensive instructions and training required for RAI 2.0 assessments. We aggregated these measures into proportions for the residents on each of the nine units.
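As a minimal sketch of the scoring rules above (not the study's actual code), the pain scale, the depression threshold, and the unit-level aggregation can be expressed as simple functions. The item encodings are assumptions, and we assume daily pain that is not severe scores 2:

```python
def pain_scale(frequency: int, intensity: int) -> int:
    """Pain scale sketch (assumed item encodings).

    frequency: 0 = no pain, 1 = pain less than daily, 2 = daily pain
    intensity: 1 = mild, 2 = moderate, 3 = severe
    Returns 0 (no pain), 1 (less than daily), 2 (daily, not severe),
    or 3 (daily severe).
    """
    if frequency == 0:
        return 0
    if frequency == 1:
        return 1
    return 3 if intensity == 3 else 2


def possible_depression(item_scores: list) -> bool:
    """Depression rating scale: sum of the seven RAI 2.0 items (0-14 total);
    a total of 3 or more flags a possible depressive disorder."""
    return sum(item_scores) >= 3


def unit_proportion(flags: list) -> float:
    """Aggregate resident-level flags to a unit-level proportion."""
    return sum(flags) / len(flags)


# Example: three residents on one unit; the analysis below uses the
# proportion with pain scores greater than 2.
scores = [pain_scale(2, 3), pain_scale(1, 2), pain_scale(0, 0)]  # [3, 1, 0]
print(unit_proportion([s > 2 for s in scores]))  # 1 of 3 residents
```

This mirrors the paper's unit-level outcome construction: score each resident from their current assessment, then report the proportion above the threshold for the unit.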
Reports were primarily graphic, on one sheet of paper, front and back, printed in color, and included a cover sheet with details about the data sources and the comparison units. We provided a sample of this report in the DICE-LTC protocol paper. We provided feedback using monthly time points from months 2 to 11, after which we switched to showing quarterly time points for months 12 and 13. Reports were hand-delivered by project staff, all research assistants with minimal training, in each of the nine LTC units during a consistent week in each of the 13 months of the intervention period. Each report was specific to the unit, and all direct care providers and unit managers received the unit-specific reports. Facility administrators received reports for each of their units prior to report distribution on the units. We included facility administrators, nurse managers, and frontline direct care staff, including registered nurses, licensed practical nurses, HCAs, physical therapists, recreational therapists, occupational therapists, pharmacists, social workers, and other allied health providers. The goal of the feedback report distribution was to ensure that frontline staff received the reports directly.
In addition to the feedback intervention, research assistants also distributed and collected post-feedback surveys in each intervention unit as part of an extensive process evaluation. In the first section of the post-feedback survey, we asked questions about response to the reports. In another section, we asked about intentions to change behavior with respect to assessing pain among residents. A sample survey instrument is provided in the protocol paper, and the results detailing provider response to the feedback reports are reported in the process evaluation paper.
In addition to the data included in the feedback reports delivered to participating units and facilities, we also requested data from the same period for nine units in three additional facilities, matched as closely as possible to the facilities participating in the study. All three comparison facilities came from one of the two larger organizations participating in the study. These provided comparison units to control for secular trend over the baseline, intervention, and follow-up periods. We also included pressure ulcer prevalence, a quality indicator not included on the feedback reports, as an additional check on secular trends within the participating units.
Our primary analysis used segmented regression on the interrupted time series data. In this approach, all data are aggregated to a single observation at multiple time points at equal intervals; in our case, months. Given a data set aggregated to equal-interval time points, we specify our linear regression model as:

Y_t = b0 + b1 × time_t + b2 × feedback_t + b3 × (time after intervention starts)_t + b4 × (end of intervention)_t + b5 × (time after intervention ends)_t + e_t

where:

Y_t is the outcome in month t;

time is the time in months at time t from the start of the observation period; it ranges from 1 to 25 months;

feedback is an indicator for time t occurring before (feedback = 0) or after (feedback = 1) the start of the feedback reports, which were implemented in January 2009 (month 7) in the series;

time after intervention starts is the number of months after the intervention starts at time t, coded 0 before the feedback intervention starts and (time − 6) after it starts;

end of intervention is an indicator for time t occurring before (0) or after (1) the end of the feedback intervention, which came after January 2010 (month 19) in the time series;

time after intervention ends is the number of months after the intervention ends at time t, coded 0 before the end of the feedback reports and (time − 19) after the end of the feedback reports;

e_t is the error term at time t, representing the unexplained random variation in the model.
In this model, we estimated the regression parameters as follows:

b0 is the baseline level of the outcome;

b1 is the slope of the regression line (trend line) prior to the feedback intervention;

b2 is the change in level immediately following the start of the feedback reports;

b3 is the change in regression slope during the feedback report period;

b4 is the change in level immediately after the feedback reports end;

b5 is the change in regression slope in the post-feedback period.
We conducted separate segmented regression analyses for the intervention and comparison facilities for each of the four outcomes: proportion of residents with pain scores greater than 2, proportion with depression scores greater than 3, proportion with falls, and proportion with pressure ulcers. We provide both visual representations of the results, in the form of time series graphs, and tables of the parameter estimates from the regression analyses for statistical inference. We tested the residuals from each regression analysis for autocorrelation using the Durbin-Watson test, and where this was significant, we used Prais-Winsten regression to adjust for autocorrelation. The Durbin-Watson test checks for first-order serial autocorrelation due to repeated measures (AR1); Prais-Winsten regression adjusts for this by assuming an AR1 error term. When the Durbin-Watson test did not show significant autocorrelation, we present the results of ordinary least squares (i.e., standard linear) regression of the time series data. In each month, we included only residents who had received a new assessment during that month, which lessened the degree of autocorrelation between months because different resident data were included each month. We used the same approach in the monthly feedback reports, to increase the degree to which the data changed from month to month and provided new information.
We initially powered the study on falls as the primary outcome, using data reported in the literature on the prevalence of falls in LTC settings similar to those included in our study and on effect sizes for changes in falls due to quality improvement interventions similar to our planned feedback intervention. Our original power calculations did not account for the interrupted time series design; instead, we used a standard approach with an adjustment for clustering. Our final sample included all available LTC units in the four facilities participating in the project. Although the number of residents across these units met the sample size we calculated when designing the project, that calculation did not take the interrupted time series analysis into account and may therefore have overestimated our power. In addition to the primary analyses, we also conducted secondary analyses dividing the intervention units into those with mature RAI 2.0 systems and those with recent implementations of RAI 2.0. We had included secondary hypotheses about the effects of mature vs. new data systems in our original proposal.
We did not risk-adjust our outcomes. We were focused on reporting observed data to the staff providing care to residents, and while risk adjustment might be necessary for comparing across units for performance management or reporting purposes, for quality improvement, staff needed to understand the experience of residents in their care. The data used in this summative evaluation are the same as those provided to staff throughout the feedback intervention. We included data from 6 months prior to the start of the feedback intervention, which began in January 2009, as pre-intervention data, and from 6 months after the end of the intervention, as post-intervention data. The timeline in Figure 1 shows the timing of data extraction, feedback report delivery, and post-intervention data.
Variables included in the analysis
Following the segmented regression analysis approach, the only variables included in these analyses are the time points based on the study timeline and outcome data at each time point. The intervention and comparison units are implicit in the different regression analyses.
Over the full 13 months of the intervention, we reported data on over 500 unique residents in nine LTC nursing units in four facilities participating in the feedback intervention. In Table 1, we present data describing the resident sample from the intervention units at four relevant time periods: at the beginning of the pre-intervention period (July 2008), at the beginning of the intervention period (January 2009), at the end of the intervention period (January 2010), and at the end of the post-intervention period (July 2010). As generally observed among LTC residents, the majority were female and most were widowed. In Table 2, we provide similar data for residents from the comparison units. Numbers are similar in each period, as is the proportion of female residents. In Tables 3 and 4, we show the number of residents included in the monthly feedback reports by nursing unit and facility at important time points in both the intervention and comparison sites.
Segmented regression analysis
We initially display the time series as graphs in Figures 2, 3, 4, and 5. These graphs allow the reader to evaluate the absolute levels of the outcomes in each instance, as well as the change over time. In each graph, the blue arrows indicate the beginning and end of the intervention period. In Tables 5 and 6, we summarize the findings from the segmented regression analyses for the intervention and comparison sites, respectively. The Durbin-Watson statistic was significant only for one outcome, falls, in the intervention sites. As a result, we present the Prais-Winsten regression results for falls. The other regression results are all from ordinary least squares regression.
The coefficient estimate for the change in regression slope during the intervention period (b3) in the intervention sites is significant but positive, indicating that the rate of falls increased more during the intervention than it had prior to the intervention, counter to our expectation that falls would decrease during the intervention period. None of the other coefficient estimates for falls were significant.
For the proportion of residents with high pain scores, by contrast, the coefficient estimate for the change in level when the intervention started (b2) is significant and negative, indicating a decrease in the level of pain scores at the beginning of the intervention. Although the coefficient estimate for the change in regression slope during the intervention phase (b3) was negative, it was not statistically significant, indicating that the slope during the intervention period did not differ significantly from the slope prior to the intervention; none of the other parameter estimates were significant.
For the proportion of residents with high depression scores, the pre- and post-intervention regression slope coefficient estimates (b1 and b5, respectively) were negative and non-significant, but the change in slope during the intervention period (b3) was positive and significant, as it was for falls. Again, this was counter to our expectation.
For the proportion of residents with pressure ulcers, a control condition not included in our feedback reports, no parameter estimates were significant. This matched our expectation of no change in an outcome on which no feedback was given.
In the comparison sites, there were no significant changes, either positive or negative, in any of the coefficient estimates for any of the indicators. This is what we expected, because we did not deliver feedback reports in any of these LTC units.
In our secondary analyses of units with more mature vs. more recent implementation of RAI 2.0 (provided as Additional files 1 and 2), we saw a mixed pattern of results across the different outcomes, with little statistical significance. As a result, it does not appear that maturity of data systems had a consistent effect on outcomes.
We report on a feedback intervention that we were able to deliver consistently over a 13-month period across nine LTC units in four facilities. Given the low level of resources available in these facilities, completing the intervention with all nine original units, and with a relatively high rate of participation among providers, is an important achievement. Attrition is a critical problem for quality improvement interventions in LTC facilities, usually attributed to high rates of staff turnover, low levels of education, and high workload, and many quality improvement interventions have not been able to retain all participating units for the full project.
Based on our primary outcome, falls, the feedback intervention was not effective in changing resident outcomes. We had hoped to demonstrate a reduction in falls as a result of the intervention. Instead, we found no immediate change in the level or number of falls at the outset of the intervention, when the effect of feedback might be highest, and a modest but significant increase in the rate of falls over the intervention period. This latter finding was contrary to our expectation. The lack of any significant change in either the control condition (pressure ulcers) or comparison sites suggests that there were few, if any, secular trends during the intervention period which would have changed resident outcomes.
In strict terms, then, we did not find the positive effect of feedback interventions alone that we had hoped to find. This is consistent with the literature on feedback interventions, which suggests that feedback alone is unlikely to produce enough provider behavior change to alter outcomes. The effect is mediated significantly by the degree of uptake of the feedback report, the conditions under which it is received, and other factors, based on the Theory of Planned Behavior, that might result in behavior change following receipt of a feedback intervention. In addition to factors related to the Theory of Planned Behavior, which include attitudes and beliefs, social and professional norms, and perceived behavioral control, factors related to the feedback intervention itself are likely important. These include the timeliness of the data, the perceived validity of the data, the source of the feedback, and other factors such as the perceived sign of the information in the report (feedback perceived as positive, negative, or neutral). We describe findings related to a number of these factors in the process evaluation paper. Briefly, a high proportion of the target provider groups received, read, and understood the feedback report. A smaller proportion discussed it with other staff on the unit. Many participants expressed interest in receiving further data, as well as further information about how to make changes to benefit residents, but providing that information was beyond the scope of this project. The lack of time and opportunity for activities other than direct resident care was a serious and continual constraint on our activities.
Despite that, we note that there was a significant change in pain scores at the beginning of the intervention period, in the direction we had expected. It is notable that the post-feedback survey we administered 1 week after each feedback report distribution asked a set of questions about intention to change pain assessment behavior among direct care staff. We report on these findings in the companion process evaluation paper, but note that the survey may have unintentionally worked as a co-intervention with the feedback report, as it may have focused participant attention on pain assessment. While this was an unexpected effect, not one we intentionally designed into the intervention, similar results have been observed previously, and there is a literature on what is called “mere measurement” or the “question-behavior effect”. This is an intriguing finding that deserves follow-up work in future implementation studies.
We focused on pain assessment because it had been ranked highly as an important quality improvement issue across all provider groups in our pilot study and had been the subject of attention in both organizations participating in the intervention. Managers strongly endorsed pain assessment as an important issue for measurement, and prior to initiating the feedback intervention, several sites had engaged in education about pain assessment and management. The graphical time series for this outcome depicted in Figure 3 shows that the intervention sites demonstrated a decrease in proportion of residents with high pain scores at the outset of the intervention, followed by a period of several months of low scores which slowly climbed again towards the end of the intervention period. In the comparison sites, there was a later drop in scores, but then an inconsistent pattern of increasing and decreasing scores over most of the intervention period. From the coefficient estimates, we see that the change in scores at the beginning of the intervention period was significant for the intervention units, while the changes for the comparison units were not, even though the comparison units were given the same education, unrelated to our intervention, prior to the intervention period.
An important note is that this is, to our knowledge, the first attempt to deliver feedback reports to all direct care providers in LTC settings, including health-care aides. Most previous studies have excluded aides, often with little justification. The justifications that have been offered frequently include feasibility, lack of educational preparation, and the assumption that the primary drivers of quality improvement are professional providers, usually registered or licensed practical nurses. We argue that because aides are the majority of providers in LTC settings, excluding them from involvement in quality improvement interventions may have unintended negative consequences and may decrease the effectiveness of the intervention. In this report, and in the companion process evaluation paper, we have demonstrated that including aides is feasible and that aides report being able to understand feedback reports.
Although this was one of the longest and most intensive studies of a feedback intervention in LTC conducted to date, with a strong process evaluation concurrent with a rigorous quasi-experimental design, our inferences are limited by a number of important issues. First, we were unable to obtain data linking intention to change to actual behavior change. We decided not to request identifying data on individual providers responding to surveys, to avoid raising concerns about confidentiality and coercion, and we did not have data that would have allowed us to observe actual practice change leading to resident outcomes. Examples of such data include drug prescription and administration data for pain management, interventions undertaken to reduce resident risk of falls, prescribing and administering antidepressant medication, or increases in activities to reduce resident isolation. Alternative approaches to data collection might have included more intensive observation of resident care, timed to coincide with report distribution. Prescription data, and data about direct interventions for residents, are not part of the RAI 2.0 assessment system in the facilities in which we conducted the project, making it impossible to obtain these data without considerable cost and identification of residents. We discussed more intensive observation of resident care with the decision makers on our research team, but the possible invasion of resident privacy, along with the burden placed on busy, often crowded, and sometimes stressful environments, precluded conducting more observations.
Another important limitation was the relatively short time series. Ideally, an interrupted time series design would include at least 40-50 time points measuring the dependent variables. We were able to include only 25 time points, which resulted in less than optimal power. The main constraint was the lack of data prior to the start of the intervention: in two of the facilities in which we conducted the intervention, RAI assessments had begun only shortly before the intervention started, and we collected as much pre-intervention data as possible. After the intervention ended, budget cuts reduced the number of staff available to support data extraction in both participating organizations, making it very difficult to obtain follow-up data. Difficulties of this kind are common in LTC, making it difficult to find resources to conduct intensive quality improvement interventions. As a result of the realities of data access in this study, we had a time series with 6 time points before the intervention started, 13 during the intervention, and 6 after the intervention ended; ideally, we would have had a minimum of 10 time points in each phase. In addition, the number of residents included in the measures at each time point varied quite widely, from 111 to 157, which created considerable variability and may have affected the stability of the time series and the level of uncertainty around each time point.
Most prior quality improvement efforts in LTC settings have not included control or comparison groups, and most have used pre- and post-intervention designs that make causal attribution very difficult. We included two different kinds of comparisons to guard against both attention effects and secular trend, and we ruled out secular trend as a cause of the significant changes we observed. We also saw relatively little change in the desired direction as a result of this modest, intentionally parsimonious intervention, and the one change in the desired direction we did find may have been related to an unintended co-intervention, namely the focus on pain assessment in our measurement instruments. It is difficult, based on our findings, to argue for continuing to conduct feedback interventions alone in LTC settings, although feedback reports can be an important building block in intervention design. Especially when data already exist, feedback can be a relatively low-cost component of an intervention and may enhance the effect of additional components such as education, goal setting, and other relevant behavior change techniques.
We believe that our findings provide qualified evidence for managers on the usefulness of existing data from sources like the RAI 2.0, which are increasingly available in many jurisdictions. A major impetus for our project was the desire to use the data that staff so arduously collect on a quarterly basis to improve the quality of care and quality of life of residents. We have shown that this work is feasible, although achieving significant outcome improvement may require additional intervention beyond feedback alone.
AES conceived of the study, drafted and revised the manuscript, and is responsible for the overall analysis. CS conceived of the study, reviewed and contributed to drafts, and shares responsibility for its conduct. MABB conducted the data analysis and reviewed and critically revised the manuscript. KDF reviewed and critically revised the manuscript. All authors read and approved the final manuscript.
Getting on with Better Health Care. 2005, Alberta Health and Wellness, Edmonton
Duffy A: The assessment and management of falls in residential care settings. Br J Nurs. 2013, 22 (5): 259-263. 10.12968/bjon.2013.22.5.259.
Farless LB, Ritchie CS: Challenges of pain management in long-term care. Ann Longterm Care. 2012, 20 (5): 32-38.
MacLaurin A, McConnell H: Utilizing quality improvement methods to prevent falls and injury from falls: enhancing resident safety in long-term care. J Saf Res. 2011, 42 (6): 525-535. 10.1016/j.jsr.2011.10.005.
Gordon AL, Logan PA, Jones RG, Forrester-Paton C, Mamo JP, Gladman JRF: A systematic mapping review of randomized controlled trials (RCTs) in care homes. BMC Geriatr. 2012, 12: 31. 10.1186/1471-2318-12-31.
Sales AE, Bostrom AM, Bucknall T, Draper K, Fraser K, Schalm C, Warren S: The use of data for process and quality improvement in long term care and home care: a systematic review of the literature. J Am Med Dir Assoc. 2012, 13 (2): 103-113. 10.1016/j.jamda.2011.01.004.
Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L, Grilli R, Harvey E, Oxman A, O'Brien MA: Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001, 39 (8 Suppl 2): 2-45.
Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, Whitty P, Eccles MP, Matowe L, Shirran L, Wensing M, Dijkstra R, Donaldson C: Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004, 8 (6): 1-72. iii-iv
Grimshaw J, Eccles M, Tetroe J: Implementing clinical guidelines: current evidence and future implications. J Contin Educ Health Prof. 2004, 24 (Suppl 1): S31-7. 10.1002/chp.1340240506.
Shojania KG, Grimshaw J: Still no magic bullets: pursuing more rigorous research in quality improvement. Am J Med. 2004, 116 (11): 778-780. 10.1016/j.amjmed.2004.03.003.
Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, O'Brien MA, Johansen M, Grimshaw J, Oxman AD: Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012, 6.
Jamtvedt G, Young JM, Kristoffersen DT, O'Brien MA, Oxman AD: Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2006, 2 (2).
Rantz MJ, Popejoy L, Petroski GF, Madsen RW, Mehr DR, Zwygart-Stauffacher M, Hicks LL, Grando V, Wipke-Tevis DD, Bostick J, Porter R, Conn VS, Maas M: Randomized clinical trial of a quality improvement intervention in nursing homes. Gerontologist. 2001, 41 (4): 525-538. 10.1093/geront/41.4.525.
Hirdes JP: Addressing the health needs of frail elderly people: Ontario's experience with an integrated health information system. Age Ageing. 2006, 35 (4): 329-331. 10.1093/ageing/afl036.
Hirdes JP, Sinclair DG, King J, Tuttle P, McKinley J: From anecdote to evidence: complex continuing care at the dawn of the information age in Ontario. 2014, [http://www.milbank.org/uploads/documents/canada.html]
Hirdes JP: Quality control in nursing homes. CMAJ. 1999, 161 (2): 127.
Carpenter GI, Hirdes JP, Ribbe MW, Ikegami N, Challis D, Steel K, Bernabei R, Fries B: Targeting and quality of nursing home care. A five-nation study. Aging (Milano). 1999, 11 (2): 83-89.
Kluger AN, DeNisi A: Feedback interventions: toward the understanding of a double-edged sword. Curr Dir Psychol Sci. 1998, 7 (3): 67-72. 10.1111/1467-8721.ep10772989.
Kluger AN, Van Dijk D: Feedback, the various tasks of the doctor, and the feedforward alternative. Med Educ. 2010, 44 (12): 1166-1174. 10.1111/j.1365-2923.2010.03849.x.
Gardner B, Whittington C, McAteer J, Eccles MP, Michie S: Using theory to synthesise evidence from behaviour change interventions: the example of audit and feedback. Soc Sci Med. 2010, 70 (10): 1618-1625. 10.1016/j.socscimed.2010.01.039.
Slamet D: Business case for RAI 2.0 implementation: a jurisdictional review of outcomes observed from RAI v2.0 implementation. 2005, Fraser Health Region, Vancouver, B.C.
Sales AE, Schalm C: Data for improvement and clinical excellence: protocol for an audit with feedback intervention in long-term care. Implement Sci. 2010, 5: 74. 10.1186/1748-5908-5-74.
Fries BE, Simon SE, Morris JN, Flodstrom C, Bookstein FL: Pain in US nursing homes: validating a pain scale for the minimum data set. Gerontologist. 2001, 41 (2): 173-179. 10.1093/geront/41.2.173.
Burrows AB, Morris JN, Simon SE, Hirdes JP, Phillips C: Development of a minimum data set-based depression rating scale for use in nursing homes. Age Ageing. 2000, 29 (2): 165-172. 10.1093/ageing/29.2.165.
Lagarde M: How to do (or not to do)…assessing the impact of a policy change with routine longitudinal data. Health Policy Plan. 2012, 27 (1): 76-83. 10.1093/heapol/czr004.
Thornton DL: A note on the efficiency of the Cochrane-Orcutt estimator of the AR(1) regression model. J Econom. 1987, 36 (3): 369-376. 10.1016/0304-4076(87)90008-X.
Magee L: Efficiency of iterative estimators in the regression model with AR(1) disturbances. J Econom. 1985, 29 (3): 275-287. 10.1016/0304-4076(85)90156-3.
Godin G, Bélanger-Gravel A, Eccles M, Grimshaw J: Healthcare professionals' intentions and behaviours: a systematic review of studies based on social cognitive theories. Implement Sci. 2008, 3: 36. 10.1186/1748-5908-3-36.
Godin G, Sheeran P, Conner M, Germain M: Asking questions changes behavior: mere measurement effects on frequency of blood donation. Health Psychol. 2008, 27 (2): 179-184.
Wood C, Conner M, Sandberg T, Godin G, Sheeran P: Why does asking questions change health behaviours? The mediating role of attitude accessibility. Psychol Health. 2014, 29 (4): 390-404. 10.1080/08870446.2013.858343.
Michie S, Ashford S, Sniehotta FF, Dombrowski SU, Bishop A, French DP: A refined taxonomy of behaviour change techniques to help people change their physical activity and healthy eating behaviours: the CALO-RE taxonomy. Psychol Health. 2011, 26 (11): 1479-1498. 10.1080/08870446.2010.540664.
Michie S, Johnston M, Francis J, Hardeman W, Eccles M: From theory to intervention: mapping theoretically derived behavioural determinants to behaviour change techniques. Appl Psychol. 2008, 57 (4): 660-680. 10.1111/j.1464-0597.2008.00341.x.
We gratefully acknowledge the intellectual input from the full research team for this project: Marian Anderson, Melba Andrea Baylon, Anne-Marie Bostrom, Thorsten Duebel, Kari Elliott, Carole Estabrooks, Kimberly Fraser, Gloria Gao, Vivien Lai, Kaila Lapins, Lili Liu, Suzanne Maisey, Anastasia Mallidou, Lynne Mansell, Colleen Maxwell, Joshua Murray, Iris Neumann, Sharon Warren. The writing group for this paper consists of the project research lead (AES) and decision maker lead (CS).
We also acknowledge funding for this project from the Canadian Health Services Research Foundation and the Alberta Heritage Foundation for Medical Research. Neither funding agency was involved in drafting this manuscript, nor is either agency involved in the conduct of the project. AES was a faculty member in the Faculty of Nursing at the University of Alberta when this project was conducted.
The authors declare that they have no competing interests. AES is co-Editor-in-Chief of Implementation Science and had no role in the review of this manuscript.
Electronic supplementary material
Additional file 1: The TIDieR (Template for Intervention Description and Replication) Checklist. Information to include when describing an intervention and the location of the information. (PDF 130 KB)
Additional file 2: Segmented regression on interrupted time series comparing organization 1 (mature use of RAI data) with organization 2 (recent adoption of RAI). Additional analyses for interested readers. (PDF 56 KB)
Sales, A.E., Schalm, C., Baylon, M.A.B. et al. Data for improvement and clinical excellence: report of an interrupted time series trial of feedback in long-term care. Implementation Sci 9, 161 (2014). https://doi.org/10.1186/s13012-014-0161-5