
A pragmatic cluster randomised controlled trial of a Diabetes REcall And Management system: the DREAM trial

Abstract

Background

Following the introduction of a computerised diabetes register in part of the northeast of England, care initially improved but then plateaued. We therefore enhanced the existing diabetes register to address this plateau in care. The aim of the trial was to evaluate the effectiveness and efficiency of an area-wide 'extended' computerised diabetes register incorporating a full structured recall and management system, including individualised patient management prompts to primary care clinicians based on locally adapted, evidence-based guidelines.

Methods

The study design was a pragmatic, cluster randomised controlled trial, with the general practice as the unit of randomisation. The trial was set in 58 general practices in three Primary Care Trusts in the northeast of England. The study outcomes were the clinical process and outcome variables held on the diabetes register, patient-reported outcomes, and service and patient costs. The effect of the intervention was estimated using generalised linear models with an appropriate error structure. To allow for the clustering of patients within practices, population averaged models were estimated using generalised estimating equations.

Results

Patients in intervention practices were more likely to have at least one diabetes appointment recorded (OR 2.00, 95% CI 1.02, 3.91), to have a recording of a foot check (OR 1.87, 95% CI 1.09, 3.21), to have a recording of receiving dietary advice (OR 2.77, 95% CI 1.22, 6.29), and to have a recording of blood pressure (BP) (OR 2.14, 95% CI 1.06, 4.36). There was no difference in mean HbA1c or BP levels, but the mean cholesterol level in patients from intervention practices was significantly lower (-0.15 mmol/l, 95% CI -0.25, -0.06). There were no differences in patient-reported outcomes, in patient-reported use of drugs, or in uptake of health services. The average cost per patient was not significantly different between the intervention and control groups. Costs incurred in administering the system at the register and in general practice were in addition to these.

Conclusion

This study has shown benefits from an area-wide, computerised diabetes register incorporating a full structured recall and individualised patient management system. However, these benefits were achieved at a cost. In future, these costs may fall as electronic data exchange becomes a reliable reality.

Trial registration: International Standard Randomised Controlled Trial Number (ISRCTN) Register, ISRCTN32042030.


Background

There is broad, international agreement over what constitutes high-quality health care for people with diabetes [1, 2]. In the United Kingdom (UK), this has been captured in a National Service Framework for people with diabetes [3]. At the time of setting up the Diabetes REcall And Management system (DREAM) trial, computerised central recall systems for patients and their family doctors were supported by evidence from a 1999 systematic review [4]. However, that evidence base was limited to patient- rather than practice-randomised trials, in selected practice samples, and without economic evaluation. Thus the effectiveness of an area-wide, patient-focussed, structured recall and management system (in terms of process of care, patient outcome, and economic impact) remained unknown. A more recent systematic review of quality improvement interventions for patients with diabetes showed that a range of different interventions resulted in small to modest improvements in glycaemic control and in provider adherence to optimal care [5]. Across 59 studies (only five from the UK), the review reported a median absolute reduction in serum HbA1c of 0.48% and a median absolute increase in provider adherence of 4.9%. However, it also identified important methodological concerns, with larger studies and randomised studies showing smaller benefits than smaller or non-randomised ones, a pattern that strongly suggests the presence of publication bias. Studies in the highest quartile of sample size reported a median reduction in serum HbA1c of only 0.10%.

Within the review's taxonomy of interventions, the categories of "provider reminders" and "audit and feedback" most closely approximate the intervention in this study. Across 14 trials examining one or both of these interventions, the review found median improvements in provider adherence of between 4% and 8%, and improvements in HbA1c of around 0.1% [5]. It also examined 38 comparisons involving some form of clinical information system to deliver the intervention, finding no incremental benefit for any particular informatics function (i.e., decision support, auditing clinical performance, reminder systems) over and above delivering the function without an informatics system.

Following the introduction of a computerised diabetes management system in three (then) Primary Care Group areas in the northeast of England, care initially improved but then plateaued, a phenomenon also reported by others [6, 7]. At the point this assessment of care was performed, the measures of care were restricted to documenting the performance of various actions (e.g. measurement of BP) rather than documenting the values obtained. We postulated that the plateauing reflected clinicians failing to deliver appropriate clinical interventions, because of a lack of coordination (i.e., patients being lost to follow-up) and either a lack of awareness of appropriate care or forgetting to deliver all that was required when patients were seen. We therefore developed the diabetes register system to address these problems.

This study aimed to evaluate, within a pragmatic, cluster randomised controlled trial design, the effectiveness and efficiency of an area-wide, 'extended' computerised diabetes register incorporating a full structured recall and management system, actively involving patients, and including individualised patient management prompts to primary care clinicians based on locally adapted, evidence-based guidelines.

Methods

The study methods are summarised here and reported in detail elsewhere [8].

Study general practices and registers

The study general practices were those in three Primary Care Trusts (PCTs) served by two district hospital-based diabetes registers, both using the same register software. When the study was designed, it was based in three PCTs (all of which agreed to participate) served by a single register. However, the withdrawal of one of these PCTs necessitated the recruitment of a replacement PCT served by a second register. Several factors led to this withdrawal. Although we had the appropriate administrative approval, it became apparent when the trial began that the administrative authority could not secure the cooperation needed for all of its GPs to participate in the trial. Consequently, we had to enrol individual practices directly (rather than via the PCT), which resulted in fewer practices enrolling and put us at risk of not achieving our required sample size. We recruited a further PCT to address this problem; however, the original PCT then suspended its involvement with the diabetes register and its practices had to be excluded from the study. This was a deviation from the published protocol.

Study patients

Study patients were people with type 2 diabetes appearing on the registers, aged over 35 years and receiving diabetes care either exclusively from study general practices or shared between study general practices and hospital. At the time of the study, approximately 20% of patients received both general practitioner (GP) and specialist care, though there was no formal shared-care scheme in operation in the PCTs studied.

Study design, outcomes and power

The study was a pragmatic two-arm cluster randomised controlled trial with the general practice as the unit of randomisation. Randomisation was performed using electronically-generated random numbers by the study statistician and was stratified by PCT and practice size.

The study outcomes were: the clinical process and outcome variables held on the diabetes registers; patient reported outcomes (the SF36 health status profile [9–11], the Newcastle Diabetes Symptoms Questionnaire [12], and the Diabetes Clinic Satisfaction Questionnaire [13]); and service and patient costs. Patients have been shown to be able to report cost data reliably [14].

As this was a quality of care study interested in a range of measures of care, it was important to use as study outcomes the routinely available process and outcome measures on which clinicians alter patients' care. Our power calculation was based on indicative process and outcome variables. The intra-cluster correlation coefficient (ICC) for measures of process (whether a blood pressure or an HbA1c measurement had been recorded in a 12-month period), calculated from local data, was 0.14. Therefore, to detect a difference of 15% (42.5% v 57.5%) in a binary variable with 80% power, assuming a significance level of 5%, required 60 practices each contributing 30 patients [15]. The sample size for the outcome of care variables was based on the SF-36. Previous work had shown that where this type of intervention produces an effect, it is likely to produce an effect size of approximately 0.25 in such measures [16], and that the ICCs for such measures would be approximately 0.07 [17]. A final sample of 27 patients from each of 61 practices would give 85% power to detect an effect size of 0.25, assuming a significance level of 5%. Assuming a response rate of 70%, the starting sample size was 2379 patients (approximately 39 patients per practice).
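To make the arithmetic behind these figures explicit, the short sketch below (not part of the original paper) reproduces the cluster-randomised sample size logic for the binary process outcome: a standard two-proportion calculation inflated by the design effect 1 + (m - 1) × ICC. It assumes the conventional normal-approximation formula, so it recovers roughly, rather than exactly, the 60 practices quoted above.

```python
# Illustrative sketch only: cluster-randomised sample size for a binary outcome.
from scipy.stats import norm

def practices_per_arm(p1, p2, icc, m, alpha=0.05, power=0.80):
    """Approximate number of clusters (practices) needed per trial arm."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # individually randomised sample size per arm for two proportions
    n_individual = z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    deff = 1 + (m - 1) * icc        # design effect for cluster size m
    n_clustered = n_individual * deff  # patients needed per arm under clustering
    return n_clustered / m             # practices per arm

# Binary process outcome: 42.5% v 57.5%, ICC = 0.14, 30 patients per practice
per_arm = practices_per_arm(0.425, 0.575, icc=0.14, m=30)
print(round(per_arm))  # ~29 practices per arm, close to the 60 practices quoted in total
```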

Data collection

We collected process data for the 12 months preceding the start of the intervention and for the 15 months of the intervention period (1st April 2002 to 30th June 2003). All data were extracted from the registers at the end of the intervention period. Prescription data were similarly collected, but, because of problems reliably determining the date of initiation of prescriptions, we collected drug data back to the point at which a study patient first appeared on the register. We gathered data on patient reported outcomes by postal questionnaire at the end of the intervention period. Questions on the costs incurred by patients were developed by the study health economist and were included in the questionnaire. These questions included the self-reported use of medication. Non-responders to the initial posting received a reminder letter after two weeks; non-responders to this received a second reminder letter and a copy of the questionnaire after a further two weeks.

We gathered information on workload and other resource impacts of the intervention in general practice using a semi-structured telephone interview survey of key informants within a random sample of 10 intervention and 12 control practices. Similar information on the impact on the registers was collected by register staff, who logged the time spent on intervention-related activities.

Analysis

The following analytic strategies were adopted. For the process of care and intermediate outcome variables collected directly from the register, the dependent variable took the form of an observation for an individual patient in the period after implementation of the intervention. We had data on these variables both before and after the intervention and, for each variable considered, the post-intervention measure was specified as the dependent variable and the corresponding pre-intervention (baseline) measure was included as a covariate. The effect of the intervention was estimated using generalised linear models with an appropriate error structure (binomial for binary data, normal for continuous data, and negative binomial for count data) and link function (logit for binary data, identity for continuous data, and log for count data). To allow for the clustering of patients within practices, population averaged models were estimated using generalised estimating equations (GEEs).
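As an illustration of this modelling approach, the sketch below fits a population averaged logistic model for one binary process measure (whether blood pressure was recorded), with an exchangeable working correlation, clustering on practice and adjusting for the pre-intervention measure. The trial analyses were performed in Stata 8; this Python/statsmodels version, with hypothetical file and column names, is offered only as an assumed equivalent.

```python
# Illustrative sketch: population-averaged (GEE) logistic model for a binary
# process measure, clustering patients within practices.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("register_extract.csv")  # hypothetical patient-level extract

model = smf.gee(
    "bp_recorded_post ~ intervention + bp_recorded_pre",  # post measure, baseline covariate
    groups="practice_id",                                  # patients clustered within practices
    data=df,
    family=sm.families.Binomial(),                         # logit link for binary data
    cov_struct=sm.cov_struct.Exchangeable(),               # exchangeable working correlation
)
result = model.fit()
print(result.summary())
# exp(coefficient on `intervention`) gives the population-averaged odds ratio
```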

Examination of the drug therapy data suggested that the variable that was recorded most reliably on the register was the date that the medication was started. In general, patients who started on a particular medication prior to the intervention period were also taking that medication during the intervention period. For each type of medication, the total number of patients prescribed that medication in each practice was determined. This variable was analysed using negative binomial regression with the total number of relevant patients in the practice included as an exposure variable; the number of patients prescribed that medication prior to the intervention was included as a covariate.
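A minimal sketch of this practice-level medication analysis follows, again in Python/statsmodels rather than the Stata used in the trial, and with hypothetical file and column names: the count of patients in each practice prescribed a given medication is modelled by negative binomial regression, with the total number of relevant patients entered as the exposure and the pre-intervention count as a covariate.

```python
# Illustrative sketch: practice-level negative binomial model of prescribing counts.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

practice = pd.read_csv("practice_medication_counts.csv")  # hypothetical practice-level file

nb = smf.glm(
    "n_on_statin_post ~ intervention + n_on_statin_pre",  # post count, baseline count covariate
    data=practice,
    family=sm.families.NegativeBinomial(),                # log link by default
    exposure=practice["n_patients"],                       # total relevant patients per practice
).fit()
print(nb.summary())  # exp(intervention coefficient) approximates the prescribing rate ratio
```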

Questionnaire data were only available following the intervention. Patient-reported outcome measures were analysed using population averaged models as described for the process data above, except that, as we had no pre-intervention measure, no adjustment for differences at baseline was possible and no baseline covariate was included in the model. Patient-reported medication data were analysed in the same way as the register medication data, except that, again, there were no baseline data to include as a covariate.

In addition to these pre-specified analyses, a further model including a register effect was fitted because of large systematic differences between the two registers that became apparent once the data had been collected. This was not pre-specified, but the differences were so large that we felt it would be inappropriate to ignore them in the main analysis. All analyses were undertaken using Stata version 8.

The economic evaluation adopted a 'cost consequences' approach [18]. All costs were expressed in 2002/2003 values. Two main sources were used to assign costs to health care resources [19, 20], supplemented when necessary with unit cost data from other official sources [21] and local surveys. Drug costs were taken from the British National Formulary [22]. Over a twelve-month period, patients reported on their use of NHS (National Health Service) services, medications, travel costs, purchases of special items, private treatments/consultations, time off work, sick leave and related pay loss, and the time off work and related pay loss of their companions. No discounting was applicable. A simplifying assumption was made that all costs and resource use occurred at the beginning of this period.

Intervention

The development and implementation of the intervention have been described in detail elsewhere [23]. In summary, the pre-existing diabetes register functioned as a central register of patients with diabetes. A structured dataset was completed on paper forms and returned to the central register; the hospital laboratory provided a monthly download of laboratory test results (e.g. HbA1c) for patients on the register. From these data, both patient-specific and aggregated data were provided annually to patients and clinicians. The pre-existing system was passive, in that it did not request data for patients but rather summarised the data it received. As described above, we postulated that the documented plateauing of performance was caused by clinicians failing to deliver appropriate clinical interventions, because of a lack of co-ordination (patients being lost to follow-up) and either a lack of awareness of appropriate care or forgetting to deliver all that was required when patients were seen.

In the enhanced structured recall and management system, a 'circle of information exchange' was established between the participating general practices and the database. The central database system identified when patients were due for review and generated a letter to the patient asking them to make an appointment for a review consultation. The rules for generation of review letters were adapted for each PCT area. In one PCT, the system acted as a prompting system for annual review, and patients were identified 11 months after their last diabetes appointment. In the other two PCTs, patients who had missed annual reviews were identified by searching for patients who had not had a diabetes appointment for 14 months or more. At the same time, the central database generated a letter to the practice stating that the patient should be making a review appointment in the near future. The letter to the practice included a 'structured management sheet' (to be held in the patient's record) to capture an agreed minimum data set that would be collected during the consultation. This management sheet also contained relevant prompts tailored to the patient's known clinical or biochemical values, derived from locally adapted, national evidence-based guidelines [see Additional file 1].
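The recall-selection rules described above amount to a simple date comparison run at each register search. The sketch below is illustrative only (the function and field names are not from the paper); it captures the two variants: prompting at 11 months after the last diabetes appointment in one PCT, and flagging patients with no appointment for 14 months or more in the other two.

```python
# Illustrative sketch of the recall-selection logic; names are hypothetical.
from datetime import date
from dateutil.relativedelta import relativedelta

def due_for_recall(last_appointment: date, pct_rule: str, today: date) -> bool:
    """Return True if a patient should be sent a review invitation letter."""
    if pct_rule == "annual_prompt":     # prompt ahead of the annual review (one PCT)
        return today >= last_appointment + relativedelta(months=11)
    elif pct_rule == "missed_review":   # catch patients with no review for 14+ months (two PCTs)
        return today >= last_appointment + relativedelta(months=14)
    raise ValueError(f"unknown rule: {pct_rule}")

# Example: patient last seen 1 May 2002, checked at a weekly register search
print(due_for_recall(date(2002, 5, 1), "annual_prompt", date(2003, 4, 15)))  # True
```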

When the patient was seen in the practice, the primary care professional (often the practice nurse) completed the management sheet and returned a copy for entry into the central register within a designated period of time. This circle of information was broken if the patient did not visit the general practice as planned or the general practice did not return the management sheet to the central register. If this happened, the central register would print reminder letters and further structured management sheets at the next routine database search by the diabetes register facilitator, which occurred at least weekly.

In addition to this cycle based on annual reviews, routine ongoing structured management sheets were produced every time a patient in an intervention practice was identified by the diabetes register facilitator on the register database. For example, when data were inputted on the database for any reason, the system would print a structured management sheet updated for any new data and relevant management prompts, and this would be sent to the relevant practice.

The trial intervention ran for 15 months, commencing on 1 April 2002 and ending on 30 June 2003. The letters to patients inviting them for annual review commenced in October 2002; their start was delayed to overcome concerns about the accuracy of patient details on the database up to that point. The enhanced system was also capable of producing patient letters to accompany the routine ongoing structured management sheets for practices, but because of difficulties operating this element of the software it was not possible to run this feature during the lifetime of the trial. This was a deviation from the published protocol.

Ethics

The study was approved by the South Tyneside, Southwest Durham, Hartlepool, and North Tees Local Research Ethics Committees (LRECs).

Results

Figure 1 shows the number of practices and patients at each stage of the study. It was not possible to provide the number of patients within the 90 practices assessed for eligibility, as we did not have ethical approval to access data on the patient inclusion criteria for practices that had not agreed to participate in the study. As a condition of ethical approval in one of the PCTs, individual opt-out consent had to be sought from patients whose practices had agreed to participate: 477 out of 4577 (10.4%) patients invited to participate opted out of the trial. (The number of patients written to was considerably higher than the number of patients included in the trial because permission had to be obtained before the diabetes register could be accessed and the inclusion criteria applied.) Table 1 shows the baseline characteristics of control and intervention practices and patients. None of the differences in these variables between the intervention and control groups were statistically significant. Unfortunately, we were unable to compare the clinical characteristics of respondents and non-respondents to the patient survey, as the ethics/research governance organisations required that we should not hold any patient-identifiable data within our academic institution. We were supplied with a list of names and addresses of patients to whom we could send the patient survey, but were not allowed to link that information with individual patient records on the registers.

Table 1 Baseline characteristics of control and intervention practices and patients.
Figure 1 Flow of clusters and individual participants through each stage of recruitment, randomisation and analysis.

The findings from the analysis of the process of care and clinical variables, and of the drug data, from the register-derived dataset are shown in Table 2. This analysis is adjusted for differences at baseline and for a systematic difference between registers; analyses allowing for baseline data only and register effect only are presented alongside it in Additional file 2. Nineteen subjects (7 in the control group, 12 in the intervention group) had no valid date in their medication record and were excluded from the medication analysis. With the exception of serum creatinine, a variable that we anticipated the intervention would not influence, all of the variables measured showed a direction of effect in favour of the intervention. For 10 of the 26 variables measured, this difference achieved statistical significance.

Table 2 Adjusted register-derived process and clinical outcome data results for intervention and control groups. Odds ratios are estimates of the difference between intervention and control practices at follow-up, adjusting for differences at baseline and a systematic difference between registers.

Patient reported outcome data

We surveyed a random sample of 3056 patients and received usable responses from 1433. With 241 exclusions, this gave an overall response rate of 51% (the number of eligible subjects who responded divided by the number of people sampled minus those known to be ineligible) (Figure 1). There were no statistically significant differences between intervention and control group respondents in response rate or in any sociodemographic variables. Analyses of the patient-reported medication data are summarised in Table 3. The differences between intervention and control groups were not statistically significant. There were no differences between the two registers, so the adjusted values differ little from the unadjusted ones. The patient-reported outcome data from the questionnaire survey are summarised in Table 4. The ICC for the diabetes symptom score was 0.03, and the ICCs for the SF-36 physical and mental health component scores were 0.03 and 0.02, respectively. There were no statistically significant differences in scores on any of the measures in Table 4, or on any of the items of the DCSQ.

Table 3 Self-reported medication data from the patient questionnaire survey.
Table 4 Patient-reported outcomes

Economic data

The economic data relating to service use and patient expenditure are summarised in Table 5, and were not significantly different between intervention and control groups. The intervention costs were: UK£11,443 for developing the local guidelines, UK£14,034 for software development, and UK£2,408 for educational activities. This gave a total one-off cost of initiating the system across the two register areas of UK£27,885. The additional annual cost of running the system for the two registers was UK£11,170. Based on the interviews with practice-based informants, the mean maximum annual cost per patient that practices had to meet when using the system (including staff time and consumables) was estimated at £76.46; the minimum annual cost was zero. However, because of the semi-structured nature of the interviews, it was not possible to accurately estimate the distribution of costs within this range.

Table 5 Economic analysis profile (Costs expressed in 2002/03 UK£).

Discussion

We have evaluated an area-wide computerised diabetes register incorporating a full structured recall and individualised patient management system, in one of the largest trials of its kind in terms of the number of provider units, and the largest in terms of patient numbers. The intervention produced improvements in patient attendance, improvements in four of the nine measured areas of provider adherence to recommended care (the recording of foot examination, dietary advice, blood pressure, and smoking status), and an improvement in one measure of clinical control (serum cholesterol). These benefits were achieved at a cost.

Although we showed significant improvements in the recording of drugs in the database, there were problems with the dating of the drug data. The patient reported data on drug use showed no significant differences in usage between the two groups. Given this discrepancy and the potential for inaccuracy in both data sets, the impact of the intervention on prescribing has to be regarded as unclear.

In our study, which utilised provider reminders and audit and feedback within an information system, we found changes in provider adherence that were considerably larger than those identified in the review by Shojania et al., and four of our nine provider adherence measures showed statistically significant improvements [5]. However, because our data came from a routine register, it is important to consider the possibility that, for the provider adherence variables, some of the effect was due to a recording phenomenon: the same actions may have been performed as frequently in control practices but simply not recorded. Setting aside the fact that the recording of care, particularly in chronic disease management, is a central part of good care [24], all of the provider adherence and all of the clinical variables showed a direction of effect in favour of the intervention. In addition, four of the provider adherence variables (recording of HbA1c, cholesterol, serum creatinine, and urinary albumin:creatinine ratio) were not reliant on recording within general practices; they were routinely transferred into the diabetes databases directly from the laboratory information systems, and so would not be subject to any recording effect. Whilst these were not statistically significantly different between intervention and control groups, many clinicians would regard changes of this size as clinically significant (a 16% increase in HbA1c recording and a 21% increase in cholesterol recording).

There is some suggestion of under-recording of data on the registers, with an apparently low proportion of people on aspirin and insulin (mirrored by an apparently high proportion of people on diet alone). This is almost certainly due to a combination of factors, of which a degree of under-recording is only one. Low aspirin prescription rates could be due to patients buying aspirin directly from pharmacies rather than receiving it via prescription (common in the UK); this is supported by self-reported aspirin use being higher than that recorded on the registers. People treated for their diabetes solely by the hospital, who are more likely to be treated with insulin and less likely to be managed on diet alone, were excluded from the study. However, while the rates for patients treated with diet alone were the same on the registers and by self-report, self-reported insulin use was considerably higher than that recorded on the registers. This suggests that insulin use was under-recorded on the registers, but equally so for both intervention and control groups.

Unlike the studies in the review, we found no significant effect on levels of HbA1c. This may reflect the overall level of control in our study population, with a baseline HbA1c of 7.7% and both groups improving to 7.3%. The studies in the review were conducted in more poorly controlled populations, with median baseline HbA1c values of over 8% (and in one case over 10%). Our findings may also reflect the relatively short period for which the intervention ran as fully intended: while the intervention was in place for the planned 15 months, the full intervention ran for only nine months, and the intended patient intervention was never fully operational.

We did, however, show a modest and statistically significant lowering of serum cholesterol of 0.15 mmol/l in the intervention group compared to the control group. As the impact of the intervention on medication, including lipid-lowering therapy, was unclear from the register-derived data and negative from the patient-reported data, it is possible that this effect may be due to the increased delivery of dietary advice – one of the four areas of improvement in provider adherence to recommended care.

We showed no significant difference in patient-reported outcomes between intervention and control groups. The observed clustering in the outcome scores was smaller than that assumed in the sample size calculation and, as we achieved the desired sample size, the lack of significant changes in patient outcomes is unlikely to be due to a lack of power. However, with a response rate of 51%, we do have to consider the possibility of non-response bias for all the self-reported data, even though there was no difference between intervention and control groups in response rates or sociodemographic variables.

It is very unusual for implementation trials to include a rigorous economic evaluation [25]. Given that implementation trials do not produce a single estimate of overall effect, we have expressed the economic evaluation in terms of the profile of incurred costs. Our assessment of the costs incurred by practices was limited, so we offer only a hypothetical illustration of the likely costs for an average Primary Care Trust, shown in Table 6. Whilst we could not precisely define the distribution of costs for general practices, assuming an average cost of 25% of the range shows that practice-incurred costs would still be the single largest cost element of introducing a system such as this. Whilst for any individual practice the figures would be proportionately lower, in a demand-led system such as UK general practice, the introduction of such innovations should be accompanied by commensurate resources. This is particularly important when, as in this case, an innovation can reside in specialist services or hospital care that has no responsibility for expenditure incurred in family or general practice. In any future study, a more detailed costing study in general practice would be important.

Table 6 Hypothetical example of the estimated costs of the intervention applied to an average PCT (Costs expressed in 2002/03 UK£).

Conclusion

This study has shown benefits from an area-wide, computerised diabetes register incorporating a full structured recall and individualised patient management system. However, these benefits were achieved at a cost. In future, these costs may fall as electronic data exchange becomes a reliable reality. However, as performance steadily rises it will become ever more difficult to demonstrate smaller and smaller incremental improvements. Considering our findings alongside those of Shojania's review, such developments should only be evaluated in large-scale, randomised controlled trials incorporating a full economic evaluation.

References

  1. The Acropolis Affirmation: Diabetes care - St Vincent in progress [Statement from St Vincent Declaration Meeting, Athens, Greece, March 1995]. Diabetic Med. 1995, 12: 636.
  2. UK Prospective Diabetes Study (UKPDS) Group: Intensive blood-glucose control with sulphonylureas or insulin compared with conventional treatment and risk of complications in patients with type 2 diabetes (UKPDS 33). Lancet. 1998, 352: 837-853. 10.1016/S0140-6736(05)61359-1.
  3. Department of Health: National Service Framework for Diabetes: Delivery Strategy. 2003, London: Department of Health.
  4. Griffin S, Kinmonth AL: Diabetes care: the effectiveness of systems for routine surveillance for people with diabetes [Cochrane Review]. The Cochrane Library. 1999, Oxford: Update Software, Issue 4.
  5. Shojania KG, Ranji SR, Shaw LK, Charo LN, Lai JC, Rushakoff RJ, McDonald KM, Owens DK: Diabetes Mellitus Care. Closing the quality gap: a critical analysis of quality improvement strategies. Technical Review 9. Edited by: Shojania KG, McDonald KM, Wachter RM, Owens DK. 2004, Rockville: Agency for Healthcare Research and Quality, 2.
  6. New JP, Hollis S, Campbell F, McDowell D, Burns E, Dornan TL, Young RJ: Measuring clinical performance and outcomes from diabetes information systems: an observational study. Diabetologia. 2000, 43: 836-843. 10.1007/s001250051458.
  7. Masding MG, Adams A, Dawson A, Gatling W: A countywide primary care annual audit fails to demonstrate an improvement in diabetes care between 1996 and 2001. Pract Diabetes Int. 2003, 20: 129-134. 10.1002/pdi.472.
  8. Eccles M, Hawthorne G, Whitty P, Steen N, Vanoli A, Grimshaw J, Wood L: A randomised controlled trial of a patient based Diabetes Recall and Management System: the DREAM Trial: a study protocol [ISRCTN32042030]. BMC Health Services Research. 2002, 2: 5.
  9. Ware J: SF-36 Health Survey: Manual and Interpretation Guide. 1993, Boston, MA: The Health Institute, New England Medical Center.
  10. Ware JE, Kosinski M, Bayliss MS, McHorney CA, Rogers WH, Raczek A: Comparison of methods for the scoring and statistical analysis of SF-36 health profile and summary measures: summary of results from the Medical Outcomes Study. Med Care. 1995, 33 (4): 264-279.
  11. Jenkinson C, Stewart-Brown S, Petersen S, Paice C: Assessment of the SF-36 version 2 in the United Kingdom. J Epidemiol Comm Health. 1999, 53: 46-50.
  12. McColl E, Steen IN, Meadows KA, Eccles MP, Hewison J, Fowler P, Blades SM: Developing outcome measures for ambulatory care: an application to asthma and diabetes. Social Science & Medicine. 1995, 41: 1339-1348. 10.1016/0277-9536(95)00120-V.
  13. Bradley C: Handbook of psychology and diabetes: a guide to psychological measurement in diabetes research and practice. Edited by: Bradley C. 1994, Chur, Switzerland: Harwood Academic Publishers, 392-393.
  14. Thompson S, Wordsworth S, on behalf of the UK Working Party on Patient Costs: An annotated cost questionnaire for completion by patients [HERU Discussion Paper 13/01]. 2001, Aberdeen: University of Aberdeen, Health Economics Research Unit.
  15. Donner A, Birkett N, Buck C: Randomization by cluster, sample size requirements and analysis. Am J Epidemiol. 1981, 114: 906-915.
  16. Jenkinson C, Layte R, Wright L, Coulter A: The UK SF-36: an analysis and interpretation manual. 1996, Oxford: Health Services Research Unit, University of Oxford.
  17. Eccles M, McColl E, Steen N, Rousseau N, Grimshaw J, Parkin D, Purves I: Effect of computerised evidence based guidelines on management of asthma and angina in adults in primary care: cluster randomised controlled trial. BMJ. 2002, 325: 941-944. 10.1136/bmj.325.7370.941.
  18. Drummond MF, Schulpher M, O'Brien BJ, Stoddart GL, Torrance GW: Methods of economic evaluation in health care. 3rd edition. 2005, Oxford: Oxford University Press.
  19. Netten A, Curtis L: Unit Costs of Health and Social Care 2003. 2003, Canterbury: University of Kent.
  20. Department of Health: The new NHS: Reference costs. 2002, London: Department of Health.
  21. Report by the Comptroller and Auditor General: NHS Direct in England. HC 505, Session 2001-2002: 25 January 2002. 2002, London: The Stationery Office.
  22. British Medical Association: BNF 45, MeReC [electronic version]. 2003, London: The British Medical Association and the Royal Pharmaceutical Society of Great Britain.
  23. Whitty P, Eccles MP, Hawthorne G, Steen N, Vanoli A, Grimshaw JM, Wood L, Speed C, McDowell D: Improving services for people with diabetes: lessons from setting up the DREAM Trial. Pract Diabetes Int. 2004, 21: 323-328. 10.1002/pdi.711.
  24. General Medical Council: Good Medical Practice. 3rd edition. 2001, London: General Medical Council.
  25. Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, Whitty P, Eccles MP, Matowe L, Shirran L, Wensing M, Dijkstra R, Donaldson C: Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004, 8: 1-84.


Acknowledgements

This study was funded by Diabetes UK, and Northern and Yorkshire Regional NHS R&D Office. The study was independent of the funding bodies, and the views expressed here are those of the authors and do not necessarily reflect the views of the funding bodies. The study funders had no involvement in the study design, collection, analysis, interpretation of the data, writing of the report or paper, or in the decision to submit the paper for publication.

We are grateful to our collaborators at North Tees and Hartlepool NHS Trust, particularly Dr Carr, Dr MacLeod, Joanne Clayton and John Fitzsimmons; at Easington Primary Care Trust and North Tees Primary Care Trust; and at South Tyneside Primary Care Trust and South Tyneside Healthcare NHS Trust, particularly Professor C Bradshaw and Dr J Parr, Wynn Schembri and Clare Beard. Jeremy Grimshaw holds a Canada Research Chair in Health Knowledge Uptake and Transfer. The Diabetes Clinic Satisfaction Questionnaire was supplied by Prof C Bradley.

Author information


Corresponding author

Correspondence to Paula M Whitty.

Additional information

Declaration of competing interests

DM was senior partner of Westman Medical Software, which developed the software. The company was taken over by ProWellness UK Ltd, who continue to maintain the software used in this study. The remaining authors declare that they have no competing interests.

Authors' contributions

The study was conceived by ME, GH and JG. It was designed by ME, GH, PW, JG, NS, AV, DM and LW. It was run by PW, CS, GH, AV and ME. DM developed and modified the software. Senior clinicians and diabetes register staff in the two sites ran and maintained the intervention. NS supervised the analysis. AV conducted the economic evaluation. All authors commented on successive drafts of the paper. ME is the guarantor of the paper.

Electronic supplementary material


Additional File 1: Example of a structured management sheet. The file provides an anonymised example of a structured management sheet from the enhanced diabetes register. (PDF 292 KB)


Additional File 2: Table 2 (expanded). Unadjusted and adjusted register-derived process and clinical outcome data results for intervention and control groups. This table reproduces the data provided in Table 2 and also includes analyses allowing for baseline data only and register effect only. (DOC 85 KB)


Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Eccles, M.P., Whitty, P.M., Speed, C. et al. A pragmatic cluster randomised controlled trial of a Diabetes REcall And Management system: the DREAM trial. Implementation Sci 2, 6 (2007). https://doi.org/10.1186/1748-5908-2-6
