Patient-specific computer-based decision support in primary healthcare—a randomized trial
© Kortteisto et al.; licensee BioMed Central Ltd. 2014
Received: 27 March 2013
Accepted: 14 January 2014
Published: 20 January 2014
Computer-based decision support systems are a promising method for incorporating research evidence into clinical practice. However, evidence is still scant on how such information technology solutions work in primary healthcare when support is provided across many health problems. In Finland, we designed a trial where a set of evidence-based, patient-specific reminders was introduced into the local Electronic Patient Record (EPR) system. The aim was to measure the effects of such reminders on patient care. The hypothesis was that the total number of triggered reminders would decrease in the intervention group compared with the control group, indicating an improvement in patient care.
From July 2009 to October 2010, all the patients of one health center were randomized to an intervention or a control group. The intervention consisted of patient-specific reminders concerning 59 different health conditions, triggered when the healthcare professional (HCP) opened and used the EPR. In the intervention group, the triggered reminders were shown to the HCP; in the control group, they were not. The primary outcome measure was the change in the number of reminders triggered over 12 months. We developed a unique data-gathering method, the Repeated Study Virtual Health Check (RSVHC), and used generalized estimating equations (GEE) to estimate the incidence rate ratio, a measure of the relative difference between the intervention and the control group in the percentage change in the numbers of reminders triggered.
In total, 13,588 participants were randomized and included. Contrary to our expectation, the total number of reminders triggered increased in both the intervention and the control groups. The primary outcome measure did not show a significant difference between the groups. However, with the inclusion of patients followed up over only six months, the total number of reminders increased significantly less in the intervention group than in the control group when the confounding factors (age, gender, number of diagnoses and medications) were controlled for.
The number of computerized, tailored reminders triggered in primary care did not decrease during the 12 months of follow-up after the introduction of a patient-specific decision support system.
The treatment of patients is based on clinical expertise, whose key elements are research evidence, clinical situations and circumstances, and patients’ preferences and actions. The evidence is translated into practical form, for example, in clinical practice guidelines, whereas active incorporation of these into everyday practice has only recently become a recognized target for research [3, 4]. These methods have previously been summarized, and the conclusion is that because there are no magic formulae [5, 6], tailoring the intervention is necessary.
One of the innovations in the incorporation of evidence into practice is computer-based decision support, which brings relevant evidence to the attention of healthcare professionals (HCPs) at the point of care. Such automatic systems combine medical evidence with patient-specific data from the Electronic Patient Record (EPR), which supports clinical decision making [9–11]. According to a Cochrane review of 28 studies, computer reminders achieved a median improvement in process adherence of 4.2%. Focused computer-generated reminders and alerts work well in a variety of single conditions [13–16] and in preventive care. Decision support can in many settings improve the quality of care and help to avoid mistakes in clinical work, thereby improving patient safety [18, 19]. There is still, however, scant evidence on how such information technology solutions work across many diseases or conditions in primary healthcare, where multi-professional teams [20, 21] care for patients with multiple health problems, both acute and chronic [22, 23].
In our study, a set of evidence-based, patient-specific reminders in the form of the computer-based decision support service EBMeDS (Evidence-Based Medicine electronic Decision Support, http://www.ebmeds.org) was integrated into the EPR system of one primary care organisation. The EBMeDS service aims to aid treatment across several conditions in actual clinical practice and should therefore be usable in primary healthcare. Our study question was: ‘Do patient- and problem-specific automatic reminders shown to HCPs during primary care consultations have an effect on patient care?’ We hypothesized that the total number of triggered reminders would decrease in the intervention group compared with the control group, indicating a possible improvement in patient care. The hypothesis was formulated on the basis of a gold-standard assumption: a triggered reminder indicates that the patient’s care is not evidence-based, and the absence of a reminder indicates that it is.
The study was reviewed and accepted by the Pirkanmaa Hospital District (Tampere University Hospital) Ethics Committee (ETL R08149) and registered at ClinicalTrials.gov (www.clinicaltrials.gov: NCT00915304).
The setting was the primary healthcare center of Sipoo, which was selected from regular users of the Mediatri EPR system. The center comprises 48 HCPs: 15 physicians, 24 nurses and 9 other HCPs (physiotherapists, ward nurses, a psychologist), described in detail elsewhere. The HCPs used the EPR system during outpatient consultations as well as on an inpatient ward, as is typical of Finnish primary care.
The study started in July 2009 and ended in October 2010.
We developed a unique method for population-based outcome data gathering from the EPR archive, the Repeated Study Virtual Health Check (RSVHC). During an RSVHC, the EPR archive sent to the EBMeDS service structured patient data (diagnoses, medications, and laboratory results) on the base study population (the request), and the service generated all reminders triggered by these data and returned them (the answer). The RSVHC was planned to run weekly at night; in practice, one to five RSVHCs were performed per month. The requests and answers of each RSVHC were stored automatically in a log file on the EPR server, to be exported to the study register and analyzed at the end of the study period.
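The request–answer round trip of an RSVHC can be sketched as follows. This is a minimal illustration only: the field names, the `reminder_service` callable, and the log record layout are assumptions for this sketch, not the actual EBMeDS message format.

```python
import datetime

def run_rsvhc(patients, reminder_service):
    """One Repeated Study Virtual Health Check round: send structured
    patient data to the decision support service (request), collect the
    triggered reminders (answer), and return a log record of both.
    Field names are illustrative, not the real EBMeDS schema."""
    request = [
        {"study_id": p["study_id"],
         "diagnoses": p["diagnoses"],
         "medications": p["medications"],
         "lab_results": p["lab_results"]}
        for p in patients
    ]
    # the service generates all reminders triggered by the patient data
    answer = {p["study_id"]: reminder_service(p) for p in patients}
    # requests and answers are stored together, as in the EPR server log
    return {"timestamp": datetime.datetime.now().isoformat(),
            "request": request,
            "answer": answer}
```

A study register can then be filled by replaying the stored log records at the end of the study period, exactly as the trial did with its weekly log files.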
The reminders were written by the EBMeDS editorial team using medical evidence embedded in the sets of Duodecim guidelines and other sources, and linked with evidence-based decision support (DS) rules. In April 2009, the 107 available DS rules were piloted in Sipoo. Subsequently, the chief medical officer of the health center decided which of the DS rules should be implemented; these totalled 96.
Here is an example of an implemented DS rule that conforms to if–then logic: ‘Metformin is the first choice oral hypoglycaemic agent in type 2 diabetes’ (DS rule 16).
The DS rule is triggered if the diagnosis in the EPR is type 2 diabetes. First, the rule checks whether the medication list in the EPR contains metformin. If it does not, the rule then checks the plasma/serum creatinine value from the EPR laboratory results. If the glomerular filtration rate (GFR) is in the normal range, then reminder one, ‘Type 2 diabetes—start metformin’, is shown on the screen. If the GFR is <60 ml/min, then reminder two, ‘Type 2 diabetes—start metformin, note GFR’, is shown. If the GFR is missing or out of date, then reminder three, ‘Type 2 diabetes—check renal function and start metformin’, is shown.
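The branching of DS rule 16 can be expressed in code. The sketch below is illustrative only: the function name, the ICD-10 code "E11" for type 2 diabetes, and passing a ready-made GFR value in place of the creatinine-based calculation are assumptions, not the actual EBMeDS implementation.

```python
def metformin_reminder(diagnoses, medications, gfr_ml_min):
    """Sketch of DS rule 16: metformin as the first-choice oral
    hypoglycaemic agent in type 2 diabetes.

    diagnoses   -- set of ICD-10 codes recorded in the EPR
    medications -- set of drug names on the current medication list
    gfr_ml_min  -- estimated GFR in ml/min, or None if missing/out of date
    """
    if "E11" not in diagnoses:        # rule applies only to type 2 diabetes
        return None
    if "metformin" in medications:    # already on first-line therapy
        return None
    if gfr_ml_min is None:            # GFR missing or out of date
        return "Type 2 diabetes—check renal function and start metformin"
    if gfr_ml_min < 60:               # decreased renal function
        return "Type 2 diabetes—start metformin, note GFR"
    return "Type 2 diabetes—start metformin"   # GFR in the normal range
```

Returning `None` corresponds to no reminder being triggered, which is the state the trial's hypothesis treated as evidence-based care.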
See Additional file 1 for more examples of DS rules and reminders targeting diabetes patients, a large patient group in primary care. HCPs can easily check background information and evidence behind each reminder by clicking the reminder and opening the references.
This was a register-based study using the EPR data without any direct contact with patients.
The first step in the data collection comprised 52 RSVHCs carried out between July 2009 and October 2010 in the base population. At the end of the study, the population was 17,541 (total number of patient IDs in the study register). Data in the RSVHCs were structured patient-specific information (diagnoses, medications, laboratory results), and the triggered patient-specific reminders at each time point were stored in the study register.
In the second step, using the study ID, a baseline date was determined individually for each study patient as the date of the first opening of the patient’s EPR during the study period; the patient-specific follow-up started from this date. All EBMeDS procedures (requests and answers) that actually took place during the study were stored in monthly log files on the EPR server. One log file (April 2010) was lost because of technical problems.
The third step involved linking the information from steps one and two. These final data comprised patient-specific information from the individual baseline date to all RSVHC points where the patient was followed up.
The swine flu epidemic, with the ensuing universal vaccination procedures, occurred between September 2009 and February 2010. We excluded from the monthly log files patients who received only the swine flu vaccination during a short visit (5 to 10 minutes) with a nurse. According to the nurses, nothing else, including any triggered reminders, was checked during these visits.
Table 1. Examples of the EBMeDS reminders, listed according to the ICD-10 coding system (decision support title, followed by the short versions of the reminders it triggers)

Cardiovascular diseases (IX, Diseases of the circulatory system)
- Anticoagulants for atrial fibrillation: ‘Atrial fibrillation—start warfarin?’; ‘Atrial fibrillation—consider warfarin?’
- Follow-up of patients with hypertension: ‘Hypertension—time to check blood pressure?’; ‘Elevated blood pressure in last measurement—time to check blood pressure?’

Ear diseases (VIII, Diseases of the ear and mastoid process)
- Avoiding decongestants and antihistamines in otitis media in children: ‘Otitis media—avoid decongestants and antihistamines’

Endocrine and metabolic diseases (IV, Endocrine, nutritional and metabolic diseases)
- An abnormal potassium result: ‘Serum potassium is dangerously out of range (@1)!’; ‘Serum potassium is out of range (@1)’; ‘Serum potassium is slightly out of range (@1)’

Genitourinary diseases (XIV, Diseases of the genitourinary system)
- GFR below 55 ml/min: ‘Decreased GFR—no diagnosis of renal failure’; ‘Decreased GFR and no recent creatinine test—order new creatinine test?’

Haematological diseases (III, Diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism)
- Low haemoglobin concentration in adults and adolescents: ‘Decreased haemoglobin concentration—start investigations?’

Musculoskeletal diseases (XIII, Diseases of the musculoskeletal system and connective tissue)
- Prevention of osteoporosis in long-term use of glucocorticoids: ‘Long-term glucocorticoids—add calcium and vitamin D?’; ‘Long-term glucocorticoids—add a bisphosphonate?’

Neoplastic diseases (II, Neoplasms)
- Follow-up of high PSA concentration: ‘High PSA—time to repeat the test?’; ‘High PSA—time to repeat the test? Note 5-alpha reductase medication’

Nervous system diseases (VI, Diseases of the nervous system)
- SSRIs not indicated for headaches: ‘Headache—SSRIs are not recommended’

Respiratory diseases (X, Diseases of the respiratory system)
- Inhaled corticosteroids instead of oral steroids for chronic asthma: ‘Asthma treated with oral steroids—start inhaled steroids?’; ‘Asthma treated with courses of oral steroids—start inhaled steroids?’
The control group was treated according to normal practice, and the triggered patient-specific reminders were not shown to the HCP on screen. Instead, they were stored in the log files and exported to the study register. Usual care and the evidence for it were available to HCPs at all times during the trial, e.g., by actively searching guidelines.
Table 2. The four EBMeDS decision support functions available to the healthcare professional

- Reminders (the focus of the present paper): these are patient-specific; the short version is shown automatically and the long version when the cursor hovers over the reminder. They are triggered on opening the patient record, recording a new diagnosis, or prescribing new medication.
- Content shown in accordance with the patient’s diagnosis list and ICD-10 codes.
- Virtual health check (VHC): the healthcare professional can run a (clinical) VHC on a selected group of patients; patient-specific reminders appear on the screen and can be used, e.g., for planning the following day’s consultations.
- Reminders triggered on prescribing a medication.
The primary composite outcome measure was the change in the numbers of all reminders triggered in the target population over 12 months of individual follow-up. As secondary outcome measures, we also explored the changes after three and six months of follow-up.
At the beginning of the study, the base population of the health center was randomized in a 1:1 ratio to an intervention group and a control group of the same size, without any other criteria. The procedure was done once per individual by a computer, using a mathematical formula based on the personal identity code (PIC) of each patient in the EPR system and assigning each patient a unique study ID number. Patients entering the population later (for example, new inhabitants of Sipoo) were randomized by the same procedure. The procedure was performed by a person outside the study group, who also retained the key of the formula linking the PIC to the study ID number.
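Such a deterministic 1:1 allocation can be sketched as below. This is a hypothetical illustration: the actual formula and its key were held by a person outside the study group, and the hash-plus-key scheme here is an assumption.

```python
import hashlib

def assign_group(pic: str, secret_key: str) -> str:
    """Deterministically map a personal identity code (PIC) to a study arm.

    The same PIC with the same key always yields the same arm, so patients
    entering the population later are allocated by the identical procedure.
    """
    digest = hashlib.sha256((secret_key + pic).encode("utf-8")).digest()
    # even first byte -> intervention, odd -> control (1:1 in expectation)
    return "intervention" if digest[0] % 2 == 0 else "control"
```

In expectation this yields groups of equal size, and only whoever holds the key can re-derive the allocation, which is why the key stayed outside the study group.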
The allocation was masked from the patients, the HCPs, and the study group. However, when the HCP was shown the patient-specific reminders on screen, he or she knew that the patient belonged to the intervention group. The study group opened the randomization code only after the data collection period.
Table 3. Characteristics of the intervention and the control group in the three models (Model 1: 12 months’ follow-up, indiv1_MO12; Model 2: 6 months’ follow-up, indiv1_MO6; Model 3: 3 months’ follow-up, indiv1_MO3). The characteristics compared were: number of patients; age (mean, SD); number of diagnoses at baseline and at the end of the follow-up period (%); number of medications at baseline (%); number of triggered reminders at baseline and at the end of the follow-up period (mean, SD); and number of participants with no triggered reminder (%).
Table 4. Incidence rate ratios (IRR) of the number of triggered reminders from negative binomial regression models using a generalized estimating equation. Each row gives the IRR (95% CI) for the unadjusted model, followed by the adjusted model.

Model 1, 12 months’ follow-up (indiv1_MO12)
- Group: 1.002 (0.895–1.121); 1.004 (0.903–1.116)
- Time: 1.014 (1.001–1.023); 1.017 (1.008–1.026)
- 1.002 (1.001–1.003); 1.002 (1.001–1.003)
- Group × Time: 1.001 (0.995–1.008); 1.002 (0.995–1.009)

Model 2, 6 months’ follow-up (indiv1_MO6)
- Group: 1.011 (0.913–1.120); 1.008 (0.923–1.101)
- Time: 1.038 (1.030–1.046); 1.044 (1.036–1.052)
- Group × Time: 0.990 (0.980–1.001); 0.989 (0.978–0.9997)

Model 3, 3 months’ follow-up (indiv1_MO3)
- Group: 0.990 (0.895–1.094); 1.013 (0.926–1.108)
- Time: 1.036 (1.024–1.050); 1.046 (1.031–1.062)
- Group × Time: 0.998 (0.980–1.017); 0.996 (0.975–1.018)
To account for the within-participant correlation between repeated measures, we used generalized estimating equations (GEE) in the STATA software package (version 12.0 for Windows). Liang and Zeger proposed the GEE approach in 1986 for analysing correlated responses without specifying the full joint probability distribution. We used different correlation structures (exchangeable, first-order autoregressive, and unstructured) to account for the correlation within each unit. All models were evaluated in terms of how well they fitted the data, using the quasi-likelihood under the independence model information criterion (QIC) for model selection. The model with the lowest QIC was selected as the final model. Robust standard errors were used for all GEE-fitted models. IRRs are presented with 95% confidence intervals (95% CI) and p-values; p-values <0.05 were considered statistically significant.
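For interpretation: the IRRs reported in Table 4 are exponentiated GEE coefficients, and a Group × Time IRR below 1 means the reminder count grew more slowly in the intervention arm. The sketch below shows how a per-time-unit IRR accumulates over a follow-up period, assuming, purely for illustration, that the model's time unit is one month.

```python
def cumulative_irr(irr_per_unit: float, n_units: int) -> float:
    """Cumulative incidence rate ratio over n time units, assuming the
    per-unit IRR applies multiplicatively at each step."""
    return irr_per_unit ** n_units

# Group × Time IRR of 0.989 per unit (adjusted six-month model), compounded
# over six steps under the assumed one-month time unit:
six_month_ratio = cumulative_irr(0.989, 6)    # ~0.94
percent_slower = (1 - six_month_ratio) * 100  # ~6% slower growth
```

Under that assumption, the intervention arm's reminder count would grow roughly 6% less than the control arm's over the six-month window; the actual time coding in the fitted models may differ.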
Three GEE models were fitted (Table 4). The model for the primary outcome after 12 months, indiv_MO12, included all participants with individual follow-up of 12 months (n = 7,570). At baseline, there were no differences between the intervention and control groups. The incidence rate for triggered reminders increased significantly (p = 0.002) over the follow-up period, and the intervention and control groups behaved similarly. The result was unchanged when the confounding variables (age, gender, number of diagnoses, and number of medications) were controlled for (adjusted model).
The GEE model indiv_MO6 included all participants with at least six months of follow-up (n = 11,911). At baseline, the intervention and control groups did not differ. The incidence rate for triggered reminders increased significantly (p <0.001) during the follow-up period. The difference in the trend between the groups was not significant (p = 0.066) in the unadjusted model, but in the adjusted model there was a significant difference (p = 0.044), indicating that the number of reminders increased less in the intervention group than in the control group.
The GEE model indiv_MO3 included all participants with at least three months of follow-up (n = 12,795). At baseline, the intervention and control groups did not differ. During the follow-up period, the incidence rate for triggered reminders increased significantly (p <0.001). The intervention effect was not significant, indicating that reminders were triggered similarly in both groups. The result was unchanged in the adjusted model.
We did not detect any direct harm to the participants from the intervention during the trial. A conceivable harm to the HCPs originated from needless or incorrect reminders based on missing laboratory codes or unexpected changes in the EPR system after updates.
The two new data-gathering methods, the RSVHC and the monthly log file, functioned as planned. More than 70% of participants triggered no reminders (Table 3), probably because most reminders concerned chronic conditions and the proportion of elderly people in the Sipoo community is relatively small. Contrary to our expectations, the difference between the intervention and control group in the number of reminders after 12 months of follow-up (the primary outcome measure) was not significant (Table 4), and the pattern was similar: the number of reminders increased in both groups. We used the robust RCT method [32, 33], and the most likely explanation for the results is that the recording of diagnostic codes improved markedly during the trial (Table 3). However, at six months of individual follow-up, the increase in the total number of reminders was significantly smaller in the intervention group than in the control group when the confounding factors (age, gender, number of diagnoses, and medications) were controlled for.
There are a number of possible explanations for the results. First, the setting was one health center with 15 physicians making clinical decisions on diagnosis, medication, and laboratory tests. Because each physician took care of patients in both the intervention and the control group, contamination was possible: the physicians could have learned to treat control group patients according to the reminders shown for intervention group patients. Such a learning effect would dilute the trial effect, so the present results are a conservative estimate; in future trials, cluster randomization of several study sites or randomization of HCPs would be preferable.
Second, the set of study reminders was chosen by the study group and may have addressed the HCPs’ needs insufficiently. Tailoring the guidance to HCPs’ needs has been identified as a key issue for successful implementation [5, 7]. Local HCPs, not only the chief medical officer, may have to be involved for an adequate understanding of their needs as the starting point for developing and implementing reminders, as indicated previously. Further, competence-based individual tailoring could be helpful.
Some of the most common and important reminders, including those warning of high LDL cholesterol, had to be excluded from the analysis because the codes for laboratory tests had changed over time (which the research group was not aware of), and only the old codes were interpreted by the rules, resulting in false reminders based on old (and not the most recent) test results. This may have resulted in mistrust of the reminders among the HCPs, and poor compliance with all reminders.
Many patients were seen by nurses rather than physicians, but the actions suggested by the reminders (like ordering tests or medications) could only be taken by physicians. The nurses may not always have consulted physicians after they saw the reminders, resulting in no action.
According to a meta-regression of 162 randomized trials , the odds of success of computer-based decision support are greater for systems that require HCPs to provide reasons when overriding advice than for systems that do not. The odds of success are also better for systems that provide advice concurrently to patients and HCPs. The intervention system possessed neither of these features.
There was a delay in implementing the EBMeDS service due to technical problems. This necessitated retraining the HCPs, which for practical reasons was delayed and took place in February 2010, eight months after the introduction of the service. This delay between the introduction of the service and the training of the HCPs in its use may partly explain the results.
The reliable performance of the data-gathering methods gives confidence that they can be applied successfully in future studies and combined with more extensive use of decision support, for example monitoring and auditing the care of large patient groups such as those with type 2 diabetes.
We managed to randomize all patients of the health center, as planned, into two comparable study groups, indicating a high validity of the results [36–38]. However, the results may not be generalizable to other primary care environments, where the EBMeDS service could be more vigorously implemented. Another set of physicians in another health center could use the reminders much more or much less than the present 15 physicians, and the results could therefore be quite different. Also, the integration of the Mediatri EPR and the EBMeDS service was unique. That reminders were triggered for fewer than a third of the participants may be related to recording issues or to the timing of successive accesses to the EPR; further exploration is warranted.
The EBMeDS service, developed using the best available evidence [9, 10], aimed to offer recommendations within HCPs’ workflow across several conditions in primary care practice. During the trial, patient-specific reminders were triggered systematically but had only limited effects on patient care (our secondary outcome measure after six months’ individual follow-up). Our results reaffirm previous evidence [39–41] that implementation of computer-based decision support is problematic. HCPs seem to accept the service in principle [27, 42], but in practice they may neglect the reminders for many practical reasons.
The intervention itself is complex, as reminders have different purposes in accordance with the decision support rules (Table 1 and Additional files). Some provide advice on decisions about diagnosis, medication, or laboratory tests, for example ‘Atrial fibrillation—start warfarin?’ (DS rule 457), and some prompt follow-up of patient measurements, for example ‘Hypertension—time to check blood pressure?’ (DS rule 578). The expected time interval between consultations and changes in the patient record differs across the reminders: some changes take place quickly, even within one visit, for example when the HCP records a new diagnosis or prescribes medication. By contrast, some changes need more time, at least until the next visit or laboratory test. In the latter case, the time interval between the triggering of the reminder and the measurement of the outcome was too long for reminders to show differences between the intervention and control groups. Groups of reminders and individual reminders should also be evaluated separately, probably after careful tailoring of the time to outcome, in order to determine which types of reminder have an effect and when this should be measured.
Environmental issues included functional changes in the health center, such as turnover of staff, and the coincidence of the swine flu epidemic with the trial, which could have influenced the study. Moreover, the physicians decided that the ICD-10 diagnosis classification would be used systematically from spring 2009, and its use was made a mandatory part of recording patient data in each encounter. This might be a key reason for the changes in data entry during the trial; in a randomized design, however, both groups would have been affected similarly. The reason for a patient visit may have had no bearing on the triggered reminders, leading HCPs to ignore them. In fact, our feasibility study indicates that missing or outdated patient data on, for example, medication resulted in needless or wrong reminders, which had to be checked in the EPR and took time.
We can speculate on at least three specific issues. First, our hypothesis was based on an optimistic estimation of the potential consequences of triggered reminders. We assumed that if HCPs received patient-specific reminders they would act on them, and that this automatically would decrease the number of future reminders. We did not recognize that other factors in patient care—above all, changes in patient data recording—could have an opposite effect: for example, recording a new diagnosis or medication would trigger new reminders instead of decreasing the number of reminders triggered. This confirms previous findings [43, 44].
Second, our choice of the primary outcome measure and its timing was based only on our own understanding, without any actual evidence. Despite an extensive literature search during the planning of the trial, we could not find a study to guide us in this. Choosing primary outcome measures is not easy.
Third, the analysis consisted of three models (Indiv_MO12, Indiv_MO6, Indiv_MO3) based on the different follow-up periods of the participants. In order to be followed up for 12 months and be included in the measurement of the primary outcome, a patient had to have a first contact by the end of October 2009. The starting date could be as late as April 2010 for inclusion in the six-month follow-up period and July 2010 for inclusion in the three-month period. Most notably, the groups of individuals with 3, 6, or 12 months’ follow-up differed in the numbers of diagnoses and medications at baseline (Table 3). The higher numbers of diagnoses, medications and triggered reminders in the 12-month group indicate that this group contained more chronically ill patients than the other groups. The effects of the marked improvement in diagnosis coding during the study are difficult to assess, because the influence of reminders triggered for the intervention group may differ from that in the control group. Some of the reminders indicated to the HCP that diagnoses had not been coded. This may have resulted in more comprehensive coding of some diagnoses in the intervention group, which in turn may have resulted in more triggered reminders than in the control group, because many reminders could be triggered only if a specific diagnosis was present. Adjusting for the number of all diagnoses cannot fully remove this confounding factor.
We did not find an intervention effect of the reminders on the primary outcome measure. However, a positive effect was seen in the secondary measure over a six-month follow-up period. This trial has to be considered a pilot, identifying key factors to be taken into account when implementing and evaluating EBMeDS services or similar systems in the future. Patient information in the EPR system has to be accurately recorded for the reminders to trigger correctly, and appropriate functionality of the integrated system should be confirmed before a trial starts.
Presently, the integration of EBMeDS with any EPR system includes a thorough and systematic check of, for example, all existing laboratory code values. In integrated systems, all technical changes such as routine updating of the EPR system can influence the functioning of the decision support system.
This study was funded by the Finnish Funding Agency for Technology and Innovation (TEKES), the National Institute for Health and Welfare (THL), Duodecim Medical Publications Ltd, ProWellness Ltd and the Doctoral Programs in Public Health (DPPH). We are grateful to the chief officers at the Sipoo health center, who gave their time to participate in the trial.
- Haynes RB, Devereaux PJ, Guyatt GH: Clinical expertise in the era of evidence-based medicine and patient choice. ACP J Club. 2002, 136(2): A11-14.
- Burgers JS, Grol R, Klazinga NS, Makela M, Zaat J: Towards evidence-based clinical practice: an international survey of 18 clinical guideline programs. Int J Qual Healthcare. 2003, 15(1): 31-45. doi:10.1093/intqhc/15.1.31.
- Fixsen DL, Naoom SF, Blase KA, Friedman RM, Frances W: Implementation research: a synthesis of the literature. 2005, Tampa: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network.
- Hojgaard L: Forward look - implementation of medical research in clinical practice. 2011, Strasbourg: European Science Foundation.
- Grol R, Wensing M, Eccles M: Implementation of changes in practice. In: Improving patient care: the implementation of change in clinical practice. Edited by Grol R, Wensing M, Eccles M. 2005, Edinburgh: Elsevier, 6-14.
- Oxman AD, Thomson MA, Davis DA, Haynes RB: No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. CMAJ. 1995, 153(10): 1423-1431.
- Lugtenberg M, Burgers JS, Westert GP: Effects of evidence-based clinical practice guidelines on quality of care: a systematic review. Qual Saf Healthcare. 2009, 18(5): 385-392. doi:10.1136/qshc.2008.028043.
- Greenes RA: Clinical decision support: the road ahead. 2007, Boston: Academic Press.
- Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, Sam J, Haynes RB: Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005, 293(10): 1223-1238. doi:10.1001/jama.293.10.1223.
- Kawamoto K, Houlihan CA, Balas EA, Lobach DF: Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005, 330(7494): 765. doi:10.1136/bmj.38398.500764.8F.
- Shortliffe EH: Computer programs to support clinical decision making. JAMA. 1987, 258(1): 61-66. doi:10.1001/jama.1987.03400010065029.
- Shojania KG, Jennings A, Mayhew A, Ramsay CR, Eccles MP, Grimshaw J: The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev. 2009, 3: CD001096.
- Bell LM, Grundmeier R, Localio R, Zorc J, Fiks AG, Zhang X, Stephens TB, Swietlik M, Guevara JP: Electronic health record-based decision support to improve asthma care: a cluster-randomized trial. Pediatrics. 2010, 125(4): e770-777. doi:10.1542/peds.2009-1385.
- McCullough A, Fisher M, Goldstein AO, Kramer KD, Ripley-Moffitt C: Smoking as a vital sign: prompts to ask and assess increase cessation counseling. J Am Board Fam Med. 2009, 22(6): 625-632. doi:10.3122/jabfm.2009.06.080211.
- Padberg FT, Hauck K, Mercer RG, Lal BK, Pappas PJ: Screening for abdominal aortic aneurysm with electronic clinical reminders. Am J Surg. 2009, 198(5): 670-674. doi:10.1016/j.amjsurg.2009.07.021.
- Schriefer SP, Landis SE, Turbow DJ, Patch SC: Effect of a computerized body mass index prompt on diagnosis and treatment of adult obesity. Fam Med. 2009, 41(7): 502-507.
- Balas EA, Weingarten S, Garb CT, Blumenthal D, Boren SA, Brown GD: Improving preventive care by prompting physicians. Arch Intern Med. 2000, 160(3): 301-308. doi:10.1001/archinte.160.3.301.
- Car J, Black A, Anandan C, Cresswell K, Pagliari C, McKinstry B, Procter R, Majeed A, Sheikh A: The impact of eHealth on the quality & safety of healthcare. A systematic overview & synthesis of the literature. Report for the NHS Connecting for Health Evaluation Programme. 2008, London: Imperial College London.
- Huckvale C, Car J, Akiyama M, Jaafar S, Khoja T, Bin Khalid A, Sheikh A, Majeed A: Information technology for patient safety. Qual Saf Healthcare. 2010, 19(Suppl 2): i25-33. doi:10.1136/qshc.2009.038497.
- Bryan C, Boren SA: The use and effectiveness of electronic clinical decision support tools in the ambulatory/primary care setting: a systematic review of the literature. Inform Prim Care. 2008, 16(2): 79-91.
- Gosling AS, Westbrook JI, Spencer R: Nurses’ use of online clinical evidence. J Adv Nurs. 2004, 47(2): 201-211. doi:10.1111/j.1365-2648.2004.03079.x.
- Gill JM, Chen YX, Glutting JJ, Diamond JJ, Lieberman MI: Impact of decision support in electronic medical records on lipid management in primary care. Popul Health Manag. 2009, 12(5): 221-226. doi:10.1089/pop.2009.0003.
- Sequist TD, Gandhi TK, Karson AS, Fiskio JM, Bugbee D, Sperling M, Cook EF, Orav EJ, Fairchild DG, Bates DW: A randomized trial of electronic clinical reminders to improve quality of care for diabetes and coronary artery disease. J Am Med Inform Assoc. 2005, 12 (4): 431-437. 10.1197/jamia.M1788.View ArticlePubMedPubMed CentralGoogle Scholar
- Kortteisto T, Komulainen J, Kunnamo I, Mäkelä M, Kaila M: Implementing clinical decision support for primary care professionals – the process. Finn J eHealth eWelfare. 2012, 3 (4): 150-161.Google Scholar
- Electronic identity and certificates. [http://www.vrk.fi/default.aspx?id=21]
- EBMeDS clinical decision support. [http://www.ebmeds.org/web/guest/home?lang=fi]
- Kortteisto T, Komulainen J, Makela M, Kunnamo I, Kaila M: Clinical decision support must be useful, functional is not enough: a qualitative study of computer-based clinical decision support in primary care. BMC Health Serv Res. 2012, 12: 349-10.1186/1472-6963-12-349.View ArticlePubMedPubMed CentralGoogle Scholar
- SOTKAnet statistics and indicator bank 2005–2012. [http://uusi.sotkanet.fi/portal/page/portal/etusivu]
- McCullagh P, Nelder J: Generalized linear models. 1989, London: Chapman and Hall, 2View ArticleGoogle Scholar
- Liang K-Y, Zeger SL: Longitudinal data analysis using generalized linear models. Biometrika. 1986, 73 (1): 13-22. 10.1093/biomet/73.1.13.View ArticleGoogle Scholar
- Pan W: Akaike’s information criterion in generalized estimating equations. Biometrics. 2001, 57 (1): 120-125. 10.1111/j.0006-341X.2001.00120.x.View ArticlePubMedGoogle Scholar
- Campbell MJ, Machin D: Medical statistics: a commonsense approach. 2002, Chichester: John Wiley & Sons, Ltd, 3Google Scholar
- Eccles M, Grimshaw J, Campbell M, Ramsay C: Research designs for studies evaluating the effectiveness of change and improvement strategies. Qual Saf Healthcare. 2003, 12 (1): 47-52. 10.1136/qhc.12.1.47.View ArticleGoogle Scholar
- Varonen H, Kortteisto T, Kaila M: What may help or hinder the implementation of computerized decision support systems (CDSSs): a focus group study with physicians. Fam Pract. 2008, 25 (3): 162-167. 10.1093/fampra/cmn020.View ArticlePubMedGoogle Scholar
- Roshanov PS, Fernandes N, Wilczynski JM, Hemens BJ, You JJ, Handler SM, Nieuwlaat R, Souza NM, Beyene J, Van Spall HG: Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials. BMJ. 2013, 346: f657-10.1136/bmj.f657.View ArticlePubMedGoogle Scholar
- Fransen GA, van Marrewijk CJ, Mujakovic S, Muris JW, Laheij RJ, Numans ME, de Wit NJ, Samsom M, Jansen JB, Knottnerus JA: Pragmatic trials in primary care. Methodological challenges and solutions demonstrated by the DIAMOND-study. BMC Med Res Methodol. 2007, 7: 16-10.1186/1471-2288-7-16.View ArticlePubMedPubMed CentralGoogle Scholar
- Juni P, Altman DG, Egger M: Systematic reviews in healthcare: assessing the quality of controlled clinical trials. BMJ. 2001, 323 (7303): 42-46. 10.1136/bmj.323.7303.42.View ArticlePubMedPubMed CentralGoogle Scholar
- Schulz KF, Grimes DA: Generation of allocation sequences in randomised trials: chance, not choice. Lancet. 2002, 359 (9305): 515-519. 10.1016/S0140-6736(02)07683-3.View ArticlePubMedGoogle Scholar
- Black AD, Car J, Pagliari C, Anandan C, Cresswell K, Bokun T, McKinstry B, Procter R, Majeed A, Sheikh A: The impact of eHealth on the quality and safety of healthcare: a systematic overview. PLoS Med. 2011, 8 (1): e1000387-10.1371/journal.pmed.1000387.View ArticlePubMedPubMed CentralGoogle Scholar
- Heselmans A, Van de Velde S, Donceel P, Aertgeerts B, Ramaekers D: Effectiveness of electronic guideline-based implementation systems in ambulatory care settings - a systematic review. Implement Sci. 2009, 4: 82-10.1186/1748-5908-4-82.View ArticlePubMedPubMed CentralGoogle Scholar
- Martens JD, van der Weijden T, Winkens RA, Kester AD, Geerts PJ, Evers SM, Severens JL: Feasibility and acceptability of a computerised system with automated reminders for prescribing behaviour in primary care. Int J Med Inform. 2008, 77 (3): 199-207. 10.1016/j.ijmedinf.2007.05.013.View ArticlePubMedGoogle Scholar
- Heselmans A, Aertgeerts B, Donceel P, Geens S, Van de Velde S, Ramaekers D: Family physicians’ perceptions and use of electronic clinical decision support during the first year of implementation. J Med Syst. 2012, 36 (6): 3677-3684. 10.1007/s10916-012-9841-3.View ArticlePubMedGoogle Scholar
- Herzberg S, Rahbar K, Stegger L, Schafers M, Dugas M: Concept and implementation of a computer-based reminder system to increase completeness in clinical documentation. Int J Med Inform. 2011, 80 (5): 351-358. 10.1016/j.ijmedinf.2011.02.004.View ArticlePubMedGoogle Scholar
- Wright A, Pang J, Feblowitz JC, Maloney FL, Wilcox AR, McLoughlin KS, Ramelson H, Schneider L, Bates DW: Improving completeness of electronic problem lists through clinical decision support: a randomized, controlled trial. J Am Med Inform Assoc. 2012, 19 (4): 555-561. 10.1136/amiajnl-2011-000521.View ArticlePubMedPubMed CentralGoogle Scholar
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.