
Patient-specific computer-based decision support in primary healthcare—a randomized trial



Computer-based decision support systems are a promising method for incorporating research evidence into clinical practice. However, evidence is still scant on how such information technology solutions work in primary healthcare when support is provided across many health problems. In Finland, we designed a trial where a set of evidence-based, patient-specific reminders was introduced into the local Electronic Patient Record (EPR) system. The aim was to measure the effects of such reminders on patient care. The hypothesis was that the total number of triggered reminders would decrease in the intervention group compared with the control group, indicating an improvement in patient care.


From July 2009 to October 2010 all the patients of one health center were randomized to an intervention or a control group. The intervention consisted of patient-specific reminders concerning 59 different health conditions triggered when the healthcare professional (HCP) opened and used the EPR. In the intervention group, the triggered reminders were shown to the HCP; in the control group, they were not. The primary outcome measure was the change in the number of reminders triggered over 12 months. We developed a unique data-gathering method, the Repeated Study Virtual Health Check (RSVHC), and used Generalized Estimating Equations (GEE) for analysing the incidence rate ratio, a measure of the relative difference in the percentage change in the numbers of reminders triggered in the intervention group and the control group.


In total, 13,588 participants were randomized and included. Contrary to our expectation, the total number of reminders triggered increased in both the intervention and the control groups. The primary outcome measure did not show a significant difference between the groups. However, with the inclusion of patients followed up over only six months, the total number of reminders increased significantly less in the intervention group than in the control group when the confounding factors (age, gender, number of diagnoses and medications) were controlled for.


The number of computerized, tailored reminders triggered in primary care did not decrease during the 12 months of follow-up after the introduction of a patient-specific decision support system.

Trial registration: ClinicalTrials.gov NCT00915304



The treatment of patients is based on clinical expertise, whose key elements are research evidence, clinical situations and circumstances, and patients’ preferences and actions [1]. The evidence is translated into practical form, for example, in clinical practice guidelines [2], whereas active incorporation of these into everyday practice has only recently become a recognized target for research [3, 4]. These methods have previously been summarized and the conclusion is that because there are no magic formulae [5, 6], tailoring the intervention is necessary [7].

One of the innovations in the incorporation of evidence into practice is computer-based decision support to bring relevant evidence to the attention of healthcare professionals (HCPs) at the point of care [8]. Such automatic systems combine medical evidence with patient-specific data from the Electronic Patient Record (EPR), which supports clinical decision making [9–11]. According to a Cochrane Review of 28 studies, computer reminders achieved a median improvement in process adherence of 4.2% [12]. Focused computer-generated reminders and alerts work well in a variety of single conditions [13–16] and in preventive care [17]. Decision support can in many settings improve the quality of care and help to avoid mistakes in clinical work, thereby improving patient safety [18, 19]. There is still, however, scant evidence on how such information technology solutions work across many diseases or conditions in primary healthcare where multi-professional teams [20, 21] care for patients with multiple health problems, both acute and chronic [22, 23].

In our study, a set of evidence-based patient-specific reminders in the form of the computer-based decision support service EBMeDS (Evidence-Based Medicine electronic Decision Support) was integrated into the EPR system of one primary care organisation. The EBMeDS service aims to aid treatment across several conditions in actual clinical practice and should therefore be usable in primary healthcare. Our study question was: ‘Do patient- and problem-specific automatic reminders shown to HCPs during primary care consultations have an effect on patient care?’ We hypothesized that the total number of triggered reminders would decrease in the intervention group, in contrast to the control group, indicating a possible improvement in patient care. The hypothesis was formulated on the basis of the idea of a gold standard, by which a triggered reminder indicates that the patient care is not evidence-based, and no reminder indicates that the patient care is evidence-based.

The study was reviewed and accepted by the Pirkanmaa Hospital District (Tampere University Hospital) Ethics Committee (ETL R08149) and registered at ClinicalTrials.gov (NCT00915304).


Trial design

The setting was the primary healthcare center of Sipoo, which was selected from regular users of the Mediatri EPR system. The center comprises 48 HCPs: 15 physicians, 24 nurses and 9 other HCPs (physiotherapists, ward nurses, a psychologist), described in detail elsewhere [24]. The HCPs used the EPR system during outpatient consultations as well as on an inpatient ward typical of Finnish primary care.

We used a parallel randomized controlled trial design, with patient identification (ID) numbers in the EPR system as the unit of randomization. We made use of the Finnish Personal Identity Code (PIC), by which each individual can be specifically identified [25], to produce anonymized study IDs based on the PICs. All patients who were listed as undergoing occupational healthcare were excluded for legal reasons (Figure 1).

Figure 1

Study design. RSVHC is the Repeated Study Virtual Health Check where all decision support rules are run at once and the triggered reminders at the time point are registered; see text for explanation. Both the intervention and control groups were accrued as new individuals visited the health center (i.e. first contact date). Therefore, the starting and end points of follow-up are individual. Occupational healthcare was excluded. (See text and Figure 3 for further explanation).

The study started in July 2009 and ended in October 2010.

We developed a unique method for population-based outcome data gathering from the EPR archive, the Repeated Study Virtual Health Check (RSVHC). During an RSVHC, the EPR archive sent the EBMeDS service structured patient data (diagnoses, medications, and laboratory results) on the base study population (the request), and the service generated all reminders triggered by these data and returned them (the answer). The RSVHC was planned to be performed weekly at night; in practice, one to five RSVHCs were performed per month. The requests and answers of each RSVHC were stored automatically in a log file located on the EPR server, to be exported to the study register and analyzed at the end of the study period.
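The request–answer pairing of an RSVHC can be pictured as a simple log record. The sketch below is purely illustrative: the source does not describe the log format, so every field name here is a hypothetical assumption.

```python
import json
from datetime import date

def rsvhc_log_record(study_id, check_date, diagnoses, medications, labs, reminders):
    """One request/answer pair from a Repeated Study Virtual Health Check.

    All field names are hypothetical: the source states only that structured
    patient data (diagnoses, medications, laboratory results) go out as the
    request and the triggered reminders come back as the answer.
    """
    return json.dumps({
        "study_id": study_id,
        "date": check_date.isoformat(),
        "request": {
            "diagnoses": diagnoses,
            "medications": medications,
            "laboratory_results": labs,
        },
        "answer": {"triggered_reminders": reminders},
    })

# Illustrative values only, not real patient data
record = rsvhc_log_record(
    "a1b2c3", date(2009, 7, 15),
    ["E11"], ["metformin"], {"P-Crea": 75},
    ["Type 2 diabetes - time for nephropathy screening"],
)
```

Storing both halves of every check in one record is what later allows the monthly log files and the study register to be linked by study ID and date.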

The reminders were written by the EBMeDS editorial team using medical evidence embedded in the sets of Duodecim guidelines and other sources, and linked with evidence-based decision support (DS) rules [26]. In April 2009, the 107 available DS rules were piloted in Sipoo. Subsequently, the chief medical officer of the health center decided which of the DS rules should be implemented. These totalled 96.

Here is an example of an implemented DS rule that conforms to if–then logic: ‘Metformin is the first choice oral hypoglycaemic agent in type 2 diabetes’ (DS rule 16).

The DS rule is implemented if the diagnosis in the EPR is type 2 diabetes. First, the rule checks whether the medication list on the EPR contains metformin. If it does not, the rule then checks the plasma/serum creatinine value from the EPR laboratory results. If the glomerular filtration rate (GFR) is in the normal range, then reminder one, ‘Type 2 diabetes—start metformin’, is shown on the screen. If the GFR is <60 ml/min, then reminder two, ‘Type 2 diabetes—start metformin, note GFR’, is shown. If the GFR is missing or out of date, then reminder three, ‘Type 2 diabetes—check renal function and start metformin’, is shown [26].
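The branching described above is plain conditional logic. The following sketch is illustrative only (simplified input types, hypothetical function name); the actual EBMeDS rules operate on structured EPR data, not on the sets used here.

```python
def metformin_reminder(diagnoses, medications, gfr_ml_min):
    """Return the reminder text DS rule 16 would trigger, or None.

    Hypothetical signature; inputs are simplified to sets of lowercase
    strings plus a GFR value, with None meaning the GFR is missing or
    out of date.
    """
    if "type 2 diabetes" not in diagnoses:
        return None  # the rule applies only to type 2 diabetes
    if "metformin" in medications:
        return None  # already on the first-choice agent; nothing to remind
    if gfr_ml_min is None:
        # no usable renal function value in the laboratory results
        return "Type 2 diabetes - check renal function and start metformin"
    if gfr_ml_min < 60:
        return "Type 2 diabetes - start metformin, note GFR"
    return "Type 2 diabetes - start metformin"
```

Note that acting on the reminder (recording metformin) silences the rule at the next check, which is exactly the mechanism behind the trial's hypothesis that reminder counts should fall in the intervention group.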

See Additional file 1 for more examples of DS rules and reminders targeting diabetes patients, a large patient group in primary care. HCPs can easily check background information and evidence behind each reminder by clicking the reminder and opening the references.


This was a register-based study using the EPR data without any direct contact with patients.

The first step in the data collection comprised 52 RSVHCs carried out between July 2009 and October 2010 in the base population. At the end of the study, the population was 17,541 (total number of patient IDs in the study register). Data in the RSVHCs were structured patient-specific information (diagnoses, medications, laboratory results), and the triggered patient-specific reminders at each time point were stored in the study register.

In the second step, using the study ID, a baseline date was determined individually for each study patient as the date of the first opening of the patient’s EPR during the study period; the patient-specific follow-up started from this date. All EBMeDS procedures (requests and answers) that actually took place during the study were stored in monthly log files on the EPR server. One log file (April 2010) was lost because of technical problems.

The third step involved linking the information from steps one and two. These final data comprised patient-specific information from the individual baseline date to all RSVHC points where the patient was followed up.

The swine flu epidemic, with the ensuing universal vaccination procedures, occurred between September 2009 and February 2010. We excluded from the monthly log files patients who received only the swine flu vaccination during a short visit (5 to 10 minutes) with a nurse. According to the nurses [27], nothing else was checked, including triggered reminders.


The intervention consisted of the patient-specific reminders being shown to the HCP on opening and using the EPR. A concrete example, the reminders for one diabetes patient, has been published previously [27]. Short versions of the triggered reminders, for example, ‘Type 2 diabetes—time for nephropathy screening’, were shown automatically on screen. The full version of the reminders could be seen when the HCP hovered the cursor over the reminder, for example, ‘This patient has type 2 diabetes and no screening for microalbuminuria has been carried out during the last year. Annual screening for microalbuminuria is recommended in type 2 diabetes’. See Table 1 for examples of the reminders according to ICD-10 diagnosis groups. All study reminders are available in Additional files 2, 3 and 4.

Table 1 Examples of the EBMeDS reminders listed according to ICD-10 coding system

The control group was treated according to normal practice, and the triggered patient-specific reminders were not shown to the HCP on screen. Instead, these were stored in the log files and exported to the study register. Usual care, and the evidence underlying it, was available to HCPs at all times during the trial through active searching of, for example, guidelines.

There are four different EBMeDS decision support service functions: reminders, guideline links, a clinical virtual health check, and drug alerts (Table 2). Reminders and drug alerts are triggered automatically, but the guideline links and clinical virtual health check functions need active querying. Here, we focused only on the automatic reminder function. The interface of the integrated systems (EPR and EBMeDS) was designed by the EPR system vendor.

Table 2 The four EBMeDS decision support functions available for the healthcare professional

The number of reminders selected for the study was 154, based on 73 DS rules. After piloting, we had to exclude 23 of the original 96 DS rules (not triggered, n = 10, or not calculable, n = 13). In practice, 14 additional DS rules failed to function as planned (missing laboratory codes or unexpected changes in the EPR system after updates). After excluding a further 38 reminders based on these 14 DS rules, the analyzable final maximum number of different reminders was 116 (Figure 2).

Figure 2

The process of eliminating non-functioning decision support (DS) rules from the analyses, ending with the 59 rules that were used.


The primary composite outcome measure was the change in the numbers of all reminders triggered in the target population over 12 months of individual follow-up. As secondary outcome measures, we explored the changes also after three and six months of follow-up.

Sample size

We planned to include all of the Sipoo patients’ EPRs in the study, and estimated that at least 50% of the population would contact the health center during one year, based on available data on visits to primary healthcare centers in Finland, and local statistics [28]. This translated into an approximation of 10,500 participants in the final study sample. The accumulation of the study participants (n = 13,588) is shown in Figure 3.

Figure 3

Accumulation of study participants from July 2009 to October 2010.


A single ratio procedure randomized the base population of the health center at the beginning of the study to an intervention and a control group of the same size without any other criteria. The procedure was done once per individual by a computer using a mathematical formula based on the PIC of each patient in the EPR system, and assigning each patient a unique study ID number. The forthcoming patients (for example, new inhabitants of Sipoo) were randomized according to the same procedure. The procedure was performed by a person outside the study group who also retained the key of the formula linking the PIC to the study ID number.
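The allocation procedure can be pictured as a deterministic keyed transformation of the PIC. The exact formula was not disclosed (the key was retained by a person outside the study group), so the following is only an assumption-laden illustration using an HMAC; the function names, key, and parity rule are all invented for this sketch.

```python
import hashlib
import hmac

# Hypothetical secret: in the trial, the linking key was held by a
# person outside the study group.
SECRET_KEY = b"held-by-independent-party"

def study_id(pic: str) -> str:
    """Derive a pseudonymous study ID deterministically from the PIC."""
    return hmac.new(SECRET_KEY, pic.encode(), hashlib.sha256).hexdigest()[:12]

def allocate(pic: str) -> str:
    """1:1 allocation: parity of the last hex digit of the study ID."""
    return "intervention" if int(study_id(pic)[-1], 16) % 2 == 0 else "control"
```

Because the mapping is deterministic, forthcoming patients (for example, new inhabitants) receive the same once-and-only-once allocation whenever their PIC first appears, without any state being kept by the study group.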


The randomization was masked from the patients, the HCPs, and the study group. However, when the HCP was shown the patient-specific reminders on screen, he or she knew that the patient belonged to the intervention group. The study group unblinded the allocation only after the data collection period.

Statistical methods

Baseline characteristics of the intervention and control groups, with an ancillary analysis of triggered reminders, were summarized using means and standard deviations or frequencies and proportions (Table 3).

Table 3 Characteristics of the intervention and the control group in the three models

To investigate the effect of the intervention on patient care, the outcome variable was the number of triggered reminders in each RSVHC. Because the data were right-skewed and the variance was greater than the mean, the negative binomial model provided a better fit to the data and accounted for over-dispersion better than a Poisson regression model [29]. We used negative binomial regression to model the number of triggered reminders at 12-, 6-, and 3-month follow-up times (Table 4, models 1 to 3). The negative binomial model included a variable (group) to indicate the difference between groups at baseline and a variable (time) to indicate the changes in the number of triggered reminders over time. The difference between the two groups in the change in the number of triggered reminders across the intervention was tested using an interaction term between group and time. The exponent of the coefficient of the interaction term is the incidence rate ratio (IRR), i.e., an estimate of the relative difference in percentage change in the number of triggered reminders in the intervention group, compared with the control group. We also added to the models some potential confounding variables, such as age, gender, number of diagnoses, and number of medications.
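The IRR is simply the exponentiated interaction coefficient. As a minimal sketch of that step (the coefficient and standard error below are made-up numbers for illustration, not estimates from this trial):

```python
import math

def irr_with_ci(coef, se, z=1.96):
    """Exponentiate a regression coefficient into an incidence rate ratio
    (IRR) with a normal-approximation 95% confidence interval."""
    return (math.exp(coef), math.exp(coef - z * se), math.exp(coef + z * se))

# Illustrative values only: an interaction coefficient of -0.05 (SE 0.02)
point, lo, hi = irr_with_ci(-0.05, 0.02)
# point is about 0.951, meaning the reminder count grows roughly 5% less
# per time unit in the intervention group than in the control group
```

An IRR below 1 with a confidence interval excluding 1 would thus correspond to the hypothesized relative slowing of reminder accumulation in the intervention group.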

Table 4 Incidence rate ratios (IRR) of the number of triggered reminders by negative binomial regression models using a generalized estimation equation

To account for the within-participant correlation between repeated measures, we used Generalized Estimating Equations (GEE), implemented in the STATA software package (version 12.0 for Windows). Liang and Zeger proposed the GEE approach in 1986 to handle correlated responses without requiring full specification of the joint probability distribution [30]. We used different correlation structures (exchangeable, first-order autoregressive, and unstructured) to account for the correlation within each unit. All models were evaluated for goodness of fit using the quasi-likelihood under the independence model information criterion (QIC) for model selection [31]. The model with the lowest QIC was selected as the final model. Robust standard errors were used for all GEE-fitted models. IRRs are presented with 95% confidence intervals (95% CI) and p-values. A p-value below 0.05 was considered statistically significant.


In total, 17,541 potential participants were registered in the study on the basis of the 52 RSVHCs. Of these, 13,588 individuals’ EPRs were accessed by the HCPs during the study (Figure 4). The characteristics and descriptive statistics of the analyzed participants in different models (age, gender, number of diagnoses and triggered reminders at baseline and at the end of the follow-up period, number of medications at baseline, and number of participants with no triggered reminder) are presented in Table 3. The participants’ individual follow-up periods varied from 1 day to 480 days, which was decisive for inclusion in or exclusion from the GEE models’ analyses.

Figure 4

Flow chart of study participants. Follow-up time is individual. (Primary outcome measure shown in bold text).

Three GEE models were made (Table 4). The primary outcome after 12 months, model indiv_MO12, included all participants with individual follow-up for 12 months (n = 7,570). At baseline, there were no differences between the intervention and control groups. The incidence rate for triggered reminders increased significantly (p = 0.002) over the follow-up period, and the intervention and control groups behaved similarly. The result was unchanged after adjustment for confounding variables (age, gender, number of diagnoses, and number of medications; adjusted model).

The GEE model indiv_MO6 included all participants with at least six months of follow-up (n = 11,911). At baseline, the intervention and control groups did not differ. The incidence rate for triggered reminders increased significantly (p <0.001) during the follow-up period. The difference in development between the groups was not significant (p = 0.066) in the unadjusted model, but in the adjusted model there was a significant difference (p = 0.044), indicating that the number of reminders increased less in the intervention group than in the control group.

The GEE model indiv_MO3 included all participants with at least three months of follow-up (n = 12,795). At baseline, the intervention and control groups did not differ. During the follow-up period, the incidence rate for triggered reminders increased significantly (p <0.001). The intervention effect was not significant, indicating that the reminders were triggered similarly in both groups. The result was unchanged after adjustment for confounding variables in the adjusted model.

We did not detect any direct harm to the participants from the intervention during the trial. A conceivable harm to the HCPs originated from needless or incorrect reminders based on missing laboratory codes or unexpected changes in the EPR system after updates.


The two new data-gathering methods, the RSVHC and the monthly log file, functioned as planned. More than 70% of participants did not trigger reminders (Table 3), probably because most reminders were for chronic conditions and the proportion of elderly people is relatively small in the Sipoo community. Contrary to our expectations, the difference in the number of reminders after 12 months of follow-up (primary outcome measure) between the intervention and control group was not significant (Table 4), and the pattern was similar: increasing numbers of reminders in the intervention and the control group. We used the robust RCT method [32, 33], and the most likely explanation for the results is that the recording of diagnostic codes improved markedly during the trial (Table 3). However, at six months individual follow-up time, the increase in the total number of reminders was significantly less in the intervention group than in the control group, when controlling for the confounding factors, such as age, gender, number of diagnoses, and medications.


There are a number of possible explanations for the results. First, the setting was one health center with 15 physicians making clinical decisions on diagnosis, medication, and laboratory tests. Because each physician took care of patients in the intervention group as well as in the control group, contamination was possible in so far as the physicians could have learned to treat the control group patients according to the reminders for the intervention group patients. This possible learning effect would tend to dilute the trial effect. Therefore, the present results are a conservative estimate, and in future trials, a cluster randomization of several study sites or a randomization of HCPs would be preferable.

Second, the set of study reminders was chosen by the study group and may have addressed the HCPs’ needs insufficiently. Tailoring the guidance to HCPs’ needs has been indicated as a key issue for successful implementation [5, 7]. Local HCPs, not only the Chief Medical Officer, may have to be involved for an adequate understanding of their needs as the starting point for developing and implementing reminders, as had been indicated previously [34]. Further, competence-based individual tailoring could be helpful.

Some of the most common and important reminders, including those warning of high LDL cholesterol, had to be excluded from the analysis because the codes for laboratory tests had changed over time (which the research group was not aware of), and only the old codes were interpreted by the rules, resulting in false reminders based on old (and not the most recent) test results. This may have resulted in mistrust of the reminders among the HCPs, and poor compliance with all reminders.

Many patients were seen by nurses rather than physicians, but the actions suggested by the reminders (like ordering tests or medications) could only be taken by physicians. The nurses may not always have consulted physicians after they saw the reminders, resulting in no action.

According to a meta-regression of 162 randomized trials [35], the odds of success of computer-based decision support are greater for systems that require HCPs to provide reasons when overriding advice than for systems that do not. The odds of success are also better for systems that provide advice concurrently to patients and HCPs. The intervention system possessed neither of these features.

There was a delay in implementing the EBMeDS service due to technical problems. This necessitated retraining the HCPs, which for practical reasons was delayed and took place in February 2010, eight months after the introduction of the service [24]. This delay from the introduction of the service to the training of the HCPs in its use may partly explain the results.


The reliable performance of the data-gathering methods gives confidence that they can be applied successfully in future studies and combined with more extensive use of decision support, for example monitoring and auditing the care of large patient groups such as those with type 2 diabetes.

We managed to randomize all patients of the health center as planned to two comparable study groups, indicating a high validity of the results [36–38]. However, the results may not be generalizable to other primary care environments, where the EBMeDS service could be more vigorously implemented. Another set of physicians in another health center could use the reminders much more or much less than the present 15 physicians, and therefore the results could be totally different. Also, the integration of the Mediatri EPR and the EBMeDS service was unique. The fact that reminders were triggered for fewer than a third of the participants may be related to recording issues, or to the timing of the successive accessing of the EPR. Further exploration is warranted.


The EBMeDS service, developed using the best available evidence [9, 10], aimed to deliver recommendations into HCPs’ workflows across several conditions in primary care practice. During the trial, patient-specific reminders were triggered systematically but had only limited effects on patient care (our secondary outcome measure after six months’ individual follow-up time). Our results reaffirm previous evidence [39–41] that implementation of computer-based decision support is problematic. HCPs seem to accept the service in principle [27, 42], but in practice they may neglect using the reminders for many practical reasons.

The intervention itself is complex, as reminders have different purposes in accordance with decision support rules (Table 1 and Additional files). Some provide advice on diagnosis, medication, or laboratory test decisions, for example ‘Atrial fibrillation—start warfarin?’ (DS rule 457), and some are reminders to follow up patient measurements, for example ‘Hypertension—time to check blood pressure?’ (DS rule 578). The expected time interval between consultations and changes in the patient record differs across the reminders: some changes take place quickly, even in one visit, for example, as the HCP records a new diagnosis or prescribes medication. By contrast, some changes need more time, at least until the next visit or laboratory test. In the latter case, the time interval between the triggering of the reminder and the measurement of the outcome was too long for reminders to show differences between the intervention and control groups. Groups of reminders and individual reminders should also be evaluated separately, probably after careful tailoring of the time to outcome, in order to determine the types of reminder that have an effect and when this should be measured.

Environmental issues included functional changes in the health center, such as turnover of staff, and the coincidence of the swine flu epidemic with the trial, which could have influenced the study. Moreover, the physicians decided that ICD-10 diagnosis classification would be used systematically from spring 2009. The use of the classification was made a mandatory function for recording patient data in each encounter. This might be a key reason for changes in data entry during the trial. However, in a randomized design both groups would have been affected similarly. In addition, the triggered reminders may have been unrelated to the reason for the patient’s visit, leading HCPs to ignore them. In fact, our feasibility study [27] indicates that missing or outdated patient data on, for example, medication, resulted in needless or wrong reminders. These had to be checked in the EPR, which took time.

We can speculate on at least three specific issues. First, our hypothesis was based on an optimistic estimation of the potential consequences of triggered reminders. We assumed that if HCPs received patient-specific reminders they would act on them, and that this automatically would decrease the number of future reminders. We did not recognize that other factors in patient care—above all, changes in patient data recording—could have an opposite effect: for example, recording a new diagnosis or medication would trigger new reminders instead of decreasing the number of reminders triggered. This confirms previous findings [43, 44].

Second, our choice of the primary outcome measure and its timing was based only on our understanding, without any actual evidence. Despite extensive research during the planning of the trial, we could not find a study to help us with this. Choosing the primary outcome measures is not easy [36].

Third, the analysis consisted of three models (Indiv_MO12, Indiv_MO6, Indiv_MO3) based on the different follow-up periods of the participants. In order to be followed up for 12 months and be included in the measurement of the primary outcome, a patient had to have a first contact by the end of October 2009. The starting date could be as late as April 2010 for inclusion in the six-month follow-up period and July 2010 for inclusion in the three-month period. Most notably, the groups of individuals with 3, 6, or 12 months’ follow-up differed in the numbers of diagnoses and medications at baseline (Table 3). The higher numbers of diagnoses, medications and triggered reminders in the 12-month group indicate more chronically ill patients in this group than in the other groups. The effects of the marked improvement in diagnosis coding during the study are difficult to assess, because the influence of reminders triggered for the intervention group may differ from that in the control group. Some of the reminders indicated to the HCP that diagnoses had not been coded. This may have resulted in more comprehensive coding of some diagnoses in the intervention group, which further may have resulted in more triggered reminders than in the control group, because many reminders could be triggered only if a specific diagnosis was present. Adjusting for the number of all diagnoses cannot fully remove this confounding factor.


We did not find an intervention effect of the reminders on the primary outcome measure. However, a positive effect was seen in the secondary measure over a six-month follow-up period. This trial has to be considered a pilot, identifying key factors to be taken into account when implementing and evaluating EBMeDS services or similar systems in the future. Patient information in the EPR system has to be accurately recorded for the reminders to trigger correctly. Appropriate functionality of the integrated system should be confirmed before the trial starts.

Presently, the integration of EBMeDS with any EPR system includes a thorough and systematic check of, for example, all existing laboratory code values. In integrated systems, all technical changes such as routine updating of the EPR system can influence the functioning of the decision support system.


  1. Haynes RB, Devereaux PJ, Guyatt GH: Clinical expertise in the era of evidence-based medicine and patient choice. ACP J Club. 2002, 136 (2): A11-14.


  2. Burgers JS, Grol R, Klazinga NS, Makela M, Zaat J: Towards evidence-based clinical practice: an international survey of 18 clinical guideline programs. Int J Qual Healthcare. 2003, 15 (1): 31-45. 10.1093/intqhc/15.1.31.


  3. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Frances W: Implementation research: a synthesis of the literature. 2005, Tampa: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network


  4. Hojgaard L: Forward look - implementation of medical research in clinical practice. 2011, Strasbourg: European science foundation


  5. Grol R, Wensing M, Eccles M: Implementation of changes in practice. Improving patient care: the implementation of change in clinical practice. Edited by: Grol R, Wensing M, Eccles M. 2005, Edinburgh: Elsevier, 6-14.


  6. Oxman AD, Thomson MA, Davis DA, Haynes RB: No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. CMAJ. 1995, 153 (10): 1423-1431.


  7. Lugtenberg M, Burgers JS, Westert GP: Effects of evidence-based clinical practice guidelines on quality of care: a systematic review. Qual Saf Healthcare. 2009, 18 (5): 385-392. 10.1136/qshc.2008.028043.


  8. Greenes RA: Clinical decision support: the road ahead. 2007, Boston: Academic


  9. Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, Sam J, Haynes RB: Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005, 293 (10): 1223-1238. 10.1001/jama.293.10.1223.


  10. Kawamoto K, Houlihan CA, Balas EA, Lobach DF: Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005, 330 (7494): 765-10.1136/bmj.38398.500764.8F.


  11. Shortliffe EH: Computer programs to support clinical decision making. JAMA. 1987, 258 (1): 61-66. 10.1001/jama.1987.03400010065029.


  12. Shojania KG, Jennings A, Mayhew A, Ramsay CR, Eccles MP, Grimshaw J: The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev. 2009, 3: CD001096


  13. Bell LM, Grundmeier R, Localio R, Zorc J, Fiks AG, Zhang X, Stephens TB, Swietlik M, Guevara JP: Electronic health record-based decision support to improve asthma care: a cluster-randomized trial. Pediatrics. 2010, 125 (4): e770-777. 10.1542/peds.2009-1385.


  14. McCullough A, Fisher M, Goldstein AO, Kramer KD, Ripley-Moffitt C: Smoking as a vital sign: prompts to ask and assess increase cessation counseling. J Am Board Fam Med. 2009, 22 (6): 625-632. 10.3122/jabfm.2009.06.080211.


  15. Padberg FT, Hauck K, Mercer RG, Lal BK, Pappas PJ: Screening for abdominal aortic aneurysm with electronic clinical reminders. Am J Surg. 2009, 198 (5): 670-674. 10.1016/j.amjsurg.2009.07.021.


  16. Schriefer SP, Landis SE, Turbow DJ, Patch SC: Effect of a computerized body mass index prompt on diagnosis and treatment of adult obesity. Fam Med. 2009, 41 (7): 502-507.


  17. Balas EA, Weingarten S, Garb CT, Blumenthal D, Boren SA, Brown GD: Improving preventive care by prompting physicians. Arch Intern Med. 2000, 160 (3): 301-308. 10.1001/archinte.160.3.301.


  18. Car J, Black A, Anandan C, Cresswell K, Pagliari C, McKinstry B, Procter R, Majeed A, Sheikh A: The impact of eHealth on the quality & safety of healthcare. A systematic overview & synthesis of the literature. Report for the NHS Connecting for Health Evaluation Programme. 2008, London: Imperial College London


  19. Huckvale C, Car J, Akiyama M, Jaafar S, Khoja T, Bin Khalid A, Sheikh A, Majeed A: Information technology for patient safety. Qual Saf Healthcare. 2010, 19 (Suppl 2): i25-33. 10.1136/qshc.2009.038497.


  20. Bryan C, Boren SA: The use and effectiveness of electronic clinical decision support tools in the ambulatory/primary care setting: a systematic review of the literature. Inform Prim Care. 2008, 16 (2): 79-91.


  21. Gosling AS, Westbrook JI, Spencer R: Nurses’ use of online clinical evidence. J Adv Nurs. 2004, 47 (2): 201-211. 10.1111/j.1365-2648.2004.03079.x.


  22. Gill JM, Chen YX, Glutting JJ, Diamond JJ, Lieberman MI: Impact of decision support in electronic medical records on lipid management in primary care. Popul Health Manag. 2009, 12 (5): 221-226. 10.1089/pop.2009.0003.


  23. Sequist TD, Gandhi TK, Karson AS, Fiskio JM, Bugbee D, Sperling M, Cook EF, Orav EJ, Fairchild DG, Bates DW: A randomized trial of electronic clinical reminders to improve quality of care for diabetes and coronary artery disease. J Am Med Inform Assoc. 2005, 12 (4): 431-437. 10.1197/jamia.M1788.


  24. Kortteisto T, Komulainen J, Kunnamo I, Mäkelä M, Kaila M: Implementing clinical decision support for primary care professionals – the process. Finn J eHealth eWelfare. 2012, 3 (4): 150-161.


  25. Electronic identity and certificates. []

  26. EBMeDS clinical decision support. []

  27. Kortteisto T, Komulainen J, Makela M, Kunnamo I, Kaila M: Clinical decision support must be useful, functional is not enough: a qualitative study of computer-based clinical decision support in primary care. BMC Health Serv Res. 2012, 12: 349-10.1186/1472-6963-12-349.


  28. SOTKAnet statistics and indicator bank 2005–2012. []

  29. McCullagh P, Nelder J: Generalized linear models. 1989, London: Chapman and Hall, 2


  30. Liang K-Y, Zeger SL: Longitudinal data analysis using generalized linear models. Biometrika. 1986, 73 (1): 13-22. 10.1093/biomet/73.1.13.


  31. Pan W: Akaike’s information criterion in generalized estimating equations. Biometrics. 2001, 57 (1): 120-125. 10.1111/j.0006-341X.2001.00120.x.


  32. Campbell MJ, Machin D: Medical statistics: a commonsense approach. 2002, Chichester: John Wiley & Sons, Ltd, 3


  33. Eccles M, Grimshaw J, Campbell M, Ramsay C: Research designs for studies evaluating the effectiveness of change and improvement strategies. Qual Saf Healthcare. 2003, 12 (1): 47-52. 10.1136/qhc.12.1.47.


  34. Varonen H, Kortteisto T, Kaila M: What may help or hinder the implementation of computerized decision support systems (CDSSs): a focus group study with physicians. Fam Pract. 2008, 25 (3): 162-167. 10.1093/fampra/cmn020.


  35. Roshanov PS, Fernandes N, Wilczynski JM, Hemens BJ, You JJ, Handler SM, Nieuwlaat R, Souza NM, Beyene J, Van Spall HG: Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials. BMJ. 2013, 346: f657-10.1136/bmj.f657.


  36. Fransen GA, van Marrewijk CJ, Mujakovic S, Muris JW, Laheij RJ, Numans ME, de Wit NJ, Samsom M, Jansen JB, Knottnerus JA: Pragmatic trials in primary care. Methodological challenges and solutions demonstrated by the DIAMOND-study. BMC Med Res Methodol. 2007, 7: 16-10.1186/1471-2288-7-16.


  37. Juni P, Altman DG, Egger M: Systematic reviews in healthcare: assessing the quality of controlled clinical trials. BMJ. 2001, 323 (7303): 42-46. 10.1136/bmj.323.7303.42.


  38. Schulz KF, Grimes DA: Generation of allocation sequences in randomised trials: chance, not choice. Lancet. 2002, 359 (9305): 515-519. 10.1016/S0140-6736(02)07683-3.


  39. Black AD, Car J, Pagliari C, Anandan C, Cresswell K, Bokun T, McKinstry B, Procter R, Majeed A, Sheikh A: The impact of eHealth on the quality and safety of healthcare: a systematic overview. PLoS Med. 2011, 8 (1): e1000387-10.1371/journal.pmed.1000387.


  40. Heselmans A, Van de Velde S, Donceel P, Aertgeerts B, Ramaekers D: Effectiveness of electronic guideline-based implementation systems in ambulatory care settings - a systematic review. Implement Sci. 2009, 4: 82-10.1186/1748-5908-4-82.


  41. Martens JD, van der Weijden T, Winkens RA, Kester AD, Geerts PJ, Evers SM, Severens JL: Feasibility and acceptability of a computerised system with automated reminders for prescribing behaviour in primary care. Int J Med Inform. 2008, 77 (3): 199-207. 10.1016/j.ijmedinf.2007.05.013.


  42. Heselmans A, Aertgeerts B, Donceel P, Geens S, Van de Velde S, Ramaekers D: Family physicians’ perceptions and use of electronic clinical decision support during the first year of implementation. J Med Syst. 2012, 36 (6): 3677-3684. 10.1007/s10916-012-9841-3.


  43. Herzberg S, Rahbar K, Stegger L, Schafers M, Dugas M: Concept and implementation of a computer-based reminder system to increase completeness in clinical documentation. Int J Med Inform. 2011, 80 (5): 351-358. 10.1016/j.ijmedinf.2011.02.004.


  44. Wright A, Pang J, Feblowitz JC, Maloney FL, Wilcox AR, McLoughlin KS, Ramelson H, Schneider L, Bates DW: Improving completeness of electronic problem lists through clinical decision support: a randomized, controlled trial. J Am Med Inform Assoc. 2012, 19 (4): 555-561. 10.1136/amiajnl-2011-000521.




Acknowledgements

This study was funded by the Finnish Funding Agency for Technology and Innovation (TEKES), the National Institute for Health and Welfare (THL), Duodecim Medical Publications Ltd, ProWellness Ltd and the Doctoral Programs in Public Health (DPPH). We are grateful to the chief officers at the Sipoo health center, who gave their time to participate in the trial.

Author information



Corresponding author

Correspondence to Tiina Kortteisto.

Additional information

Competing interests

Authors TK, JR, MM, and PR declare that they have no competing interests. JK is Editor-in-chief of Current Care Guidelines, published by the Finnish Medical Society Duodecim, and a member of the editorial board for EBMeDS, Duodecim Medical Publications Ltd. IK is a salaried employee of Duodecim Medical Publications Ltd, the company that develops and licenses the EBMeDS decision support service. MK chairs the Current Care Guidelines board at the Finnish Medical Society Duodecim.

Authors’ contributions

All authors and other members of the EBMeDS study group (Jukkapekka Jousimaa, Helena Liira, Taina Mäntyranta and Peter Nyberg) were involved in conceiving the study and designing the trial. JR was responsible for data coding and analysis. TK led the writing process, supervised by MK, and all authors commented on sequential drafts and approved the final version of the manuscript.

Electronic supplementary material


Additional file 1: Examples of implemented decision support rules for diabetes and the reminders that may be triggered depending on the patient [26]. (PDF 10 KB)


Additional file 2: All analysed reminders [26]. The decision support rule ID is included to assist interested readers to obtain more information at (PDF 38 KB)


Additional file 3: Reminders that were excluded after local piloting [26]. The decision support rule ID is included to assist interested readers to obtain more information at (PDF 18 KB)


Additional file 4: Decision support rules that were excluded from the analyses for technical and other reasons [26]. The decision support rule ID is included to assist interested readers to obtain more information at (PDF 15 KB)


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Kortteisto, T., Raitanen, J., Komulainen, J. et al. Patient-specific computer-based decision support in primary healthcare—a randomized trial. Implementation Sci 9, 15 (2014).
