Study selection
Searching the databases resulted in a total of 1937 articles. After removing duplicates and non-English language articles, 1446 remained. Title and abstract screening excluded all but 36 articles. Examining the full-text records and reference lists of the remaining 36 articles resulted in 19 articles, reporting on 22 studies, that met the inclusion criteria. Five articles were discussed with the moderator to achieve consensus (kappa score 0.86; Fig. 1).
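The kappa score reported above is Cohen's kappa for two-rater agreement. As a minimal sketch of how such a score is computed, the following uses purely hypothetical screening counts (they are not the data underlying the reported value of 0.86):

```python
def cohens_kappa(both_incl, a_only, b_only, both_excl):
    """Cohen's kappa from a 2x2 table of two raters' include/exclude decisions."""
    n = both_incl + a_only + b_only + both_excl
    observed = (both_incl + both_excl) / n  # proportion of raw agreement
    # Chance agreement: product of each rater's marginal "include" rates,
    # plus the product of their marginal "exclude" rates.
    p_a = (both_incl + a_only) / n
    p_b = (both_incl + b_only) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two reviewers agree on 90 of 100 articles.
kappa = cohens_kappa(20, 4, 6, 70)  # roughly 0.73
```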
Study characteristics including risk of bias assessments
The 22 studies comprised 10 RCTs [54–63], 3 NRCTs [64–66], 1 CBA trial [67] and 8 quasi-experimental or observational studies [68–75] (Tables 1 and 2). The studies were conducted in Australia (n = 8) [54–56, 60, 64, 65, 68, 69], the USA (n = 4) [57–59, 73], the UK (n = 3) [62, 63, 75], the Netherlands (n = 2) [61, 72], Belgium (n = 1) [67], Canada (n = 1) [71], Finland (n = 1) [74], Germany (n = 1) [70] and Switzerland (n = 1) [66]. Intervention duration ranged from 1 day to 2 years. The included studies had sample sizes ranging from a group of 8 pharmacists to an intervention group of 1222 community pharmacies. Power calculations to determine an appropriate sample size were performed in three studies [62, 66, 70].
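For context on the power calculations mentioned above, a standard two-proportion sample-size calculation can be sketched as follows. The proportions and thresholds here are assumptions for illustration only; they are not taken from any of the included studies:

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate n per group to detect p1 vs p2 with a two-sided z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# e.g. detecting an improvement in guideline adherence from 40% to 60%
n = n_per_group(0.40, 0.60)  # 97 pharmacies per group
```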
The 22 studies included in the review were assessed for risk of bias using separate tools depending on the study design. Consensus was achieved through discussion. Moderation was required for one domain (“comparability domain” in the Newcastle-Ottawa tool), which was the main area of disagreement (Tables 3 and 4).
Fourteen studies were assessed using the EPOC risk of bias tool for RCTs, NRCTs and CBA studies. Of these, 11 studies were evaluated as having a high risk of bias [55–59, 61, 63–67] and three studies were assessed as having an unclear risk of bias [54, 60, 62]. Several domains contributed to risk of bias in multiple studies. These included inadequate handling of incomplete data, dissimilar baseline measurements and inadequate protection from contamination. It was not clear whether these inadequacies were due to lack of reporting or to flaws in research design. Two of the studies evaluated as having an overall unclear risk of bias used implementation strategies involving a computer prompt [54, 60].
The Newcastle-Ottawa tool for cohort studies was used to evaluate eight studies [68–75]. The study by Egen showed the least risk of bias, with the domains around selection of study groups and outcomes deemed low risk [70]. However, as in all of the cohort studies in this review, the study design or analysis meant that the comparability of cohorts could be a source of bias. None of the eight studies provided evidence of measures to correct for confounding variables.
Use of the two risk of bias tools led to the observation that many of the studies in this review were subject to selection bias and performance bias. Selection bias was inherent in the self-selection of study participants, which occurred in 18 of the 22 studies, even where randomisation methods were subsequently used [54–58, 60–64, 66–69, 71–74]. Performance bias relates to the propensity for the studies to be threatened by the Hawthorne effect, which is the tendency to modify behaviour when being observed [76, 77]. Simulated patient methodology is one method that has been used in pharmacy-practice research to avoid the Hawthorne effect [78]. In this review, simulated patient methodology was used in ten studies [55, 56, 62–64, 66, 68–70, 75]. However, in most instances, the studies used the simulated patients overtly rather than covertly, which, while ethically sound, does not eliminate the risk of bias.
Implementation strategies and their effectiveness
The areas of clinical practice that guideline implementation were designed to impact were varied. They involved chronic disease states [57, 60, 65, 71, 75], guidelines for the supply of particular classes of medications [55, 56, 59, 61, 64, 66, 68–70, 72], community pharmacy’s role in preventative health [58, 67, 73] and guidelines for appropriate pharmacy practice [63, 74]. The variation did not allow for any conclusions to be made about the effectiveness of implementation based on guideline characteristics.
Implementation activities were targeted to community pharmacy, and in particular to pharmacists in 20 studies [54–62, 65–75] and pharmacy support staff in 11 studies [56, 58, 61–64, 66, 68, 69, 72, 75]. In two studies, implementation activities were directed exclusively to pharmacy support staff, while three studies directed implementation activities to other health professionals and/or patients as well as community pharmacy staff [58, 65, 70]. Both studies in this review that compared professional and support staff demonstrated that better outcomes were achieved when the patient encountered a pharmacist [62, 75].
Eighteen studies involved multifaceted interventions [54–58, 60, 62–70, 73–75], and single intervention strategies were used in four studies [59, 63, 71, 72]. The multifaceted nature of the majority of implementation strategies did not allow for a clear understanding of the effectiveness of individual strategies. Overall, eight different implementation strategies were reported in the studies in the review, including use of educational materials, educational meetings, educational outreach visits, mass media campaigns, audit and feedback, reminders (including CDSS), practice support and fee for service.
The most commonly used implementation strategies were educational interventions. All but two of the studies in the review had an educational component in their intervention, including provision of educational materials, educational meetings or educational outreach visits [54–59, 61–70, 72–75]. Seven studies used educational strategies exclusively [59, 63, 65, 72–75]. Of these, two studies used behaviour change theory to inform the intervention [63, 73], with variable outcomes. The remaining five studies achieved modest positive outcomes. Educational outreach visits (sometimes called academic detailing) were undertaken in three studies [62, 65, 70]. One study, by Watson, compared educational outreach and educational meetings, individually and in combination, against a control [62]. This study did not demonstrate effectiveness for either strategy.
In six studies, educational interventions were combined with audit and feedback [55, 56, 64, 66, 68, 69]. Benrimoj and colleagues authored all of these studies. Five of the studies (from two papers) were in collaboration with De Almeida Neto [55, 56, 64, 68, 69] as the primary author, and one study was in collaboration with Sigrist [66] as the primary author. The studies using audit and feedback were generally effective. All reported positive outcomes except one, which had variable outcomes [66].
Practice support, in the form of follow-up visits, phone calls or provision of practice tools (e.g. checklists, patient handouts, documentation forms), was used in seven studies [55, 56, 58, 60–62, 71]. Only one study used practice support as a single intervention [71]. This pilot study provided pharmacists with practice tools for diabetes management. The proposed method was to implement the tools in conjunction with an education programme; however, the education programme was not available for the pilot study. The use of the practice support tools alone resulted in a modest positive outcome.
Computer prompts, as reminders to support practice change, were used in four studies [54, 57, 60, 67, 71]. The studies using this implementation strategy in the community pharmacy setting produced variable outcomes. The sub-studies by Curtain and Reeve [54, 60], which were both part of the larger Pharmacy Recording of Medication and Services (PROMISe) project [79], demonstrated very strong evidence of effect, whereas the study by Kradjan [57] demonstrated no effect. The strongly positive outcomes and comparatively robust methodology seen in the Curtain [54] and Reeve [60] studies indicate the best evidence for effectiveness of an implementation strategy in the community pharmacy setting.
The study by Egen used an intensive mass media campaign along with specific educational interventions directed to pharmacists and gynaecologists [70]. This multifaceted approach did not demonstrate evidence of effect in pharmacist- or patient-outcome measures.
A monetary incentive (fee for service) was only used in one study as an implementation strategy [57]. However, it was part of a multifaceted implementation, which also involved educational interventions and reminders. There was no evidence of effectiveness demonstrated.
Basis for implementation strategies
Most implementation strategies were educational and designed to improve knowledge and/or communication skills. However, this choice of implementation strategy was, in almost all instances, not based on an identified deficit in these areas. Tailored implementation strategies, based on addressing identified barriers to successful guideline implementation, were evident in only two studies [61, 63]. The barriers identified included organisational barriers, knowledge deficits, social factors such as patient indifference, and suboptimal communication by non-professional pharmacy staff. Both of the studies that used tailored strategies were ineffective in producing positive primary outcomes. Two other studies also addressed organisational factors (including workflow and time constraints), but without stating that the choice of strategy was tailored [58, 71]. Both proved to be moderately effective. Studies were based on specific behavioural theories or frameworks in six instances [55, 56, 58, 63, 66, 73]. The behavioural theories, frameworks and strategies used included “Motivational Interviewing” techniques [55]; “Stages of Change Model” [55, 56, 66]; “Trans‐theoretical Model for Change” [73]; “Social Cognitive Theory” [58]; “Health Beliefs Model” [66]; “Theory of Planned Behaviour” [63]; “Calgary-Cambridge Model of Communication Skills” [63] and “Cognitive Behavioural Therapy” [63] techniques. Despite most of the studies in this review reporting positive outcomes, two of the six studies based on behaviour change models did not [63, 66].
Characteristics of outcome measures including quality of evidence assessments
Fifteen studies [54–56, 58–60, 64, 65, 67–69, 71–73, 75] reported positive outcomes as a result of clinical guideline implementation. Five studies indicated no significant improvement in primary outcome measures as a result of the implementation strategy [57, 61–63, 70], and two studies reported minimal or variable results [66, 74]. Almost all studies reported multiple outcomes, making assessment of primary outcomes difficult. Objective measures were used to report outcomes in 18 studies [54–56, 59–69, 71, 73–75]. Self-reported outcomes, both patient and practitioner, were included in 15 studies [54, 55, 57, 58, 60–63, 65, 67, 70–74]. In many instances, these outcomes were assessed via novel unvalidated tools or modified validated tools, or the reporting was inadequate to ascertain the validity and reliability of the tool. Such measures are of limited benefit in evaluating successful guideline implementation. Surrogate measures were used in nine studies and included assessments of attitudes, self-efficacy, and patient and practitioner satisfaction/acceptability of interventions [55, 57, 60, 61, 65, 67, 71, 73, 74]. Four studies did not use any objective measures and relied upon self-reported (subjective) measures [57, 67, 71, 74].
All studies in the review measured process outcomes related to changes in the practice of community pharmacy staff. Patient outcomes were measured in only three studies, and all relied upon self-reported outcomes using patient surveys [54, 57, 70]. Only one of the patient outcomes measured was based on a health outcome, and it was a surrogate measure of perceived asthma control [57]. The lack of measurement of robust (objective) patient outcomes means that few conclusions can be drawn about the evidence of effectiveness of implementing guidelines in community pharmacy.
Most studies either did not comment on the sustainability of outcomes or mentioned it only as a limitation of the research. Sustainable practice change was noted in two studies, but the sample sizes were small [71, 73]. Four studies indicated a decline in outcomes over the course of data collection [54, 60, 61, 71]. One study indicated that ongoing communication was required for sustainability [75]. Thus, there is little evidence that the positive outcomes generated by implementation of clinical guidelines in most of the studies are sustainable.
Economic outcomes were determined in three studies [54, 59, 62]. Each economic evaluation examined different measures: the cost of implementing guidelines [62], cost savings to the health system due to improved guideline adherence [54] and an attempt at a more thorough cost-benefit analysis [59]. No analysis was undertaken to assess the quality of the economic modelling.
Six primary outcome measures were agreed upon and assessed using the GRADE approach [51]. Five outcomes were assessed as having very low quality of evidence, and one outcome was assessed as having low quality of evidence. The main reasons for this were the heterogeneity of the studies, the variability of outcomes and the potential for bias in the studies. None of the outcomes was considered to have moderate or high quality of evidence. Thus, even though the studies demonstrated positive outcomes from clinical guideline implementation, these outcomes are generally not a reliable indication of the likely effect; the likelihood that the true effect is substantially different is very high (Table 5).