Process evaluation of the data-driven quality improvement in primary care (DQIP) trial: active and less active ingredients of a multi-component complex intervention to reduce high-risk primary care prescribing
Implementation Science volume 12, Article number: 4 (2017)
Between 2 and 4% of emergency hospital admissions are caused by preventable adverse drug events. The estimated cost of such avoidable admissions in England was £530 million in 2015. The data-driven quality improvement in primary care (DQIP) intervention was designed to prompt review of patients vulnerable to adverse events from currently prescribed non-steroidal anti-inflammatory drugs (NSAIDs) and anti-platelets and was found to be effective at reducing this prescribing. A process evaluation was conducted in parallel with the trial, and this paper reports the analysis which aimed to explore response to the intervention delivered to clusters in relation to participants’ perceptions about which intervention elements were active in changing their practice.
Data were generated by in-depth interviews with key staff exploring participants’ perceptions of the intervention components. Analysis was iterative, using the framework technique and drawing on normalisation process theory.
All the primary components of the intervention were perceived as active, but at different stages of implementation: financial incentives primarily supported recruitment; education motivated the GPs to initiate implementation; the informatics tool facilitated sustained implementation. Participants perceived the primary components as interdependent. Intervention subcomponents also varied in whether and when they were active. For example, run charts providing feedback on change in prescribing over time were ignored in the informatics tool, but were motivating in some practices in the regular e-mailed newsletter. The high-risk NSAID and anti-platelet prescribing targeted was accepted as important by all interviewees, and this shared understanding was a key wider context underlying intervention effectiveness.
This was a novel use of process evaluation data which examined whether and how the individual intervention components were effective from the perspective of the professionals delivering changed care to patients. These findings are important for reproducibility and roll-out of the intervention.
High-risk prescribing in primary care
High-risk prescribing in primary care is a major concern for health-care systems internationally. Between 2 and 4% of emergency hospital admissions are caused by preventable adverse drug events [1, 2]. The National Institute for Health and Care Excellence (NICE) estimated in 2015 that avoidable drug-related admissions in England cost commissioners £530 million per year, and the combined cost of drug-related hospital admissions, emergency department and outpatient visits in the USA was estimated at $19.6 billion in 2013. A large proportion of these admissions are caused by high-risk prescribing of commonly prescribed drugs, with non-steroidal anti-inflammatory drugs (NSAIDs) and anti-platelets being the main or among the main drugs implicated, causing gastrointestinal, cardiovascular, and renal adverse events [5–7].
Data-driven quality improvement in primary care (DQIP) intervention and trial
In the UK, virtually all primary care prescribing is done by general practitioners (GPs). The DQIP intervention was systematically developed and optimised [8–10] and comprised three intervention components: (1) professional education about the risks of NSAIDs and anti-platelets via an educational outreach visit by a pharmacist, written educational material, and regular newsletters which also provided feedback on progress after the practice started the intervention; (2) financial incentives to review patients at the highest risk of NSAID and anti-platelet adverse drug events (ADEs), split into a participation fee of £350 and £15 per patient reviewed; and (3) access to a web-based IT tool to identify such patients and support structured review (the tool extracted data from GP practice systems to measure practice rates of high-risk prescribing, identify patients for review, bring together relevant data from different parts of the GP record to make review easier, and allowed recording of review decisions to ensure appropriate follow-up). The DQIP intervention was evaluated in a pragmatic cluster randomised controlled stepped wedge trial in 33 practices from one Scottish health board, where all participating practices received the intervention but were randomised to one of ten different start dates. The primary outcome of the trial was a composite of nine NSAID and anti-platelet prescribing indicators. The trial analysis of the primary outcome showed that across all practices, the targeted high-risk prescribing fell during the intervention period, from 3.7% immediately before to 2.2% at the end of the intervention period (adjusted OR 0.63 [95% CI 0.57–0.68], p < 0.0001). The intervention only incentivised review of ongoing high-risk prescribing, but led to reductions in both ongoing (1.5% at end vs. 2.6% pre-intervention, adjusted OR 0.60 [95% CI 0.53 to 0.67], p < 0.001) and ‘new’ high-risk prescribing (0.7 vs. 1.0%, adjusted OR 0.77 [0.68 to 0.87], p < 0.001).
Notably, reductions in high-risk prescribing were sustained in the year after financial incentives stopped. In addition, in pre-specified secondary analyses, there were reductions in emergency hospital admissions with gastrointestinal ulcer or bleeding (from 55.7 to 37.0/10,000 person-years, RR 0.66 [95% CI 0.51–0.86], p = 0.002) and heart failure (from 708 to 513/10,000 person-years, RR 0.73 [95% CI 0.56–0.95], p = 0.02).
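The crude effect sizes can be recovered from the proportions and rates quoted above; the following minimal sketch recomputes them (note that the published odds ratios are covariate-adjusted, so the unadjusted values below differ slightly from the reported figures):

```python
# Recompute unadjusted effect sizes from the proportions and rates reported
# in the trial results above. The trial reports *adjusted* odds ratios, so
# the crude values here are close to, but not identical with, those figures.

def odds(p):
    """Convert a proportion to odds."""
    return p / (1 - p)

# Primary outcome: high-risk prescribing fell from 3.7% to 2.2%.
or_primary = odds(0.022) / odds(0.037)
print(f"Unadjusted OR (primary outcome): {or_primary:.2f}")

# Secondary outcome: GI ulcer/bleeding admissions fell from 55.7 to
# 37.0 per 10,000 person-years; a rate ratio is the quotient of the rates.
rr_gi = 37.0 / 55.7
print(f"Rate ratio (GI admissions): {rr_gi:.2f}")
```

The unadjusted OR (about 0.59) is close to the reported adjusted OR of 0.63, and the gastrointestinal admission rate ratio reproduces the reported 0.66 exactly.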
Process evaluation of the DQIP intervention
Descriptions of complex interventions in the research literature often lack details about the context in which interventions were delivered and about the delivery and implementation of individual intervention components [13–15]. Such details are however important to decide whether and how an intervention can be implemented in routine care and to inform future research [14, 15]. Alongside the main DQIP trial, we therefore carried out a comprehensive mixed-methods process evaluation [16, 17] based on a cluster-randomised trial process-evaluation framework which we developed. Our framework emphasises the importance of considering two levels of intervention delivery and response that often characterise cluster-randomised trials of behaviour change interventions (although their relative importance will depend on intervention design). The first is the intervention that is delivered to clusters, which respond by adopting (or not) the intervention and integrating it with existing work. The second is the change in care which the cluster professionals deliver to individual patients. In DQIP, the delivery of the intervention to professionals was pre-defined, intended to be delivered with high fidelity across all practices and under the control of the research team, whereas the intervention delivered to patients was largely at the discretion of practices, who decided whether and how they reviewed patients and whether to change prescribing in those reviewed (similar to most health service interventions of this nature). We used this framework to structure our parallel process evaluation, mapping data collection to a logic model of how the data-driven quality improvement in primary care (DQIP) intervention was expected to work (Fig. 1).
Our process evaluation was also informed by normalisation process theory (NPT), which assists exploration of how interventions become integrated, embedded, and routinised into social contexts. NPT is a theory of implementation designed to assist interpretation of how interventions or new work practices are embedded, enacted, and operationalised within healthcare settings. Interventions or practices become routinely embedded through people working individually or collectively to enact them. The theory comprises four constructs: coherence, which refers to participants’ understanding of the intervention; cognitive participation, which focuses on enrolment and engagement with the work; collective action, which focuses on how the work is carried out; and reflexive monitoring, which concerns how participants assess their progress. NPT had utility qualitatively in sensitising the research team to response in relation to whether and how the intervention was incorporated into practice from the professionals’ perspective, and quantitatively in designing measures to assess implementation of the DQIP intervention.
Focus of this paper
The focus of this paper is on practice participants’ perceptions of the intervention delivered by the research team to participating practices, which had financial incentive, educational, and informatics components. Complex interventions which have multiple components are common, usually because researchers believe that components will be complementary, being either additive in their effect or synergistic (the whole being greater than the sum of its parts). In the analysis of the main trial, it is not possible to disentangle which components are effective or necessary. The aim of the analysis reported in this paper was therefore to examine professionals’ perceptions of, and responses to, the multicomponent intervention delivered to practices, and at which points in recruitment and implementation these components were perceived as more or less active. The study was reviewed by the Fife and Forth Valley Research Ethics Committee (11/AL/0251), and informed consent was obtained from all participants to participate and to publish anonymised data.
The overall design and methods have been described previously in the published protocol. In brief, the overall design was a mixed-methods parallel process evaluation which examined a set of pre-defined processes and their associations with change in high-risk prescribing at practice level. The quantitative element examined how change in prescribing at practice level was associated with practice characteristics and practice implementation of key processes and is reported separately (Process evaluation of the data-driven quality improvement in primary care (DQIP) trial: quantitative examination of representativeness of trial participants and heterogeneity of impact. Submitted). The qualitative element consisted of comparative case studies in 10 of the 33 participating practices, purposively sampled using maximum variation sampling to include a mix of those initially responding and not responding to the intervention by rapidly reducing their high-risk prescribing, as judged by visual inspection of run charts approximately 4 months after practices started the intervention. The case-study analysis of how practices adopted, implemented, and maintained the intervention is described separately (Process evaluation of the data-driven quality improvement in primary care (DQIP) trial: case-study evaluation of adoption and maintenance of a complex intervention to reduce high-risk primary care prescribing. Submitted). This paper examines professional participants’ perceptions of the multicomponent intervention delivered to practices, using qualitative analysis of interview data collected in the case-study practices.
In each practice, all interviews were carried out by AG (a researcher with over 10 years of qualitative research experience, already known to two of the practices from a previous project examining prescribing behaviour) with the most involved GP and one other GP, the practice manager and any attached primary care pharmacist, approximately 6 months after the practice started the intervention, and with the most involved GP again 9 to 12 months after starting the intervention to explore changes over time. Interviews were facilitated by an NPT-informed topic guide and lasted approximately 1 h. As part of the intervention in all practices, AG accompanied the pharmacist (TD) on the educational outreach visit (EOV) and made field notes detailing attendance and the practice’s response. Data were gathered between September 2011 and December 2013, in parallel with the trial, to capture changes over time.
The analysis was concurrent and iterative with data generation, allowing issues and themes identified to inform subsequent data generation and facilitate greater exploration. The analysis continued after data generation until no new themes emerged. The analysis was carried out by AG, with BG contributing through discussion of data and interpretation until he became aware of the trial results; it was completed by AG before she knew the outcome of the trial, meaning that qualitative interpretation was blind to trial findings and detailed quantitative process evaluation data. Interview audio-recordings were transcribed verbatim. To preserve anonymity, some identifiable details have been changed and pseudonyms used. A coding frame was developed inductively from field notes and initial interviews and based on our topic guides, framework, and logic model. The constant comparative method facilitated revision through detailed analysis. This coding frame was systematically applied to all data, facilitated by NVivo 8. Analysis utilised the framework technique and NPT as a conceptual framework. AG analysed the data collected in the study twice, inductively letting themes emerge from the data and deductively based on normalisation process theory. NPT interpretation and coding reliability was established through a workshop with NPT experts. In this paper, we drew on analysis from the coherence construct, which relates to participants’ perceptions of the intervention and was useful in identifying intervention components from the participants’ perspective. In our accompanying papers, we present the qualitative analysis from the remaining NPT constructs: cognitive participation, collective action, and reflexive monitoring (Process evaluation of the data-driven quality improvement in primary care (DQIP) trial: case-study evaluation of adoption and maintenance of a complex intervention to reduce high-risk primary care prescribing.
Submitted) and the quantitative data analysis from the NPT-informed questionnaires (Process evaluation of the data-driven quality improvement in primary care (DQIP) trial: quantitative examination of representativeness of trial participants and heterogeneity of impact. Submitted). The data were explored for negative cases. Practice names have all been anonymised.
The findings presented are from 38 professional interviews (ten lead GPs, of whom nine were interviewed twice; seven GPs less involved with DQIP; nine practice managers/administrators; and three practice pharmacists) and from approximately 11 h of field notes from the case-study EOVs (one practice requested and received the education and training twice because of initial implementation failure).
Tables 1 and 2 provide a detailed description of the intervention components defined in the published protocol [15, 20] based on the TIDieR checklist, with illustrative study materials provided in Additional files 1, 2, and 3. The intervention delivered to practices had three main components: (1) financial incentives; (2) education; and (3) a web-based informatics tool which used data extracted from GP electronic medical records. Each of these three main components had a number of subcomponents, included in the intervention for a range of rationales which are detailed in Table 2. Participants’ perceptions of these components and when they were perceived to have an effect are summarised in Table 2 and discussed in detail below.
GPs perceived that the offer of a financial incentive was important during recruitment since it was recognition of the additional work to be done in the context of already stretched work schedules and offered a means of generating extra income for the practice. The financial incentive was structured with an upfront payment of £350 ($600, €440) paid after the EOV and £15 ($25, €19) per patient reviewed paid after the end of the intervention in the practice. Most GPs said that the per-patient fee did not actually change whether or not they reviewed patients. However, in some practices, sampled for the case study analysis as early implementation failures (Orosay, Hellisay and Boreray), the interviewed GP felt increased awareness of the financial structure or payment could have facilitated greater implementation of changes in care for patients. In contrast, GPs in some practices which implemented the intervention immediately felt that DQIP was easy money for work they should already be doing. One GP went as far as questioning the legitimacy of paying GPs for safety work:
“… whether we should be earning for doing that particular thing I think I would feel a little bit … mmm, you know, this is something that we should be picking up on probably without someone dangling a carrot really …” (Hirta Practice, GP 3 Interview).
Some practices had to be repeatedly reminded to invoice for the work done, supporting the belief that in at least some practices, financial incentives played a limited role in mediating effectiveness even though the offer of payment helped to get initial engagement during recruitment.
The educational component had several elements: written material summarising the literature and providing prescribing advice; tailored newsletters summarising practice progress and offering support; and an EOV which both targeted knowledge and attitudes and facilitated discussion of how practices were going to organise themselves to do the DQIP work. All of these, and the initial recruitment material, were designed to clearly brand the work as being about patient safety.
Branding as patient safety and the NSAID and anti-platelet topic
Generally, participants in practices in the process evaluation said they signed up to the DQIP trial because they perceived that prescribing safety was clearly about good patient care and was therefore work they should be doing. GPs said that the NSAID and anti-platelet topics covered by DQIP were well-known safety issues, and the logic of the trial resonated with messages they had received from other sources, such as in Health Board organised ‘protected learning time’ educational sessions and from their practice pharmacist. As a result of this shared understanding, they perceived that their practice required good justification for not signing up for an intervention which targeted a well-known patient safety risk and paid them to do so. This high legitimation of the work also facilitated implementation of the intervention in all but four practices which took part in the trial and facilitated expeditious implementation in some practices.
“I felt this was actually genuinely useful, and of benefit to the patients… we’ve always had a bit of an interest in prescribing actually and quite enjoyed some of the little projects that we’ve done with the pharmacist…he is supporting this because it covers some of the issues we’ve already covered but in less detail.” (Monach Practice, GP 1 interview)
Prescribing advice, structured written educational material and educational outreach visit
The summary of the literature and educational material aimed to improve knowledge and motivate GPs to carry out the reviews. This was valued because it provided clear and up-to-date recommendations, brought the latest evidence and recommendations to the fore of their minds and aided consistent prescribing behaviour.
“… it, it was good, there were references that you could go away along and, and read if you wanted … that’s one of the benefits of these things, is if somebody’s done the research and presents it to you and we’re all singing from the same Hymn sheet.” (Hirta Practice, GP 1 interview)
However, these messages were not viewed as ‘new’, and although participants valued the educational material, they did not refer to it while delivering care to patients, perhaps reflecting that its messages were easily internalised. These views were consistent across all participants in the process evaluation. Participants contrasted this summary material with what was normally provided during other improvement activities, which was typically longer, less focused and perceived as being less useful.
Overall, the EOV was viewed by interviewees as useful but not essential (see Additional file 1 for the presentation used). They felt it was useful to have a verbal summary of the literature and recommendations and an overview of the DQIP tool, but these were not felt to be essential to implementing effective reviews. Consistent with this, some GPs who led on delivering their practice’s reviews using the tool did not attend the EOV. Interestingly, one practice (Orosay Practice) which failed to implement the DQIP intervention by not conducting any reviews using the tool still observed a reduction in the targeted prescribing, potentially because of an effect from the educational material.
“Yeah it’s been because of DQIP really yes, aye. Well I suppose we were always anxious prescribing in the elderly… eh, but it’s merely made me sit up and pay a bit more attention…
AG: when you see the patient’s name on the tool or now more generally are you better informed?
It’s more now when I see them, and I didn’t even know if their name’s on the tool … em it’s more of when I see them and I see that they’re on it (NSAID or antiplatelet).” (Orosay Practice, GP 1 interview)
Discussion about potential process to do the work
During the EOV, practices were offered the opportunity to discuss the ‘best’ process by which they could manage the workload. For some practices, these discussions played an important role in defining how they would organise the work, including roles and responsibilities. These discussions ensured the work commenced shortly after the EOV in the larger practices which implemented immediately, where administrative staff were given a co-ordinating role (Taransay and Hirta Practices), but were of less value in small practices where the intervention was delivered by fewer people.
“…we started immediately…a list is issued by (administrative staff member named), em to whoever perceived to be the em doctor who sees most of that patient to action it each week, and then it’s, the boxes are ticked and it’s gone back to them to compute.” (Taransay Practice, GP 1 interview)
Newsletters summarising practice performance using data from the tool were sent to practices at eight-weekly intervals. They were circulated among GPs in most practices, but were not formally discussed in any practice participating in the process evaluation (an example is shown in Additional file 2). In practices which immediately implemented DQIP, GPs felt the newsletters were nice to receive, allowing them to see their targeted ‘high-risk’ prescribing reducing. In two small practices with immediate implementation, the GPs responsible for the review work found the run charts showing practice progress motivating. Similarly, in two of the practices which initially did not implement the intervention, GPs felt the newsletters were motivating, pushing them to revisit the tool and review medication.
“Oh yes! Yes we did … yeah we did, em … yeah it kind of pushed us into doing it (reviewing prescribing)…when we got the monthly newsletter.” (Hellisay Practice, GP 2 Interview)
In order to be paid, GPs had to use the informatics tool to record review decisions. The tool provided feedback on changes over time in targeted prescribing (run charts), identified patients requiring a review, supported review by summarising relevant clinical information extracted from multiple areas of the GP clinical record, and allowed recording of review decisions which then determined whether and when the patient would be identified as needing review again. Although some practices initially intended to use a printout of identified patients to do reviews, all but one GP carried out the medication reviews online using the tool (see Additional file 3 for example screenshots). All GPs found the tool intuitive, straightforward and well structured, as the quote below illustrates:
“I liked it — I think we both did — it was, it was easy to use, it was intuitive, you didn’t need a half day tutorial on how to use the blessed thing.” (Hellisay Practice, GP 2 Interview)
The tool’s case-finding ability was particularly attractive, especially its identification of historical risk factors which GPs felt they were likely to overlook when carrying out a medication review using the same data in their own electronic medical record (e.g. previous peptic ulcer). The GPs also valued the focused data presentation, which only displayed information relevant to the prescribing decision.
“…it’s very straightforward in that you have the on-line tool and it gives you a very structured way of doing it which is, once you get into it much quicker, and other stuff that we do for the sort of GMS and the quality, the National Quality indicator things, em all involve us having to go in and do searches and find people and you know a lot of it’s done on bits of paper and things, so it, it’s done but it’s not, it’s not as easy, it’s, it’s actually made doing this really quite simple and that the tools you know, searches for the patients for you and things which is really good.” (Mingulay Practice, GP 2 Interview)
Although GPs were very positive about the tool, they would prefer a tool which was able to write to their clinical records to prevent double entry of review decisions, for example:
“I think it would be great, the only thing that would be really, really good is if it could talk to Vision [the clinical IT system]…” (Mingulay Practice, GP 1 Interview)
The majority of GPs also expressed a desire for a real-time intervention triggered by an alert when the patient consulted:
“…at the time of the prescription being done rather than retrospectively looking back at it, so that the prescription’s not issued in the first place, might be a good idea.” (Monach Practice, GP 1 Interview)
However, a number of GPs preferred a DQIP-like intervention which retrospectively corrected prescribing, because they felt consultations were already heavily time constrained and preferred reviewing outside of clinical consultation to allow sufficient time to adequately review medication.
“Correcting past behaviour … that can make it … it’s a bit of a frustration but I can’t see how it would be otherwise really in the sense that, I don’t know how you fancy an alarm going off every time you sort of try to prescribe something…” (Gighay Practice, GP 1 Interview)
There were no differences in opinion about the informatics tool between professionals in practices which initially responded to the intervention delivered to them and those who did not.
Performance graphs (run charts)
Within the informatics tool were run charts of change in high-risk prescribing over time, where practices could review their progress. These depicted the trend in targeted high-risk prescribing for the 2 years pre-DQIP and since the practice started receiving the DQIP intervention. The run charts in the tool were not perceived to be of value by most GPs, due to the relatively small number of patients identified in most practices, although as described above, the same run charts in the newsletters were motivating in some practices.
Although the component parts of the DQIP intervention were important at different stages of recruitment and delivery, they also had interactions which were important for effective delivery and implementation. Consistent branding of the DQIP intervention and the financial component ensured GPs were on board and engaged, the educational component provided clear messages which helped ensure consistency when making prescribing decisions using the tool, and the newsletters provided feedback and motivation to ensure engagement was maintained.
Our findings have shown that all primary components of the intervention were ‘active’, but at different stages. The financial incentive was perceived as most important during recruitment because it acknowledged the additional work required and offered a means of generating extra income. The education, including the focus on patient safety reflected in the way the work was branded, motivated the GPs to prioritise this work during initial implementation and was valued because it provided clear and concise prescribing advice. Notably, many GPs perceived that some or all of the elements of education were useful but not essential, but individuals varied in terms of which elements they valued most. The informatics tool was crucial in case-finding patients, in facilitating efficient medication review decisions, and in implementing the required changes. Many GPs expressed a desire for a real-time alerting element to the informatics as well as support for retrospective review (although it is important to recognise that the majority of the targeted indicators would have triggered an interaction alert at the point of prescribing, consistent with point-of-care reminders not being a panacea in this area).
Overall, all respondents perceived that some of the components and sub-components were active and synergistic, but there was variation between participants in which were most valued or perceived to be most active. There was therefore no clearly or consistently identified set of inactive components, and our interpretation is that all component parts should be delivered in any further roll-out. However, our prior expectations of how and when different components would be active were not always correct. In particular, financial incentives were perceived by participants as less important than we anticipated, being active in recruitment and initial engagement, but not in promoting sustained delivery to all eligible patients (notwithstanding the caveats described below about how professionals might talk about such incentives in interviews). Although the run charts and newsletters presented the same data, the run charts providing feedback on progress in the informatics tool were rarely looked at, whereas the newsletters were valued in some practices, possibly because the latter were accompanied by some individualised interpretation, emphasising that delivery mechanisms matter as well as what is being delivered. This is consistent with the wider audit and feedback literature, which emphasises that feedback needs to be optimised to context to maximise effectiveness, although this study was not designed to examine this directly.
It is important to note that professional interview data about financial incentives can be difficult to interpret, since some professionals may find it morally ambiguous to say that they require money to improve safety, making it more likely that they say that the incentives did not alter their actions. Nevertheless, as noted in the findings, some practices had to be repeatedly reminded to invoice for the work done, supporting the belief that in at least some practices financial incentives played a limited role in mediating effectiveness even though the offer of payment helped to secure initial engagement during recruitment.
It is also worth noting that the targeted prescribing topic and the intervention were intertwined, in that participants were already mostly persuaded that the targeted prescribing was risky, and their perceptions of the intervention components have to be understood in this context. One implication is that the same intervention components might not be as effective if targeting a prescribing topic that GPs did not perceive as important, or which they felt was less under their control. This is consistent with the findings of a trial of a much simpler feedback intervention in Scotland, where five of the targeted measures were similar to those in DQIP and were reduced, but the sixth measure, targeting anti-psychotic prescribing in older people, was unaffected by the intervention.
The normalisation process theory construct of coherence was useful in identifying and describing the components and sub-components of the intervention and when they were useful from the perspective of those who had engaged in some form with the DQIP intervention. For the analysis presented in this paper, we found normalisation process theory to be useful in understanding the nuances associated with collective implementation of the DQIP intervention in general practices, where much clinical work is shared.
This study has a number of limitations. The GPs and practices which participated in this study are a sample of approximately a third of the whole trial population and were sampled as outliers at both ends of the distribution of initial success in reducing prescribing, so their views may not be truly representative. Data collection and analysis were carried out by a researcher (AG) who was involved in the development of the intervention, although the researcher with primary responsibility for the intervention development and the trial (TD) had no input into the analysis and interpretation of this study. AG did discuss the data analysis and interpretation with BG, who is responsible for the research programme grant, meaning that the analysis had limited external scrutiny and challenge.
The analysis in this paper has focused on overall perceptions of when and how different components of the intervention were effective. The accompanying case study paper (Process evaluation of the data-driven quality improvement in primary care (DQIP) trial: case-study evaluation of a complex intervention to reduce high-risk primary care prescribing. Submitted), which qualitatively examines variation between practices in more detail, indicates that this plays out variably in different practices, although variation in implementation is also significantly driven by variability in the barriers experienced. The quantitative analysis reported in the second accompanying paper (Process evaluation of the data-driven quality improvement in primary care (DQIP) trial: quantitative examination of representativeness of the trial participants and heterogeneity of impact. Submitted) found that 30/33 practices achieved at least some reduction in high-risk prescribing, which is consistent with the finding in this paper that all interviewed professionals perceived that some or all of the intervention components were active.
In the wider literature, the DQIP intervention is most like the intervention evaluated in the pharmacist-led information technology intervention for medication errors (PINCER) trial. Important differences were that PINCER targeted a broader range of prescribing topics than DQIP, that it was delivered over 12 weeks by pharmacists, and that it was more standardised in how the pharmacists carried out medication reviews. However, the PINCER process evaluation did not explicitly examine the subcomponents of the intervention. A systematic review of main trial evaluations of pharmaceutical and non-pharmaceutical interventions found that evaluations of non-pharmaceutical interventions are significantly less likely to list ‘active ingredients’ (those elements of the intervention intended to lead to change in the outcome), and we have not been able to identify any other process evaluations of cluster-randomised trials of multicomponent complex interventions which have examined the ingredients of complex interventions (although such studies are not easy to find because of poor labelling and reporting of process evaluations more generally) [17, 18].
This paper provides a detailed description of the DQIP intervention delivered to practices to motivate and facilitate the intended change in how they delivered care to patients, using the TIDieR framework to structure the intervention description, and examines where and when the intervention components were more and less active. We show that all components of the intervention were active, but at different stages of recruitment and delivery of the intervention, and that components were perceived to be interdependent and synergistic in their effects. These findings are important for informing wider implementation, since they facilitate reproducibility. More broadly, we believe that trials of complex interventions should at a minimum report the details of their intervention using a framework such as TIDieR to provide a detailed description of the intended active ingredients of the intervention, but should ideally aim to examine whether and when each of the chosen ingredients is actually active from the perspectives of the intervention targets.
Data-driven quality improvement in primary care
Educational outreach visit
National Institute for Health and Care Excellence
Normalisation process theory
Non-steroidal anti-inflammatory drugs
A Pharmacist-led Information technology Intervention for Medication Errors
Template for intervention description and replication
Howard R, Avery A, Bissell P. Causes of preventable drug-related hospital admissions: a qualitative study. Qual Saf Health Care. 2008;17(2):109–16.
Hakkarainen KM, Hedna K, Petzold M, Hagg S. Percentage of patients with preventable adverse drug reactions and preventability of adverse drug reactions—a meta-analysis. PLoS One. 2012;7(3):e33236.
National Institute for Health and Care Excellence (NICE). Costing statement: medicines optimisation—implementing the NICE guideline on medicines information (NG5). 2015. Available at https://www.nice.org.uk/guidance/ng5/resources/costing-statement-6916717. Accessed 5 Nov 2016.
Institute for Health Care Informatics, IMS. Responsible use of medicines report. 2012. http://www.imshealth.com/en/thought-leadership/ims-institute/reports/responsible-use-of-medicines-report. Accessed 8 Dec 2016.
Howard RL, Avery A, Slavenburg S, Royal S, Pipe G, Lucassen P, Pirmohamed M. Which drugs cause preventable admissions to hospital? A systematic review. Br J Clin Pharmacol. 2006;63:136–47.
Budnitz DS, Lovegrove MC, Shehab N, Richards CL. Emergency hospitalisation for adverse drug events in older Americans. N Engl J Med. 2011;365(21):2002–12.
Leendertse AJ, Egberts ACG, Stoker LJ, van den Bent PM. Frequency of and risk factors for preventable medication-related hospital admissions in the Netherlands. Arch Intern Med. 2008;168(17):1890–6.
Guthrie B, McCowan C, Davey P, Simpson CR, Dreischulte T, Barnett K. High risk prescribing in primary care patients particularly vulnerable to adverse drug events: cross sectional population database analysis in Scottish general practice. Br Med J. 2011;342:d3514.
Dreischulte T, Grant A, Donnan P, McCowan C, Davey P, Petrie D, Treweek S, Guthrie B. A cluster randomised stepped wedge trial to evaluate the effectiveness of a multifaceted information technology-based intervention in reducing high-risk prescribing of non-steroidal anti-inflammatory drugs and antiplatelets in primary care: the DQIP study protocol. Implement Sci. 2012;7:24.
Grant AM, Guthrie B, Dreischulte T. Developing a complex intervention to improve prescribing safety in primary care: mixed methods feasibility and optimisation pilot study. BMJ Open 2014;4:e004153. doi:10.1136/bmjopen-2013-004153.
Brown C, Lilford R. The stepped wedge trial design: a systematic review. BMC Med Res Methodol. 2006;6(1):54.
Dreischulte T, Donnan P, Grant A, Hapca A, McCowan C, Guthrie B. Safer prescribing—a trial of education, informatics and financial incentives. N Engl J Med. 2016;374:1053–64.
Dopson S, Locock L, Chambers D, Gabbay J. Implementation of evidence-based medicine: evaluation of the promoting action of clinical effectiveness programme. J Health Serv Res Policy. 2001;6:23–31.
Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews? Br Med J. 2008;336(7659):1472–4.
Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, Altman DG, Barbour V, Macdonald H, Johnston M, Lamb SE, Dixon-Woods M, McCulloch P, Wyatt JC, Chan A, Michie S. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. Br Med J. 2014;348:g1687.
Medical Research Council. Developing and evaluating complex interventions: new guidance. 2008.
Medical Research Council. Process evaluation of complex interventions: UK Medical Research Council (MRC) guidance. 2015.
Grant A, Treweek S, Dreischulte T, Foy R, Guthrie B. Process evaluations for cluster-randomised trials of complex interventions: a proposed framework for design and reporting. Trials. 2013;14(1):15.
May C, Finch T. Implementing, embedding, and integrating practices: an outline of normalisation process theory. Sociology. 2009;43:535.
Grant A, Dreischulte T, Treweek S, Guthrie B. Study protocol of a mixed-methods evaluation of a cluster randomised trial to improve the safety of NSAID and antiplatelet prescribing: data-driven quality improvement in primary care. Trials. 2012;13:154.
Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm Policy Ment Health Ment Health Serv Res. 2015;42(5):533–44.
Grant A, Sullivan F, Dowell J. An ethnographic exploration of influences on prescribing in general practice: why is there variation in prescribing practices? Implement Sci. 2013;8(1):72.
Glaser B, Strauss A. The discovery of grounded theory: strategies for qualitative research. Chicago: Aldine Publishing Co; 1967.
Ritchie J, Spencer L, O’Connor W. Carrying out qualitative analysis. In: Ritchie J, Lewis J, editors. Qualitative research practice, a guide for social science students and researchers. London: Sage Publications Ltd; 2003.
Ivers NM, Grimshaw JM, Jamtvedt G, Flottorp S, O’Brien MA, French SD, Young J, Odgaard-Jensen J. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med. 2014;29(11):1534–41.
Guthrie B, Kavanagh K, Robertson C, Barnett K, Treweek S, Petrie D, Ritchie L, Bennie M. Data feedback and behavioural change intervention to improve primary care prescribing safety (EFIPPS): multicentre, three arm, cluster randomised controlled trial. BMJ. 2016;354:i4079.
Avery AJ, Rodgers S, Cantrill JA, Armstrong S, Cresswell K, Eden M, Elliott RA, Howard R, Kendrick D, Morris CJ, Prescott RJ, Swanwick G, Franklin M, Putman K, Boyd M, Sheikh A. A pharmacist-led information technology intervention for medication errors (PINCER): a multicentre, cluster randomised, controlled trial and cost-effectiveness analysis. Lancet. 2012;379(9823):1310–9.
Cresswell K, Sadler S, Rodgers S, Avery A, Cantrill J, Murray S, Sheikh A. An embedded longitudinal multi-faceted qualitative evaluation of a complex cluster randomized controlled trial aiming to reduce clinically important errors in medicines management in general practice. Trials. 2012;13(1):78.
McCleary N, Duncan E, Stewart F, Francis J. Active ingredients are reported more often for pharmacologic than non-pharmacologic interventions: an illustrative review of reporting practices in titles and abstracts. Trials. 2013;14:146.
We would like to thank all participating practices, the Advisory and Trial Steering Groups, and Debby O’Farrell who provided administrative support.
The study was supported by a grant (ARPG/07/02) from the Scottish Government Chief Scientist Office. The funder had no role in study design, data collection, analysis and interpretation, the writing of the manuscript or the decision to publish.
Availability of data and materials
The observational and interview qualitative data are not available to share because NHS Research Ethics approval for the study was granted on the basis that only the research team would have access to the raw data.
BG and AG designed the study. AG led the data collection and analysis, supported by BG in analysis and interpretation. AG wrote the first draft of the manuscript with all authors commenting on subsequent drafts. All authors read and approved the final manuscript.
The authors declare that they have no competing interests.
Consent for publication
All participants gave consent for the publication of anonymised data.
Ethics approval and consent to participate
The study was approved by the East of Scotland Research Ethics Service (11/AL/0251), and informed consent was received from all participants prior to data collection.
Educational outreach visit presentation. (DOCX 699 kb)
Example of the 8-weekly newsletter sent 24 weeks after the practice started the intervention, where there was little initial change in prescribing. (DOCX 103 kb)
Screenshots from the DQIP tool. (DOCX 991 kb)
About this article
Cite this article
Grant, A., Dreischulte, T. & Guthrie, B. Process evaluation of the data-driven quality improvement in primary care (DQIP) trial: active and less active ingredients of a multi-component complex intervention to reduce high-risk primary care prescribing. Implementation Sci 12, 4 (2017). https://doi.org/10.1186/s13012-016-0531-2