Explaining variable effects of an adaptable implementation package to promote evidence-based practice in primary care: a longitudinal process evaluation

Abstract

Background

Implementing evidence-based recommendations is challenging in UK primary care, especially given system pressures and multiple guideline recommendations competing for attention. Implementation packages that can be adapted and hence applied to target multiple guideline recommendations could offer efficiencies for recommendations with common barriers to achievement. We developed and evaluated a package of evidence-based interventions (audit and feedback, educational outreach and reminders) incorporating behaviour change techniques to target common barriers, in two pragmatic trials for four “high impact” indicators: risky prescribing; diabetes control; blood pressure control; and anticoagulation in atrial fibrillation. We observed a significant, cost-effective reduction in risky prescribing but there was insufficient evidence of effect on the other outcomes. We explored the impact of the implementation package on both social processes (Normalisation Process Theory; NPT) and hypothesised determinants of behaviour (Theoretical Domains Framework; TDF).

Methods

We conducted a prospective multi-method process evaluation. Observational, administrative and interview data collection and analyses in eight primary care practices were guided by NPT and TDF. Survey data from trial and process evaluation practices explored fidelity.

Results

We observed three main patterns of variation in how practices responded to the implementation package. First, in integration and achievement, the package “worked” when it was considered distinctive and feasible. Timely feedback directed at specific behaviours enabled continuous goal setting, action and review, which reinforced motivation and collective action. Second, impacts on team-based determinants were limited, particularly when the complexity of clinical actions impeded progress. Third, there were delivery delays and unintended consequences. Delays in scheduling outreach further reduced ownership and time for improvement. Repeated stagnant or declining feedback that did not reflect effort undermined engagement.

Conclusions

Variable integration within practice routines and organisation of care, variable impacts on behavioural determinants, and delays in delivery and unintended consequences help explain the partial success of an adaptable package in primary care.

Background

Implementing any evidence-based practice within the constraints and competing priorities of United Kingdom primary care is difficult. Implementing numerous evidence-based practices from a wide range of clinical guidelines in this context is even more challenging [1]. Systematic reviews indicate that a range of implementation strategies, such as audit and feedback and educational outreach, can enhance implementation [2,3,4,5,6]. Studies typically focus on evaluating interventions for single clinical conditions (e.g. type 2 diabetes) or behaviours (e.g. antibiotic prescribing). This limits generalisability, or the confidence that an implementation strategy that works for one targeted problem will work for another [7]. There are insufficient resources to develop and evaluate interventions for each implementation problem separately.

We developed an implementation package for UK primary care with the aims of being adaptable for different clinical priorities and sustainable within existing resources. We selected four “high impact” quality indicators: risky prescribing (focused on non-steroidal anti-inflammatory drugs; NSAIDs); control of type 2 diabetes; blood pressure control in people at high risk of cardiovascular events; and anticoagulation for stroke prevention in atrial fibrillation (Table 1) [1]. We conducted interviews with primary care staff using the Theoretical Domains Framework (TDF) and identified a common set of determinants of adherence to these indicators [8]. We consulted with primary care stakeholders to develop an implementation package based upon evidence-based implementation techniques, such as audit and feedback, educational outreach, and computerised prompts and reminders. This implementation package incorporated behaviour change techniques tailored to the determinants identified in the interviews with primary care staff [8], with content adapted to each of the four indicators [9,10,11] (Table 2). Whilst indicators could not be completely independent of the intervention (e.g. given that feedback used the indicators), the interventions were designed so that indicators and related content could be dropped in.

Table 1 Clinical Indicators targeted by the intervention package
Table 2 Intervention package TIDIER description [11]

To test this implementation package, we conducted two parallel, cluster-randomised trials using balanced incomplete block designs. Randomly assigned general practices received an implementation package targeting either diabetes control or risky prescribing in Trial 1 or targeting blood pressure control or anticoagulation in atrial fibrillation in Trial 2. Every practice was allocated to an active intervention, to balance any nonspecific effects across trial arms and thereby increase confidence that any difference in outcomes was attributable to the intervention [12]. We observed a significant, cost-effective reduction in risky prescribing and insufficient evidence of effect for the other three indicators.

Process evaluation aim and rationale

Theory-based process evaluations of implementation interventions can identify factors that influence implementation and achievement of desired outcomes. We incorporated a parallel process evaluation into the trials to explore how real-life implementation compared with planned, theorised implementation. To do this, we collected fidelity and process data throughout and after the trial. We also chose one sociological theory (Normalisation Process Theory (NPT)) and one behavioural framework (TDF) that offered complementary insights into individual and group behaviours that influence implementation. Whilst the TDF [10] identifies the cognitive, affective, social and environmental determinants most relevant to implementation, its strength is identifying self-reported influences on capability, opportunity and motivation [13]. NPT [14, 15] provides an understanding of the dynamic social processes involved in implementation [16, 17]. NPT proposes that achievement is more likely when participants value the intervention (coherence), commit to engage (cognitive participation), commit staff and resources and work towards change (collective action), and appraise the package as useful (reflexive monitoring).

We sought to identify the social processes around implementation within primary care, guided by NPT, and the influence of the package on hypothesised determinants (TDF), namely: knowledge; beliefs about consequences; memory; social and professional role; and environmental context and resources [8].

Methods

Study design and participants

We used a multi-method approach comprising a longitudinal qualitative evaluation, a survey and an analysis of trial process data. Alongside opt-out trial recruitment, we recruited an additional eight practices from West Yorkshire, UK, to the qualitative evaluation. All were ineligible for the trials due to prior involvement in intervention development [8]. Process evaluation practices varied in list size and represented the geographical variation of the trial. Pre-intervention achievement was broadly comparable between trial and process evaluation practices across trials and indicators, with any variations reflecting the smaller sample of process evaluation practices [18]. Each practice was assigned a pseudonym and an independent statistician randomly assigned two practices to each indicator, balancing allocation by locality and practice list size (Table 3). Trial and process evaluation practices received the implementation package concurrently (Fig. 1).
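
To illustrate the kind of constrained randomisation described above, the sketch below re-randomises until a balance rule holds. This is our own minimal illustration, not the independent statistician's actual procedure: the locality labels and list sizes are invented, “Moor” is a placeholder for the eighth practice pseudonym, and the balance rule (each indicator's pair spans both localities and mixes smaller and larger practices) is an assumption about how allocation was balanced.

```python
import random

# Hypothetical data: (pseudonym, locality, list size). Localities and list
# sizes are invented for illustration; "Moor" is a placeholder pseudonym.
practices = [
    ("Treetop", "A", 4200), ("Flower", "B", 9800),
    ("Valley", "A", 7600), ("River", "B", 3900),
    ("Lake", "A", 11200), ("Hill", "B", 6100),
    ("Dale", "A", 5400), ("Moor", "B", 8700),
]
indicators = ["risky prescribing", "diabetes control",
              "blood pressure control", "anticoagulation"]

def balanced(pair, median):
    """Assumed balance rule: a pair must span both localities and mix
    practices above and below the median list size."""
    localities = {p[1] for p in pair}
    sizes = {p[2] >= median for p in pair}
    return len(localities) == 2 and len(sizes) == 2

def allocate(seed=None):
    """Re-randomise until every indicator's pair satisfies the balance rule."""
    rng = random.Random(seed)
    median = sorted(p[2] for p in practices)[len(practices) // 2]
    while True:
        shuffled = rng.sample(practices, len(practices))
        pairs = [shuffled[i:i + 2] for i in range(0, len(shuffled), 2)]
        if all(balanced(pair, median) for pair in pairs):
            return dict(zip(indicators, pairs))

for indicator, pair in allocate(seed=1).items():
    print(indicator, "->", [name for name, _, _ in pair])
```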

Table 3 Process evaluation practices
Fig. 1 Multifaceted adaptable implementation package as planned

A social science researcher (CH) independent of the trial team conducted the qualitative field work. She observed and collected data on how participants engaged with and understood the package, aiming to act as a non-participant observer. All trial data were analysed by an independent team of statisticians.

Data collection

To assess fidelity, we collected data on delivery (extent delivered as intended), receipt (extent understood and engaged with) and enactment (extent applied in clinical practice) of intervention components from trial and process evaluation practices (see Table 4 for a summary) [19]. Fidelity was also tracked electronically (for e-mailed feedback and computerised searches), using structured logs kept by outreach facilitators and, in process evaluation practices, via observational notes kept by CH. To assess fidelity further, and in particular to evaluate the visibility and enactment of intervention components in trial practices, we also surveyed all practices by e-mail after data collection. This post-trial survey was added to the study protocol as a further source of fidelity data once the trial was underway, and explored whether individual intervention components were received by practice staff, perceived as relevant, shared and discussed, and changed the organisation of care. CH also held de-briefing conversations with outreach facilitators to explore their perceptions of intervention delivery and uptake.

Table 4 Fidelity of delivery, receipt and enactment for each intervention component

For the qualitative evaluation, CH met with practice staff prior to intervention delivery to establish rapport and a sense of pre-intervention context. She collected observational (e.g. practice meetings), documentary (e.g. clinical protocols, letter templates etc.), and interview data related to awareness and use of the implementation package over 12 months at each practice. NPT and TDF constructs informed fieldwork, guiding but not delimiting data collection [20].

CH conducted individual semi-structured interviews with the relevant clinical lead, practice manager and other staff involved in the organisation or delivery of care for each indicator at two time-points in each practice. Initial interviews explored roles and responsibilities, barriers to achievement and early responses to the implementation package (Appendix 1. Longitudinal interview guide). Follow-up interviews throughout the intervention period explored the perceived usefulness of the package over time. All interviews were audio-recorded and transcribed verbatim.

CH used field notes to record informal conversations with staff and observations in non-clinical areas, at relevant practice meetings, and at outreach meetings (Appendix 2. Observational guide).

Practices were prompted to collate indicator-related documents (e.g. treatment protocols, letter templates, patient leaflets and minutes from practice meetings) in a study box given to practice managers. Practices chose which documents to share with the researcher; related documents were reviewed at the end of the study.

CH conducted focus groups with each practice towards the end of the study to reflect on their overall experience, intended indicator work, and what did and did not support implementation (Appendix 3 Interview guide for final practice meeting). Practice managers were asked to invite relevant staff.

Data management and analysis

Interview transcripts and detailed field notes were anonymised and managed in NVivo 10 (QSR International, Warrington, UK). We developed a coding framework (Appendix 4 Table 9 Normalisation Process Theory (NPT) coding dictionary and Appendix 4 Table 10 Theoretical Domains Framework (TDF) coding dictionary) with inductive and deductive elements guided by NPT and TDF constructs [21, 22]. We created chronological practice narratives and process models for each practice after an initial directed content analysis. The narratives outlined delivery, exposure, and enactment within each practice over time and the process models illustrated the implementation processes within practices and their interactions with the components. CH undertook coding and constructed the practice narratives and process models. These were reviewed and refined iteratively in multi-disciplinary research team meetings (with experience in social sciences, implementation science and primary care). To explore fidelity, we compared practice process models with an idealised process model which outlined implementation as intended to identify and theorise delays and unintended consequences of the intervention.

Descriptive quantitative fidelity data collected from all trial and process evaluation practices informed interpretation of the process evaluation practice narratives. For the post-intervention survey, fidelity was considered high if practices received feedback reports, accepted outreach, and accessed computerised searches; medium if they received feedback and either accepted outreach or accessed searches; and low if they only received feedback reports.
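
As an illustration only, this categorisation can be written as a simple decision rule. The following minimal sketch is ours, not part of the study materials; the function and argument names are hypothetical.

```python
def fidelity_category(received_feedback: bool,
                      accepted_outreach: bool,
                      accessed_searches: bool) -> str:
    """Categorise a practice's fidelity from post-intervention survey responses.

    High: received feedback reports, accepted outreach and accessed searches.
    Medium: received feedback plus either outreach or searches.
    Low: received feedback reports only.
    """
    if not received_feedback:
        # All practices were sent feedback reports; if none were received,
        # default to the lowest category.
        return "low"
    if accepted_outreach and accessed_searches:
        return "high"
    if accepted_outreach or accessed_searches:
        return "medium"
    return "low"

# Example: a practice that received feedback and accessed searches,
# but declined outreach, would be categorised as medium fidelity.
assert fidelity_category(True, False, True) == "medium"
```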

We conducted analyses and constructed practice narratives before the trial analyses in February 2017. We then interrogated the practice narratives further in the light of the trial findings in June 2017 (Fig. 2).

Fig. 2 Comparing and contrasting engagement and predicted achievement in the four packages (categories predicted prior to trial results and confirmed by trial findings)

Results

We collected data from 144 trial and eight qualitative evaluation practices. CH conducted 64 interviews with practice staff, approximately ten hours of observation, and an end-of-study focus group at each process evaluation practice. Fifty-nine staff from 57 practices (38% of all trial and process evaluation practices) responded to the post-trial fidelity survey. CH interviewed all 15 outreach facilitators.

We prospectively identified three patterns of intervention exposure and enactment which help explain the success in reducing risky prescribing and the failures to improve the other three indicators. These patterns were that (i) the intervention achieved integration by meeting the needs of the practice and sustaining collective action, (ii) exposure to the intervention limited engagement and the pace of new ways of working, and (iii) there were drops or delays in action as unintended consequences of intervention components and their delivery. We illustrate these three patterns with examples from practices, drawing on NPT and TDF to situate these patterns theoretically. Figure 2 presents an overview.

Pattern 1 Achieving integration: meeting the needs of the practice and sustaining collective action (Table 5).

Staff in practices that targeted risky prescribing and anticoagulation considered that the intervention package supported change in important clinical areas; it both aligned well with practice goals and was sufficiently differentiated from what practice staff were already doing. Staff found the feedback reports informative in both showing how their achievement compared with that of other practices and highlighting the consequences of change. The feedback also appeared to leverage social influence effectively.

The Chair said ‘yes, I think it made it much clearer what the risk is, that you were actually saving people’s lives by anticoagulating’

Observation, final practice meeting, Flower (anticoagulation)

[GP partner] commented on the quote on the side, mentioning that [the expert quoted] was a known atrial fibrillation expert—knows what he is talking about

Observation, Valley (anticoagulation)

Outreach was a critical time for enrolling staff in the intervention. Facilitators generally perceived outreach sessions for risky prescribing and anticoagulation to be successful because participants were those involved in the relevant clinical work. This was perhaps easier to achieve for risky prescribing and anticoagulation than for diabetes and blood pressure: practices identified fewer staff members (e.g. the clinical lead prescriber or an in-house pharmacist), with clear divisions of labour, as critical to the organisation and delivery of care.

In contrast to diabetes and blood pressure practices, risky prescribing and anticoagulation practices had comparatively few patients to review (approximately 30-50 vs approximately 200-300 in diabetes and blood pressure practices). This meant that risky prescribing practices did not require substantial re-organisation of resources or working patterns to review patients. Anticoagulation staff began to re-organise resources to review patients previously reviewed in secondary care. Our computerised searches facilitated this, providing staff with access to patient lists.

Only risky prescribing practices were observed re-directing staff resources into regular computerised searches. Prescribing clerks in one such practice started alerting doctors to review repeat prescriptions following computerised prompts; the other practice disabled the prompts during consultations as they were considered disruptive.

One risky prescribing practice implemented repeat audits, which may have enabled a continuous feedback loop and helped sustain the work. In this practice, it was notable that the searches were considered useful and perceived as routine work for the prescribing clerk and practice manager. Moreover, there was evidence that each staff member’s role was clearly outlined from the first intervention-related meeting and staff trusted each other’s capacity and ability to engage with this work over time.

We did, we searched once every month (...) And then we reviewed, brought in all those patients in that we hadn’t…treated that were on the recall list that we hadn’t treated, and we reviewed them. So, although it’s more work (...) We were on top of it

GP lead, interview, Treetop (risky prescribing)

The intervention package seemed to meet a perceived need in risky prescribing and anticoagulation practices, providing desired information about the topic, trusted evidence of the consequences of action, and motivation to change practice. In addition, staff collectively believed they had the capability to achieve what was required. It was relatively easy for practices to identify key people to carry out the work, and substantial re-organisation of resources was not required. For risky prescribing practices in particular, it seemed that the searches and (to a lesser extent) prompts made collective action both more feasible and sustainable.

Table 5 Achieving integration and collective action: TDF and NPT in practice

Pattern 2 Limited coherence: not targeting the right determinants and outcomes (Table 6).

The diabetes and blood pressure practices were initially enthusiastic about the intervention and were observed discussing reports in practice meetings and participating in outreach sessions. These practices had clinical leads and numerous clinical and administrative staff involved in care around these indicators. However, it soon became clear that the intervention tended not to fit with practice teams’ perspectives and needs in two significant ways.

First, practice staff felt they were already aware of, and working towards, achievement in these areas. The collective view tended to be that the practices had invested significant resources into delivering care in these areas and that there was no capacity—and little incentive—to change existing structures. Practices tended to believe that they already knew what was needed, there was little value to be gained in changing their systems and processes, and the intervention components did not add value to their work. The intervention was therefore not experienced as bringing anything new to the practices, and little effort was expended in considering change to work organisation within the practice team.

[GP partner said] we have got good systems, patients do get reviewed (…) we need to make sure [blood pressure] doesn’t break our systems.

Observation, educational outreach, Lake (blood pressure)

Second, practice teams drew on discourses around the feasibility and desirability of achieving the targeted outcomes. Outcomes targeted in the study for diabetes and blood pressure involved a composite endpoint (HbA1c, blood pressure and cholesterol) and achievement of recommended blood pressure levels in patients at high risk of cardiovascular events, respectively. These outcomes tended to be more ambitious than those required for the Quality and Outcomes Framework (QOF), an existing incentive scheme for UK primary care [23]. Within the practice teams, there were evident splits: some staff saw these more ambitious targets as desirable and pushed for additional work to meet them, whilst other staff considered themselves already at capacity and viewed stricter targets as potentially damaging to patient rapport. Intervention-related discussions raised the policy context (where practices are remunerated for meeting QOF targets and perceived as under-funded) as well as current team contexts (in terms of skills, capabilities, and roles). Although there were champions of the intervention in some practices, there was little evidence of a shared coherent vision of its value, of clear agreed changes to staff roles and responsibilities, or of sequencing of interdependent team-based behaviours.

at the outreach meeting, the practice had discussed adding in some hypertension work during the flu work and he said yes, he remembered, but that was wildly unrealistic

Observation, GP interview, Hill (blood pressure)

[It] was targeting too many patients, they didn’t have the resources. The chair agreed, when you see a list to review of about 100 patients, your heart sinks. The PM said we refined the searches, then the pharmacist looked at it, and then about 30 people were on the list given to the diabetes lead so he could look if it was clinically worthwhile to doing anything with them. He said that they just don’t have the time or capacity

Observation, final practice meeting, River (diabetes)

Whilst the majority (143; 94%) of trial and process evaluation practices created an action plan during outreach, facilitators reported that practice staff varied in their ability to select targets and set manageable goals for indicators which included significant numbers of patients. Action planning seemed more challenging for diabetes and blood pressure practices as the workload extended across the whole practice and patient lists were large. In particular, diabetes and blood pressure outreach sessions were often delivered during routine practice meetings and were challenging to manage due to the large number of attendees, each with greater or lesser incentive and role in completing work regarding the indicator. Action plans resulting from these sessions tended to be considered less feasible by facilitators in that they rarely specified named individuals for specific work or allocated a date for reviewing progress.

Ultimately, most staff in these practices engaged only passively with the intervention, continuing to work to established targets and structures (see Fig. 3). There was little evidence that searches or outreach support contributed to changes in organisation or engagement. Where there was engagement, it typically was not organised in a sustainable manner (e.g. one practice used a medical student as an extra resource rather than assigning a role to a permanent staff member).

Fig. 3 Fidelity of delivery and engagement as intended and observed variations indicated by stop signs

The GPs said they discussed the reports, it possibly raised the consciousness, but that’s it

Observation, final practice meeting, River (diabetes)

It worked really well while I had my student over the summer (…) I think we made a massive improvement at the beginning and then it’s sort of tapered off as [we] just couldn’t keep the momentum going I think

Lead GP, interview, Lake (blood pressure)

Table 6 Failure to Cohere: TDF and NPT in practice

Pattern 3 Drops or delays in action: unintended consequences of the intervention components and their delivery (Table 7).

Across all indicators, we observed how the ways in which specific intervention components were delivered and received created unintended consequences that impeded implementation. Delays and difficulties in delivering outreach and associated pharmacist support impeded the ability of practices to improve their achievement. Figure 3 provides an overview of observed variations.

Outreach was perceived as important and frequently viewed by practices as the start of the intervention. This was an unintended consequence of the outreach offer, as outreach was conceptualised by the intervention team as an adjunct to audit and feedback (delivered at the start of the intervention) rather than the main intervention component. To complicate matters further, it was intended that practices would receive an initial outreach meeting within the first six months of the intervention period, with a follow-up visit between six and twelve months. Thirty-eight (57%) trial practices received an initial visit as intended (i.e. during months one to six), whilst for 29 (43%) the initial visit took place in months six to 12, limiting the time available for implementing changes. Data from outreach facilitator logs suggested that delays were mostly due to ensuring key clinician availability and a lack of meeting space, rather than the availability of facilitators. Notably, many practices sought to ensure key clinicians were present, demonstrating that they regarded educational outreach as important. The combination of practices perceiving the outreach visit as the intervention start, coupled with delays in delivery, meant that practices enacted fewer changes than hoped for in the earlier months of the trial, which reduced the potential for indicator improvement. In addition, facilitators did not have access to electronic health records to prepare for the outreach meeting, limiting their ability to discuss patient-specific barriers, facilitate goal setting and initiate action. For some practices, the delay in accessing outreach meant they did not actively engage with the intervention until over halfway through the intervention year.

we’ve kind of waited for [outreach support] to happen (…) I hadn’t appreciated that we actually needed to be chasing that up and organising it!

Lead GP, interview, Dale (diabetes)

Only 16 (24%) of the trial and process evaluation practices that received an outreach visit were offered two days of pharmacist time to enable patient identification and clinical review. This support was mostly delivered remotely by a dedicated pharmacist, not by the visiting facilitator as planned. Moreover, this support could not be delivered within the first six months as intended; this delayed action in practices that waited for assistance and then limited the time available for actions to be implemented and take effect. This may be particularly relevant to diabetes and blood pressure practices that felt unable, or were resistant, to act due to greater patient numbers.

Reminders (of blood pressure targets, risky prescribing and anticoagulation contraindications) were rarely observed in the qualitative evaluation practices and seldom recalled by staff.

The diabetes and blood pressure practices mostly did not identify a need to change their work organisation. Diabetes and blood pressure trial outcomes were perceived as ambitious because they were based on achievement of a composite set of indicators, which identified larger numbers of patients for review by already stretched staff. However, practices did continue to review the reports, comparing their achievements with those of other practices on the targeted outcomes. This repeated feedback had the unintended consequence of generating negative emotion in some practices, ultimately de-motivating staff, who then disengaged from the intervention or questioned the value of changing practice.

He said that they had felt like they had done quite a lot of work but this was not reflected in the figures. He laughed as he said it, it felt a bit dispiriting really. He felt they were doing so much work just to stay in the same place. Other people nodded and agreed

Observation, final practice meeting, River (diabetes)

In one practice, the practice manager stopped disseminating reports when there was no significant positive change in achievement. The intervention therefore became less visible within the practice over time.

I mean to be honest with you normally we’d share it with all the partners, but because the results didn’t look that good to me, I didn’t want to embarrass [GP - diabetes lead] by giving it to all the partners

Interview, Practice manager, Dale (diabetes)

Not only did the intervention fail to target the right determinants and fail to differentiate itself from routine work, it also stimulated a negative emotional response as it gave practices feedback that did not reflect their perceived efforts around those indicators. This also meant that the intervention lost its influence over time, as the staff either actively avoided the data or questioned its value or accuracy.

Table 7 Unintended consequences: TDF and NPT in practice

Discussion

We observed three main patterns that may explain why an adaptable implementation package was effective in improving care for one out of four targeted indicators. First, in integration and achievement, the package “worked” when it was considered distinctive and feasible. Timely feedback directed at specific behaviours enabled continuous goal setting, action and review, which reinforced motivation and collective action. For one indicator (risky prescribing), the social processes and behavioural determinants matched well and had the desired impact of increasing motivation and action in the desired direction. Second, impacts on team-based determinants were limited, particularly when the complexity of clinical actions impeded progress. In these cases, the intervention targeted an area of agreed clinical need but was not adequately tailored to the complexities of team dynamics and systems. Third, there were delivery delays and unintended consequences. Delays in scheduling outreach and an unintended overemphasis on the status of outreach reduced ownership and time for improvement. As a consequence of delayed action, receiving repeated stagnant or declining feedback also undermined engagement.

A recent mixed-method process evaluation suggested that the combined use of psychological and sociological theory increased the explanatory potential of a hospital-based process evaluation [24]. One novel feature of this study is that we compared general practice responses for four different evidence-based indicators targeted by an adapted implementation package with common components and behaviour change techniques. Our findings suggest the importance of selecting indicators that have clear actions for individuals; complex indicators involving a sequence of interdependent team behaviours to change systems of care were less successful. Our use of both frameworks allowed us to create richer explanations of behaviour at group and individual levels, and of how these levels interact, which was valuable for understanding the trial outcomes. We illustrate how, under particular conditions, the implementation package achieved integration and collective action, failed to cohere, and led to unintended consequences (Tables 5, 6, and 7). Using the TDF constructs allowed us to specify the relevant implementation behaviours to attend to in context, whilst NPT generated an understanding of the process dynamics; both are required to design future implementation strategies with sufficient specificity.

Study limitations

Given the challenges of prospectively identifying patients consulting for four different indicators, no patient consultations were observed; instead, we focussed on the perspectives of those directly involved in delivering care. We also chose not to collect questionnaire TDF and NPT data from the wider practice team (given the challenges of operationalising these items for a complex multi-component package), again concentrating on those directly involved. As the trials progressed, we noticed in the process evaluation that there were gaps in our knowledge of intervention component receipt and enactment. Structured logs captured awareness of audit reports at outreach support visits, but awareness and use of other components were more difficult to track because practices could access them at any time; we therefore added a post-trial fidelity survey to explore this more specifically across the trial practices.

TDF alone was used in the development of the implementation package, identifying common determinants from interview data. Our understanding of group processes and how determinants might interact was likely limited by this approach.

Implications for practice and research

We suggest several lessons for the design, delivery and evaluation of implementation strategies based on our findings (Table 8).

Table 8 Where should intervention designers and evaluators direct their efforts and resources?

When selecting or developing indicators of achievement, consider their fit with professional values, patient benefit and practice goals to augment motivation to change. Limiting the number of indicators, and the associated corrective actions that need to be undertaken by different actors, may support collective action. Framing indicators to showcase the benefit(s) of additional or modified ways of working (e.g. reducing unwanted outcomes such as strokes), as opposed to increased work (e.g. additional consultations and prescriptions), may enhance motivation. Indicators that specify clear corrective actions, and which are sensitive to efforts to improve, may enable rapid learning from changes. Whilst we sought to augment work already undertaken, reviewing patients close to targets resulted in unintended consequences (e.g. the impact of stringent targets on patient preferences and relationships).

When developing intervention components, it may help to clearly differentiate the additional work required by the intervention from pre-existing work. Most practices were already engaging in alternative approaches to improve achievement and may have been experiencing “intervention fatigue” [25], limiting capacity for enactment. Differentiation was enabled by the environmental context and resources of the practices as well as by staff beliefs about their knowledge, skills and capabilities. Where the links between specific staff actions and achievement were more direct and clearer, staff seemed motivated to act, clear about their roles and responsibilities, and more likely to stay engaged with the work over time. Practices also requested social exchange of information about what other practices were doing to influence achievement.

It is important to consider how practices will perceive and value different intervention components in combination, and exploring this with think-aloud interviews at the pilot stage could be of benefit [26]. TDF was useful for identifying the relevant determinants but could not predict the direction or size of their impact in context or combination. The trial team underestimated the weight practices would place on face-to-face elements of the intervention (e.g. outreach visits); this could have been predicted by piloting the package to explore its theoretical “fit” at both the individual and group level. Using sociological and psychological theory together at the piloting stage may have enabled some unintended consequences of the process of intervention delivery and group sense-making to be identified and planned for.

Process mapping all of the relevant behaviours required by staff and patients may support the design of a more cohesive package. Changing diabetes and blood pressure outcomes involved a longer interdependent chain of actions from disparate individuals to collectively review notes, recall patients, conduct patient consultations and motivate patient behaviour change. Our package was not designed to engage patients.

When developing feedback interventions, estimating the time staff need to receive and act on feedback can guide its timing [26]. Making patient-identifiable searches easy to adapt could allow practices to focus on their targets for achievement and enable continuous feedback loops to track and maintain improvements. Feedback that suggests specific and feasible actions could minimise cognitive load and overcome habitual patterns of working [26]. Making visible the individual contributions towards changing team-based behaviours within feedback could increase normative accountability. However, repeated negative feedback may be dispiriting, decrease credibility and restrict dissemination of subsequent feedback. Feedback developers could consider alternative methods of presenting negative or unchanging feedback data in ways that reflect the effort expended in all parts of the implementation chain (e.g. reviewing patient notes).

Educational outreach allows for further flexibility and individual tailoring in delivery. Conducting patient-identifiable searches prior to meeting face-to-face can facilitate an open discussion of problems, how individuals work, and ways to overcome challenges. It is important to ensure that the facilitator is seen as credible in these discussions. We tendered for a company with expertise in delivering primary care outreach. Our pragmatic trial illustrates the challenges in organising meetings with practice staff who have limited opportunities to engage with improvement work. Computerised prompts with accompanying guidance for tailoring to clinical and administrative staff may prevent prompt fatigue.

When delivering intervention components, our analysis suggests that interventions were not necessarily received by the people who could enact change. Identifying and enrolling a practice lead to coordinate dissemination of multi-component interventions, with the opportunity to continuously review their impacts, may improve effectiveness [27]. Future researchers could review baseline data or engage with practice staff to identify delays in delivery or misconceptions about intervention functions. During intervention development, consider the “hidden” contributions of non-clinicians to uptake and enactment [28]. We suggest specifying the relevance of interventions to named non-clinicians and clinical leads to facilitate intervention reach to those able to improve achievement. We also suggest frontloading the delivery of components deemed most important by practices (as identified in piloting); outreach visits were more influential than intended and hardest to deliver, resulting in a negative impact on implementation.

When evaluating implementation strategies, decisions have to be made as to when and how to evaluate promising interventions. We suggest that formative process evaluations are vital to enable a full understanding of the direct and indirect risks and impacts associated with intervention delivery, reach and uptake prior to rigorous evaluation. Whilst we pilot tested intervention component acceptability, we did not examine whether the package could support practices to improve achievement. This study demonstrates the value of integrating psychological and sociological perspectives in a process evaluation, particularly for anticipating the likely impact of an intervention on individual and team behaviour change, prior to evaluation. Intervention developers could use NPT and TDF in adaptive designs to rapidly collect sufficient data to understand whether interventions should be evaluated, refined or abandoned in advance of definitive trials [29, 30].

Conclusions

We drew upon the Theoretical Domains Framework and Normalisation Process Theory in a longitudinal study to explain the variable success of an adaptable implementation package promoting evidence-based practice in primary care. The package appeared to work best when it was distinct from, and yet easily integrated within, existing organisational routines, with clear direct patient-level benefits. It failed when delivery was delayed and professionals could not observe or did not expect any improvement resulting from their efforts.

Availability of data and materials

The datasets generated and/or analysed during the current study are not publicly available as the data may identify practice staff. Anonymised data may be made available from the corresponding author on reasonable request.

References

  1. Rushforth B, Stokes T, Andrews E, Willis TA, McEachan R, Faulkner S, et al. Developing 'high impact' guideline-based quality indicators for UK primary care: a multi-stage consensus process. BMC Fam Pract. 2015;16(1):156.

  2. Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8(6):iii-iv, 1–72.

  3. Ivers NM, Grimshaw JM, Jamtvedt G, Flottorp S, O'Brien MA, French SD, et al. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med. 2014;29(11):1534–41.

  4. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and patient outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.

  5. O'Brien MA, Rogers S, Jamtvedt G, Oxman AD, Odgaard-Jensen J, Kristoffersen DT, et al. Educational outreach visits: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2007;4:CD000409.

  6. Shojania KG, Jennings A, Mayhew A, Ramsay CR, Eccles MP, Grimshaw J. The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev. 2009;(3):CD001096.

  7. The Improved Clinical Effectiveness through Behavioural Research Group (ICEBeRG). Designing theoretically-informed implementation interventions. Implement Sci. 2006;1:4.

  8. Lawton R, Heyhoe J, Louch G, Ingleson E, Glidewell L, Willis TA, et al. Using the Theoretical Domains Framework (TDF) to understand adherence to multiple evidence-based indicators in primary care: a qualitative study. Implement Sci. 2016;11:113. https://doi.org/10.1186/s13012-016-0479-2.

  9. Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005;14(1):26–33.

  10. Cane J, O'Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7:37.

  11. Glidewell L, Willis TA, Petty D, Lawton R, McEachan RRC, Ingleson E, et al. To what extent can behaviour change techniques be identified within an adaptable implementation package for primary care? A prospective directed content analysis. Implement Sci. 2018;13:32. https://doi.org/10.1186/s13012-017-0704-7.

  12. Willis TA, Collinson M, Glidewell L, Farrin AJ, Holland M, Meads D, et al. An adaptable implementation package targeting evidence-based indicators in primary care: a pragmatic cluster-randomised evaluation. PLoS Med. 2020;17(2):e1003045.

  13. Atkins L, Francis J, Islam R, O'Connor D, Patey A, Ivers N, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12(1):77.

  14. May C, Finch T. Implementing, embedding, and integrating practices: an outline of Normalization Process Theory. Sociology. 2009;43(3):535–54.

  15. Murray E, Treweek S, Pope C, MacFarlane A, Ballini L, Dowrick C, et al. Normalisation process theory: a framework for developing, evaluating and implementing complex interventions. BMC Med. 2010;8:63.

  16. May CR, Cummings A, Girling M, Bracher M, Mair FS, May CM, et al. Using Normalization Process Theory in feasibility studies and process evaluations of complex healthcare interventions: a systematic review. Implement Sci. 2018;13(1):80.

  17. McEvoy R, Ballini L, Maltoni S, O'Donnell CA, Mair FS, MacFarlane A. A qualitative systematic review of studies using the normalization process theory to research implementation processes. Implement Sci. 2014;9:2. https://doi.org/10.1186/1748-5908-9-2.

  18. Foy R, Willis T, Glidewell L, McEachan R, Lawton R, Meads D, et al. Developing and evaluating packages to support implementation of quality indicators in general practice: the ASPIRE research programme, including two cluster RCTs. Programme Grants Appl Res. 2020;8(4).

  19. Bellg AJ, Borrelli B, Resnick B, Hecht J, Minicucci DS, Ory M, et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004;23(5):443–51.

  20. Charmaz K. Grounded theory: objectivist and constructivist methods. In: Denzin NK, Lincoln YS, editors. Strategies of qualitative inquiry. Thousand Oaks, CA: SAGE Publications; 2003. p. 249–91.

  21. Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013;13:117.

  22. Ritchie J, Spencer L, O'Connor W. Carrying out qualitative analysis. In: Ritchie J, Lewis J, editors. Qualitative research practice: a guide for social science students and researchers. London: Sage Publications; 2003. p. 219–62.

  23. Doran T, Fullwood C, Gravelle H, Reeves D, Kontopantelis E, Hiroeh U, et al. Pay-for-performance programs in family practices in the United Kingdom. N Engl J Med. 2006;355(4):375–84.

  24. Currie K, King C, McAloney-Kocaman K, Roberts NJ, MacDonald J, Dickson A, et al. Barriers and enablers to meticillin-resistant Staphylococcus aureus admission screening in hospitals: a mixed-methods study. J Hosp Infect. 2019;101:100–8.

  25. Szymczak JE. Seeing risk and allocating responsibility: talk of culture and its consequences on the work of patient safety. Soc Sci Med. 2014;120:252–9.

  26. Brehaut JC, Colquhoun HL, Eva KW, et al. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med. 2016;164(6):435–41.

  27. Hasson H. Systematic evaluation of implementation fidelity of complex interventions in health and social care. Implement Sci. 2010;5:67.

  28. Sinnott C. Interactions: understanding people and process in prescribing in primary care. BMJ Qual Saf. 2018;27(3):176–8.

  29. Etchells E, Woodcock T. Value of small sample sizes in rapid-cycle quality improvement projects 2: assessing fidelity of implementation for improvement interventions. BMJ Qual Saf. 2018;27(1):61–5.

  30. Burke RE, Shojania KG. Rigorous evaluations of evolving interventions: can we have our cake and eat it too? BMJ Qual Saf. 2018;27(4):254–7.

Acknowledgements

We thank the members of the ASPIRE Programme Steering Committee for their advice and expertise, particularly Anne Sales, Jeremy Grimshaw and Jill Francis. We thank John Gabbay, Andree le May and Trish Greenhalgh for advising CH and LG throughout the process evaluation. The ASPIRE programme team comprises Sarah Alderson, Paul Carder, Susan Clamp, Robert West, Martin Rathfelder, Claire Hulme, Judith Richardson, Tim Stokes, and Ian Watt, in addition to the named authors. The ASPIRE programme team can be contacted via Robbie Foy (r.foy@leeds.ac.uk).

Authors’ contributions

LG, VW, RMc, RL, TW, RF and the programme team designed this study. CH collected the qualitative data. LG, CH, VW, RMc, RL, TW and RF conducted the theory-based analyses. MC and MH randomly allocated practices to intervention arms. SH, MC, MH collected the fidelity data for trial and process evaluation practices. LG, TW, SH, MC, MH, AF and RF conducted the fidelity analyses. RF and VW led the conception of the work package. LG led the operationalisation of the planned evaluation and manuscript writing. All authors read and approved the final manuscript.

Funding

This study is funded by the National Institute for Health Research (NIHR) [Programme Grants for Applied Research (Grant Reference Number RP-PG-1209-10040)] (https://www.nihr.ac.uk/). The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care.

Author information

Corresponding author

Correspondence to Liz Glidewell.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the UK National Research Ethics Service (14/SC/1393).

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The paper was written following SQUIRE guidelines.

Appendices

Appendix

Appendix 1 Longitudinal interview guide

Initial interview

Role and Context

What is your role in the practice? What are your main responsibilities?

Who do you tend to interact with most in the practice? (e.g. nurses, doctors, PM, reception, patients etc.)

How often do you tend to be here? (e.g. how many shifts/hours a week, any commitments outside the practice like CCG involvement)

Do you attend meetings within the practice or with other staff in the practice?

What involvement do you have in providing or organising care for [clinical topic]?

Barriers/Facilitators to best practice in specific clinical topic

Recently in the practice, have there been any changes around [relevant clinical topic]? What has motivated or influenced these changes? What impact have these changes had on you, your workload or on work in the practice?

In your opinion/experience, what do you find to be the barriers around implementing best practice in [clinical topic area]?

Explore any of the barriers in further detail, probe around meaning and for specific examples

In your opinion/experience, what enables changes to practice, especially around adopting best practice?

Can you think of an example where you implemented a successful change, relevant to the clinical topic? How did this come about? What do you think made it successful?

Areas to probe around:

Capability (Skills, Knowledge, Psychological, Behavioural Regulation)

Motivation (Social role/identity, beliefs about capabilities, beliefs about consequences, motivation, emotion)

Opportunity (Social influences, environmental context/resources)

Awareness of and expectations around ASPIRE

How aware of the ASPIRE research programme are you? Have you attended any meetings or had any conversations with others in the practice about ASPIRE?

[If aware of ASPIRE]

What do you know about ASPIRE?

What do you hope or expect from involvement with ASPIRE?

Does ASPIRE make sense to you? Are there any parts of ASPIRE that do not make sense or you feel could be improved?

[If unaware or has forgotten], ASPIRE aims to help practices provide care in line with best practice by using a number of quality improvement tools like audit and feedback, educational outreach and computerised searches and prompts

Who in the practice would know about this? Why do you think they would know about it? What is their role?

What experience do you have of [searches/prompts/audit and feedback/outreach]? Do you think they help to improve the care offered to patients in relation to [clinical topic]? [If yes], in what way/s? [If not], in what way/s have they not been useful? [Follow up by exploring barriers, q7]

Audit and feedback form/s

[Have copy or copies at interview] The practice was sent an audit and feedback form by email and post in [month] (and then on other dates…)

Have you seen the audit and feedback forms? [If yes] How did you end up seeing them? What did you think of it/them? Was there anything useful about it/them? How could it have been more useful to you? Did anything about it surprise you? Anything not make sense to you?

Was the audit and feedback form or forms, to your knowledge, discussed within the practice? If so, how was it discussed and who was involved in this? Were you involved?

[If no, show the form to the participant] What do you think of the form/s? What looks useful to you (and in what way/s is it useful)? Is there anything surprising or unclear in it?

Was there anything in the A&F forms that you decided not to work on in the practice? If so, why not?

Educational outreach

Your practice had an outreach session on [date], did you attend the meeting?

[If yes], what did you think of the meeting? What seemed useful to you? Was anything raised unexpected or new to you? What (if anything) stood out from this meeting?

[If no], are you aware of what happened at the meeting? Did anyone tell you about it? Do you think it was relevant to you? Why were you unable to attend?

What has happened after the meeting? Was an action plan agreed? Did you have any actions as a result of the outreach? Were any changes made to practice? Has this affected you in any way or affected any part of the practice’s work?

Other ASPIRE elements (more relevant at 2nd interview phase)

[For risky prescribing], have you come across any computerised prompts for prescribing safely? What are the prompts like? How do they work? Are they useful to you (if so, how)?

[All practices], ASPIRE have provided searches to identify relevant patients—have you come across these? Are these useful (and in what way/s)? [If not], would you find this sort of thing useful?

[For hypertension/diabetes], have you seen a patient checklist from ASPIRE to be used in consultation with patients? What do you think of this type of tool? Is it helpful, and if so, how is it helpful?

[For prescribing/AF], have you seen ASPIRE significant event audit forms? Are you familiar with significant event audits? Do you find them helpful for improving patient care, and if so, how? Have you experienced any difficulties with significant event audit - any barriers to doing or using them?

[All practices], have you seen any pens, post it notes or laminates provided by ASPIRE? Do you feel these are useful? [If yes], in what ways? [If no], are these types of reminders ever useful? Can you think of any times when any of these things have been useful to you in the practice? [Probe for detail—why were they useful at that time, how did you use them, where did they come from etc]

[All practices], ASPIRE are offering some support from a pharmacist to help work on the clinical topic. Are you aware of your practice being offered or taking this offer up? What do you think of this offer?

[If not aware of offer] would you find this useful?

[If the support has been experienced], did you find this useful, and if so, how? What did the facilitator/pharmacist do? Which members of the team did they liaise/ work with?

Implementation

To your knowledge, has anything in the practice been done differently as a result of involvement in ASPIRE?

If yes, explore what has been done differently, who is involved in the changes and what the intentions are behind the changes

Are you aware of any plans to do anything differently in the practice as a result of ASPIRE?

If yes, explore what plans there are, who is involved in them and what the intentions are behind these plans

Has involvement in ASPIRE affected you or your work in any ways? If so, how? Has it had any impact in the practice in general or any impact on anyone else (that you are aware of)?

Drill into specific details

Close

Who else in the practice do you think would be good to talk to?

Second interview

Change over time

How have things been in the practice since we last spoke/since I was last in the practice?

Have there been any changes in the practice recently?

any staffing changes

any changes in what you do in your role

any changes at an organisational level

any new initiatives or research taking place in the practice (e.g. initiatives in the CCG and local area)

(If there have been any changes) In what ways have these recent changes affected the practice? Have they affected your role? Have they affected the delivery of care? Have they affected the atmosphere in the practice? How?

Clinical topic

How is care organised around [diabetes/hypertension/prescribing NSAIDs/atrial fibrillation]? Who is involved in this?

Are you aware of any changes in the practice around care for this topic or provision of services for this topic? Have there been any changes to your role in relation to this topic?

If there have been changes, what has motivated these changes? How have the changes affected care, your role, other practice staff, the patients?

What effect (if any) do you think these changes have had on the practice’s achievement for that clinical topic? Have the changes affected anything else, e.g. care in another area?

Intervention components

How aware have you been of ASPIRE in the last couple of months? Have you seen or used any of the following components?

Reports—paper/email (take examples)

ASPIRE searches

Prompts (if relevant)

Any emails or visits from ASPIRE (esp. outreach support, outreach visits, CQC/QOF communications)

Pens, post-it notes, laminates (if applicable)

SEAs (if applicable)

Anything else relating to ASPIRE (e.g. ASPIRE box)?

If not aware/involved, did you feel that this was important/relevant to you?

(About different components) In what ways were these used? Who has used them? Were they discussed formally or informally? Have they had an impact on your work in any way, or on anyone else’s work in the practice?

(If they had educational outreach or drew up an action plan) Are you aware of any activity arising in the practice after the outreach meeting? Did you see an action plan during or after this meeting? Did you personally have any tasks assigned to you as a result of taking part in ASPIRE? If so, what has happened since? If not, who has been doing work as a result of ASPIRE? How was this decided? What do you think of the way this has been done?

Implementation

To your knowledge, has anything in the practice been done differently as a result of involvement in ASPIRE?

If yes, explore what has been done differently, who is involved in the changes and what the intentions are behind the changes

Explore positives and negatives of changes—expected and unexpected changes, ones that seem to be benefitting the practice, staff and/or patients, and ones that do not or whose impact cannot yet be assessed

Are you aware of any plans to do anything differently in the practice as a result of ASPIRE?

If yes, explore what plans there are, who is involved in them and how, and what the intentions/goals are behind these plans

Explore progress on the plans—how far have they got? When are things likely to happen? What’s affecting the timing of the plans?

Has involvement in ASPIRE affected you or your work in any ways? If so, how? Has it had any impact in the practice in general or any impact on anyone else (that you are aware of)?

Drill into specific details

Usefulness and fit

In your opinion, can you think of any specific ways that ASPIRE has been useful for the practice? Who do you think has benefitted (and how)?

Is there anything you think would have been helpful, but has not been offered or done as part of involvement in ASPIRE?

Was this intervention a good fit for your practice? If so, in what ways? If not, why not? What do you think did not fit with the practice? What in particular worked or did not work for your practice?

Probe around clinical topic, intervention components, targets for intervention (e.g. practitioner behaviour and practice organisation)

Close

Who else in the practice do you think would be good to talk to?

Appendix 2 Observational guide

General Practice—Organisation and Setting

Organisation of care—general

Setting—physical space, frequency and nature of meetings, size of practice

Practice group dynamics—no. of staff, relations between them, decision-making

Other initiatives on the agenda—related to clinical topic/unrelated but potential to impact

ASPIRE related

Responses to and expectations of A&F form

Responses to and expectations of educational outreach

Responses to and expectations of the searches

Responses to and expectations of the prompts

Responses to and expectations of offer of help

Interactions around set up of meetings

Questions relating to the research

Responses to and expectations of other ASPIRE elements (checklist, prompt, pens/post-its/laminates, SEA forms)

My role and relation to practices

Understandings of/reactions to my role

Clinical topic

Organisation of care specific to practice topic

Work done by practice or others for practice on clinical topic

Responses to clinical topic

Perceived barriers, areas of concern or needs of the practice, including patient, practice and system factors (domains, CMO)

Reasons for/discussion of lack of need or concern around this topic

Questions relating to the clinical topic

Patient cases or examples discussed

Appendix 3 Interview guide for final practice meeting

How you feel you did as a practice in relation to the topic area

Move conversation along quickly

How interested was the practice in this topic before the year started? Did ASPIRE stimulate an interest in improving in this area?

What helped or didn’t help over the year with regard to the topic area

What helped from ASPIRE, what helped generally

What didn’t help

What effects, if any, the various ASPIRE intervention components have had over the year, and what they have meant to people in the practice

Audit and feedback reports

Outreach visits (1 or 2)

Outreach support

Searches

Prompts (risky prescribing only)

Laminates and pens/post-its

Taking part in the process evaluation

What could be done differently in the future

How research can support primary care to implement research evidence

What the practice themselves could do to improve implementation

Role of CCGs, federations, networks

What, if anything, the practice intends to do next with regard to work on the specific clinical topic

Appendix 4 Normalisation Process Theory (NPT) coding dictionary

Tables 9 and 10

Table 9 NPT—understanding the process of implementation within practices
Table 10 Theoretical Domains Framework (TDF) coding dictionary—understanding the factors which impede or promote intervention behaviours or actions [10]

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Glidewell, L., Hunter, C., Ward, V. et al. Explaining variable effects of an adaptable implementation package to promote evidence-based practice in primary care: a longitudinal process evaluation. Implementation Sci 17, 9 (2022). https://doi.org/10.1186/s13012-021-01166-4


Keywords