
Does access to a demand-led evidence briefing service improve uptake and use of research evidence by health service commissioners? A controlled before and after study

Abstract

Background

The Health and Social Care Act mandated research use as a core consideration of health service commissioning arrangements in England. We undertook a controlled before and after study to evaluate whether access to a demand-led evidence briefing service improved the use of research evidence by commissioners compared with less intensive and less targeted alternatives.

Methods

Nine Clinical Commissioning Groups (CCGs) in the North of England received one of three interventions: (A) access to an evidence briefing service; (B) contact plus an unsolicited push of non-tailored evidence; or (C) unsolicited push of non-tailored evidence. Data for the primary outcome measure were collected at baseline and 12 months using a survey instrument devised to assess an organisation’s ability to acquire, assess, adapt and apply research evidence to support decision-making. Documentary and observational evidence of the use of the outputs of the service was sought.

Results

Over the course of the study, the service addressed 24 topics raised by participating CCGs. At 12 months, the evidence briefing service was not associated with increases in CCG capacity to acquire, assess, adapt and apply research evidence to support decision-making, individual intentions to use research findings or perceptions of CCG relationships with researchers. Regardless of intervention received, participating CCGs indicated that they remained inconsistent in their research-seeking behaviours and in their capacity to acquire research. The informal nature of decision-making processes meant that there was little traceability of the use of evidence. Low baseline and follow-up response rates and missing data limit the reliability of the findings.

Conclusions

Access to a demand-led evidence briefing service did not improve the uptake and use of research evidence by NHS commissioners compared with less intensive and less targeted alternatives. Commissioners appear well intentioned but ad hoc users of research. Further research is required on the effects of interventions and strategies to build individual and organisational capacity to use research.


Background

In the National Health Service (NHS) in England, Clinical Commissioning Groups (CCGs) are responsible for the planning and commissioning of health care services in a defined geographical area. In 2012, the Health and Social Care Act mandated research use as a core consideration in health service commissioning arrangements [1]. Each CCG must now, in the exercise of its functions, promote the use in the health service of evidence obtained from research [1].

NHS commissioners now have a key role in improving the uptake and use of knowledge to inform commissioning and decommissioning of services, and there is a substantive evidence base upon which they can draw. However, uptake of this knowledge to increase efficiency, reduce practice variations and ensure best use of finite resources within the NHS is not always realised. This is due in part to failures of the system to fully implement interventions and procedures of known effectiveness [2, 3]. There has also been rapid, sometimes policy-driven deployment of unproven interventions despite known uncertainties relating to costs, impacts on service utilisation and clinical outcomes, patient experience and sustainability [4]. The NHS has also been slow to identify and disinvest in those interventions known to be of low or no clinical value [5].

Whilst it is widely acknowledged that different sources of knowledge combine in evidence-informed decision-making [6] and that the process itself is highly contingent and context dependent [7], the value of systematic reviews to health care decision-making is well recognised [8, 9]. However, a number of challenges have undermined the usefulness of systematic reviews in decision-making contexts [8, 10–15]. These include difficulties in locating and appraising relevant reviews, a lack of timeliness or user friendliness and a perceived failure of reviews to address relevant questions, contextualise the findings or make actionable policy recommendations. An initiative aiming to enhance uptake of systematic review evidence by NHS commissioners and senior managers was developed as an adjunct to the implementation theme of the NIHR CLAHRC for Leeds, York and Bradford [16]. Development of the service was informed by a scoping review of existing resources [17] and previous experience in producing and disseminating the renowned Effective Health Care and Effectiveness Matters series of bulletins. The service attempted to inform real decisions by making use of existing sources of synthesised research evidence. The service approach was both consultative and responsive and involved building relations and having regular contact (face to face and email) with a range of NHS commissioners and managers. This enabled the team to discuss issues and, for those that required a more considered response, to formulate questions from which contextualised briefings could be produced and their implications discussed. In doing so, we utilised a framework designed to clarify the problem and frame the question to be addressed [18]. The service had some early impacts, notably work to inform service reconfiguration for adolescent eating disorders that enabled commissioners to invest in more services on a more cost-effective outpatient basis [19].

Although feedback from users was consistently positive, the evidence briefing service had been developmental and no formal evaluation had been conducted. The service as constituted was a resource-intensive endeavour and made use of the considerable review capacity and infrastructure available at the Centre for Reviews and Dissemination (CRD). As such, this study aimed to assess whether access to a demand-led evidence briefing service would improve the uptake and use of research evidence by NHS commissioners compared with less intensive and less targeted alternatives.

Methods

This was a controlled before and after study involving CCGs in the North of England [20]. The study protocol has been published previously [21].

Setting, participants and recruitment

Nine CCGs from one geographical area in England agreed to participate, and the recruitment process is presented in Fig. 1. We had originally anticipated that we would invite 9–10 CCGs from one geographical area based on the 2012/13 Primary Care Trust (PCT) cluster arrangements. By the start of the study, some consolidation in the proposed commissioning arrangements had occurred in the transition from PCTs to CCGs and so seven CCGs were invited to participate. Of these, six agreed to participate. One CCG declined, intimating that they could not participate in any intervention. No CCG asked for financial reimbursement for taking part in the study.

Fig. 1 Flow diagram of CCG recruitment

We had originally intended to randomly allocate CCGs to interventions. However, a combination of expressed preferences (one CCG indicated that they would like to be a ‘control’) and the prospect of further consolidation in commissioning arrangements meant that this was not feasible. Taking these factors into account, two CCGs were allocated to receive on-demand access to the evidence briefing service, three coterminous CCGs (who were likely to merge) received on-demand access to advice and support from the CRD team, and one was allocated to a ‘standard service’ control arm. After this initial allocation, research leads from CCGs in a neighbouring geographical area approached the team and asked to participate. After discussions with representatives of the five CCGs, a further three CCGs were recruited as ‘standard service’ controls.

Interventions

Participating CCGs received one of three interventions aimed at supporting the use of research evidence in their decision-making:

  A.

    Contact plus responsive push of tailored evidence

    CCGs in this arm received on-demand access to an evidence briefing service provided by research team members at CRD. In response to questions and issues raised by a CCG, the CRD team would synthesise existing evidence together with relevant contextual data to produce tailored evidence briefings to a specified timescale agreed with the CCG. We anticipated responding to six to eight substantive issues per CCG during the intervention phase. This was a responsive service, and CCGs could contact the intervention team at any time to request their services. Contact initiated by the CRD intervention team was made on a monthly basis and was expected to include discussion of questions and priority topics and offers of advice and support around identifying, appraising and interpreting evidence. A full account of the service offered is available elsewhere [20].

  B.

    Contact plus an unsolicited push of non-tailored evidence

    CCGs allocated to this arm received the same on-demand access to advice and support from CRD as those allocated to receive on-demand access to the evidence briefing service. However, the CRD intervention team did not produce evidence briefings in response to questions and issues raised but instead disseminated the evidence briefings generated in the responsive push intervention.

  C.

    ‘Standard service’ unsolicited push of non-tailored evidence

    The third intervention constituted a ‘standard service’ control arm: an unsolicited push of non-tailored evidence. In this arm, CRD used email- and web-based communication processes to disseminate evidence briefings generated in intervention A and any other non-tailored briefings produced by CRD over the intervention period.

    The intervention phase ran from the end of April 2014 to the beginning of May 2015. As this study was evaluating uptake of a demand-led service, the extent to which the CCGs engaged with the interventions was determined by the CCGs themselves.

Baseline and follow-up assessment

We collected data for our two primary outcome measures (perceived organisational capacity to use research evidence and reported research use) at baseline (phase 1) and again 12 months after the intervention period was completed (phase 3).

The survey instrument has been published and described previously [21]. The instrument was designed to collect four sets of information: the organisation’s ability to acquire, assess, adapt and apply research evidence to support decision-making; the intentions of individual CCG staff to use research evidence in their decision-making; perceptions of the quality and quantity of interactions with researchers; and individual respondent characteristics.

Survey administration

Each participating CCG supplied a list of names and email addresses for potential respondents. These were checked by a member of the evaluation team, and where inaccurate or missing details were identified, these were sourced and corrected. Survey instruments were sent by personalised email to identified participants via an embedded URL. The questionnaire was hosted on the SurveyMonkey website (http://www.surveymonkey.com). Reminder emails were sent to non-respondents at 2, 3 and 4 weeks. A paper version was also posted out, and phone call reminders were made by the research team. In addition, the named contact in each CCG sent an email to all their colleagues encouraging completion.

As CCGs were new and evolving entities at the time of the study, we needed to be able to determine whether any changes observed from baseline were linked to the intervention(s) and were not just a consequence of the development of the CCG(s) over the course of the study. To guard against this maturation effect/bias and to test the generalisability of findings, we administered Section A of the instrument to all English CCGs to assess their organisational ability to acquire, assess, adapt and apply research evidence to support decision-making. The most senior manager (chief operating officer or chief clinical officer) of each CCG was contacted and asked to complete the instrument on behalf of their organisation. For the national survey, we used publicly available information (NHS England and CCG websites) supplemented by phone calls to CCG headquarters to construct our sampling frame consisting of every CCG in England.

Analysis

The primary analysis measured the effect of study interventions on two main outcomes at two time points: baseline and 12 months. The key dependent variable was CCG perceived organisational capacity to use research evidence in their decision-making, as measured by Section A of the survey instrument. Following Norman [22], we calculated the means, standard deviations and confidence intervals for each of the seven subscales in the instrument before and after the intervention period: CCG capacity to acquire research, to look for research in the right places, to tell if research is valid and of high quality, to tell if research is relevant and applicable, to summarise results in a user-friendly way, to lead by example and value research use, and to support decision-making processes for research use. We also calculated an overall mean of CCG capacity and standard deviation (the mean of the subscale means). Interpretation of the means follows the original scale: a mean score nearer to 1 equates to very little capacity and a mean score nearer to 5 indicates good capacity in the respective subdomain or for research evidence use generally.
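As a minimal illustration of this scoring approach (the study analyses were run in SPSS, so this is a sketch rather than the study code), the following computes the mean, standard deviation and confidence interval for each subscale plus the overall capacity score; the column names are hypothetical placeholders for the subscales of the published instrument [21].

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical columns, one per subscale, each already scored on the 1-5 Likert scale.
SUBSCALES = ["acquire", "right_places", "valid_quality", "relevant_applicable",
             "summarise", "lead_by_example", "decision_processes"]

def describe_capacity(responses: pd.DataFrame, confidence: float = 0.95) -> pd.DataFrame:
    """Mean, SD and confidence interval per subscale, plus the overall capacity score."""
    scores = responses[SUBSCALES].copy()
    # Overall capacity = mean of the subscale means for each respondent.
    scores["overall"] = scores[SUBSCALES].mean(axis=1)
    rows = []
    for col in scores.columns:
        x = scores[col].dropna()
        mean, sd = x.mean(), x.std(ddof=1)
        sem = sd / np.sqrt(len(x))
        ci_low, ci_high = stats.t.interval(confidence, df=len(x) - 1, loc=mean, scale=sem)
        rows.append({"scale": col, "n": len(x), "mean": mean, "sd": sd,
                     "ci_low": ci_low, "ci_high": ci_high})
    return pd.DataFrame(rows)
```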

We also measured the effect of interventions upon our second main outcome of perceived research use and CCG members’ intentions to use research. Based on the theory of planned behaviour [23], we calculated means for the four subscales in our survey instrument: intention to use research, attitudes towards research, group norms and perceived behavioural control. The Likert scale used ranged from a score of 1, indicating the lowest amount of the concept being measured (e.g. no intention to use research), through to a score of 7, indicating the highest amount of the concept being measured (e.g. the most positive attitudes towards research use in a CCG).

We undertook a factorial ANOVA (SPSS version 22.0 general linear model procedure), comparing the main effect of a single independent variable (CCG status) on a dependent variable (capacity to acquire, assess, adapt and apply research evidence to support decision-making) ignoring all other independent variables (i.e. the effect ignoring the potential for confounding from other independent factors). A factorial ANOVA was also conducted to compare the main effects of time and evidence briefing service received and the interaction effect of time and evidence briefing on intention to use research evidence (the ‘intention’ component of the survey instrument).
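For readers without SPSS, a roughly equivalent factorial ANOVA can be specified in Python with statsmodels. This is an illustrative analog of the GLM procedure described above, not the study code; the file name and the columns capacity, intervention and time are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: one row per respondent per time point, with
#   capacity     - Section A capacity score
#   intervention - 'A', 'B' or 'C'
#   time         - 'baseline' or 'follow_up'
long_data = pd.read_csv("survey_long.csv")  # hypothetical file name

# Factorial ANOVA with main effects of intervention and time and their interaction,
# analogous to the SPSS GLM procedure reported in the text.
model = smf.ols("capacity ~ C(intervention) * C(time)", data=long_data).fit()
print(anova_lm(model, typ=2))  # F ratio and p value for each term
```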

To examine the effects of (i) perceived contact and (ii) the amount of perceived contact with the evidence briefing service, (iii) institutional support for research, (iv) a sense of being equal partners during contact, (v) common in-group identity, (vi) achievement of goals, and (vii) perceptions of researchers generally, we undertook a mixed 3 (intervention, A vs. B vs. C) × 2 (time, baseline vs. outcome) ANOVA using SPSS version 22.0, with the intervention as a between-subject independent variable, and repeated measures on the second factor, time.
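A mixed between-within ANOVA of this 3 × 2 design can be sketched with the pingouin package as below; again this is an illustrative analog rather than the SPSS analysis actually run, and the file name and column names are hypothetical.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data with columns:
#   respondent   - identifier linking a person's baseline and follow-up responses
#   intervention - between-subject factor ('A', 'B' or 'C')
#   time         - within-subject factor ('baseline' or 'follow_up')
#   contact      - score on the intergroup-contact dimension being analysed
long_data = pd.read_csv("contact_long.csv")  # hypothetical file name

# Mixed 3 (intervention) x 2 (time) ANOVA with repeated measures on time.
aov = pg.mixed_anova(data=long_data, dv="contact", within="time",
                     subject="respondent", between="intervention")
print(aov[["Source", "F", "p-unc", "np2"]])
```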

Missing data

Only analysing the data for which we had complete responses could lead to potentially biased results [24], and as anticipated at the protocol stage, the use of multiple imputation techniques was required [25]. We assumed that data were missing at random; this assumption was checked by visual comparison of original versus imputed data and by significance testing of the impact of response and non-response on outcome variables. We used guidance on interpreting effect sizes in before and after studies to examine the clinical/policy significance of any changes [26].
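A minimal sketch of a chained-equations multiple imputation of the survey items is shown below. It illustrates the general approach [24, 25] rather than the specific imputation model used in the study; the file name and the use of scikit-learn are assumptions.

```python
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical wide-format numeric responses with missing Likert items coded as NaN.
responses = pd.read_csv("responses_wide.csv")
numeric = responses.select_dtypes(include="number")

# Draw several completed datasets via chained equations and pool the quantity of
# interest across them, in the spirit of multiple imputation. Full Rubin's rules
# would also combine within- and between-imputation variance.
m = 5  # number of imputations
imputed_means = []
for seed in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=seed, max_iter=10)
    completed = pd.DataFrame(imputer.fit_transform(numeric), columns=numeric.columns)
    imputed_means.append(completed.mean())

pooled = pd.concat(imputed_means, axis=1).mean(axis=1)  # pooled point estimates
print(pooled)
```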

Blinding

The CRD evidence briefing team were blinded to both baseline and follow-up assessments until after all the data collection was complete. The CRD team were made aware of baseline and follow-up response rates. Participating CCGs were also blinded to the baseline and follow-up assessments and analyses.

Qualitative evaluation

Part of our original plan was to collect and analyse documentary evidence of the actual use of evidence in decision-making using executive and governing body meeting agendas, minutes and associated documents. This was to be supplemented with interviews to explore perceived use of evidence and any unanticipated consequences. Early in the intervention phase, it became apparent that, with a few exceptions, there was a lack of recorded evidence of research use (a finding in itself). Executive and governing body meetings were mainly used to ratify recommendations and so would not tell us anything about sources or processes. With research use and decisions occurring elsewhere and often involving informal processes, we undertook four case studies to explore the use of research evidence in decision-making in the intervention sites. A full account of case study methods and analysis is available elsewhere [20].

Results

Over the course of the study, we addressed 24 questions raised by the participating CCGs, 17 of which were addressed during the intervention phase (see Table 1). The majority of requests focussed on options for the delivery and organisation of a range of services and ways of working rather than on the effects of individual interventions.

Table 1 Questions addressed by the evidence briefing service

Requests for evidence briefings from the CCGs served different purposes. Four broad categories of research use have been proposed: conceptual (not directly linked to discrete decisions but providing knowledge about possible options for future actions); symbolic or tactical (to justify existing decisions and actions); instrumental (where evidence directly informs a discrete decision-making process); and imposed (where there are organisational, legislative or funding requirements that research be used) [27, 28]. Categories were assigned through a consensus-based approach; Table 1 shows that most requests received were categorised as conceptual.

Response rates

Contact details for 181 baseline (A = 45; B = 61; C = 75) and 168 follow-up (A = 43; B = 60; C = 65) participants were supplied by CCGs; none were undeliverable.

In total, 123 questionnaires were returned at baseline (A = 37/45; B = 54/61; C = 32/75) giving an overall response rate of 68%. Of these, 101 were completed, 13 were deemed to be incomplete (one section or less completed) and 9 were from individuals declining to participate or indicating they had departed the CCG. At 1 year follow-up, 76 questionnaires were returned (A = 23/43; B = 28/60; C = 25/65) giving an overall response rate of 44%. Of these, 71 were completed, two were deemed to be incomplete (one section or less completed) and three were from individuals declining to participate or indicating they had departed the CCG.

Characteristics of respondents

Survey respondents reported holding a range of roles within the CCGs. Most respondents were highly qualified, but only a minority reported having had prior experience in commissioning or undertaking research (see Table 2). Sites with a lower response rate had a higher proportion of clinically qualified respondents (χ²(2, N = 53) = 6.15, p = .05), but other than this difference, there were no significant differences in the characteristics of the respondents receiving the three interventions.

Table 2 Characteristics of survey respondents

Overall capacity to acquire, assess, adapt and apply research evidence to support decision-making

The total capacity to acquire, assess, adapt and apply research evidence to support decision-making appeared to improve slightly over time, irrespective of the presence of any intervention (Table 3). The main effect of time in the factorial ANOVA yielded an F ratio of F(1, 127) = 4.49, p < .05, ηp² = .034, indicating a significant difference over time in all three groups of CCGs’ total capacity to acquire, assess, adapt and apply research evidence to support decision-making. The main effect of the evidence briefing service received yielded an F ratio of F(2, 127) = 0.77, p > .5, ηp² = .012. The interaction of time and the intervention was also not significant, yielding an F ratio of F(2, 127) = 0.213, p > .05, ηp² = .003. Exposure to the intervention had no significant effect on perceived CCG capacity.

Table 3 Intervention effects on CCG capacity to acquire, assess, adapt and apply research evidence to support decision-making

Did the evidence briefing service improve CCGs’ intentions to use research evidence to support decision-making?

As with the effect of the evidence briefing service on capacity to use research for decision-making, we also wanted to examine the effect on CCGs’ collective intention to use research evidence for decision-making. Attitude towards the use of research in decision-making was the strongest of these dimensions, and perceived behavioural control the weakest (Table 4). All intervention groups showed small, non-statistically significant declines in almost all of the theory of planned behaviour dimensions from baseline to follow-up. This difference is, in real terms, marginal: the positions were broadly similar before and after encountering the intervention.

Table 4 Intervention impact on theory of planned behaviour domains

Did the evidence briefing service improve CCGs’ perceptions of intergroup contact?

Perceptions of contact appeared generally more positive from the start among the respondents receiving the evidence briefing service than in the other intervention groups (Table 5). There were increases in most other dimensions of contact from baseline to follow-up across the groups. None of these reached statistical significance, and the magnitude of these gains appeared a little lower in intervention A than in interventions B and C.

Table 5 Intervention impact on perceptions of intergroup contact between CCGs and researchers

Did the evidence briefing service improve CCGs’ perceptions of researchers?

We used a ‘feeling thermometer’ measure in which participants reported perceptions of researchers on a scale of 0 (very negative) to 100 (very positive). Perceptions of researchers were positive among the respondents receiving intervention A at baseline, almost at the level of the post-intervention responses across the board.

There was a significant interaction between intervention and time, F(2, 57) = 3.29, p = .045. Post hoc analyses demonstrate that perceptions of researchers in general were significantly more positive at the follow-up (M = 77.20) than at the baseline (M = 46.35) in intervention B, F(1, 19) = 9.76, p = .006. Similarly, perceptions of researchers were also significantly more positive at the outcome (M = 78.21) than at the baseline (M = 41.25) in ‘control’ intervention C, F(1, 23) = 23.72, p = .0005. In contrast, there was no change in attitude towards researchers between the baseline (M = 67.31) and the outcome (M = 72.69) in intervention A, F(1, 15) = 0.36, p = .56. In sum, the evidence briefing service did not change perceptions of researchers (in general).

Discussion

In this study, access to an evidence briefing service was not associated with increases in CCG capacity to acquire, assess, adapt and apply research evidence to support decision-making.

Regardless of intervention received, participating CCGs indicated at baseline that they lacked a consistent approach to their research-seeking behaviours and to acquiring research, and this remained the case at follow-up. CCGs were noncommittal (neither agreeing nor disagreeing) on whether they had the capacity to assess the quality, reliability and applicability of research for use in decision-making. This perception remained unchanged at follow-up. There was also no change in perceptions of CCGs’ capacity to adapt and summarise research results for use in decision-making; respondents neither agreed nor disagreed that the CCG had the capacity to do so. Finally, individuals’ perceptions that their CCG did not have systems and processes in place to apply research routinely also remained unchanged.

Exposure to the evidence briefing service did not appear to have any effect on individuals’ intentions to use research evidence in decision-making or their perceptions of a shift in collective CCG norms towards the use of research for decision-making. Regardless of intervention received, these measures were positively orientated at baseline and sustained at follow-up.

The respective baseline and follow-up response rates of 68 and 44% are not unreasonable given the number of competing requests for information that CCGs are routinely faced with. Our response rates compare favourably with other surveys conducted over the same time period [29, 30] and with a contemporaneous study of the effects of an evidence service on policy makers’ use of research evidence that failed to recruit [31, 32]. However, we acknowledge that we experienced considerable attrition in both baseline and follow-up surveys. In the study case sites, the percentage of individuals completing both surveys ranged from ~60% for those receiving intervention A to ~30% in the CCGs who were allocated to receive intervention C, the non-responsive version of the service. As the turnover of CCG staff was relatively stable over the course of the study, this may represent a degree of selection bias.

We were reliant on the quality of the sampling frames provided by each CCG. We found that contact information provided by CCGs, and especially that sourced for the national benchmarking component of the study, was sometimes inaccurate and/or incomplete. As such, each CCG had to be contacted to obtain, check and recheck the contact details of the staff provided.

We utilised an 87-item questionnaire to collect data relevant to the primary outcome, and although all responses were on short scales (no open responses), piloting estimated that it would take participants up to 45 min to complete. We employed a range of factors to increase the odds of response, including prenotification, follow-up contact, online and postal formats, reminder copies, and CCG and university sponsorship [33]. We are aware that both shorter questionnaires and financial incentives are also associated with increased response rates [33]. In this instance, it may be that the perceived return for time invested, namely access to a funded evidence briefing service either immediately or after the intervention phase was complete (the offer made to participants in the ‘control’ intervention C), was deemed inadequate compensation by some participants. The CCGs allocated to intervention C had expressed initial enthusiasm for participation. However, the lack of any immediate return from, or a sufficient relationship with, the evidence briefing service over the course of the study may go some way to explaining why those allocated to the ‘control’ had the lowest response rate. Survey length may also have contributed to the lack of completeness in the data collected.

Given these limitations, we have been cautious in our interpretation of any apparent effect of the evidence briefing service on the primary outcome measures. Indeed, we have been careful to avoid the pitfalls of p values in assessing whether this study provides evidence ‘for’ or ‘against’ rejection of the null hypothesis [34]. Whilst the statistical tests applied generated some apparent statistical differences beyond that which we would have expected to see by chance, we think appropriate caution is necessary in interpreting the real world significance of what was observed. It would be reasonable to consider a shift of at least one point on any Likert scale as indicative of some effect and anything lower observed (i.e. no shift on the scale) as unlikely to be behaviourally significant.

Analysis undertaken to trace evidence briefings revealed, with few exceptions, a lack of recorded evidence of use. Most discussions between contacts in CCGs and the evidence briefing team were informal and rarely involved minuted meetings or formal gatherings of CCG staff. Indeed, we were often responding to requests from one, two or three named individuals who would be leading a piece of work or clinical area on behalf of the CCG as a whole. As such, analysis of records supporting the more formal executive and governing body meetings provided little information about sources used or the decision-making process itself. The ‘unseen and informal spaces’ [35] of decision-making processes, the small numbers of staff involved and the reality that no audit trail existed for sources used during these processes meant that there was little or no ‘traceability’ [36] of use of evidence briefings at an organisational level. Our experience aligns well with others who have faced similar challenges identifying whether systematic reviews are used and the extent to which they add value to decision-making processes in public health [36].

In this study, we sought to add insight into how much added value the service would offer over alternative or more basic approaches. The evidence briefing service as constituted represented a resource-intensive intervention. There was sustained engagement by individuals in the CCGs receiving the evidence briefing service, and because we employed a degree of flexibility in the service delivered (employing a combination of full evidence briefings and shorter, more exploratory evidence notes in response to questions raised), we were able to deliver a number of outputs beyond what we had originally anticipated. However, impact on explicit instrumental decision-making processes was limited. Whilst we recognise that conceptual use of research to raise awareness and increase knowledge is an entirely appropriate goal in itself, we would question whether this represents a sufficient level of impact to justify sustaining a resource-intensive intervention of this type.

The impact of evidence briefings on explicit instrumental decision-making processes was limited to establishing region-wide policies relating to interventions of no or low clinical benefit, a process that needed to be both transparent and defensible for participating CCGs. This study may suggest that it is at this meso level where services packaging evidence derived from systematic reviews may most efficiently be deployed to impact on decision-making in a commissioning context. Disinvestment decisions relating to interventions of no or low clinical value remain high on the commissioning agenda, and in our study, established processes to harness research evidence for this type of policy formulation were lacking.

If meso-level activity represents the best focus for resource-intensive services, we still need to consider how to systematise research use among individual CCGs. The SPIRIT Action Framework hypothesises that a catalyst is required for the use of research, the response to which is determined by the capacity of the organisation to engage with available research [37]. Where there is sufficient capacity, a series of research engagement actions might occur that facilitate research use. The Framework predicts that the greater the organisational capacity, the more research engagement actions (accessing and appraising research, generating new research and interacting with researchers) will occur, which will in turn result in a greater use of research evidence. Using the Framework to reflect on this study, we had catalysts and engagement opportunities (around the questions raised and the briefings produced), but the service as constituted did little to enhance the capacity of the organisation to engage with research. Both baseline and follow-up data suggest that commissioners are well intentioned but ad hoc users of research evidence and that they work in a setting where there is a lack of systems and processes to do this routinely. This suggests a knowledge and skills gap that this study has not addressed. The evidence briefing team offered CCGs receiving the evidence briefing service or intervention B training on how to acquire, assess, adapt and apply evidence. Training could have in part addressed these knowledge and skills gaps, but the offers were not taken up. Rather than making training a demand-led ‘offer’, it may have been better to identify the capacity for research use of each CCG at the outset, and then to provide training relevant to their current state. At the very least, this study has highlighted the importance of building organisational capacity as a component of evidence use, an area that appears to be under-researched [38].

Public health specialists have traditionally supported and facilitated the use of research evidence in a commissioning context [39–41]. Throughout this study, we observed that despite their relocation to local government, public health specialists remained accessible to CCGs, although they were no longer central to decision-making processes. Some senior staff in participating CCGs had much prior experience of support from public health teams under previous commissioning arrangements. As the interventions followed soon after the preceding arrangements had ceased, it is perhaps no surprise that the CCG commissioning staff made use of the service offered by CRD. Nevertheless, all the CCGs continued to place value on the knowledge and expertise of trusted ‘critical friends’ [42] in the shape of public health consultants. They provided a bridge between the old and new commissioning arrangements and brought valuable insights and networks from beyond the boundaries of the CCG. Although we often observed commissioners looking outwards and undertaking fact-finding trips to see what other CCGs around the country were doing, the same individuals were often unaware that colleagues in adjacent areas were undertaking similar work or grappling with similar questions. Public health specialists were the individuals viewed as most likely to fill this local knowledge gap and to mitigate a general dissatisfaction with the knowledge-sharing capabilities of the formal Commissioning Support arrangements. Whether fair or otherwise, there was a general perception among CCG informants that Commissioning Support lacked the necessary infrastructure and/or expertise to efficiently acquire, assess and adapt research for use in decision-making. The one-to-one transactional arrangements Commissioning Support had with CCGs were themselves viewed as a barrier to wider knowledge sharing across the region. This danger of ‘network closure’ undermining local knowledge sharing and historically trusted relationships has been anticipated previously [43].

Wye and colleagues have argued that researchers need to build relationships and engage with commissioners locally using commissioners’ preferred methods of conversations and stories, to find out what is wanted and how best to deliver it [41]. Whilst we do not discourage the cultivation of these relationships, the reality of the decision-making process is that any engagement is resource intensive, and so researchers need to carefully consider how best to target those relations that will deliver the best return. Somebody needs to be around or ‘in the room’ when ideas first germinate, to spot the potential catalysts to research use and to ask: what is the evidence for this? Why do we want to pursue this course of action? Given this, Wye et al.’s suggestion that researchers cultivate relationships with local public health teams could represent the intermediary channel through which the use of research by individual CCGs can be influenced [41]. Public health staff are more likely to be ‘in the room’ and have the necessary skills and local networks to facilitate knowledge sharing within and across the commissioning landscape. The current emphasis on innovation and the development of new models of health and social care is favouring coproduction approaches to the design, commissioning and delivery of services. This shift may strengthen the intermediary role of public health. But if this intermediary role is to be sustained, public health specialists will need to be supported and resourced to return to playing a more central role in commissioning.

Alongside capacity building and engagement, macro-level intervention is also needed to enhance research use at the level of the individual CCG. The Health and Social Care Act mandates research use as a core consideration but, whereas the current policy climate explicitly incentivises innovation and integration, there is no equivalent incentive for finding and applying research to support the many decisions required to turn this vision into a reality. The CCG Assurance Framework focuses on leadership, financial and performance management, planning and delegated functions but contains no specific metrics on whether CCGs are fulfilling their statutory duties in respect of the use of evidence obtained from research [44]. If we are serious about shifting CCGs from being well intentioned but inconsistent users of research evidence, then a more explicit set of requirements may be necessary. Ideally, the incentive structure that exists for health service innovation and integration may need to be replicated to support CCGs’ fulfilment of their statutory duties in respect of the use of research under the Act. Without this, the current ad hoc engagement with research is likely to remain.

We are conscious that our findings relate to a specific decision-making context and setting and were generated at a time when the commissioning arrangements were rapidly evolving. Given this, further comparative evaluation and clarification of the role and value of similar demand-led evidence briefing services in other contexts and settings may be warranted. The SPIRIT Action Framework may provide a guide upon which the evaluation of any future services seeking to increase the use of research in policy can be based [37].

Conclusions

Access to a demand-led evidence briefing service as constituted in this study did not improve the uptake and use of research evidence by NHS commissioners compared with less intensive and less targeted alternatives. Our study suggests that commissioners are well intentioned but lack access to the skills and infrastructure necessary to make routine use of research evidence.

References

  1. Health and Social Care Act 2012, c.7. Available at: www.legislation.gov.uk/ukpga/2012/7/contents/enacted (Accessed: 23 Aug 2016).

  2. Sheldon TA, Cullum N, Dawson D, Lankshear A, Lowson K, Watt I, West P, Wright D, Wright J. What’s the evidence that NICE guidance has been implemented? Results from a national evaluation using time series analysis, audit of patients’ notes, and interviews. BMJ. 2004;329(7473):999.


  3. Owen-Smith A, Kipping R, Donovan J, Hine C, Maslen C, Coast J. A NICE example? Variation in provision of bariatric surgery in England. BMJ. 2013;346:f2453.


  4. Car J, Huckvale K, Hermens H. Telehealth for long term conditions. BMJ. 2012;344:e4201.


  5. Hollingworth W, Rooshenas L, Busby J, Hine CE, Badrinath P, Whiting PF, Moore THM, Owen-Smith A, Sterne JAC, Jones HE, et al. Using clinical practice variations as a method for commissioners and clinicians to identify and prioritise opportunities for disinvestment in health care: a cross-sectional study, systematic reviews and qualitative study. In: Health Services and Delivery Research. Southampton: NIHR Journals Library; 2015.


  6. Lomas J, Culyer A, McCutcheon C, McAuley L, Law S. Conceptualizing and combining evidence for health system guidance. Ottawa: Canadian Health Services Research Foundation; 2005.


  7. Nutley SM, Walter IC, Davies HTO. Using evidence: how research can inform public services. University of Bristol: The Policy Press; 2007.


  8. Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J. How can research organizations more effectively transfer research knowledge to decision makers? Milbank Q. 2003;81:221–48.


  9. Sheldon TA. Making evidence synthesis more useful for management and policy-making. J Health Serv Res Policy. 2005;10 Suppl 1:1–5.


  10. Innvaer S, Vist G, Trommald M, Oxman A. Health policy-makers’ perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002;7:239–44.


  11. Mitton C, Adair CE, McKenzie E, Patten SB, Waye PB. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q. 2007;85:729–68.


  12. Orton L, Lloyd-Williams F, Taylor-Robinson D, O’Flaherty M, Capewell S. The use of research evidence in public health decision making processes: systematic review. PloS One. 2011;6(7):e21704.


  13. Murthy L, Shepperd S, Clarke MJ, Garner SE, Lavis JN, Perrier L, Roberts NW, Straus SE. Interventions to improve the use of systematic reviews in decision-making by health system managers, policy makers and clinicians. Cochrane Database Syst Rev. 2012;9:Cd009401.


  14. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14:2.


  15. Tricco AC, Cardoso R, Thomas SM, Motiwala S, Sullivan S, Kealey MR, Hemmelgarn B, Ouimet M, Hillmer MP, Perrier L, et al. Barriers and facilitators to uptake of systematic reviews by policy makers and health care managers: a scoping review. Implement Sci. 2016;11(1):4.


  16. Hanbury A, Thompson C, Wilson PM, Farley K, Chambers D, Warren E, Bibby J, Mannion R, Watt IS, Gilbody S. Translating research into practice in Leeds and Bradford (TRiPLaB): a protocol for a programme of research. Implement Sci. 2010;5:37.


  17. Chambers D, Wilson PM, Thompson CA, Hanbury A, Farley K, Light K. Maximizing the impact of systematic reviews in health care decision making: a systematic scoping review of knowledge-translation resources. Milbank Q. 2011;89:131–56.


  18. Chambers D, Wilson P. A framework for production of systematic review based briefings to support evidence-informed decision-making. Syst Rev. 2012;1:32.


  19. Chambers D, Grant R, Warren E, Pearson S-A, Wilson P. Use of evidence from systematic reviews to inform commissioning decisions: a case study. Evid Policy. 2012;8:141–8.


  20. Wilson P, Farley K, Bickerdike L, Booth A, Chambers D, Lambert M, Thompson C, Turner R, Watt I. Effects of a demand-led evidence briefing service on the uptake and use of research evidence by commissioners of health services: a controlled before and after study. Health Serv Deliv Res. 2017;5(5).

  21. Wilson PM, Farley K, Thompson C, Chambers D, Bickerdike L, Watt IS, Lambert M, Turner R. Effects of a demand-led evidence briefing service on the uptake and use of research evidence by commissioners of health services: protocol for a controlled before and after study. Implement Sci. 2015;10(1):7.


  22. Norman G. Likert scales, levels of measurement and the “laws” of statistics. Adv Health Sci Educ. 2010;15(5):625–32.


  23. Ajzen I. The theory of planned behaviour. Organ Behav Hum Decis Process. 1991;50:179–211.


  24. Sterne JA, White IR, Carlin JB, Spratt M, Royston P, Kenward MG, Wood AM, Carpenter JR. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. BMJ. 2009;338:b2393.


  25. Rubin DB. Multiple imputation for nonresponse in surveys. New York: Wiley; 1987.


  26. Kazis LE, Anderson JJ, Meenan RF. Effect sizes for interpreting changes in health status. Med Care. 1989;27(3 Suppl):S178–189.


  27. Weiss C, Murphy-Graham E, Birkeland S. An alternate route to policy influence: how evaluations affect D.A.R.E. Am J Eval. 2005;26(1):12–30.


  28. Weiss CH. The many meanings of research utilization. Publ Admin Rev. 1979;39:426–31.


  29. Robertson R, Holder H, Bennett L, Ross S, Gosling J. Clinical Commissioning Groups—one year on: member engagement and primary care development. Slide pack. London: Nuffield Trust and The King’s Fund; 2014.


  30. Robertson R, Holder H, Bennett L, Ross S, Gosling J, Curry N. Risk or reward? The changing role of CCGs in general practice. London: Nuffield Trust and The King’s Fund; 2015.


  31. Lavis JN, Wilson MG, Grimshaw JM, Haynes RB, Hanna S, Raina P, Gruen R, Ouimet M. Effects of an evidence service on health-system policy makers’ use of research evidence: a protocol for a randomised controlled trial. Implement Sci. 2011;6:51.


  32. Wilson MG, Grimshaw JM, Haynes RB, Hanna SE, Raina P, Gruen R, Ouimet M, Lavis JN. A process evaluation accompanying an attempted randomized controlled trial of an evidence service for health system policymakers. Health Res Policy Syst. 2015;13:78.


  33. Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009;3:MR000008.


  34. Colquhoun D. An investigation of the false discovery rate and the misinterpretation of p-values. Roy Soc Open Sci. 2014;1(3):140216.


  35. Rushmer RK, Cheetham M, Cox L, Crosland A, Gray J, Hughes L, Hunter DJ, McCabe K, Seaman P, Tannahill C, et al. Research utilisation and knowledge mobilisation in the commissioning and joint planning of public health interventions to reduce alcohol-related harms: a qualitative case design using a cocreation approach. In: Health Services and Delivery Research. Southampton: NIHR Journals Library; 2015.


  36. Armstrong R, Pettman T, Burford B, Doyle J, Waters E. Tracking and understanding the utility of Cochrane reviews for public health decision-making. J Public Health. 2012;34(2):309–13.


  37. Redman S, Turner T, Davies H, Williamson A, Haynes A, Brennan S, Milat A, O’Connor D, Blyth F, Jorm L, Green S. The SPIRIT Action Framework: a structured approach to selecting and testing strategies to increase the use of research in policy. Soc Sci Med. 2015;136–137:8.


  38. Moore G, Redman S, Haines M, Todd A. What works to increase the use of research in population health policy and programmes: a review. Evid Policy. 2011;7(3):277–305.


  39. Weatherly H, Drummond M, Smith D. Using evidence in the development of local health policies. Some evidence from the United Kingdom. Int J Technol Assess Health Care. 2002;18(4):771–81.


  40. Clarke A, Taylor-Phillips S, Swan J, Gkeredakis E, Mills P, Powell J, Nicolini D, Roginski C, Scarbrough H, Grove A. Evidence-based commissioning in the English NHS: who uses which sources of evidence? A survey 2010/2011. BMJ Open. 2013;3:5.


  41. Wye L, Brangan E, Cameron A, Gabbay J, Klein J, Pope C. Knowledge exchange in health-care commissioning: case studies of the use of commercial, not-for-profit and public sector agencies, 2011-14. In: Health Services and Delivery Research. Southampton: NIHR Journals Library; 2015.


  42. Warwick-Giles L, Coleman A, Checkland K. Co-owner, service provider, critical friend? The role of public health in clinical commissioning groups. J Public Health. 2015;3(19):fdv137.

  43. Petsoulas C, Allen P, Checkland K, Coleman A, Segar J, Peckham S, McDermott I. Views of NHS commissioners on commissioning support provision. Evidence from a qualitative study examining the early development of clinical commissioning groups in England. BMJ Open. 2014;4(10):e005970.


  44. NHS England. CCG assurance framework operating manual 2015/16. Leeds: NHS England; 2015.



Acknowledgements

We would like to thank the staff at the participating CCGs who despite considerable time pressures took the time to provide quantitative and qualitative data.

Funding

This study was funded as part of a programme of research funded by the NIHR Health Services and Delivery Research programme (Project ref: 12/5002/18). PW also receives funding as part of the NIHR CLAHRC Greater Manchester. The views expressed are those of the authors and do not necessarily reflect those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health.

Availability of data and materials

All available data can be obtained from the corresponding author. All data will be shared in a way that safeguards the confidentiality and anonymity of the respondents.

Authors’ contributions

PW, CT and IW conceived the study. KF, DC, LB, ML, PW, CT, IW and RT contributed to the research design and protocol development. PW, DC, LB and AB provided the evidence briefing service. CT, KF and RT undertook the evaluation component. ML and IW provided advice throughout the study. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable

Ethics approval and consent to participate

This study was granted research ethics permission by the Department of Health Sciences, University of York Research Ethics Board (17 July 2013). Appropriate research governance approval was obtained from North of England Commissioning Support (Project ID: 145027).

Author information


Corresponding author

Correspondence to Paul M Wilson.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Wilson, P.M., Farley, K., Bickerdike, L. et al. Does access to a demand-led evidence briefing service improve uptake and use of research evidence by health service commissioners? A controlled before and after study. Implementation Sci 12, 20 (2017). https://doi.org/10.1186/s13012-017-0545-4
