- Open Access
- Open Peer Review
Explaining outcomes in major system change: a qualitative study of implementing centralised acute stroke services in two large metropolitan regions in England
© Fulop et al. 2016
- Received: 13 January 2016
- Accepted: 25 May 2016
- Published: 3 June 2016
Implementing major system change in healthcare is not well understood. This gap may be addressed by analysing change in terms of interrelated components identified in the implementation literature, including decision to change, intervention selection, implementation approaches, implementation outcomes, and intervention outcomes.
We conducted a qualitative study of two cases of major system change: the centralisation of acute stroke services in Manchester and London, which were associated with significantly different implementation outcomes (fidelity to referral pathway) and intervention outcomes (provision of evidence-based care, patient mortality). We interviewed stakeholders at national, pan-regional, and service levels (n = 125) and analysed 653 documents. Using a framework developed for this study from the implementation science literature, we examined factors influencing implementation approaches; how these approaches interacted with the models selected to influence implementation outcomes; and their relationship to intervention outcomes.
London and Manchester’s differing implementation outcomes were influenced by the different service models selected and implementation approaches used. Fidelity to the referral pathway was higher in London, where a ‘simpler’, more inclusive model was used, implemented with a ‘big bang’ launch and ‘hands-on’ facilitation by stroke clinical networks. In contrast, a more complex pathway was implemented in phases in Manchester, and the network acted more as a platform to share learning. Service development occurred more uniformly in London, where service specifications were linked to financial incentives and achieving standards was a condition of service launch, in contrast to Manchester. ‘Hands-on’ network facilitation, in the form of dedicated project management support, contributed to achievement of these standards in London; such facilitation processes were less evident in Manchester.
Using acute stroke service centralisation in London and Manchester as an example, we show that the interaction between the model selected and the implementation approaches used significantly influenced fidelity to the model. These contrasting implementation outcomes may help explain the differences in provision of evidence-based care and patient mortality. The framework used in this analysis may support planning and evaluating major system changes, but would benefit from application in different healthcare contexts.
- Implementation approaches
- Implementation outcomes
- Stroke care
- Centralisation of healthcare
The field of implementation science articulates the need for a nuanced approach when evaluating the outcomes of change. An important distinction is drawn between ‘implementation outcomes’, i.e. the adoption of, fidelity to, and sustainability of a given intervention [1–9], and ‘intervention outcomes’, for example, changes in provision of care or patient outcomes. This enables study of factors that influence implementation (including the nature of the intervention and how its implementation is facilitated), and the potential relationships between these and intervention outcomes [4, 6], allowing insights into the ‘black box’ of implementation.
Understanding how evidence-based practice is implemented in complex settings such as healthcare is enhanced when its various components are considered (decision to change, intervention selection, planning and implementation of change, and outcomes) [2, 5, 6, 10]. The value of theory, as represented through conceptual frameworks, is recognised as benefitting the design, application, and understanding of implementation approaches [2, 11–13]. Such frameworks provide, first, an analysis of how contextual factors, such as national policy or a ‘burning platform’, can influence the decision to change and the type of intervention that is implemented [8, 11, 13]. Second, they consider how characteristics of the intervention implemented (e.g. a new service model), such as its complexity or its compatibility with local context, might influence the outcomes of implementation [3, 7, 9, 10, 13]. Third, they examine how the implementation approaches employed, i.e. how change is facilitated, managed, and led, can influence implementation outcomes [1, 2, 10, 11, 13, 14].
However, research exploring the relationships between implementation approaches, implementation outcomes, and intervention outcomes remains limited. To address this gap, we present a mixed methods evaluation of major system change in acute stroke care; these changes took place in two large metropolitan regions in England, London and Greater Manchester (hereafter ‘Manchester’), which had significantly different intervention outcomes.
Implementing major system change in healthcare settings
Major system change in healthcare is seen as having the potential to increase the provision of evidence-based care and improve clinical outcomes. It therefore represents an important area for implementation research. Major system change involves reorganisation of services (sometimes termed ‘reconfiguration’) at regional level, and may include significant alterations to a care pathway. One such change is service centralisation, whereby service provision across a given region is concentrated in a reduced number of hospitals [17–22]. It may involve many stakeholders across multiple organisations, and—when implemented successfully—is hypothesised to optimise the balance between quality of care, access, workforce capacity, and cost. The impact of centralisation on outcomes has been demonstrated in several specialist healthcare settings, including trauma [23–25], cardiac surgery, neonatal intensive care, and acute stroke care [28, 29]. However, evidence on how changes of this scale are implemented, and the relationship between implementation approaches and the impact of changes on quality of care and costs, remains limited. For example, a review of the evidence of ‘successful’ and ‘less successful’ major system changes in healthcare settings defined ‘success’ in relation to implementation outcomes rather than intervention outcomes.
Developing a framework to analyse major system change
The decision to change, e.g. the drivers for change, governance, and leadership of the decision-making process (component 1 (C1), Fig. 1), may influence the nature of the model (i.e. the intervention) that is implemented (C2) [8, 10]. Through processes of adaptation, both contextual factors (e.g. managerial capacity to lead change) and the model selected (e.g. the scale of change required) may influence the implementation approaches used (e.g. the degree to which local staff may require hands-on support in managing change) (C3) [1, 9]. Through both its complexity and its compatibility with the context of its introduction, the model selected may also influence implementation outcomes, in terms of uptake and fidelity [4, 7]. The model may influence intervention outcomes directly, though it is important to consider the extent to which the effects of the model are mediated through the process of implementation. Implementation approaches, such as how change is facilitated and local staff are supported (C3), have the potential to influence implementation outcomes (C4) [1, 8]. Implementation outcomes (C4) are likely to influence overall intervention outcomes, including provision of evidence-based care, clinical outcomes, patient and carer experience, and cost-effectiveness (C5). Finally, assessment of implementation outcomes may prompt a decision to change again and implement amended or alternative models. The relationships between these components are unlikely to be linear; some (e.g. C1–C3) may occur simultaneously, and some components may be bypassed, e.g. model characteristics (C2) may influence implementation outcomes (C4) directly.
Major system change in Manchester and London acute stroke services
In each region, a small number of hyperacute stroke units (HASUs) were designated to deliver these evidence-based care processes. In addition, in London, 24 stroke units (SUs) were designated to provide acute rehabilitation to patients until they were ready to return to the community. In Manchester, 10 district stroke centres (DSCs) were designated to provide all aspects of acute stroke care required beyond the first 4 h. Referral pathways differed in terms of ‘inclusivity’; whereas all patients in London were eligible for treatment in a HASU (the ‘24 h pathway’), in Manchester only patients arriving at hospital within 4 h of symptoms developing (in order to facilitate administration of thrombolysis) were eligible, with patients presenting later transferred to their nearest DSC (the ‘4 h pathway’). Further, while stroke services in five hospitals were closed in London as part of the changes, no services closed in Manchester [15, 32]. These significant differences in the type of models implemented in the two regions reflect the limited evidence at the time on optimal service models for providing evidence-based care. Stroke clinical networks (hereafter referred to as ‘networks’) played an important role in the changes. Networks were set up following the national stroke strategy, and brought together representatives of all relevant stakeholder groups under a central leadership team, in order to ‘review and organise delivery of stroke services across the care pathway’.
In this paper, we present a comparative study of these two major system changes, examining the relationships between implementation approaches employed and the implementation outcomes (C3 and C4, Fig. 3). We address how implementation approaches and implementation outcomes were influenced by differences in the model selected (C2, Fig. 3) and how they influenced the differing intervention outcomes (C5, Fig. 3). Through this analysis, we will contribute to understanding of implementation of major system change in terms of the relationships between the models selected and implementation approaches applied, and how these each may influence implementation outcomes and intervention outcomes (C3–C5, Fig. 3).
The changes took place in Manchester and London (populations 2.68 and 8.17 million, respectively). Implementation took place in Manchester between December 2008 and April 2010, and in London between October 2009 and July 2010.
Summary of data analysed
Project plans, consultation documents, impact assessments, external reviews, designation criteria, service protocols, meeting minutes
National level interviews
Politicians; clinical leaders with national remit
Pan-regional level interviews (governance level)
Planners and leaders of changes, including programme managers, committee chairs, commissioners, system managers, network representatives, and patient organisations
Service level interviews
Clinicians, service managers, and senior managers:
Manchester: 24/7 HASU
Manchester: in-hours HASU
Manchester: post-4 h DSC
London: HASU, North London, high score
London: HASU, South London, low score
London: SU, North London
London: SU, South London
London: decommissioned service
Service level total
We combined analyses of semi-structured stakeholder interviews and documents. We conducted 125 semi-structured interviews with stakeholders at governance (N = 45) and service (N = 80) levels (Table 1) over the period April 2012 to December 2013.
Interviews at governance level covered background to the centralisations (including drivers for change); governance; developing the proposal for change; agreeing the model; implementing changes; impact of centralisation; and reflections on the changes (see Additional file 1). Interviews at service level covered background to changes; processes of service development; impact of centralisation; and reflections on changes (see Additional file 2). In addition, 653 documents were collected from governance and service levels (Table 1).
Participant recruitment and data collection
Potential interviewees were contacted via e-mail or telephone. Interviews were conducted only with fully informed, written consent and were audio-recorded and professionally transcribed. All documents analysed were either in the public domain, or obtained from local change leaders and service leads.
We compared the London and Manchester changes in terms of the implementation approaches employed and the implementation outcomes. Findings were considered in relation to our previously published findings, i.e. the different models implemented, and their differing impact on intervention outcomes (likelihood of patients receiving evidence-based care, and patient mortality).
Data analysis from interviews and documents combined inductive and deductive approaches, as themes were drawn from our framework (Fig. 1) and emerged from the empirical data. Documents were analysed to identify various aspects of the changes, including drivers, key events and activities, and overarching chronology. Interviews were analysed to draw out similar information, and to understand why and in what ways aspects of implementation were influential, in order to compare the two regions. Analysis took place in three phases, building on the narrative summaries and timelines of the changes developed from documentary analysis used in a previous analysis. In phase one, service-level narrative summaries were developed, using the constant comparative method, from documentary evidence and initial readings of interviews. These were developed separately for the changes in London and Manchester (by AIGR and CP), and covered a number of cross-cutting themes: service-level context; service development processes (including thrombolysis and repatriation protocols, recruiting, and training staff); launching new services; and perceived impact of changes. In phase two, we used the overall timelines and summaries and service-level summaries to identify key tasks in implementing the models in each region, and contrasts in how these tasks were accomplished. In phase three, a subgroup of the authors (CP, AIGR, SM, and NJF) applied the framework (Fig. 1) to a cross-region analysis that sought to test explanations of the differing implementation outcomes identified in previously published quantitative analyses. This phase drew on further thematic analysis of interview and documentary data to identify factors influencing the contrasting implementation approaches, and how the approaches may have influenced the resultant implementation outcomes.
To enhance reliability, emerging findings from each phase were shared and discussed regularly with other co-authors until agreement was reached. To enhance validity, an interim version of this analysis was shared with people who had been involved in the planning and implementation of the changes in London and Manchester (some of whom we had interviewed for this study).
This study received ethical approval in September 2011 from the London East NHS Research Ethics Committee (Ref 11/LO/1396).
Factors influencing implementation approaches
Implementation approaches differed across the two regions according to the degree to which implementation was phased, the degree to which implementation was linked to standards set out in service specifications and financial incentives, and the degree to which networks provided hands-on facilitation (Fig. 4, C3).
Degree to which implementation was phased (‘big bang’ vs ‘phased’)
“if we started having north east London going off in one direction about some particular aspects of care and south east doing something a bit different then you very quickly lose the coherence” (stroke physician, London).
The launch was postponed by several weeks, to ensure all services developed adequate capacity to launch simultaneously, meaning all potential stroke patients in London could be taken to HASUs (London Network Board minutes, January–July 2010).
“The Chair closed the discussion stating that the timing of the opening of HASUs needs to be agreed with London Ambulance Service.” (Minutes, Extraordinary Stroke Project Board meeting, June 2009).
“…you could become completely overwhelmed and the whole thing might just collapse” (stroke physician, Manchester).
Use of service specifications and financial incentives
In both regions, service specifications were developed by local clinicians, and defined appropriate staffing, infrastructure, education, training, and audit processes. However, the London specifications quantified in greater detail how these services should be delivered (e.g. by identifying the number of specialist nursing and therapy staff required at different times of the day). While standards were used in service selection processes in both regions, only in London did the launch of services depend on these standards being achieved, assessed through a formal accreditation process.
In London, standards were linked to financial incentives, whereby receipt of enhanced funding for stroke services (the ‘stroke tariff’) was conditional on meeting these standards. Following the launch, services were required to meet additional standards reflecting further service developments (achieving a locally defined ‘gold standard’); subsequently, services were reviewed on an annual basis to assess whether standards continued to be achieved (if not, provisions were in place for the tariff to be ‘clawed back’). This approach gave change leaders in London a degree of control and assurance that services were likely to provide evidence-based care.
In Manchester, while commissioners endorsed the changes, payment was not associated with meeting standards on the basis that this might be seen as punitive and inconsistent with the collaborative approach employed. This meant that the new services could be launched whether or not standards had been met.
Degree of hands-on facilitation by networks
“so much learning came out of it through this […] informing how the model should look and the paperwork, the communication protocols, the Standard Operating Procedures between, you know, it was all very emergent” (network representative, Manchester).
“The Programme Board was quite unrelenting really about, ‘these are the targets, we’ve got to hit them’” (network representative, London).
“We were there to remind them of what they had signed up to, to remind them of what they had committed to do and to remind them of the quality standards that they needed to meet but always in a supportive manner” (network representative, London).
This approach was driven by the tight timeline for a single launch date, linked to achieving service standards: this justified the network providing staff to carry out the intense facilitation approach.
“I don’t know whether it was an unwritten principle, it probably wasn’t a written principle but actually what we do is hold consensus and try and deliver this through unanimity” (commissioner representative, Manchester).
Factors influencing implementation outcomes
As previously established (Fig. 3), there were differences, firstly, in implementation outcomes, i.e. greater fidelity to the referral pathway in London than in Manchester, and secondly, in intervention outcomes, i.e. greater likelihood of providing evidence-based care in London than in Manchester (with provision equally high in London and Manchester HASUs, but lower in Manchester DSCs). We first discuss factors influencing fidelity to the referral pathway (model complexity, ‘big bang’ vs phased implementation, and degree of ‘hands-on’ facilitation by networks). Second, we discuss factors influencing service development (use of service specifications and ‘hands-on’ facilitation). As set out in the preceding section, many of the factors influencing implementation outcomes related to implementation approaches, including the degree to which implementation was phased, use of standards and financial incentives, and degree of hands-on facilitation (Fig. 4, C4).
Fidelity to referral pathway
Fidelity to the referral pathway was influenced strongly by how consistently it was understood by healthcare staff. Understanding of the referral pathway was influenced by the complexity of the models (number of decisions relating to patient transfer), and the number of phases in which these models were implemented.
Influence of model complexity on fidelity to referral pathway
“we cannot give crews fragmented messages, you can’t say that you can get this type of care between 8 and 5 Monday to Friday but not on the second Wednesday of the month because there’s a meeting, crews don’t work that way” (ambulance service, London).
“We need to have a definite time of onset […] or the time when they were last seen well, and if that time exceeds the four hours then we won't be taking them to the Hyper Acute Stroke Unit.” (ambulance service, Manchester).
“Time’s always a challenge: between that time and that time they’ll go there, all the rest of the time they’ll go somewhere else. And that’s… that’s never, never easy to communicate or for people to remember” (ambulance service, Manchester).
“I don’t understand who’s supposed to be going here and who’s supposed to be going there, and if I don’t, I bet other people don’t know.” (stroke physician, Manchester).
Influence of ‘big bang’ vs phased implementation on fidelity to the referral pathway
“The one thing that we really did push for was a ‘go live’ date, not a ‘go live’ date in one area and another in other areas” (ambulance service, London).
“If you phase it, it does create a degree of confusion. Because you start off with something, and then you change it, and then you change it, and then you may change it again” (ambulance service, Manchester).
Influence of ‘hands-on’ facilitation by networks on fidelity to referral pathway
“It’s not just the people on the road that need to understand that, it’s people in the control room as well, so they’re familiar. […] So there’s the protocol and then there’s the training to support that” (ambulance service, London).
“Somebody from the Stroke Network came to speak about ‘…we are not meant to be treating any stroke,’ […] So if you are here and you develop a stroke, your thing is to get you to [local HASU], rather than as I said, ‘We’re going to go and scan you first’. […] As a consequence of that, they’ve all gone…” (senior management, decommissioned service, London).
In Manchester, audit data indicated that a significant proportion of patients eligible for treatment in a HASU were not being treated in one, reflecting concerns raised by clinical leads in oversight meetings (meeting minutes, 2009–2010). At the time of the Manchester 12-month review of the centralised system, it was noted that the network was working with both hospital and ambulance staff to corroborate data and identify potential solutions.
Service development processes
Reflecting the extent to which implementation was actively managed overall, service development in London and Manchester was influenced both by the degree to which service specifications and financial incentives were used, and the degree to which facilitation of service development was ‘hands-on’.
Influence of service specifications and financial incentives on service development
“In some respect in terms of staff and sort of thing, it was taken out of our hands because the standards just lay it down, this is what you need for X number of beds” (HASU physiotherapist, London).
“Having to meet all these standards for assessment, it’s been a real driver for change and improvement. I think the reconfiguration has provided a stick for hospital management to invest in stroke services” (SU stroke physician, London).
In Manchester, standards were not linked to financial incentives, nor used as a criterion for the launch of services; this may have contributed to DSCs not providing the planned level of evidence-based care.
Influence of ‘hands-on’ facilitation by networks on service development
“When we had problems, they [the network] wanted us to call them and say, ‘You know what, we’re a bit stuck here, what can you do to help?’ […] ‘is there experience you have from another site that might be helpful?’. I think we developed a very good relationship with them, and that was obviously key to, you know, opening the HASU” (HASU service manager, London).
Further, the ‘hands-on’ approach to facilitation influenced the timing of London’s ‘big bang’ launch. For example, it was only through this ongoing local engagement—and responsiveness to progress that was being reported—that the initial timescale for a coordinated launch was altered.
“I heard that they [London] have £2.50 spent for every £1 spent in Manchester. As I say I don’t know if that’s accurate but it would seem that the financial thing wasn’t as such a consideration in London […] but it was a factor in Manchester” (network representative, Manchester).
Understanding outcomes of major system change
In this section, we bring together the current findings with those from previous analyses to illustrate how components of major system change (Fig. 1) contributed to the significantly different outcomes associated with the changes to acute stroke services in Manchester and London. These relationships are summarised in Fig. 4, and described below.
The changes in London and Manchester appeared to be influenced significantly by the degree to which change leaders ‘held the line’ on the models to be implemented (Fig. 4, C1 and C2). The models implemented and implementation approaches employed played an important role in the implementation outcomes observed.
London’s inclusive 24 h model (i.e. all suspected stroke patients were eligible for HASU), requiring relatively few referral decisions to be made, increased the likelihood of staff following the referral pathway. In contrast, Manchester’s 4 h model was significantly more selective (limiting the number of patients who were transferred to HASU) and complex, increasing uncertainty amongst staff about where suspected stroke patients were to be treated (C2). Further, these models were implemented differently, reflecting a contrast in the degree to which implementation was actively managed in the two regions (C3). London adopted a ‘big bang’ approach; the new system was launched on a single date, increasing the likelihood of the referral pathway being followed. This launch was dependent on services being accredited against standards linked to financial incentives, increasing the likelihood of services providing evidence-based care. Significant ‘hands-on’ facilitation was provided by the London network to ensure that services met the required standards. In Manchester, services were launched in multiple phases, limiting confidence in the referral pathway. Service specifications were linked neither to service launch nor to financial incentives; this may in part have limited the development of DSCs.
Implementation outcomes (C4) had a significant influence on intervention outcomes (C5). Almost all London patients were treated in a HASU, and all HASUs were likely to provide evidence-based care; this meant London patients were overall more likely to receive evidence-based care, and in turn had a larger reduction in mortality than patients in Manchester and elsewhere in England. In contrast, Manchester patients were far less likely to be treated in a HASU, with two-thirds treated in DSCs, which were significantly less likely to provide evidence-based care; as a result, Manchester patients’ likelihood of receiving evidence-based care, and associated mortality, did not differ significantly from elsewhere in England [28, 29]. The 12-month review in Manchester noted national audit data indicating that DSCs were providing evidence-based care less frequently than HASUs. Based on this information, and discussion with an external advisory group, it was agreed that further centralisation of acute stroke services should be explored.
This paper examines the complex, non-linear relationships between type of model selected, implementation approaches, implementation outcomes, and intervention outcomes. By analysing centralisation of acute stroke care in two regions, distinguishing between implementation outcomes and intervention outcomes (following Proctor), we make a significant contribution to understanding of major system change, specifically in terms of the factors influencing outcomes. We have demonstrated a number of inter-related factors potentially influencing such outcomes (as detailed in our previous research [28, 29]). Certain characteristics of intervention and implementation approaches are associated with more positive implementation outcomes and intervention outcomes, and many of these reflect existing implementation and diffusion theories, described below [1, 2, 4, 6–11].
In terms of intervention characteristics, we found that in this case ‘simpler’ and more inclusive referral pathways (such as London’s 24 h model) were more likely to be understood and followed by both hospital and ambulance staff. This effect might reflect such established concepts as ‘feasibility’ [4, 11], ‘compatibility’, and ‘complexity’ [7, 10], whereby an intervention is more likely to be adopted if it is readily incorporated into existing or standard activities.
The concept of ‘execution’, i.e. where implementation is achieved as intended, was highly relevant to this analysis. In terms of timeliness of implementation, the advantages of a ‘big bang’ launch and associated planning, and the disadvantages of phased implementation, were clear: a single launch date gave clear understanding across all stakeholders when implementation was complete. This finding may seem counter-intuitive, given previous research indicating risks related to ‘big bang’ implementation. However, in changes like these, where service models have multiple interdependent components, a ‘big bang’ approach appears to be beneficial, and represents an example of ‘adaptation’ [1, 9], where an implementation approach is selected to reflect the scale of the task and the complexity of the new system. Further research is required to establish the extent to which this finding applies to similar changes in different healthcare settings. Also associated with ‘execution’ was the use of standards: linking the launch of the new model to achieving standards appeared to increase the likelihood of there being uniform capacity to provide evidence-based care, but also gave a shared understanding of what had to be delivered in all services. By associating achievement of standards with financial incentives, the London changes reflected the hypothesised benefits of altering remuneration to encourage adoption of the intervention. The contribution of ‘hands-on’ facilitation, for example by external change agents, to implementation outcomes, is acknowledged by various implementation frameworks [1, 10].
These differences in relation to ‘big bang’ vs phased launch, use of standards, and hands-on facilitation reflect an underlying contrast in the implementation approaches adopted in our studied regions, with implementation in London facilitated significantly more actively than in Manchester.
This paper has a number of limitations. First, because this research was retrospective in nature, interviewees were looking back on the changes, with some awareness (e.g. by monitoring national audit data) of the degree to which changes implemented had succeeded in influencing provision of evidence-based care. This may have affected the way in which they articulated their views of the implementation of the changes. Future research on changes of this kind would benefit from being carried out contemporaneously with the changes, ideally from pre-implementation stage, and extending over a sufficient time period to allow formal evaluation of impact on intervention outcomes, such as patient mortality. Second, while we believe our data indicate that the identified implementation strategies played a significant role in the implementation outcomes observed, the relative contribution of each component cannot be established. Third, we sampled only a proportion of services in this study, and other factors may have been important to implementation in other services in the reconfigured systems. However, we believe that by conducting interviews at pan-regional level, and sharing findings with local stakeholders, our findings provide a strong representation of implementation in both regions. Finally, this is a study of major system change in one particular domain of acute care. Studies of major system change in other acute (and non-acute) care settings would be of value to aid identification of potentially generalisable lessons.
This paper used a framework drawing on key features of change identified in existing implementation theory to analyse two examples of major system change in acute stroke care. We found that model selection (‘simplicity’ and inclusivity) and implementation approach (single launch date, prioritisation of standards and financial incentives, and hands-on facilitation) made significant contributions to the implementation outcomes observed and, in turn, to intervention outcomes [28, 29].
We believe this paper demonstrates the value of considering the interdependencies between intervention, implementation approach, and outcomes when planning and evaluating major system change. However, the particular relationships identified in this analysis may vary according to the nature of the change being implemented. The framework described in this paper is likely to be strengthened through further use in evaluating major system changes conducted in other healthcare settings.
This paper presents independent research funded by the National Institute for Health Research Health Services and Delivery Research Programme (NIHR HS&DR) (Study Reference 10/1009/09). SJT, SM, and NJF were partly supported by the NIHR Collaboration for Leadership in Applied Health Research and Care (CLAHRC) North Thames at Barts Health NHS Trust. CDAW was partly supported by the NIHR Biomedical Research Centre (BRC) at Guy’s and St Thomas’ NHS Foundation Trust and King’s College London, and by the NIHR CLAHRC South London. The study was granted ethical approval by the London East NHS research ethics committee (Reference 11/LO/1396). The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health. We thank Andrew Wilshere for commenting on and proofreading earlier versions of this manuscript, and the two reviewers for their feedback. Finally, we wish to thank all those who gave their time to participate in this research.
All authors contributed to the study design. NJF conceived the analytical approach on which this manuscript is based. AIGR, CP, and SJT conducted the interviews. NJF, AIGR, and CP conducted the analysis and, with SM, led the interpretation of that analysis. NJF and AIGR drafted the manuscript. All authors made critical revisions for important intellectual content, and approved the final manuscript. All authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the article are appropriately investigated and resolved.
Professor Tony Rudd is the Clinical Director for Stroke in London and the National Clinical Director for Stroke, NHS England. Professor Pippa Tyrrell was the Stroke Clinical Lead for Greater Manchester until 2013, and led the Greater Manchester stroke service centralisation from 2007 to 2013. All other authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
- Kitson AL, Rycroft-Malone J, Harvey G, McCormack B, Seers K, Titchen A. Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges. Implement Sci. 2008;3:1.
- Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.
- Pronovost P, Goeschel C, Marsteller J, Sexton J, Pham J, Berenholtz S. Framework for patient safety research and improvement. Circulation. 2009;119:330.
- Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38:65–76.
- Stetler CB, Legro MW, Wallace CM, Bowman C, Guihan M, Hagedorn H, et al. The role of formative evaluation in implementation research and the QUERI experience. J Gen Intern Med. 2006;21:S1–8.
- Mendel P, Meredith LS, Schoenbaum M, Sherbourne CD, Wells KB. Interventions in organizational and community context: a framework for building evidence on dissemination and implementation in health services research. Adm Policy Ment Health. 2008;35:21–37.
- Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82:581–629. doi:10.1111/j.0887-378X.2004.00325.x.
- Best A, Greenhalgh T, Lewis S, Saul J, Carroll S, Bitz J. Large-system transformation in health care: a realist review. Milbank Q. 2012;90:421.
- Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26:13–24.
- Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
- May C. Towards a general theory of implementation. Implement Sci. 2013;8:18.
- Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43:337–50.
- Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf. 2015;24. doi:10.1136/bmjqs-2014-003627.
- Imison C, Sonola L, Honeyman M, Ross S. The reconfiguration of clinical services in the NHS: what is the evidence? London: King's Fund; 2014.
- Fulop N, Boaden R, Hunter R, McKevitt C, Morris S, Pursani N, et al. Innovations in major system reconfiguration in England: a study of the effectiveness, acceptability and processes of implementation of two models of stroke care. Implement Sci. 2013;8:5. doi:10.1186/1748-5908-8-5.
- Fulop N, Walters R, Perri 6, Spurgeon P. Implementing changes to hospital services: factors influencing the process and ‘results’ of reconfiguration. Health Policy. 2012;104:128–35.
- Prabhakaran S, O’Neill K, Stein-Spencer L, Walter J, Alberts MJ. Prehospital triage to primary stroke centers and rate of stroke thrombolysis. JAMA Neurol. 2013;70:1126–32.
- Smith EE, Dreyer P, Prvu-Bettger J, Abdullah AR, Palmeri G, Goyette L, et al. Stroke center designation can be achieved by small hospitals: the Massachusetts experience. Crit Pathw Cardiol. 2008;7:173–7.
- Weir N, Buchan A. A study of the workload and effectiveness of a comprehensive acute stroke service. J Neurol Neurosurg Psychiatry. 2005;76:863–5.
- Lahr MM, Luijckx G-J, Vroomen PC, van der Zee D-J, Buskens E. Proportion of patients treated with thrombolysis in a centralized versus a decentralized acute stroke care setting. Stroke. 2012;43:1336–40.
- Bruins Slot K, Murray V, Boysen G, Berge E. Thrombolytic treatment for stroke in the Scandinavian countries. Acta Neurol Scand. 2009;120:270–6.
- Cadilhac DA, Purvis T, Kilkenny MF, Longworth M, Mohr K, Pollack M, et al. Evaluation of rural stroke services: does implementation of coordinators and pathways improve care in rural hospitals? Stroke. 2013;44:2848–53.
- Sampalis JS, Denis R, Lavoie A, Frechette P, Boukas S, Nikolis A, et al. Trauma care regionalization: a process-outcome evaluation. J Trauma Acute Care Surg. 1999;46:565–81.
- Mullins RJ, Mann NC. Population-based research assessing the effectiveness of trauma systems. J Trauma Acute Care Surg. 1999;47:S59–66.
- MacKenzie EJ, Rivara FP, Jurkovich GJ, Nathens AB, Frey KP, Egleston BL, et al. A national evaluation of the effect of trauma-center care on mortality. N Engl J Med. 2006;354:366–78.
- Grumbach K, Anderson GM, Luft HS, Roos LL, Brook R. Regionalization of cardiac surgery in the United States and Canada: geographic access, choice, and outcomes. JAMA. 1995;274:1282–8.
- Shah V, Warre R, Lee SK. Quality improvement initiatives in neonatal intensive care unit networks: achievements and challenges. Acad Pediatr. 2013;13:S75–83.
- Morris S, Hunter RM, Ramsay AIG, Boaden R, McKevitt C, Perry C, et al. Impact of centralising acute stroke services in English metropolitan areas on mortality and length of hospital stay: difference-in-differences analysis. BMJ. 2014;349:g4757.
- Ramsay AIG, Morris S, Hoffman A, Hunter RM, Boaden R, McKevitt C, et al. Effects of centralizing acute stroke services on stroke care provision in two large metropolitan areas in England. Stroke. 2015;46:2244–51. doi:10.1161/STROKEAHA.115.009723.
- National Institute for Health and Clinical Excellence. Stroke: diagnosis and initial management of acute stroke and transient ischaemic attack (TIA). London: NICE; 2008.
- Intercollegiate Stroke Working Party. National clinical guideline for stroke. 4th ed. London: Royal College of Physicians; 2012.
- Turner S, Ramsay AI, Perry C, Boaden RJ, McKevitt C, Morris S, et al. Lessons for major system change: centralisation of stroke services in two metropolitan areas of England. J Health Serv Res Policy. 2016. doi:10.1177/1355819615626189.
- Department of Health. National stroke strategy. London: Crown; 2007.
- Foy R, Sales A, Wensing M, Aarons GA, Flottorp S, Kent B, et al. Implementation science: a reappraisal of our journal mission and scope. Implement Sci. 2015;10:51.
- Office for National Statistics. 2011 Census: usual resident population, local authorities in England and Wales [accessed 27 March 2014]. Available from: http://www.ons.gov.uk/ons/rel/census/2011-census/key-statistics-for-local-authorities-in-england-and-wales/rft-table-ks101ew.xls.
- Bradley EH, Curry LA, Devers KJ. Qualitative data analysis for health services research: developing taxonomy, themes, and theory. Health Serv Res. 2007;42:1758–72.
- Mays N, Pope C. Qualitative research: rigour and qualitative research. BMJ. 1995;311:109.
- Greater Manchester and Cheshire Stroke Network Support Team. Development of stroke services in Greater Manchester: twelve month review. Manchester: Greater Manchester and Cheshire Cardiac and Stroke Network; 2011.
- McNulty T, Ferlie E. Process transformation: limitations to radical organizational change within public service organizations. Organ Stud. 2004;25:1389–412.