Determinants of successful clinical networks: the conceptual framework and study protocol
© Haines et al; licensee BioMed Central Ltd. 2012
Received: 20 November 2011
Accepted: 13 March 2012
Published: 13 March 2012
Clinical networks are increasingly being viewed as an important strategy for increasing evidence-based practice and improving models of care, but success is variable and characteristics of networks with high impact are uncertain. This study takes advantage of the variability in the functioning and outcomes of networks supported by the Australian New South Wales (NSW) Agency for Clinical Innovation's non-mandatory model of clinical networks to investigate the factors that contribute to the success of clinical networks.
The objective of this retrospective study is to examine the association between external support, organisational and program factors, and indicators of success among 19 clinical networks over a three-year period (2006-2008). The outcomes (health impact, system impact, programs implemented, engagement, user perception, and financial leverage) and explanatory factors will be collected using a web-based survey, interviews, and record review. An independent expert panel will provide judgements about the impact or extent of each network's initiatives on health and system impacts. The ratings of the expert panel will be the outcome used in multivariable analyses. Following the rating of network success, a qualitative study will be conducted to provide a more in-depth examination of the most successful networks.
This is the first study to combine quantitative and qualitative methods to examine the factors that contribute to the success of clinical networks and, more generally, is the largest study of clinical networks undertaken. The adaptation of expert panel methods to rate the impacts of networks is the methodological innovation of this study. The proposed project will identify the conditions that should be established or encouraged by agencies developing clinical networks and will be of immediate use in forming strategies and programs to maximise the effectiveness of such networks.
The role of clinical networks in improving evidence-based practice
It is widely accepted that patients who receive evidence-based care achieve better outcomes. However, despite the growth of rigorous, clinically relevant research, uptake of such evidence into practice remains slow, haphazard, or absent [1, 2].
Clinical networks are increasingly viewed as an important strategy for increasing evidence-based practice and improving models of care. It is argued that clinical networks provide 'bottom up' views on the best ways to tackle complex healthcare problems and can facilitate or champion changes in practice at the clinical interface [3, 4]. Most clinical networks are established to improve the quality of and access to care for patients, including those who require care across a range of care settings. The term clinical network has been used to describe many variants, ranging from fully integrated service delivery systems to informal communities of practice. In this study, we define clinical networks as voluntary clinician groupings that aim to improve clinical care and service delivery using a collegial approach to identify and implement a range of strategies.
Clinical networks in New South Wales, Australia--focus of the study
The New South Wales Agency for Clinical Innovation clinical network model
Description of Network Model
The goals of the networks are to improve health services and health outcomes by developing services based on clinical need, improving the quality of care and safety for patients, increasing equity of access and equity of outcomes within the hospital system, and enabling clinician-and consumer-driven planning.
The clinical networks are composed of volunteer health professionals from a range of clinical areas who disseminate knowledge on evidence-based care.
The networks are free to select those issues that they believe will be effective in improving care. Each network is chaired by clinicians, has a Network Manager employed by the Agency, and implements activities in association with the State Health Department and the Area Health Services.
Inputs to support networks
To support the networks, the Agency provides:
• funds to employ a Network Manager and approximately AUD$30,000 for small projects and operations;
• training and support for the Network Managers;
• funds for larger-scale projects on a competitive basis of approximately AUD$100,000 (per project);
• accommodation and office facilities for the Network Managers, most of whom are located together in the main Agency office;
• bimonthly meetings for Network Managers to report on activities, discuss common problems, and share ideas and potential solutions;
• monitoring of progress and feedback by close involvement in network activities, ongoing supervision of managers, and overview of annual reports from each network that address their activities against their annual plans;
• profiling of the work of the networks through formal annual reports.
The evidence gap: What makes clinical networks successful?
Some clinical networks are more effective than others. Clinical networks can engage clinicians in service redesign and reform [6, 7], develop and implement protocols [6, 7], develop and implement guidelines [8–10], facilitate knowledge sharing, and design and implement quality-improvement programs that result in improved quality of care in hospitals [7, 8, 10]. However, other research has reported that clinical networks have not had an impact. In studies evaluating more than one network, varying success between networks has been reported, and some networks have been unable to sustain improvements after the funding cycle ended.
Much of the research into clinical networks focuses on describing the establishment and activities of single networks. Few studies have aimed to identify the critical factors that determine a network's effectiveness. A recent Swedish qualitative study compared three successful clinical networks with three networks that did not develop successfully and identified three major determinants of success: professional dedication, legitimacy, and confidence. However, this study examined only a small number of networks, provided limited information regarding study design and methods, and did not quantify the strength of any observed association.
Given their widespread implementation and data indicating variable success, there is considerable interest in understanding how clinical networks can best be established and supported to maximise their impact on patient care and service delivery.
The study takes advantage of a unique opportunity provided by the Agency's non-mandatory model of clinical networks to investigate the factors that contribute to the success of clinical networks. Multiple coexisting networks, such as those operating under the Agency, provide an opportunity to holistically examine the range of factors that affect the success of clinical networks.
Research objective and hypotheses
The objective of this study is to investigate the external support, organisational, and program factors associated with successful clinical networks. Based on our conceptual model described below, success is defined as follows:
Healthcare impact: The extent to which there is evidence of impact on healthcare and patient outcomes.
System impact: The extent to which there is evidence of impact on system-wide change.
Programs: The number of quality-improvement initiatives undertaken and the quality of their design.
Engagement: The extent of engagement by network members in network activities.
User perception: The extent to which stakeholders perceive the networks as effective and valuable.
Financial leverage: The value of any additional resources leveraged.
We hypothesise that successful networks are characterised by:
A high level of external support from area health service and hospital management.
Effective organisation, specifically strong clinical leadership and efficient internal management.
Well-designed quality-improvement programs, specifically those that are based on an analysis of the problem, have a specific targeted structural or behavioural change, have an explicit implementation plan, and monitor impact.
Given the heterogeneity of clinical disciplines and health conditions that clinical networks focus on, multiple metrics of network success are required. For example, disease-free survival, readmission rates, or mortality rates will vary in applicability across networks. A key component of our approach was to develop a defensible suite of outcomes for judging the success of clinical networks that is justifiable to scientific, clinical, and policy communities.
The outcomes have the potential to influence each other and, in many ways, could be interdependent. For this model, the outcomes have been grouped into 'end outcomes', which are longer-term indicators of success, and 'intermediate outcomes', which may function as indicators of success in their own right or as intermediary steps towards the end outcomes. The explanatory factors underpinning the model could be relevant at different stages along this pathway, potentially influencing the different outcomes. These factors could function to 'enable' the outcomes and, as such, could be included in the model as contributing to any of the outcomes.
Following the rating of network success, a qualitative study will be conducted, complementing the quantitative study. This will assist with interpretation of the results by providing more in-depth examination of factors that contributed to the successful networks. We will purposely select up to three networks to focus on in more detail. In-depth interviews with key informants associated with those networks will explore the reasons for the associations we may find between explanatory factors and outcomes.
Outcome indicators (see Additional file 1)
Evidence of impact on healthcare and patient outcomes: This study requires a standard approach to measuring changes in quality of care and patient outcomes, taking into consideration that the networks have developed different initiatives focused on a wide range of conditions dealt with by different health services. Using the definition in Additional file 1, secondary evidence of each network's impact on healthcare and patient outcomes will be collected through interviews with network leaders and managers. This evidence will then be submitted to the expert panel for rating of the extent of impact on quality of care and patient outcomes for each network.
Evidence of impact on system-wide change: The Institute for Healthcare Improvement in the United States has advocated for the wider adoption of network initiatives throughout the health system as a measure of network success [17–19], and this has been used to assess health system performance [20, 21]. The secondary evidence of each network's impact on system-wide change will be collected through interviews with Network Chairs and Managers. An expert panel will rate the extent of impact on system-wide change for each network based on this secondary evidence.
Developed and implemented quality-improvement initiatives: A census of network activities will be identified through a review of minutes of network meetings, annual plans, and other relevant existing documents. Details of quality-improvement initiatives will be corroborated through interview and by sighting secondary supporting evidence.
Engagement of multidisciplinary clinicians: Describing and classifying members of clinical networks has often been used to judge the success of clinical networks [6, 13, 23], along with tracking attendance and membership [11, 24], and assessing perceptions of engagement through in-depth interviews and focus groups [6, 9, 13, 26, 27]. We will use both record review and surveys to assess the extent and depth of engagement of network members across multiple dimensions.
Perceived as valuable: Previous studies of clinical networks have assessed perceptions of the value and effectiveness of networks using semi-structured interviews [6, 11] and focus groups with patients, health service personnel, and clinicians. It is not possible to use these existing questions verbatim because they are focused on specific networks, but we will use them as a guide and draw upon key words and themes from the qualitative study to design the survey assessing perceived value.
Leveraged additional resources: Resources obtained for network activities from sources other than the Agency will be extracted from financial records using audit methods.
Explanatory factor indicators (see Additional file 2)
External support: Clinical networks operate within a complex political, cultural, and organisational context. Although Turrini and colleagues identified community cohesion, local support, and participation as critical factors in the success of networks, few studies have considered the external context in which networks operate [12, 13]. We developed questions for our web survey based upon keywords and themes from the qualitative study, which strongly supported assessing relationships with health agencies as a determinant of network success. Perceptions of network members about aspects of the external context defined in Additional file 2 will be assessed through the web survey.
Perceived leadership: A growing body of research evidence supports the influence of leadership on the success of networks [6, 9, 22, 30]. Using a web survey of network members, we will assess the strength and quality of the leadership of the network across six key aspects derived from previous literature (see Additional file 2).
Internal management: Models of effective healthcare organisations emphasise the importance of efficient internal management [14, 31] and the impact of internal structures (e.g., size, staffing, governance) and processes for facilitating communication and knowledge sharing between network members [9, 11, 32]. Previous studies of clinical networks have predominantly used document review and semi-structured interviews to assess internal management [9, 32, 33]. Aspects of internal management of each network (defined in Additional file 2) will be assessed through record review and perceptions of network members in the web survey.
Well-designed quality-improvement initiatives: Each network will be categorised on how well the quality-improvement initiatives that contributed to their main outcomes were designed in terms of the four criteria detailed in Additional file 2.
Data collection methods and samples
Web survey: The aim of the web survey is to assess network members' perceptions of value, effectiveness, leadership, management, external support, and engagement. The web survey has been developed by building upon appropriate existing measures relating to clinical networks and those in the wider organisational literature (see Additional file 2 and above). In addition, all questions have been tailored to the local context by taking account of the views and vocabulary elicited in qualitative exploratory interviews with key stakeholders who have explicit knowledge of the Agency's networks. A record review (Agency minutes and membership lists) was used to identify a total of 4,280 individuals who participated in the 19 clinical networks of the Agency from January 2006 to December 2008. All 3,316 network members and participants with known email addresses will be contacted and invited to participate in the web survey. The web survey will ask retrospective questions about the attitudes and perceptions of network members and participants during the study period (2006-2008). To aid recall and minimise recall bias, a number of measures will be employed [35, 36], including (a) recall prompts to assist respondents to identify the relevant time period and (b) recall aids that enable respondents to use recognition rather than recall when reporting specific activities and quality-improvement initiatives, since these questions may be easier to answer with reference to specific initiatives.
Interviews: For each network, a single interview will be conducted jointly with the Network Manager and clinical chair (19 interviews in total) to determine the network's impacts. The aim of this interview is to identify the most important impacts on quality of care and system-wide change that resulted from the network's activities between 2006 and 2008. Participants will be asked (a) to identify the most important impacts, (b) to explain the importance of each impact, (c) to describe how their network activities led to that impact, and (d) to provide evidence of the impacts. The interview will not be a qualitative exploration but rather an evidence-gathering activity; it will be recorded as a back-up (not transcribed). The Network Managers will also be required to obtain evidence demonstrating the impacts and how those impacts relate to the network activities. Following this interview and receipt of the evidence, a pro forma detailing the four areas covered in the interview will be completed by the study team for each impact and reviewed by Network Managers and Chairs for accuracy. These pro formas and associated evidence will be passed on to the expert panel for rating.
Expert panel: This study will use an adaptation of the RAND/UCLA (University of California, Los Angeles) appropriateness method [37, 38]. This is a systematic consensus method that has been widely used to derive expert consensus on clinical indications. The traditional use combines expert opinion with a systematic review of the scientific evidence to determine whether a given procedure would be appropriate in specific situations. More recently, the RAND/UCLA approach has been adapted to assess the appropriateness of quality-improvement initiatives and whether they would be likely to improve health outcomes or healthcare quality [39–41]. Lisa Rubenstein and colleagues used the method to establish organisational quality-improvement priorities, focusing on system-based objectives rather than specific issues within patient care. The panel will have a chair and four members with extensive expertise in system-wide clinical care and quality-improvement programs as well as in the expert panel method. The panellists were selected through a nomination and voting process by the study investigators. To ensure the independence of the panel members from the Agency, the study team members from the Agency were not involved in the voting and selection of panellists, and all panellists completed a conflict-of-interest declaration. Based on the evidence provided by the networks on quality of care and system-wide change, the panellists will individually rate the importance of each impact on a nine-point scale. In a moderated meeting, the panel will then discuss any discrepancies in their scores and rerate each impact for a final score of overall success.
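As an illustration, the first-round ratings for a single impact could be summarised as in the sketch below. This is a hypothetical helper, not the study's own scoring software; the `summarise_panel` function and its disagreement threshold (at least two panellists in each extreme tertile of the nine-point scale, an illustrative convention for a small panel) are assumptions.

```python
import statistics

def summarise_panel(ratings):
    """Summarise one impact's first-round ratings from a small expert panel.

    ratings: integers on the RAND/UCLA-style nine-point scale (1-9).
    Returns the median rating and a disagreement flag, here defined
    (illustratively, for a five-member panel) as at least two panellists
    in the bottom tertile (1-3) and at least two in the top tertile (7-9).
    """
    if not all(1 <= r <= 9 for r in ratings):
        raise ValueError("ratings must be on the 1-9 scale")
    median = statistics.median(ratings)
    low = sum(1 for r in ratings if r <= 3)   # bottom-tertile votes
    high = sum(1 for r in ratings if r >= 7)  # top-tertile votes
    return median, (low >= 2 and high >= 2)

# A five-member panel rating one network impact:
print(summarise_panel([8, 8, 7, 2, 1]))  # (7, True): high median, but clear disagreement
```

Impacts flagged for disagreement would be the natural candidates for discussion and rerating at the moderated meeting.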
Document review: The Agency has kept detailed records of all meetings and initiatives and provides regular reports to the NSW Department of Health and to the Minister. These records will be audited using a standardised coding schedule, including free-text annotations, to identify initiatives undertaken and network membership. Additional resources leveraged will be extracted from income and expenditure statements in financial records.
Statistical methods--association between outcomes and exposures
The unit of analysis for this study is the network. Nonparametric Spearman correlation coefficients will be obtained to investigate the association between the various outcomes (to determine if those who score highly on one outcome also score highly on other outcomes), between the various explanatory variables, and between each outcome and each explanatory variable. Other potential factors that may confound the association between the hypothesised explanatory factors and outcomes of clinical networks will be examined. These include months of operation, Network Manager's average full-time equivalent working hours during the study period, turnover of staff, turnover of chairs, budget allocated to network by Agency, and start-up network budget. Multiple linear regression analysis will be undertaken to examine the relationship between all exposures and each of the outcomes. Variables will be included in the regression model if they have a p value of 0.25 or less on univariate analysis, and a stepwise process will be used to include/exclude variables until the final model is determined. Because the number of observations for these models is small (19), it will not be possible to include a large number of variables in each model. Therefore, various models will be generated for each outcome that consider meaningful groups of explanatory variables at a time. A significance level of 5% will be used to assess statistical significance in the final model. If appropriate (i.e., there is variation in precision of the summary measure estimates), we will use the method of Kulathinal to adjust for variation in the measurement errors among networks.
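The first analytic step could be sketched as follows. The variable names (`leadership`, `health_impact`) and the toy data are hypothetical stand-ins for the study's real data, which comprise one value per network (n = 19).

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical toy data: one value per network (n = 19, as in the study).
rng = np.random.default_rng(42)
n = 19
leadership = rng.uniform(1, 5, n)                   # explanatory factor: leadership score (toy)
health_impact = leadership + rng.normal(0, 0.8, n)  # outcome: expert panel rating (toy)

# Nonparametric association between an explanatory factor and an outcome.
rho, p_value = spearmanr(leadership, health_impact)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

With only 19 networks, each correlation (and each small regression model) would be interpreted cautiously, as the protocol notes.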
There have been no previous quantitative studies examining the association between organisational and contextual factors and the effectiveness of clinical networks. We have used data from a recent Australian study examining the association between clinical performance and organisational determinants in 19 healthcare organisations to estimate the likely effect size; in this study, Spearman correlation coefficients for associations of relevance to our study generally range from 0.45 to 0.71. With 19 networks and a 5% significance level, we have 80% power to detect a correlation coefficient as being statistically significant if it is 0.6 or more. Thus, we will have sufficient power to detect moderate to large associations that, given the findings of Braithwaite, are achievable and clinically meaningful.
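The stated power figure can be checked with the standard Fisher z approximation for testing a null correlation of zero; this is an independent back-of-the-envelope check under that approximation, not the study's own calculation.

```python
import math
from scipy.stats import norm

def power_corr_approx(r, n, alpha=0.05):
    """Approximate two-sided power to detect a true correlation r with n pairs,
    using the Fisher z transform (standard error = 1 / sqrt(n - 3))."""
    z = math.atanh(r)                 # Fisher z of the true correlation
    se = 1.0 / math.sqrt(n - 3)
    z_crit = norm.ppf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    return norm.cdf(z / se - z_crit)

# 19 networks, true correlation 0.6, 5% two-sided significance:
print(round(power_corr_approx(0.6, 19), 2))  # ~0.79, i.e. roughly 80% power
```

The approximation reproduces the protocol's figure: with n = 19 the power to detect r = 0.6 is about 0.79.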
The final stage of this study will be a qualitative study to complement the quantitative results by exploring, from the viewpoints of key stakeholders, the network features and processes associated with making an impact on quality of care and system-wide change. The aim is to develop in-depth explanations for network success that can then inform future network development. The successful impacts within the networks will be explored with the aim of gaining insight into the processes that led to each impact. The results of the quantitative analysis will inform (a) the selection of networks and (b) the explanatory factors to be explored qualitatively. The data will be collected through individual, face-to-face semi-structured interviews with key informants involved in the success of each network. A snowballing approach to sampling will be used to locate the key informants, starting with the Network Chair and the Network Manager from 2006 to 2008. The sample will include a mix of those closely connected with the network and those more loosely linked to it, to gain multiple perspectives. The qualitative exploration will involve thematic analysis to identify the main themes that emerge across the accounts of what made an impact. This study will illuminate aspects of the quantitative analysis and drill down into why and how the features identified in the quantitative work contributed to network success.
This project will use the unique opportunity provided by the clinical networks of the NSW Agency for Clinical Innovation to undertake the first quantitative study to examine the factors that contribute to the success of clinical networks and, more generally, the largest study of clinical networks undertaken internationally. The mixed-methods approach combined with the adaptation of expert panel methods to rate the impacts of networks is the methodological innovation of this study.
Challenges inherent in this study relate to difficulties in comparing these very different networks--the 'apples and oranges' problem. We need to rate each network's impact on quality of care and system-wide change, taking into account heterogeneity of impacts. A further challenge in this comparative study will be whether it is possible to adequately take account of other large differences between the networks that may influence the impacts they can achieve, such as the focus and size of their clinical discipline and their stage of operational establishment. Our methods, as outlined in this protocol, will go some way to addressing these challenges, but further validation is likely to be required.
The project is based on a strong working partnership between the research group and the clinical networks. This enables the research to be framed around the real-world operational issues of the networks and for the study to be designed so it is sensitive to the operational constraints of the networks. The research team has expertise in social and behavioural science, economics, clinical epidemiology, biostatistics, clinical care, and evaluation of health service interventions. Furthermore, a number of members are leaders in the implementation of clinical networks (including the CEO, Executive Director, and former Chair of the Agency). With this combination of collaborators, the study will meet scientific standards and will also be used by the Agency when setting policy directions for the networks.
There is an urgent need to understand the factors that increase the likelihood that clinical networks will be effective because they are being widely implemented in Australia and other countries. The proposed project will identify the conditions that should be established or encouraged by agencies developing clinical networks and will be of immediate use in forming strategies and programs to maximise the effectiveness of such networks. The findings will form the basis of strategies to improve less effective networks and to ensure that any new networks are established as well as possible. The outcomes and tools developed as part of this project can be adopted by this Agency and others for ongoing monitoring of impact.
Acknowledgements and funding
As well as the named authors, the other members of the Clinical Networks Research Group are Peter Castaldi (University of Sydney), Deanna Kalucy (Sax Institute), Patrick McElduff (University of Newcastle), Kate Needham (NSW Agency for Clinical Innovation), Carol Pollock (Royal North Shore Hospital), Rob Sanson-Fisher (University of Newcastle), Anthony Scott (University of Melbourne), Hunter Watt (NSW Agency for Clinical Innovation).
This research is supported by the National Health and Medical Research Council of Australia through its partnership project grant scheme (ID: 571447). This protocol was approved by the University of Sydney Human Research Ethics Committee in August 2011 (ID: 13988).
- Squires JE, Hutchinson AM, Boström A-M, O'Rourke HM, Cobban SJ, Estabrooks CA: To what extent do nurses use research in clinical practice? A systematic review. Implement Sci. 2011, 6: 2-17. 10.1186/1748-5908-6-2.View ArticleGoogle Scholar
- Glasziou P, Haynes B: The paths from research to improved health outcomes. ACP Journal Club. 2005, 142: A9-A10.Google Scholar
- Goodwin N, Perri 6, Peck E, Freeman T, Posaner R: Managing Across Diverse Networks of Care: Lessons from Other Sectors. Book Managing Across Diverse Networks of Care: Lessons from Other Sectors (Editor ed.^eds.). 2004, City: National Co-ordinating Centre for NHS Service Delivery and OrganisationGoogle Scholar
- Stewart GJ, Dwyer JM, Goulston KJ: The Greater Metropolitan Clinical Taskforce: an Australian model for clinician governance. MJA. 2006, 184: 597-599.PubMedGoogle Scholar
- Sax Institute: What have the clinical networks acheived and who has been involved? 2006-2008: Retrospective study of the quality improvement activities of and participation in a taskforce of clinical networks. Book What have the clinical networks acheived and who has been involved? 2006-2008: Retrospective study of the quality improvement activities of and participation in a taskforce of clinical networks. 2011, Sax InstituteGoogle Scholar
- Hamilton KE, Sullivan FM, Donnan PT, Taylor R, Ikenwilo D, Scott A, Baker C, Wyke S: A managed clinical network for cardiac services: set-up, operation and impact on patient care. International Journal of Integrated Care. 2005, 5: 1-13.View ArticleGoogle Scholar
- Cadilhac DA, Pearce DC, Levi CR, Donnan GA: Improvements in the quality of care and health outcomes with new stroke care units following implementation of a clinician-led, health system redesign programme in New South Wales, Australia. Qual Saf Health Care. 2008, 17: 329-333. 10.1136/qshc.2007.024604.View ArticlePubMedGoogle Scholar
- Laliberte L, Fennell ML, Papandonatos G: The relationship of membership in research networks to compliance with treatment guidelines for early-stage breast cancer. Medical Care. 2005, 43: 471-479. 10.1097/01.mlr.0000160416.66188.f5.View ArticlePubMedGoogle Scholar
- Tolson D, McIntosh J, Loftusa L, Cormie P: Developing a managed clinical network in palliative care: a realistic evaluation. International Journal of Nursing Studies. 2007, 44: 183-195. 10.1016/j.ijnurstu.2005.11.027.View ArticlePubMedGoogle Scholar
- Ray-Coquard I, Philip T, de Laroche G, Froger X, Suchaud J-P, Voloch A, Mathieu-Daude H, Fervers B, Farsi F, Browman G, Chauvin F: A controlled 'before-after' study: impact of a clinical guidelines programme and regional cancer network organization on medical practice. British Journal of Cancer. 2002, 86: 313-321. 10.1038/sj.bjc.6600057.View ArticlePubMedPubMed CentralGoogle Scholar
- Addicott R, McGivern G, Ferlie E: Networks, Organizational Learning and Knowledge Management: NHS Cancer Networks. Public Money & Management. 2006, 87-94.Google Scholar
- Nies H, Van Linschoten P, Plaisier A, Romijn C: Networks as regional structures for collaboration in integrated care for older people. Book Networks as regional structures for collaboration in integrated care for older people (Editor ed.^eds.). 2003, CityGoogle Scholar
- Ahgren B, Axelsson R: Determinants of integrated health care development: chains of care in Sweden. International Journal of Health Planning & Management. 2007, 22: 145-157. 10.1002/hpm.870.View ArticleGoogle Scholar
- Bate P, Mendel P, Robert G: Organizing for Quality. 2008, Oxford: Radcliffe PublishingGoogle Scholar
- Craig P: Developing and evaluating complex interventions: the new Medical Research Council guidelines. BMJ. 2008, 337: 979-983. 10.1136/bmj.a979.View ArticleGoogle Scholar
- McInnes E, Middleton S, Gardner G, Haines M, Haerstch M, Paul C, Castaldi P: A qualitative study of stakeholder views of the preconditions for and outcomes of successful networks. Bmc Health Serv Res. 2011, under reviewGoogle Scholar
- Berwick DM: Disseminating innovations in health care. JAMA. 2003, 289: 1969-1975. 10.1001/jama.289.15.1969.View ArticlePubMedGoogle Scholar
- Massoud M, Nielsen G, Nolan K, Nolan T, Schall M, Sevin C: A Framework for Spread: From Local Improvements to System-Wide Change. Book A Framework for Spread: From Local Improvements to System-Wide Change (Editor ed.^eds.). 2006, City: Institute for Healthcare ImprovementGoogle Scholar
- Institute for Heathcare Improvement: Getting Started Kit: Sustainability and Spread - How-to Guide - 100,000 Lives Campaign. Book Getting Started Kit: Sustainability and Spread - How-to Guide - 100,000 Lives Campaign (Editor ed.^eds.). 2008, City: Institute for Heathcare ImprovementGoogle Scholar
- Nolan K, Schall MW, Erb F, Nolan T: Using a Framework for Spread: The Case of Patient Access in the Veterans Health Administration. Journal of Quality and Patient Safety. 2005, 31: 9-Google Scholar
- Solberg L: Lessons for non-VA care delivery systems from the U.S. Department of Veterans Affairs Quality Enhancement Research Initiative: QUERI Series. Implement Sci. 2009, 4: 9. 10.1186/1748-5908-4-9.
- Greene A, Pagliari C, Cunningham S, Donnan P, Evans J, Emslie-Smith A, Morris A, Guthrie B: Do managed clinical networks improve quality of diabetes care? Evidence from a retrospective mixed methods evaluation. Qual Saf Health Care. 2009, 18: 456-461. 10.1136/qshc.2007.023119.
- Fleury MJ, Mercier C, Denis JL: Regional planning implementation and its impact on integration of a mental health care network. International Journal of Health Planning & Management. 2002, 17: 315-332. 10.1002/hpm.684.
- Kennedy JL, Lee J, van den Berg C, Kimble R: The power and influence of networks. 2010.
- Ferlie E, Fitzgerald L, McGivern G, Dopson S, Exworthy M: Networks in Health Care: a Comparative Study of Their Management, Impact and Performance. Report for the National Institute for Health Research Service Delivery and Organisation programme. 2010.
- Sardell A: Clinical networks and clinician retention: the case of CDN. Journal of Community Health. 1996, 21: 437-451. 10.1007/BF01702604.
- Touati N, Roberge D, Denis JL, Cazale L, Pineault R, Tremblay D: Clinical leaders at the forefront of change in health-care systems: advantages and issues. Lessons learned from the evaluation of the implementation of an integrated oncological services network. Health Services Management Research. 2006, 19: 105-122. 10.1258/095148406776829068.
- NICE: Principles for best practice in clinical audit. 2002, National Institute for Health and Clinical Excellence (NICE); Oxford: Radcliffe Medical Press.
- Turrini A, Cristofoli D, Frosini F, Nasi G: Networking literature about determinants of network effectiveness. Public Administration. 2010, 88: 528-550.
- Ovretveit J: Improvement leaders: what do they and should they do? A summary of a review of research. Quality & Safety in Health Care. 2010, 19: 490-492. 10.1136/qshc.2010.041772.
- Yano E: The role of organizational research in implementing evidence-based practice: QUERI Series. Implement Sci. 2008.
- Addicott R, Ferlie E: Understanding power relationships in health care networks. Journal of Health Organization & Management. 2007, 21: 393-405. 10.1108/14777260710778925.
- Addicott R, McGivern G, Ferlie E: The Distortion of a Managerial Technique? The Case of Clinical Networks in UK Health Care. British Journal of Management. 2007, 18: 93-105. 10.1111/j.1467-8551.2006.00494.x.
- Speroff T, O'Connor GT: Study Designs for PDSA Quality Improvement Research. Quality Management in Health Care. 2004, 13: 17-32.
- Wagenaar W: My memory: a study of autobiographical memory over six years. Cognitive Psychology. 1986, 18: 225-252. 10.1016/0010-0285(86)90013-7.
- Sudman S, Bradburn N: Asking Questions. 1982, San Francisco: Jossey-Bass.
- Brook RH, Chassin MR, Fink A, Solomon DH, Kosecoff J, Park RE: A method for the detailed assessment of the appropriateness of medical technologies. International Journal of Technology Assessment in Health Care. 1986, 2: 53-63. 10.1017/S0266462300002774.
- Shekelle P: The appropriateness method. Medical Decision Making. 2004, 24: 228-231. 10.1177/0272989X04264212.
- McGory ML, Kao KK, Shekelle PG, Rubenstein LZ, Leonardi MJ, Parikh JA, Fink A, Ko CY: Developing quality indicators for elderly surgical patients. Annals of Surgery. 2009, 250: 338-347. 10.1097/SLA.0b013e3181ae575a.
- Rubenstein LV, Parker LE, Meredith LS, Altschuler A, dePillis E, Hernandez J, Gordon NP: Understanding team-based quality improvement for depression in primary care. Health Services Research. 2002, 37: 1009-1029. 10.1034/j.1600-0560.2002.63.x.
- Braithwaite J, Greenfield D, Westbrook J, Pawsey M, Westbrook M, Gibberd R, Naylor J, Nathan S, Robinson M, Runciman B, Jackson M, Travaglia J, Johnston B, Yen D, McDonald H, Low L, Redman S, Johnson B, Corbett A, Hennessy D, Clark J, Lancaster J: Health service accreditation as a predictor of clinical and organisational performance: a blinded, random, stratified study. Quality & Safety in Health Care. 2010, 19: 14-21. 10.1136/qshc.2009.033928.
- Kulathinal SB, Kuulasmaa K, Gasbarra D: Estimation of an errors-in-variables regression model when the variances of the measurement errors vary between the observations. Stat Med. 2002, 21: 1089-1101. 10.1002/sim.1062.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.