Open Access

The guideline implementability research and application network (GIRAnet): an international collaborative to support knowledge exchange: study protocol

  • Anna R Gagliardi1,
  • Melissa C Brouwers2 and
  • Onil K Bhattacharyya3
Implementation Science 2012, 7:26

DOI: 10.1186/1748-5908-7-26

Received: 8 February 2012

Accepted: 2 April 2012

Published: 2 April 2012

Abstract

Background

Modifying the format and content of guidelines may facilitate their use and lead to improved quality of care. To develop a framework of implementability, we reviewed the medical literature to identify features desired by different users and associated with guideline use, and we found that most guidelines do not contain these elements. Further research is needed to develop and evaluate implementability tools.

Methods

We are launching the Guideline Implementability Research and Application Network (GIRAnet) to enable the development and testing of implementability tools in three domains: Resource Implications, Implementation, and Evaluation. Partners include the Guidelines International Network (G-I-N) and its member guideline developers, implementers, and researchers. In phase one, international guidelines will be examined to identify and describe exemplar tools. Indication-specific and generic tools will populate a searchable repository. In phase two, qualitative analysis of cognitive interviews will be used to understand how developers can best integrate implementability tools in guidelines and how health professionals use them for interpreting and applying guidelines. In phase three, a small-scale pilot test will assess the impact of implementability tools based on quantitative analysis of chart-based behavioural outcomes and qualitative analysis of interviews with participants. The findings will be used to plan a more comprehensive future evaluation of implementability tools.

Discussion

Infrastructure funding to establish GIRAnet will be leveraged with the in-kind contributions of collaborating national and international guideline developers to advance our knowledge of implementation practice and science. Needs assessment and evaluation of GIRAnet will provide a greater understanding of how to develop and sustain such knowledge-exchange networks. Ultimately, by facilitating use of guidelines, this research may lead to improved delivery and outcomes of patient care.

Keywords

Guidelines; Guideline development; Guideline implementation; Research networks; Knowledge exchange

Background

Guideline implementability

Guidelines are syntheses of best available evidence that, along with professional judgment and patient preferences, support decision making by clinicians, managers, and policy makers about the organisation and delivery of healthcare. However, they continue to be underused [1–7]. Research has shown that guideline format and content influence perceptions about, and use of, guidelines. Specifically, these intrinsic guideline qualities have been shown to promote greater understanding of how users are to apply the recommendations and to stimulate confidence in users' ability to practice the recommended behaviour, leading to greater intent to use guidelines and to actual use [8–13]. Thus, use of guidelines might be optimised by improving their format and content. The concept of implementability was first defined by Shiffman as the characteristics of guidelines that may enhance their implementation by users, and he issued consensus recommendations for generating guidelines with actionable wording [14]. To further investigate the concept of implementability, we reviewed the medical literature to identify features desired by different users or associated with guideline use [15]. The resulting guideline implementability framework included 22 elements organised within nine domains: adaptability, usability, relevance, validity, applicability, communicability, resource implications, implementation, and evaluation (Table 1). Our analysis of guidelines on various clinical indications judged by experts as high quality found that most did not contain implementability elements, highlighting numerous opportunities to improve guideline development and use by integrating one or more of these elements.
Table 1

Guideline implementability framework (domain, definition, elements, and examples)

Adaptability. The guideline is available in a variety of versions for different users or purposes.
  • Sources: Internet, peer-reviewed journal
  • Versions: Full text, summary, print, digital
  • Users: Tailored for patients or caregivers

Usability. Content is organised to enhance the ease with which the guideline can be used.
  • Navigation: Table of contents
  • Evidence: Narrative, tabulated, or both
  • Recommendations: Narrative, graphic (algorithms), or both; recommendation summary (single list in full or summary version rather than dispersed)

Validity. Evidence is summarised and presented such that its quantity and quality are apparent.
  • Number of references: Total number of distinct references to evidence upon which recommendations are based
  • Evidence graded: A system is used to categorise quality of evidence supporting each recommendation
  • Number of recommendations: Total number of distinct recommendations

Applicability. Information is provided to help interpret and apply guidelines for individual patients.
  • Clinical considerations: Information such as indications, criteria, risk factors, and drug dosing that facilitates application of the recommendations, explicitly highlighted as tips or practical issues using subtitles or text boxes, or summarised in tables and referred to in recommendations or narrative

Communicability. Resources for providers or patients to inform, educate, support, and involve patients.
  • Inform, educate, support: Informational, educational, or supportive resources for patients/caregivers, or contact information (phone, fax, email, or URL) for such resources
  • Decision making: Questions or tools for clinicians to facilitate discussion with patients, or decision aids to support patient involvement

Relevance. The focus or purpose of the guideline is explicitly stated.
  • Objective: Explicitly stated purpose of guideline (clinical, education, policy, quality improvement)
  • Stakeholders: Specify who would deliver the services (individuals, teams, departments, institutions, managers, policy makers, internal/external agents) and who would receive them (specify type of patients)
  • Needs: Identification of stakeholder needs, perspectives, interests, or values

Resource implications. Anticipated changes, resources, and competencies required to adapt and accommodate guideline utilisation are identified.
  • Technical: Equipment or technology needed, or the way services should be organised
  • Regulatory: Industrial standards for equipment or technology, or policy regarding their use
  • Human resources: Type and number of health professionals needed to deliver recommended services
  • Professional: Education, training, or competencies needed by clinicians/staff to deliver recommendations
  • Workflow: Anticipated changes in workflow or processes during/after adoption of recommendations
  • Costs: Direct or productivity costs incurred by acquiring resources or training to accommodate guidelines, or as a result of service reductions during transition from old to new processes

Implementation. Processes for planning and applying local strategies to promote guideline utilisation are described.
  • Identify barriers: Individual, organisational, or system barriers that could challenge adoption, or instructions for local needs assessment of guideline users
  • Tailor guideline: Instructions, tools, or templates to tailor guideline/recommendations for local context
  • Integrated tools: Point-of-care templates/forms (clinical assessment, standard orders) to integrate guidelines within care delivery processes
  • Promote utilisation: Possible mechanisms by which to promote guideline utilisation

Evaluation. Processes for evaluating guideline implementation and utilisation are described.
  • Implementation: Methods for evaluating the implementation process
  • Utilisation: Audit tools or performance measures/quality indicators to assess the organisation, delivery, and outcomes of guideline-recommended care

Collaborating to investigate guideline implementability

Further research is needed to operationalise the implementability concept by developing and evaluating implementability tools. Validation and definitive testing of tools and interventions based on the implementability framework will require considerable collaboration with guideline developers, implementers, and users to more strategically generate knowledge, build a consolidated research base, and accelerate its application into practice. To accomplish this, a more formalised network is needed to leverage and sustain existing relationships and resources and to create capacity for collaboration on research and application of guideline implementability. With funding from the Canadian Institutes of Health Research, we are launching the Guideline Implementability Research and Application Network (GIRAnet). Partners include guideline agencies in Canada, the United States, Australia, New Zealand, The Netherlands, Italy, Scotland, England, and Finland, as well as the Guidelines International Network (G-I-N). G-I-N is a nonprofit association of guideline developers, implementers, and users, including 94 organisational and 76 individual members from 46 countries (http://www.g-i-n.net), and it represents a natural gateway for research collaboration with stakeholders. The purpose of the network is to:
  1. create a formal and identifiable collaborative of those interested in guideline implementability;
  2. generate a user-informed research agenda based on guideline implementability;
  3. leverage available capacity to develop, implement, and evaluate tools and interventions based on the implementability framework;
  4. plan for and establish a sustainable environment that will enable ongoing evaluation of implementability tools and interventions;
  5. accelerate the translation of this new knowledge into guideline development and quality-improvement practices.

Network design

Approach

A review of research in various disciplines relevant to collaborative partnerships (including management networks; interprofessional health-services research, teamwork, and collaboration; continuity of care; knowledge translation; communities of practice; and quality-improvement collaboratives) found that collaboration can be effective only with investment in infrastructure [16–32]. At a minimum, this includes a dedicated individual who will communicate with and engage participants, facilitate the creation of linkages, organise forums for interaction that include in-person meetings, and actively support the development of strategic plans and the undertaking of network activities from which participants derive benefit. We designed the structures and activities of GIRAnet to align with this evidence.

Infrastructure and activities

The GIRAnet structure includes an administrative site, a steering committee, a research group, and various levels of membership. The administrative site will lead and coordinate all network activities; lead the development of implementability tools; and support distributed development, implementation, and evaluation of implementability tools. A steering committee, comprising representatives from guideline-development agencies in nine partner countries plus G-I-N, will prioritise, direct, and advise on network activities. Examples of network activities include compiling a directory of existing implementability tools; developing and sharing guidance on the development and use of implementability tools, including toolkits and training sessions; conducting a needs assessment of guideline developers and implementers to better understand the resources needed to develop implementability tools; planning mechanisms by which to sustain a research network focused on guideline implementation; and generating a user-informed research agenda. A research group will provide input on research activities, including methods for the development and evaluation of tools and the interpretation of evaluation findings. It comprises the manuscript co-authors, who are collaborating on operationalising the implementability concept by focusing their efforts on differing domains, and others who undertake academic research related to the development, implementation, and use of actionable guidelines. Members are anyone interested in guideline development, implementation, or related research who may wish to remain informed about network activities and products, or to take a more active role by participating in the evaluation of implementability tools, as described in the next section.

Research design

Approach

The specific mandate of GIRAnet is to enable research that will develop, implement, and evaluate tools and interventions based on the implementability framework. The overall research plan is based on the MRC Framework for Developing and Evaluating Complex Interventions [33], which includes five steps: development, pilot testing, evaluation, reporting, and implementation. However, evaluation must be informed by, and tailored to, the results of development and pilot testing; therefore, the proposed work will take place over three years and address development, pilot testing, and planning for subsequent evaluation (Table 2). We will focus on identifying and/or developing implementability tools for guidelines on arthritis, cancer (breast, prostate, colorectal, lung), chronic obstructive pulmonary disease, depression, diabetes, ischemic heart disease, and stroke. These conditions are relevant to national funding priorities in Canada, affect both men and women, and are major causes of death and disability worldwide.
Table 2

Overall methodological approach based on the MRC Framework for the Development of Complex Interventions

  • Development (this proposal, year 1): create implementability tool prototypes based on guideline exemplars and the medical literature
  • Pilot testing (this proposal, year 2): refine prototypes by testing with developers (to learn about data collection/inclusion) and with users (to learn about tool impact)
  • Evaluation (this proposal, year 3): conduct a small-scale study to plan for a large-scale evaluation; future research: conduct a large-scale multisite study to evaluate the impact of guidelines featuring implementability tools
  • Reporting (future research): disseminate findings to guideline developers and researchers through publications and meetings
  • Implementation (future research): create and implement kits for guideline developers to include implementability tools in guidelines

Implementability domains

We will focus on three specific implementability domains/elements that are not under investigation by others, for which there is little or no research, and which, according to our research, are seldom included in guidelines (Table 3). Resource Implications refers to equipment or technology needed; industrial standards and policies governing their use; the type and number of health professionals needed to deliver services; the education, training, or competencies needed by staff to deliver services; anticipated changes in workflow or processes during or after adoption; and costs. Health professionals we interviewed said that this type of information would help them prepare for the impact on practice of adopting new guidelines [34]. Implementation refers to identifying barriers associated with adoption and selecting and tailoring implementation strategies that address those barriers. Our research found that professionals who fund, manage, and deliver health services lack knowledge about how to implement guidelines: interviews with policy makers, managers, health professionals, and researchers revealed confusion about responsibility and approaches for implementation [32]. Evaluation refers to tools based on performance measures for assessing baseline and postintervention compliance with guidelines. Our research found that self-assessment tools based on guidelines are lacking [35]; physicians said they lacked tools to monitor their own performance [36], and providing physicians with self-assessment tools and instructions resulted in the identification of learning needs in 66.7% of the patient cases they reviewed and modifications in intended care plans for 34.2% of those cases [37].
Table 3

Implementability domains and elements of interest

Accommodation (elements: technical, regulatory, human resources, professional, workflow process, costs)
  • Definition: equipment or technology needed; industrial standards; policies governing their use; type and number of health professionals needed to deliver services; education, training, or competencies needed by staff to deliver services; anticipated changes in workflow or processes during or after adoption
  • Examples of tools: literature search strategies for identifying these elements for condition-specific guidelines; template statement for inclusion in guidelines; strategy for whom to engage in guideline development to enable use of an integrated knowledge-translation strategy and governance structure
  • Knowledge to Action phases*: Adapting Knowledge to Local Context; Assessing Barriers and Facilitators of Knowledge Use (phases 2, 3)

Implementation (elements: barriers and facilitators, strategies, tailoring)
  • Definition: identifying individual, organisational, and system barriers associated with adoption; selecting and tailoring implementation strategies that address barriers
  • Examples of tools: literature search strategies for identifying barriers; criteria and algorithms for selecting interventions; options for tailoring interventions; template statement for inclusion in guidelines; surveys to facilitate systematic barrier analysis and mitigation
  • Knowledge to Action phases*: Assessing Barriers and Facilitators of Knowledge Use; Selecting, Tailoring, and Implementing Interventions (phases 3, 4)

Evaluation (elements: performance measures, benchmarks, evaluation processes)
  • Definition: tools based on performance measures that can be used by organisations or individuals to assess their baseline and postintervention compliance with recommendations
  • Examples of tools: program evaluation kit; self-audit kit
  • Knowledge to Action phases*: Monitoring Knowledge Use; Evaluating Outcomes; Sustaining Knowledge Use (phases 5, 6, 7)

*From: Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, et al.: Lost in knowledge translation: time for a map? J Contin Educ Health Prof 2006, 26:13-24.

Theoretical framework

Cognitive science theory suggests that guidelines may be difficult to use because they present complex information, prescribe action that may not match clinical circumstances or user preferences, and are processed in ways shaped by individual knowledge and experience [38]. Guidelines featuring implementability tools may overcome these limitations to facilitate guideline interpretation and use. However, we need greater insight into the mechanism(s) by which this occurs to better understand how guideline format and content can be optimised. Through literature review, we compiled a conceptual framework that describes how implementability elements may influence the type and process of decision making, guideline use, and outcomes. Implementability elements may support various types of decisions: evidence-informed (effectiveness data), experiential (professional expertise), shared (negotiation with patients/caregivers), and policy (resource allocation) [15]. They may also support different decision-making processes: intuitive (trigger or reconcile with previous experience) and analytic (create or simulate a new mental model) [16–20]. This framework will guide data collection and analysis. Study findings will validate and extend the framework and help us to refine prototype implementability tools (Figure 1).
Figure 1

Theoretical framework. This framework will guide data collection and analysis, and study findings will confirm and extend its components. The framework proposes that different types of guideline users would apply information reflecting the implementability domains in different ways. How they interpret and use the information may vary by type of decision making and by decision-making process, and use of the information in these various ways may lead to different potential outcomes.

Phase one: identify and develop prototype tools

The content of international guidelines on the indications of interest will be analysed to identify exemplar implementability tools reflecting the domains of interest (Resource Implications, Implementation, Evaluation). Content analysis describes phenomena in written, verbal, or visual communication to generate or validate a framework or model [39]. We will use a directed approach [40], meaning that explicit content in guidelines will be coded using elements from the implementability framework. We used these methods when we first examined guidelines for implementability [15]. To supplement exemplars, or if none are identified, literature searches will be conducted to identify information reflecting implementability elements with which to populate tools and templates. For example, if guidelines for arthritis management contain little or no resource-implications information, we will search the medical literature to identify relevant technical, regulatory, human resource, and workflow issues. Similarly, if no diabetes guidelines contain performance-assessment instructions, we will search the medical literature for examples of assessment measures with which to create program or evaluation tools. The search strategies themselves will ultimately be included in toolkits for developers, along with indication-specific tools and generic tools relevant across disease indications, so that developers can find similar information for other guidelines.

Phase two: pilot test and refine prototype tools

Cognitive interviewing will be used to understand how developers can best integrate implementability tools in guidelines and how health professionals can use them for interpreting and applying guidelines. Findings will be used to refine the indication-specific prototypes and the generic templates in the instructional manual. This approach is based on cognitive theory of human information processing (attention span, word recognition, language processing, action, problem solving, reasoning) and has been used widely to understand how respondents interpret survey questions and for usability testing of information technology [41]. It can therefore be applied to study how developers and users perceive the content, format, use, and impact of implementability tools. The fundamental procedure is the semistructured interview. Interviewing can take place concurrently with, or subsequent to, the respondent's review of the prototype implementability tools in question, and we will use both approaches to minimise recall bias. The process involves probing, where respondents paraphrase information, define meanings, and identify information that is difficult to understand or use. For example, questions may address the content and format of the tool, the feasibility of using it, its anticipated acceptability to colleagues, and perceptions of how it will be used. It also involves think-aloud protocols, where respondents are asked to verbalise their thought processes as they read, interpret, and consider the information in the tool, its meaning, and how it will be used. The interviewer classifies problems and determines how the tool should be refined, including which aspects should be removed or modified, how, and the reason for the change. Types of problems include lexical (understanding the meaning of words and phrases), inclusion/exclusion (scope of the information), temporal (time taken to read, interpret, and consider information), logical (relevance of the information in general and as presented), and computational (any problems that do not fall into the above categories). Qualitative methods will be used for sampling, data collection, and data analysis [42].

Phase three: pilot test tool evaluation

Through analysis of exemplar guidelines and supplementary literature searches, phase-one development will assemble and create indication-specific and generic implementability tools and templates. Through interviews with guideline developers and users, phase-two pilot testing will assess the feasibility of, and resources needed to develop, implementable guidelines featuring those tools, as well as the impact of those tools on attitude, confidence, outcome expectancy, and intent to use guidelines. An international, multisite quantitative and qualitative study can then be conducted to evaluate the impact of newly developed or modified guidelines featuring implementability tools on actual behaviour, or use of guidelines. While a before-after observational design is not the strongest test of impact, it is methodologically the most appropriate next step in the evaluation of implementability tools [43]. It will enable comparisons of impact across types of implementability tools and clinical indications and inform the planning of a future, more definitive time series or pragmatic randomised study to evaluate the cost-effectiveness of implementable guidelines versus usual guidelines or other types of intervention. Furthermore, evaluation in multiple international sites, enabled through G-I-N and GIRAnet, means that fewer participating health professionals are needed from each site, recruitment and data collection can take place in a shorter period of time, and results are more broadly generalisable and more rapidly translated to practice. However, the conduct of such an international multisite study will be logistically challenging, and it will depend on the findings of phase-one development and phase-two pilot testing. Therefore, an intermediate step is needed to prepare for the phase-three evaluation. This will take the form of a small-scale pilot test of the planned evaluation, including quantitative analysis of chart-based behavioural outcomes and qualitative analysis of interviews with participants, to gather feedback that will inform and refine the subsequent full-scale evaluation.

Discussion

Infrastructure funding to establish GIRAnet will be leveraged with the in-kind contributions of collaborating national and international guideline developers to advance our knowledge of implementation practice and science. While establishing and maintaining such a network will be challenging, collaborating partners have expressed enthusiasm for greater sharing of information about best practices in guideline development and implementation, which is imperative to the network's success.

Findings will further implementation practice by translating implementability theory into action. We will do so by developing implementability tools with and for developers in the field, and then disseminating and implementing those tools more broadly through GIRAnet, G-I-N, and various other media and forums. Findings could be used to develop checklists or tools with which to inform guideline development and to evaluate guidelines. Developers can use this knowledge to refine their programs, practices, and products; to understand how implementability content can be collected and integrated; and to anticipate the resource implications they must consider when applying this approach to promoting guideline use. Thus, we will promote the application of implementability tools in real-world health-system contexts. This knowledge can also inform guideline development standards and instructional manuals. We recently reviewed six such instructional manuals and found that they included little implementability information (manuscript submitted).

Findings will further implementation science by exploring the views of different users to elucidate how implementability tools would be interpreted and used. This will lead to a greater understanding of their potential impact and of the measures by which that impact could be evaluated in future studies, and it will validate a theoretical framework of implementability. By developing and evaluating an alternative mechanism of implementing guidelines, one that can be introduced intrinsically at the time of guideline development, we advance our understanding of how to link implementation with development. By investigating and tailoring methods for evaluating prototypes, we may develop methods for rapid-cycle testing that are widely applicable for assessing other knowledge-based interventions. In particular, we will use the findings to inform ongoing research, leading to definitive testing of implementability tool impact through future time series or randomised studies comparing the cost-effectiveness of implementable guidelines with other approaches for promoting guideline use. While various bodies of literature describe the infrastructure needed to support collaborative research, there is no definitive model applicable to all contexts. Needs assessment and evaluation of GIRAnet will provide a greater understanding of how to develop and sustain such knowledge-exchange networks. Ultimately, by facilitating use of guidelines, this research may lead to improved delivery and outcomes of patient care.

Declarations

Acknowledgements

This study and the cost of this publication are funded by the Canadian Institutes of Health Research, which took no part in the study design or the decision to submit this manuscript for publication and will take no part in the collection, analysis, or interpretation of data or the writing of subsequent manuscripts.

Authors’ Affiliations

(1)
University Health Network
(2)
McMaster University
(3)
St. Michael's Hospital

References

  1. Grimshaw J, Eccles M, Thomas R, MacLennan G, Ramsay C, Fraser C, Vale L: Toward evidence-based quality improvement. Evidence (and its limitations) of the effectiveness of guideline dissemination and implementation strategies 1966-1998. J Gen Intern Med. 2006, 21 (Suppl 2): S14-S20.
  2. Browman GP, Levine MN, Mohide A, Hayward RSA, Pritchard KI, Gafni A, Laupacis A: The practice guidelines development cycle: A conceptual tool for practice guidelines development and implementation. J Clin Oncol. 1995, 13 (2): 502-512.
  3. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, Kerr EA: The quality of health care delivered to adults in the United States. N Engl J Med. 2003, 348 (26): 2635-2645. 10.1056/NEJMsa022615.
  4. FitzGerald JM, Boulet LP, McIvor RA, Zimmerman S, Chapman KR: Asthma control in Canada remains suboptimal: the Reality of Asthma Control (TRAC) study. Can Respir J. 2006, 13 (5): 253-259.
  5. Brown LC, Johnson JA, Majumdar SR, Tsuyuki RT, McAlister FA: Evidence of suboptimal management of cardiovascular risk in patients with type 2 diabetes mellitus and symptomatic atherosclerosis. CMAJ. 2004, 171 (10): 1189-1192. 10.1503/cmaj.1031965.
  6. Latosinsky S, Fradette K, Lix L, Hildebrand K, Turner D: Canadian breast cancer guidelines: have they made a difference? CMAJ. 2007, 176 (6): 771-776. 10.1503/cmaj.060854.
  7. Francke AL, Smit MC, de Veer AJE, Mistiaen P: Factors influencing the implementation of clinical guidelines for health care professionals: A systematic meta-review. BMC Med Inform Decis Mak. 2008, 8: 38. 10.1186/1472-6947-8-38.
  8. Cochrane LJ, Olson CA, Murray S, Dupuis M, Tooman T, Hayes S: Gaps between knowing and doing: Understanding and assessing the barriers to optimal health care. J Contin Educ Health Prof. 2007, 27: 94-102. 10.1002/chp.106.
  9. Grilli R, Lomas J: Evaluating the message: The relationship between compliance rate and the subject of a practice guideline. Med Care. 1994, 32 (3): 202-213. 10.1097/00005650-199403000-00002.
  10. Dobbins M, Hanna SE, Ciliska D, Manske S, Cameron R, Mercer SL: A randomized controlled trial evaluating the impact of knowledge translation and exchange strategies. Implement Sci. 2009, 4: 61. 10.1186/1748-5908-4-61.
  11. Shekelle PG, Kravitz RL, Beart J, Marger M, Wang M, Lee M: Are non-specific guidelines potentially harmful? A randomized comparison of the effect of nonspecific versus specific guidelines on physician decision making. Health Serv Res. 2000, 34 (7): 1429-1448.
  12. Michie S, Johnston M: Changing clinical behaviour by making guidelines specific. BMJ. 2004, 328: 343-345. 10.1136/bmj.328.7435.343.
  13. Michie S, Lester K: Words matter: increasing the implementation of clinical guidelines. Qual Saf Health Care. 2005, 14: 367-370. 10.1136/qshc.2005.014100.
  14. Shiffman RN, Dixon J, Brandt C, Essaihi A, Hsiao A, Michel G, O'Connell R: The GuideLine Implementability Appraisal (GLIA): development of an instrument to identify obstacles to guideline implementation. BMC Med Inform Decis Mak. 2005, 5: 23. 10.1186/1472-6947-5-23.
  15. Gagliardi AR, Brouwers MC, Palda VA, Lemieux-Charles L, Grimshaw JM: How can we improve guideline use? A conceptual framework of implementability. Implement Sci. 2011, 6: 26. 10.1186/1748-5908-6-26.
  16. Alter C, Hage J: Organizations working together. 1993, Newbury Park, CA: Sage Publications.
  17. Lohr KN, Steinwachs DM: Health services research: An evolving definition of the field. Health Serv Res. 2002, 37: 15-17.
  18. Vargas RB, Landon BE, Shapiro MF: The future of health services research in academic medicine. Am J Med. 2004, 116: 503-507. 10.1016/j.amjmed.2004.02.002.
  19. King G, Currie M, Smith L, Servais M, McDougall J: A framework of operating models for interdisciplinary research programs in clinical service organizations. Eval Program Plann. 2008, 31: 160-173. 10.1016/j.evalprogplan.2008.01.003.
  20. Lemieux-Charles L, McGuire W: What do we know about health care team effectiveness? A review of the literature. Med Care Res Rev. 2006, 63 (3): 263-300. 10.1177/1077558706287003.
  21. D'Amour D, Ferrada-Videla M, San Martin Rodriguez L, Beaulieu MD: The conceptual basis for interprofessional collaboration: Core concepts and theoretical frameworks. J Interprof Care. 2005, 19 (Suppl 1): 116-131.
  22. Reid R, Haggerty J, McKendry R: Defusing the confusion: Concepts and measures of continuity of healthcare. Canadian Health Services Research Foundation. 2002, accessed December 21, 2009, [http://www.chsrf.ca]
  23. Wells R, Weiner BJ: Adapting a dynamic model of interorganizational cooperation to the health care sector. Med Care Res Rev. 2007, 64 (5): 518-543. 10.1177/1077558707301166.
  24. Hall JG, Bainbridge L, Buchan A, Cribb A, Drummond J, Gyles C: A meeting of the minds: interdisciplinary research in the health services in Canada. CMAJ. 2006, 175: 763-771. 10.1503/cmaj.060783.
  25. Nair KM, Dolovich L, Brazil K, Raina P: It's all about relationships: a qualitative study of health researchers' perspectives of conducting interdisciplinary health research. BMC Health Serv Res. 2008, 8: 110. 10.1186/1472-6963-8-110.
  26. Gagnon ML: Moving knowledge to action through dissemination and exchange. J Clin Epidemiol. 2009.
  27. Li LC, Grimshaw JM, Nielsen C, Judd M, Coyte PC, Graham ID: Use of communities of practice in business and health care sectors: a systematic review. Implement Sci. 2009, 4: 27. 10.1186/1748-5908-4-27.
  28. Schouten LMT, Grol RPTM, Hulscher MEJL: Factors influencing success in quality improvement collaboratives. Implement Sci. 2010, 5: 84. 10.1186/1748-5908-5-84.
  29. Blevins D, Farmer MS, Edlund C, Sullivan G, Kirchner JE: Collaborative research between clinicians and researchers. Implement Sci. 2010, 5: 76. 10.1186/1748-5908-5-76.
  30. Gagnon M: Knowledge dissemination and exchange of knowledge. Knowledge Translation in Health Care. Edited by: Straus S, Tetroe J, Graham ID. 2009, Oxford, UK: Wiley Blackwell.
  31. Parry D, Salsberg J, Macaulay AC: Guide to researcher and knowledge-user collaboration in health research. 2009, Ottawa, ON: Canadian Institutes of Health Research, [http://www.cihr-irsc.gc.ca/e/39128.html]
  32. Gagliardi AR, Fraser N, Wright FC, Lemieux-Charles L, Davis D: Fostering knowledge exchange between researchers and decision makers: Exploring the effectiveness of a mixed-methods approach. Health Policy. 2008, 86: 53-63. 10.1016/j.healthpol.2007.09.002.
  33. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M: Developing and evaluating complex interventions. 2011, Medical Research Council, UK.
  34. Gagliardi AR, Brouwers MC, Palda VA, Lemieux-Charles L, Grimshaw JM: How can we improve guideline use? A conceptual framework of implementability. Implement Sci. 2011, 6: 26. 10.1186/1748-5908-6-26.
  35. Gagliardi AR, Brouwers MC, Finelli A, Campbell CE, Marlow BA, Silver IL: Factors influencing conduct and impact of self-audit. J Contin Educ Health Prof. 2011.
  36. Gagliardi AR, Wright FC, Khalifa MA, Smith AJ: Exploratory study of factors influencing quality improvement in colorectal cancer lymph node staging. BMC Health Serv Res. 2008, 8: 34. 10.1186/1472-6963-8-34.
  37. Gagliardi AR, Wright FC, Victor JC, Brouwers MC, Silver IL: Self-directed learning needs, patterns, and outcomes among general surgeons. J Contin Educ Health Prof. 2009, 29 (4): 269-275. 10.1002/chp.20046.
  38. Patel VL, Arocha JF, Diermeier M, Greenes RA, Shortliffe EH: Methods of cognitive analysis to support the design and evaluation of biomedical systems. J Biomed Inform. 2001, 34: 52-66. 10.1006/jbin.2001.1002.
  39. Elo S, Kyngas H: The qualitative content analysis process. J Adv Nurs. 2008, 62 (1): 107-115. 10.1111/j.1365-2648.2007.04569.x.
  40. Hsieh HF, Shannon SE: Three approaches to qualitative content analysis. Qual Health Res. 2005, 15 (9): 1277-1288. 10.1177/1049732305276687.
  41. Drennan J: Cognitive interviewing: verbal data in the design and pretesting of questionnaires. J Adv Nurs. 2003, 42 (1): 57-63.
  42. Strauss A, Corbin J: Basics of qualitative research: grounded theory procedures and techniques. 1990, Newbury Park, CA: Sage Publications.
  43. Eccles M, Grimshaw J, Campbell M, Ramsay C: Research designs for studies evaluating the effectiveness of change and improvement strategies. Qual Saf Health Care. 2003, 12: 47-52. 10.1136/qhc.12.1.47.

Copyright

© Gagliardi et al; licensee BioMed Central Ltd. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.