- Study protocol
- Open Access
Identifying the domains of context important to implementation science: a study protocol
Implementation Science, volume 10, Article number: 135 (2015)
There is growing recognition that “context” can and does modify the effects of implementation interventions aimed at increasing healthcare professionals’ use of research evidence in clinical practice. However, conceptual clarity about what exactly comprises “context” is lacking. The purpose of this research program is to develop, refine, and validate a framework that identifies the key domains of context (and their features) that can facilitate or hinder (1) healthcare professionals’ use of evidence in clinical practice and (2) the effectiveness of implementation interventions.
A multi-phased investigation of context using mixed methods will be conducted. The first phase is a concept analysis of context using the Walker and Avant method to distinguish between the defining and irrelevant attributes of context. This phase will result in a preliminary framework for context that identifies its important domains and their features according to the published literature. The second phase is a secondary analysis of qualitative data from 13 studies of interviews with 312 healthcare professionals on the perceived barriers and enablers to their application of research evidence in clinical practice. These data will be analyzed inductively using constant comparative analysis. For the third phase, we will conduct semi-structured interviews with key health system stakeholders and change agents to elicit their knowledge and beliefs about the contextual features that influence the effectiveness of implementation interventions and healthcare professionals’ use of evidence in clinical practice. Results from all three phases will be synthesized using a triangulation protocol to refine the context framework drawn from the concept analysis. The framework will then be assessed for content validity using an iterative Delphi approach with international experts (researchers and health system stakeholders/change agents).
This research program will result in a framework that identifies the domains of context and their features that can facilitate or hinder: (1) healthcare professionals’ use of evidence in clinical practice and (2) the effectiveness of implementation interventions. The framework will increase the conceptual clarity of the term “context” for advancing implementation science, improving healthcare professionals’ use of evidence in clinical practice, and providing greater understanding of what interventions are likely to be effective in which contexts.
Healthcare professionals’ use of research evidence in clinical practice is critical to improving population health and achieving a high-performing health system. Yet, one of the most consistent findings in health services and clinical research is that healthcare professionals’ use of evidence is suboptimal despite increased awareness of and accessibility to research evidence [1–5]. Our understanding of how to improve healthcare professionals’ use of evidence is also incomplete. Implementation science, also known as knowledge translation, is the study of methods to promote the integration of research findings and evidence into healthcare policy and practice. It seeks to understand the behavior of healthcare professionals and other stakeholders as a key variable in the sustainable uptake, adoption, and implementation of evidence-based interventions. In several reviews of implementation studies [7–17], researchers have identified major conceptual and methodological issues facing the field that need to be addressed to improve healthcare professionals’ use of evidence in practice; among them is consistent, unexplained variation in intervention effectiveness between trials, with the influence of context being a possible explanation. To advance the field, we therefore need to begin considering and measuring context; this first requires a clear conceptualization of the key domains (and their features) of context that are likely to influence variation in the effectiveness of implementation interventions.
While context is broadly known as the physical and social environment, the term is used differently by different authors. More specifically, there is little agreement about what domains, measures, and features of context are important to healthcare professionals’ use of evidence. For example, Øvretveit defines context broadly as all factors that are not part of the intervention. May et al. adopt a more specific definition as follows: “the physical, organizational, institutional, and legislative structures that enable and constrain, and resource and realize, people and procedures”. French et al. define context as “the organizational environment of healthcare, composed of physical, social, political and economic influences on the practical reasoning and choices of practitioners about how clinical issues are addressed” (p. 174), while Rycroft-Malone defines it as “the environment or setting in which the proposed change is to be implemented” (p. 299). Further, Allport’s seminal definition from social psychology highlights the effect of the “real, imagined or implied presence of others” on behavior, implying that a social context exists that is much broader than the immediately observable features of the environment.
Context will vary by setting; however, a core set of domains of context that are important to all or most healthcare professionals’ use of evidence in clinical practice is likely to exist. While each domain may be more or less important in different settings, each should, at minimum, be assessed prior to designing and delivering implementation interventions.
Context and implementation interventions
Empirical evidence supporting the central role of context in implementation interventions has emerged in studies where intervention effectiveness varied by context. For example, Hogg et al. [23, 24] conducted two trials on the effects of practice facilitation in improving preventive care delivery and found benefits in capitation-based practices but not fee-for-service practices, illustrating that context matters to intervention effectiveness. Similar findings were reported in a recent review of point-of-care computer reminders; interventions that targeted inpatient settings had larger improvements in processes of care than those in outpatient settings. A study commissioned by the UK’s Health Foundation identified essential contextual characteristics for successful implementation of a program (Keystone, conducted in Michigan, USA) and then explored what happened when a program it inspired (Matching Michigan) was launched. They reported that application of the program’s technical practices was generally very good, but implementation of the broader set of factors shown to be relevant to success in the original program was highly variable and depended on the national, local, and internal context. These examples illustrate the need to consider the scope and dimensions of context when designing implementation interventions in healthcare, when interpreting trial findings, and when considering the limits of generalizability (external validity) of trial findings.
Frameworks for context
While several implementation frameworks acknowledge the importance of context, authors provide little detail on what they consider to be the key domains of context. Rogers’ Diffusion of Innovations is the most frequently used theory in studies of implementation in clinical practice. It describes organizational innovativeness as being related to a variety of contextual features such as leadership, internal organizational structure (centralization, complexity, formalization, interconnectedness, organizational slack), and external characteristics of the organization. The Promoting Action on Research in Health Services (PARiHS) framework was developed to explain healthcare professionals’ use of evidence in clinical practice. It is hypothesized to be a function of (i) the sources of evidence used to support practice change, (ii) the context (defined as three domains—leadership, culture, and feedback) where practice change occurs, and (iii) the methods used to facilitate practice change [29, 30]. The Knowledge to Action Cycle also highlights the importance of context to successful implementation in clinical practice and of its assessment to inform the design and delivery of implementation interventions. Included in the action part of the cycle are processes related to context that are needed to translate research in healthcare settings, namely: applying knowledge (research) to the local context and assessing barriers and supports to knowledge (research) use (which includes consideration of the local context). Damschroder et al. describe their Consolidated Framework for Implementation Research (CFIR), which recognizes the multiple levels at which contextual influences on behavior change operate and allows for interaction between contextual factors insofar as they influence clinical practice.
The framework includes 8 concepts related to the intervention itself, 4 related to the outer context (such as the patient and resources), 12 related to the inner setting or context (such as culture and leadership), 5 related to the individual, and 8 related to processes (such as planning and reflection). This framework reflects the organizational and policy literature, where context is examined from the perspective of levels. In the field of health psychology, there is the Theoretical Domains Framework [33, 34]. This framework focuses on individuals’ perceptions of the determinants of their behavior. It consists of 14 domains derived from 128 explanatory constructs from 33 health and social psychology theories. The framework explicitly recognizes context in 2 of the 14 domains (social influences and environmental context). Details, however, on the specific features of context that would fall into these domains have not been investigated. In the organizational literature, three levels of context are commonly proposed: macro, in which market-type forces are at play (e.g., growth in strategic management); meso, in which organizational characteristics are an influence; and micro, in which activities in the clinical setting provide a contextual influence. The organizational literature also suggests that multiple levels of context create a layered set of influences and require examination of influence at each level.
Despite the abundance of frameworks that identify context as an influence on implementation, several unanswered questions about context remain. First, while several implementation frameworks include context, no one framework is sufficiently inclusive or comprehensive about what comprises context. Second, existing frameworks are often inconsistent with one another regarding how they define context and what they consider to be the important domains of context. As a result, there is little direction on which elements of context need to be assessed prior to designing an implementation intervention to improve healthcare professionals’ use of evidence in clinical practice. Third, knowledge users (e.g., healthcare decision makers, change agents) have not always been engaged in developing the existing frameworks, meaning their knowledge of which domains of context are important to evidence use by healthcare professionals and to the success of implementation interventions is lacking in these frameworks. This may limit knowledge users’ acceptance, and thus use, of the frameworks to assess context to inform their implementation efforts. Fourth, the existing frameworks were not subject to international content validation with expert-user groups, which is critical to ensuring that a framework is acceptable and useful to its users.
Increasing clarity about the concept of context will lay the basis for increasing understanding of context for advancing implementation science, designing and delivering effective implementation interventions, and reducing variation in the effectiveness of these interventions. Because of the different levels and multiple features of context, which may indicate different domains of context, we believe it is important to construct a framework that describes context both fully and precisely, supporting development of the science of implementation research. Therefore, the purpose of the research program described in this manuscript is to develop, refine, and validate a comprehensive framework of context that identifies the domains of context (and their features) that can facilitate or hinder (1) healthcare professionals’ use of evidence in clinical practice and (2) the effectiveness of implementation interventions.
Design and objectives
This project is a multi-phased investigation of context using mixed methods.
The specific objectives of this project are:
To conduct a concept analysis of context as it is described in the international literature in order to develop a preliminary framework of domains of context (and their features) important to healthcare professionals’ use of evidence in clinical practice (Context study 1)
To conduct a secondary analysis of qualitative data collected from healthcare professionals internationally on the perceived contextual barriers and enablers to their use of evidence in clinical practice (Context study 2)
To conduct interviews with a variety of international health system stakeholders and change agents to elicit their knowledge and beliefs about contextual features that influence the effectiveness of implementation interventions and healthcare professionals’ use of evidence in clinical practice (Context study 3)
To synthesize data from the three context studies in order to refine the framework of context drawn from the concept analysis—Context study 1 (Data Triangulation)
To assess the resulting context framework for content validity through an iterative process involving expert researchers and health system stakeholders and change agents (Delphi Study)
Context study 1: concept analysis
A protocol describing our methods for the concept analysis is published elsewhere. We are using a modified Walker and Avant method of concept analysis comprising six systematic steps: (1) concept selection, (2) statement of the aims and purpose of the concept analysis, (3) identification of uses of context, (4) determination of the defining attributes of context, (5) identification/construction of different cases of context (i.e., related cases, borderline cases, contrary cases, illegitimate cases, and invented cases), and (6) definition of empirical referents. For further details, see Squires et al.
Context study 2: secondary analysis of interviews with healthcare professionals
A secondary analysis of qualitative (semi-structured interview) data investigating healthcare professionals’ perceived barriers and enablers to their use of evidence in clinical practice will be conducted. While secondary analysis of quantitative (e.g., survey) data is increasingly undertaken, data sharing among qualitative researchers is less common. According to Corti and Thompson [39, 40] and Heaton [41, 42], re-use of qualitative datasets is infrequent, and such datasets comprise untapped “resources”. For this phase of our research program, a unique collection of qualitative data investigating healthcare professionals’ perceived barriers and enablers to their use of evidence in clinical practice has been compiled. The original datasets were collected using similar methods across different countries, healthcare professionals, settings, and behaviors, providing richness not available in any single dataset. The data were collected using semi-structured interview guides informed by the Theoretical Domains Framework [33, 34] (see Background). Interview questions were broad and open-ended, allowing participants to spontaneously identify instances of context that act as barriers and/or enablers to their use of evidence in clinical practice. The target behaviors in each of these studies were specified with a high level of granularity (see Table 1), but all involved the application of evidence from clinical research and/or clinical guidelines. As a result, these data provide a rich array of contextual features.
The sample and data
The sample consists of 13 individual datasets including interviews with 312 healthcare professionals in four countries: Canada, the USA, the UK, and Australia. A variety of healthcare professionals are included, for example, different specialties of physicians (intensivists, orthopedic surgeons, general surgeons, anesthesiologists, family physicians, emergency physicians, infectious diseases physicians) and nurses (labor and delivery, emergency, critical care, infection control), trainee doctors (residents), chiropractors, and organ donor coordinators. A variety of clinical settings are also included: different types of hospital units (intensive care, medical, surgical, pre-operative, birthing, adult and pediatric emergency rooms) and primary care settings (private clinics, physician offices). The 13 datasets are summarized in Table 1 using the TACT principle as follows: Target (the population the behavior is performed toward), Action (the act to be intervened upon), Context (the clinical setting), and Time (the timeframe when the action occurs).
Data analysis will be managed using NVivo software. All transcripts in a single dataset will be analyzed before proceeding to the next dataset. Data will be analyzed inductively, following constant comparative analysis [45, 46]. First, two team members will independently read four transcripts from dataset number 1 to determine a coding scheme comprising codes, definitions of codes, and examples of utterances that align with each code. Two team members will then independently analyze the remaining transcripts using the coding scheme. The coding scheme will be modified as needed throughout the analytic process. Analysis will occur in three steps: selection of utterances, coding, and categorizing. Each transcript will first be read and key ideas (i.e., utterances) that reflect context highlighted. These context utterances will then be assigned a “code”. Codes will be operationally defined in order to be consistently applied throughout the data. Codes will then be placed into broad categories, which will become our major units of analysis. Comparisons between multiple categories will be carried out in order to locate similarities and differences between them. Each category of context identified in the transcripts will be given a label, definition, and guideline for identification. Individuals responsible for analysis will meet biweekly to compare their interpretations and jointly select a label that best represents each category and determine a definition and guideline for its identification, revising the coding scheme as needed. Coder reliability will be assessed using the Miles and Huberman approach: coder reliability = number of agreements/(number of agreements + number of disagreements). In instances of disagreement, the rationale behind the coding will be discussed to reach consensus. When agreement is good (≥70 %), the labels, definitions, and guidelines for identification by the two coders will be discussed and synthesized to provide the clearest articulation.
Reasons for disagreements will be discussed and consensus sought. When consensus is reached, the label, definition, and/or guidelines for identification will be improved and inter-rater agreement reassessed using two new coders.
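The Miles and Huberman calculation described above is simple enough to express directly. The following sketch illustrates the formula and the ≥70 % threshold; the function name and example figures are our own, not from the protocol:

```python
def coder_reliability(agreements: int, disagreements: int) -> float:
    """Miles and Huberman inter-coder reliability:
    number of agreements / (agreements + disagreements)."""
    total = agreements + disagreements
    if total == 0:
        raise ValueError("no coding decisions to compare")
    return agreements / total

# Hypothetical example: two coders agree on 45 of 60 coded utterances.
reliability = coder_reliability(45, 15)
print(f"{reliability:.2f}")  # 0.75, above the 0.70 threshold for good agreement
```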
Context study 3: semi-structured interviews with health system stakeholders and change agents
The sample comprises health system stakeholders and change agents who are responsible for the design and delivery of implementation interventions, programs, and change processes focused on improving healthcare professionals’ use of evidence in clinical practice. Participants will be purposefully selected from the four countries represented by our team: Canada, the USA, England, and Australia. These countries have different health systems; by sampling across these systems, we will capture variation in macro contextual factors. We will purposefully sample from each country to ensure that we interview a variety of participants, for example within (1) public health systems (where the government predominantly covers costs), (2) private health systems (where individuals predominantly cover costs through private insurance premiums), and (3) managed care health systems (where institutional arrangements put administrators and designated “gatekeepers” in charge of guiding patients through a healthcare network to manage costs). Because there will also be different health systems within public, private, and managed care programs by country, we anticipate recruiting 12–20 individuals across settings (e.g., inpatient, outpatient, long-term care, home care) working in each of (1) public, (2) private, and (3) managed care systems, for a total of 36–60 interviews for this phase of our research program.
Data collection and analysis
Semi-structured key informant interviews will be conducted using an interview guide designed to elicit participants’ knowledge and beliefs about context (what it is, and the contextual domains and features that are perceived to influence implementation interventions and healthcare professionals’ use of evidence in clinical practice). Interviews are expected to last 45 min; they will be conducted by telephone and digitally recorded (with consent). To monitor the progress of the interviews, permit follow-up of issues that may emerge from the data, and allow us to assess whether we have reached data saturation, interviewing, transcription, and analysis will proceed concurrently. The digital recordings will be transcribed verbatim and verified by the interviewer prior to analysis. Inductive data analysis using constant comparative analysis, as described under Context study 2, will be used.
The findings from the three context studies described above (the concept analysis, the secondary analysis of healthcare professional interviews, and the interviews with health system stakeholders and change agents) will be triangulated in order to refine and increase confidence about the comprehensiveness of the preliminary context framework derived from the concept analysis. We will follow the 6-step triangulation process outlined by Farmer and colleagues and summarized in Table 2. Two groups of team members will independently undertake the triangulation process and compare their results. Data triangulation will result in an extensive framework of context domains (and their features) important to healthcare professionals’ use of research evidence in clinical practice and the effectiveness of implementation interventions. The framework will represent a shared understanding of context across researchers in the published literature internationally (Context study 1), healthcare professionals (Context study 2), and health system stakeholders and change agents (Context study 3).
Delphi study: assessment of the context framework for content validity
The final phase of our research program is to assess the content validity of the refined context framework resulting from the data triangulation phase. A modified Delphi approach [49–51] will be used. International experts will be asked to quantitatively rate their agreement with the domains (and their features) in the framework given the data and the processes used to derive the framework. The Delphi approach was selected because it is anonymous and thus allows participants from different backgrounds to take part without imposing a hierarchy. It has previously been used for similar purposes, for example, to identify innovation determinants in healthcare and to confirm research priorities in different health settings [53–56].
Participants will be selected from the international research and healthcare delivery community and will include (1) researchers from a variety of disciplines (e.g., health services, implementation science, quality improvement, public health, social science, behavioral science, organizational science) who study context and implementation in healthcare and (2) health system stakeholders and change agents who are responsible for the design and delivery of interventions, programs, and change processes focused on increasing evidence use by healthcare professionals in clinical practice. There is a broad range of estimates of suitable sizes for a Delphi panel. Lumley et al., however, showed that any sample larger than n = 65 can be treated as normally distributed, allowing the use of large sample estimators with more robust properties in Delphi analyses. Therefore, 70 participants will be recruited. Assuming a 50 % response rate, 140 experts will be invited to participate.
Participants in the Delphi process will complete 2–3 email questionnaires, each estimated to take 20–30 min to complete. In Round 1, participants will be provided with (1) a description of the process used to develop the context framework, (2) a data triangulation table that summarizes the framework domains (and their features) of context with supporting evidence from the three context studies conducted, and (3) a questionnaire that asks the experts to rate their strength of agreement (on a 9-point scale [59, 60]) with the domains (and their features) in the context framework given the data provided. There will also be open-ended fields allowing participants to comment on each domain and on any domains/features they interpret as missing. In Round 2, quantitative data (frequency distributions and measures of spread) from Round 1 will be fed back by email to the experts, who will be asked to confirm or revise their initial views. In Round 3, Round 2 will be repeated if needed. A maximum of three rounds will be conducted to limit participant burden.
All participants will be treated and analyzed as a single group of experts. The initial ratings of agreement from Round 1 will be summarized as frequency distributions together with measures of dispersion (interquartile ranges) and central tendency (mean, median). In line with previous Delphi studies, participants will be considered to be in “disagreement” if 30 % or more of the ratings are in the lower third (ratings 1–3) and 30 % or more of the ratings are in the upper third (ratings 7–9). Context domains (and features) with an overall rating of 7 to 9 (without disagreement) will be judged as appropriately included in the framework. Agreement ratings from Round 2 will also be summarized quantitatively (as outlined above) and presented back to the participants. A third round will be conducted only if necessary, using the same procedure. The Delphi process will cease when agreement is established on inclusion of 70 % of the context domains (and their features), defined by a Wald statistic of <0.7, or when there is no change in participant scores between two consecutive rounds, defined as a change in the mean score across all participants of <1 scale point for any individual item. Following completion of the Delphi process, any necessary revisions will be made to the context framework.
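The rating rules above can be sketched as follows. This is a minimal illustration, not the study's analysis code; it assumes (our assumption, not stated in the protocol) that the "overall rating" of an item is the median of the individual expert ratings:

```python
from statistics import median

def classify_item(ratings):
    """Apply the Delphi rating rules to one context domain/feature.

    ratings: individual expert ratings on the 9-point scale.
    Disagreement: >=30% of ratings in the lower third (1-3) AND
    >=30% in the upper third (7-9). An item is judged appropriately
    included if its overall rating falls in 7-9 without disagreement.
    (Using the median as the overall rating is our assumption.)
    """
    n = len(ratings)
    low = sum(1 <= r <= 3 for r in ratings) / n
    high = sum(7 <= r <= 9 for r in ratings) / n
    disagreement = low >= 0.30 and high >= 0.30
    included = 7 <= median(ratings) <= 9 and not disagreement
    return {"disagreement": disagreement, "included": included}

# Broad consensus in the upper third: included, no disagreement.
print(classify_item([8, 8, 9, 7, 8, 7, 9, 8, 7, 8]))
# Polarized ratings: classified as disagreement, so not included.
print(classify_item([1, 2, 9, 8, 1, 9, 2, 8, 9, 1]))
```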
This paper presents the protocol of a multi-phased research program that takes up the challenge to develop, refine, and validate a comprehensive framework of context that identifies the key domains (and their features) of context that facilitate or hinder (1) healthcare professionals’ use of evidence in clinical practice and (2) the effectiveness of implementation interventions. The framework will represent a shared understanding of context that is needed to advance the science of implementation. The framework will be useful to both researchers and knowledge users (healthcare decision makers, implementers/change agents). Researchers and knowledge users may be able to use the framework to guide a priori assessments of context (to assist in the design and delivery of their implementation interventions) as well as a posteriori assessments of context (to aid the interpretation of the effects of implementation interventions, which can then inform the design and delivery of subsequent implementation trials). They may also be able to use the framework to pragmatically guide their implementation efforts by identifying features of context that are important to consider when choosing, designing, and delivering implementation interventions, and to assess the transferability of effective implementation interventions to their own contexts by identifying contextual features that may influence effective implementation.
Lang ES, Wyer PC, Haynes RB. Knowledge translation: closing the evidence-to-practice gap. Ann Emerg Med. 2007;49:355–63.
Lauer S, Skarlatos S. Translational research for cardiovascular diseases at the National Heart, Lung, and Blood Institute: moving from bench to bedside and from bedside to community. Circulation. 2010;121:929–33.
McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348:2635–45.
Schuster M, McGlynn E, Brook R. How good is the quality of health care in the United States? Milbank Q. 2003;76:517–63.
Grol R. Successes and failures in the implementation of evidence-based guidelines for clinical practice. Med Care. 2001;39:II46–54.
Fogarty International Center. Frequently asked questions about implementation science. Bethesda, MD: National Institutes of Health; 2015. Accessed 8 September 2015.
Backer TE. Knowledge utilization: the third wave. Knowledge: Creation, Diffusion, Utilization. 1991;12:225–40.
Landry R, Lamari M, Amara N. Climbing the ladder of research utilization: evidence from social science research. Sci Commun. 2001;22:396–422.
Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovation in service organizations: systematic review and recommendations. Milbank Q. 2004;82:581–629.
Rich R, Oh C. The utilization of policy research. In: Nagel S, editor. Encyclopedia of policy studies. 2nd ed. New York: M. Dekker; 1994. p. 69–92.
Dogherty EJ, Harrison MB, Graham ID. Facilitation as a role and process in achieving evidence-based practice in nursing: a focused review of concept and meaning. Worldviews Evid-Based Nurs. 2010;7:76–89.
Kimberly J, Cook J. Organizational measurement and the implementation of innovations in mental health services. Adm Policy Ment Health. 2008;35:11–20.
Mitton C, Adair CE, McKenzie E, Patten SB, Perry BW. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q. 2007;85:729–68.
Squires JE, Estabrooks CA, O’Rourke HM, Gustavsson P, Newburn-Cook C, Wallin L. A systematic review of the psychometric properties of self-report research utilization measures used in healthcare. Implement Sci. 2011;6:83.
Squires J, Estabrooks C, Gustavsson P, Wallin L. Individual determinants of research utilization by nurses: a systematic review update. Implement Sci. 2011;6:1.
Squires JE, Hutchinson AM, Bostrom A-M, O’Rourke HM, Cobban SJ, Estabrooks CA. To what extent do nurses use research in clinical practice? A systematic review. Implement Sci. 2011;6:21.
Glaser EM, Abelson HH, Garrison KN. Putting knowledge to Use: facilitating the diffusion of knowledge and the implementation of planned change. San Francisco: Jossey-Bass; 1983.
Ovretveit J. Understanding the conditions for improvement: research to discover which context influences affect improvement success. BMJ Qual Saf. 2011;20:i18–23.
May C, Finch T, Mair F, Ballini L, Dowrick C, Eccles M, et al. Understanding the implementation of complex interventions in health care: the normalization process model. BMC Health Serv Res. 2007;7:148.
French B. Contextual factors influencing research use in nursing. Worldviews Evid-Based Nurs. 2005;2:172–83.
Rycroft-Malone J. The PARIHS framework: a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;19:297–304.
Allport GW. The nature of prejudice. Cambridge, MA: Addison-Wesley; 1954.
Hogg W, Lemelin J, Graham ID, Grimshaw J, Martin C, Moore L, et al. Improving prevention in primary care: evaluating the effectiveness of outreach facilitation. Fam Pract. 2008;25:40–8.
Lemelin J, Hogg W, Baskerville N. Evidence to action: a tailored multifaceted approach to changing family physician practice patterns and improving preventive care. CMAJ. 2001;164:757–63.
Shojania K, Jennings A, Mayhew A, Ramsay C, Eccles M, Grimshaw J. The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev. 2011;8(3):CD001096.
The Health Foundation. Lining Up: How do improvement programmes work? London: The Health Foundation; 2013.
Rogers EM. Diffusion of innovations. 5th ed. New York: Free Press; 2003.
Kitson A, Harvey G, McCormack B. Enabling the implementation of evidence based practice: a conceptual framework. Qual Health Care. 1998;7:149–58.
Kitson AL, Rycroft-Malone J, Harvey G, McCormack B, Seers K, Titchen A. Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges. Implement Sci. 2008;3:1.
Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26:13–24.
Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice. A consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
Cane J, O’Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7:37.
Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Health Care. 2005;14:26–33.
McNulty T, Ferlie E. Reengineering health care: the complexities of organizational transformation. New York: Oxford University Press; 2002.
Waltz CF, Strickland O, Lenz E. Measurement in nursing and health research. New York: Springer Pub; 2005.
Squires JE, Graham ID, Hutchinson AM, Linklater S, Brehaut JC, Curran J, et al. Understanding context in knowledge translation: a concept analysis study protocol. J Adv Nurs. 2014;71(5):1146–55.
Avant KC. The Wilson method of concept analysis. In: Rodgers BL, Knafl KA, editors. Concept development in nursing: foundations, techniques and applications. Philadelphia: W. B. Saunders Company; 2000.
Corti L, Thompson P. Secondary analysis of archive data. In: Searle C, Gobo G, Gubrium JF, editors. Qualitative research practice. London: Sage; 2004. p. 327–43.
Corti L, Foster J, Thompson P. The need for a qualitative data archival policy. Qual Health Res. 1996;6:135–9.
Heaton J. Reworking qualitative data. London: Sage; 2004.
Heaton J. Secondary analysis of qualitative data: a review of the literature. ESRC research report R000222918. 2000.
Fishbein ME. Readings in attitude theory and measurement. New York: John Wiley and Sons; 1967.
QSR International. NVivo10. 2012.
Hewitt-Taylor J. Use of constant comparative analysis in qualitative research. Nurs Stand. 2001;15:39–42.
Glaser B, Strauss A. The discovery of grounded theory. Hawthorne, NY: Aldine; 1967.
Miles MB, Huberman AM. Qualitative data analysis: an expanded sourcebook. 2nd ed. Thousand Oaks, CA: Sage; 1994.
Farmer T, Robinson T, Elliott S, Eyles J. Developing and implementing a triangulation protocol for qualitative health research. Qual Health Res. 2006;16:377.
Sackman H. Delphi technique. Lexington: Lexington Books; 1975.
McKenna HP. The Delphi technique: a worthwhile research approach for nursing. J Adv Nurs. 1994;19:1221–5.
Okoli C, Pawlowski SD. The Delphi method as a research tool: an example, design considerations and applications. Inf Manag. 2004;42:15–29.
Fleuren M, Wiefferink K, Paulussen T. Determinants of innovation within healthcare organizations. Int J Qual Health Care. 2004;16:107–23.
Bond S, Bond J. A Delphi study of clinical nursing research priorities. J Adv Nurs. 1982;7:565–75.
Broome ME, Woodring B, Von O’Connor S. Research priorities for the nursing of children and their families: a Delphi study. J Pediatr Nurs. 1996;11:281–7.
Schmidt K, Montgomery LA, Bruene D, Kenney M. Determining research priorities in pediatric nursing: a Delphi study. J Pediatr Nurs. 1997;12:201–7.
Sleep J. Establishing priorities for research: report of a Delphi survey. BJM. 1999;3:323–31.
Lumley T, Diehr P, Emerson S, Chen L. The importance of the normality assumption in large public health data sets. Annu Rev Public Health. 2002;23:151–69.
Cook JV, Dickinson HO, Eccles MP. Response rates in postal surveys of healthcare professionals between 1996 and 2005: an observational study. BMC Health Serv Res. 2009;9:160.
Raine K, Sanderson C, Hutchings A, Carter S, Larkin K, Black N. An experimental study of determinants of group judgements in clinical guideline development. Lancet. 2004;364:429–37.
Elwyn G, O’Connor A, Stacey D, Volk R, Edwards A, Coulter A. Developing a quality criteria framework for patient decision aids: online international Delphi consensus process. BMJ. 2006;333:417–9.
This study has received peer-reviewed funding from the Canadian Institutes of Health Research (CIHR) and ethics approval from the Ottawa Hospital Research Ethics Board and the University of Ottawa Research Ethics Board. JES holds a CIHR New Investigator Award in Knowledge Translation and a University of Ottawa Research Chair in Health Evidence Implementation. JMG holds a CIHR Canada Research Chair in Knowledge Transfer and Uptake. NI holds New Investigator Awards from CIHR and the Department of Family and Community Medicine at the University of Toronto.
We would like to thank the following individuals (external to our research team) for allowing data they collected to be used in the secondary analysis (Context study 2) of the program described in this protocol: Andre Bussieres (McGill University), Christopher Fuller (University College London), Greg Knoll (The Ottawa Hospital), Danielle Mazza (Monash University), Kathryn Suh (The Ottawa Hospital), and Alan T Tinmouth (University of Ottawa).
The authors declare that they have no competing interests. JES and SM are Associate Editors of Implementation Science, AMH is a member of the Editorial Board of Implementation Science, and AES is a co-Editor in Chief of Implementation Science; they had no involvement in the review of this manuscript.
All authors participated in the conception of the study and in securing its funding. JES drafted the manuscript. All authors provided input into the protocol, gave critical feedback on the manuscript, and approved the final version.