- Open Access
Revisiting concepts of evidence in implementation science
Implementation Science volume 17, Article number: 26 (2022)
Evidence, in multiple forms, is a foundation of implementation science. For public health and clinical practice, evidence includes the following: type 1 evidence on etiology and burden; type 2 evidence on effectiveness of interventions; and type 3 evidence on dissemination and implementation (D&I) within context. To support a vision for development and use of evidence in D&I science that is more comprehensive and equitable (particularly for type 3 evidence), this article aims to clarify concepts of evidence, summarize ongoing debates about evidence, and provide a set of recommendations and tools/resources for addressing the “how-to” in filling the evidence gaps most critical to advancing implementation science.
Because current conceptualizations of evidence have, in our view, been relatively narrow and insufficiently characterized, we identify and discuss challenges and debates about the uses, usefulness, and gaps in evidence for implementation science. A set of questions is proposed to assist in determining when evidence is sufficient for dissemination and implementation. Intersecting gaps include the need to (1) reconsider how the evidence base is determined, (2) improve understanding of contextual effects on implementation, (3) sharpen the focus on health equity in how we approach and build the evidence base, (4) conduct more policy implementation research and evaluation, and (5) learn from audience and stakeholder perspectives. We offer 15 recommendations to assist in filling these gaps and describe a set of tools for enhancing the evidence most needed in implementation science.
To address our recommendations, we see capacity as a necessary ingredient to shift the field’s approach to evidence. Capacity includes the “push” for implementation science, where researchers are trained to develop and evaluate evidence that is useful and feasible for implementers and reflects community or stakeholder priorities. Equally important, there has been inadequate training and too little emphasis on the “pull” for implementation science (e.g., training implementers, practice-based research). We suggest that funders and reviewers of research adopt and support a more robust definition of evidence. By critically examining the evolving nature of evidence, implementation science can better fulfill its vision of facilitating widespread and equitable adoption, delivery, and sustainment of scientific advances.
For every complex problem, there is a solution that is simple, neat, and wrong. — H. L. Mencken
Evidence, often informed by a complex cycle of observation, theory, and experiment, is a foundation of implementation science [2, 3]. Evidence is central in part because dissemination and implementation (D&I) science is based on the notion that there are practices and policies that should be widely used because scientific research concludes that they would have widespread benefits. In this context, an evidence-based intervention (EBI) is defined broadly to include programs, practices, processes, policies, and guidelines with some level of effectiveness. Many of the underlying sources of evidence were originally derived from legal settings, taking on multiple forms including witness accounts, police testimony, expert opinions, and forensic science. Building on these origins, evidence for public health and clinical practice comes in many forms, across three broad domains [6,7,8]: type 1: evidence on etiology and burden; type 2: evidence on effectiveness of interventions; type 3: evidence on implementation within context (Table 1). These three types of evidence are often not linear, but interconnected, iterative, and overlapping—they shape one another (e.g., if we have limited type 2 evidence, then the ability to apply type 3 evidence is hampered). Across these three domains, we have by far the most type 1 evidence and the least type 3 evidence [6, 9].
Definitions of evidence and the associated processes (how evidence is used) vary by setting. In clinical settings, evidence-based medicine is “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.” Evidence-based public health occurs across a range of community settings and is “the process of integrating science-based interventions with community preferences to improve the health of populations.” Perhaps most relevant to implementation science, evidence-based decision-making is a multilevel process that involves collecting and implementing the best available evidence from research, practice, professional experience, and clinical or community partners [12,13,14,15]. A robust, equitable, and sustainable approach to evidence-based decision-making takes both challenges and strengths into account (e.g., skills, leadership priorities, resources [16,17,18,19]) and places scientific evidence and stakeholder engagement at the center of the decision-making process.
For all types of evidence, and particularly for type 3 evidence regarding D&I, complexity and context are essential elements [21,22,23]. Both PCORI [24, 25] and a recent update to the MRC guidance have issued statements on researching complex health interventions that offer excellent recommendations and resources. We concur with most of these recommendations and add to them in this article. The most effective approaches often rely on complex interventions embedded in complex systems (e.g., nonlinear, multilevel interventions), where the description of core intervention components and their relationships involves multiple settings, audiences, and approaches [26,27,28]. Type 3 evidence is also highly context-dependent—the context for implementation involves complex adaptive systems that form the dynamic environment(s) in which discrete interventions and interconnected implementation processes are situated. For example, in models such as the Dynamic Sustainability Framework, the EBI is embedded in the context of multiple factors in a practice setting (e.g., staffing, organizational climate), which is in turn embedded in a broader ecological system with a complex set of variables (e.g., policy, regulations, population characteristics). This embeddedness should also take dynamism into account—an EBI may stay true to its original function but may need to evolve in form over time to adapt to changing population needs, new evidence, and the “fit” of evidence with complex and changing contexts [30,31,32].
Much has been written about the terminology of evidence-based practice and policy. The most widely used term is “evidence-based” practice (often evidence-based medicine [33, 34] or evidence-based public health [7, 35]). Especially in Canada and Australia, the term “evidence-informed” decision-making is commonly used [15, 36]. The term “informed” is used to emphasize that public health decisions are based on research but also require consideration of individual preferences and political and organizational factors [37, 38]. Others have used the terms “knowledge-based practice,” “practice-based evidence,” or “practice-relevant evidence” to emphasize the importance of practice wisdom from frontline practitioners and the lived experience of patients and community members [39,40,41,42,43]. To maximize the use of EBIs, research should inform practice and practice should inform research. In our view, the most important issue is not which term to use, but rather that implementation decisions should be based on and informed by evaluation and research findings, while using rigorous methods to take into account a variety of contextual variables across multiple levels of influence (Table 2).
Fundamental issues for implementation science involve two questions: (1) Evidence on what, for whom, in what settings, and under what conditions? and (2) When do we have enough evidence for D&I? While the answer to the latter question will always be “it depends,” there are related questions that are useful to consider (Table 3).
To facilitate the development and delivery of more equitable and sustainable interventions, we need to expand our thinking about evidence, especially for (but not limited to) type 3 evidence. We discuss a set of five core interrelated issues about evidence, examining (1) how the evidence base is determined, (2) context, (3) health equity, (4) policy implementation, and (5) audience/stakeholder perspectives. All areas concern some form of research or knowledge gap in D&I science. The evidence base discussion presents a broader perspective on what is considered evidence; the context, equity, and stakeholder sections cover neglected aspects of implementation science in need of more and higher-quality research; and the policy implementation section points to the most pressing gaps in policy-relevant research for D&I. Across these areas, we provide a series of recommendations along with tools and resources for speeding the translation of research to practice and policy.
Selected debates about evidence
Here, we describe ongoing discussions and debates about the uses, usefulness, and gaps in evidence for implementation science, which motivate our recommendations (Table 4). While this is not an exhaustive list, it illustrates the need for more reflection and clarity across five core areas where major issues about evidence remain unresolved.
Reconsider how the evidence base is determined
The evidence base for implementation science needs to be broadened to encompass a wider range of study designs, methods, stakeholders, and outcomes. For example, the decontextualized randomized controlled efficacy trial (RCT), which attempts to control for many potential confounding factors, is generally considered the gold standard for obtaining evidence on internal validity and contributing to the determination of causality of a given intervention, practice, or treatment. A property of an RCT is that, with large sample sizes, it allows researchers to potentially balance known and unknown confounders. Despite the value and conceptual simplicity of the traditional efficacy RCT, its limitations have been noted [46,47,48]. For example, randomization may be impractical, costly, or unethical for some interventions (e.g., community-based interventions where partners have concerns about withholding a program from the community) and for many policy interventions, where the independent variable (the “exposure”) cannot be randomized. Tools such as PRECIS-2 and the newer PRECIS-2 PS help enhance the real-world utility of RCTs (pragmatic trials) [49, 50]. For some settings and interventions, alternative, more rapid-cycle, and adaptive designs are needed to elucidate effects, including quasi-experiments, observational studies, iterative assessments and actions, natural experiments, and mixed-methods studies [51,52,53,54,55]. Often in implementation science, what we want to know is how one strategy adds to a range of strategies already being delivered within an existing environment, a concept called “mosaic effectiveness.”
For clinical and public health practice, the generalizability of an EBI’s effectiveness from one population and setting to another (and ideally across a diverse range of populations and settings)—the core concept of external validity—is an essential ingredient. Systematic reviews and practice guidelines, which are often the basis for an implementation study, are mainly focused on whether an intervention is effective on average (internal validity) and have commonly given limited attention to specifying the conditions (settings, populations, circumstances) under which a program is and is not effective [57,58,59]. For implementation science, there are many considerations and layers to the notion of whether an evidence-based practice applies in a particular setting or population. Tools such as ADAPT or process models like ADAPT-ITT can be useful in transferring EBIs from one setting to another while taking contextual variables into account. Models such as FRAME and FRAME-IS are helpful for tracking and building the evidence base around which types of adaptations are associated with improved or decreased effectiveness or implementation outcomes, and for which settings and populations [62, 63].
The question of whether an EBI applies involves a set of scientific considerations that may differ from simply knowing average treatment effects. These include balancing fidelity to the original EBI functions with adaptations needed for replication and scale-up, as well as considering when there may be a need to “start from scratch” in developing a new intervention as opposed to refining or adapting an existing one (e.g., when the nature of the evidence for an EBI does not fit the sociocultural or community context). There is a pressing need for research on the strengths and limitations of practitioner-driven and community-centered adaptation of EBIs, which is likely to enhance relevance, feasibility, sociocultural appropriateness, and acceptability, as well as fit with implementation context [65,66,67]. There are also trade-offs to consider when adapting EBIs or implementation strategies (e.g., costs, resources needed, potential reduction in effectiveness) [63, 68, 69]. It has also been suggested that greater emphasis is needed on both the functions of an intervention (its basic purposes, underlying theoretical premise) and its forms (the strategies and approaches used to meet each intervention function), opening the door to inquiry about how fidelity to function may demand adaptations (or, in some cases, transformation or evolution) in form.
Additional evidence is needed on the interrelated concepts of null (ineffective) interventions, de-implementation, and mis-implementation [70,71,72]. From null intervention results, we can learn which parts of an EBI or implementation strategy need to be refined, adapted, or re-invented. Data on null interventions also inform for whom and under what conditions an EBI or implementation strategy is “evidence-based.” De-implementation is the process of stopping or abandoning practices that are not proven effective or are possibly harmful, whereas mis-implementation involves one or both of two processes: the discontinuation of effective programs and the continuation of ineffective practices in public health settings. Many of the contextual variables in Table 2 strongly affect de-implementation and mis-implementation.
Emerging perspectives in data science and causal inference may help advance type 3 evidence. If contextual heterogeneity is the norm, then the scientific task in any one study population is to produce data that address relevance across diverse external settings. Useful methods to do so are becoming available and suggest that the more we know about the mediators/mechanisms and modifiers of effects in implementation, the more interpretable findings could be in different settings and populations [74,75,76]. For example, consider the question of whether evidence for audit and feedback on the use of EBIs in HIV clinics, from randomized trials in Boston, could apply to HIV clinics in Nairobi, Kenya. Let us assume that in Boston, researchers learn that the credibility of the data is a key driver of successful implementation (e.g., clinicians who doubt the veracity of metrics from the electronic health record are less likely to respond). Given the widespread challenges of data accuracy in the nascent electronic health records in this specific setting in Africa (and the extensive literature documenting this challenge), audit and feedback as an implementation strategy can be anticipated to have limited implementation relevance as well as effectiveness. Using data from Boston to make inferences about Nairobi (in this case, that the strategy might not work) depends on knowing the critical mediators of audit and feedback in Boston (i.e., the credibility of data on provider performance). In some situations, a completely different implementation strategy may be needed that is better suited to local conditions. One further implication is that this directs research efforts not only toward finding effects in Boston, but also toward understanding how they came about (type 3 evidence).
Improve understanding of contextual effects on implementation
The complexity and dynamic nature of implementation necessitate continual attention to context (i.e., the active and unique factors that surround implementation and sustainability [77, 78]) [22, 79, 80]. When context is taken into account in research, the study findings are more likely to indicate the conditions under which evidence does or does not generalize to different populations, settings, and time periods—yet too often context is inadequately described or not fully elucidated. Contextual conditions also drive and inform the adaptation of EBIs to populations and settings that differ from those in which they were originally developed. It is useful to consider contextual issues of relevance for implementation across the levels of a socio-ecological framework (individual, interpersonal, organizational, community, policy) (Table 2).
The challenging scientific task of “unpacking” context requires three activities. First, contextual effects in any study setting or across settings and/or systems should be enumerated (e.g., a set of variables in Table 2). Second, since one cannot measure everything, part of building the evidence base involves determining which aspects of context are most salient for implementation within and across settings. Third, implementation research should also seek to measure the presence, distribution, and intensity of those contextual factors in target settings in which a research study is not being undertaken, but where one might want to apply evidence.
Within an implementation research project, context is dynamic and should be assessed across all stages of a study. Too often, dynamic contexts are not fully understood or assessed. In some cases, the context for delivery (e.g., a particular clinical setting) is relatively stable, but the target of the intervention (e.g., a particular pathophysiology; guidelines for cancer screening) is dynamic and emergent. In a more complex intervention trial, both context and targets are dynamic and emergent [22, 84].
During implementation planning, a needs and assets assessment (formative research) should account for historical, cultural, social, and system factors that may shape implementation and the implementation climate, including forms of structural or institutional racism (e.g., inequitable practices and policies), medical mistrust, institutional and provider biases and norms that may create or reinforce inequities, as well as community strengths and assets that may inform implementation efforts. Tools such as critical ethnography can be useful during needs assessment to understand interactions between the ensembles of actors, agencies, interventions, and other contextual variables. When selecting EBIs to be tested in an implementation study, context may affect both internal and external validity. Systematic reviews, which are often the source of EBIs, use a relatively narrow hierarchy of evidence and tend to strip out implementation context when trying to make a summary (often quantitative) judgement about the average effectiveness of an EBI (e.g., for most populations and settings). For many settings in which we are conducting implementation studies (e.g., lower- and middle-income countries), we may not have a strong evidence base, guidelines, or interventions that have been tested through “gold-standard” RCTs, and when they have, it is often not under conditions similar to those in which the EBI will now be applied.
Context in global settings presents unique considerations, particularly in lower- and middle-income countries (LMICs) and other settings that have limited resources and face numerous structural barriers to health (e.g., in the USA, federally qualified health centers; donor-funded vertical health programs in LMICs). Among the considerations is the relevant evidence base for implementation: when settings vary tremendously, particularly in their social and political context and systems/organizational infrastructure, do researchers and implementers need to start anew in building the evidence base for implementation, answering many of the questions in Table 3? There is some evidence that in settings with constrained resources, intervention and methods innovations may be fostered by the need for creativity and adaptation (e.g., task shifting) when choices are restricted. Adaptive designs (where interventions and strategies are modified in response to emerging data) may be particularly useful in LMICs, since they may allow a team to begin with low-intensity/low-resource approaches and refine or intensify as needed [90,91,92].
Transportability theory has been applied to assess whether findings about the effects of an implementation strategy in one setting can be used to make inferences in another, and if so, whether the strategy is likely to work there. Context, when defined narrowly as the causes of an outcome that differ from one setting to another, asks science to focus on two measurement tasks. In the initial context where a strategy is being tested, it is important to measure the steps that mediate or moderate the effects of the strategy on the outcome, as well as the factors that influence those steps. Hypotheses not only about effects but also about how and why they occur across diverse settings are important to inform the measurement architecture.
Context is also important during the process of broader dissemination of evidence-based approaches. There is a well-documented disconnect between how researchers disseminate their findings (including EBIs) and how practitioners and policy makers learn about the latest evidence. Applying principles of designing for dissemination (D4D) allows researchers to better account for the needs, assets, priorities, and time frames of potential adopters and stakeholders [94, 95]. An active D4D process emphasizes the design phase of an implementation research project, anticipating dissemination of products (e.g., an evidence-based implementation strategy) by developing a dissemination plan that takes into account audience differences, product messaging, channels, and packaging. In the future, this proactive D4D process could more fully address designing for equity and sustainment, as well as dissemination.
Sharpen the focus on health equity
Addressing health disparities and promoting health equity is becoming a more central and explicit focus of implementation science [92, 97,98,99,100,101,102]. Health equity is a framing that shifts from a deficits approach (disparities) to one focused on what society can achieve (equity). An equity focus also recognizes the unjust nature of inequities, naming root/structural causes. This emphasis is documented in publication trends over the past two decades. Figure 1 shows trends in publications from January 1, 2000, to December 31, 2021, using two search strings in PubMed: (1) “health disparities” AND [“implementation science” OR “implementation research” OR “knowledge translation”] and (2) “health equity” AND [“implementation science” OR “implementation research” OR “knowledge translation”]. For most of the past two decades, research has been framed more often with a disparities focus than with an equity focus—disparity publications were two- to three-fold more common than equity articles from 2006 to 2014. In 2021, however, the number of equity-framed publications greatly exceeded the number of disparities-framed publications.
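Trend counts of this kind can be approximated programmatically. The following is a minimal sketch using NCBI’s public E-utilities `esearch` endpoint; the endpoint and its parameters (`db`, `term`, `datetype`, `mindate`/`maxdate`) are standard E-utilities usage, while the helper names (`build_query`, `count_pubs`) are our own illustration, and counts will drift over time as PubMed indexing changes.

```python
"""Sketch: approximate publication-trend counts via NCBI E-utilities."""
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Implementation-science terms shared by both search strings in the text.
IS_TERMS = '("implementation science" OR "implementation research" OR "knowledge translation")'


def build_query(framing: str) -> str:
    """Combine a framing term (e.g., "health equity") with the shared terms."""
    return f'"{framing}" AND {IS_TERMS}'


def count_pubs(query: str, year: int) -> int:
    """Return the number of PubMed records matching `query` published in `year`.

    Requires network access to eutils.ncbi.nlm.nih.gov.
    """
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": query,
        "datetype": "pdat",  # restrict by publication date
        "mindate": str(year),
        "maxdate": str(year),
        "retmode": "json",
        "retmax": "0",       # only the total count is needed
    })
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])


# Example (requires network access):
#   for framing in ("health disparities", "health equity"):
#       print(framing, count_pubs(build_query(framing), 2021))
```

For repeated or high-volume queries, E-utilities also accepts an `api_key` parameter to raise rate limits; error handling is omitted here for brevity.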
To move toward the goal of achieving health equity, it is critical that implementation science expand the quantity, quality, and types of evidence produced and prioritized, as well as who and what settings are (1) reflected in that evidence (representativeness) and (2) involved in its generation and interpretation (representation). For many health conditions and populations, we have adequate descriptive (type 1) data that can guide what to address (e.g., the size and nature of disparities). However, we often lack sufficient data on EBIs and strategies that are effective in reducing inequities and/or promoting equity. Often, available EBIs inadequately address or account for many relevant social, cultural, structural, and contextual conditions that shape health inequities and have implications for EBI implementation [92, 105, 106]. There are challenges in generating evidence on inequities, including potentially smaller sample sizes across the various social dimensions through which inequities exist, which may limit subgroup heterogeneity analyses (e.g., by race or ethnicity) [107, 108] (see Table 2). As we build the evidence base of EBIs that actively promote equity, there is a need to understand the core elements of equity-focused interventions and strategies, and to do so for the range of social dimensions through which health inequities may exist (e.g., race, immigration status, gender, sexual orientation, location) and their intersections.
A foundational challenge here is that many EBIs were not developed with, or tested among, settings or populations that experience inequities, or with the goal of promoting health equity, and may unintentionally contribute to or exacerbate inequities [110,111,112]. This results in part from the reductionist way in which EBIs are often developed, deployed (a linear, “cause and effect” approach), and tested, paying inadequate attention to the complex and interrelated social determinants of health and root causes of health inequities (e.g., structural racism, inequitable allocation of resources and opportunities) [114,115,116,117,118].
We need to engage a wider range of partners from lower-resource settings earlier, throughout the research process, and in meaningful ways to build a broader and more relevant array of equity-focused EBIs that are feasible, acceptable, culturally appropriate, and address root causes. We also need to expand what we “count” as EBIs in public health and clinical research, broadening the focus from a narrower view of individual, interpersonal, and organizational interventions to also include community, policy, and multi-sector interventions that have the potential to make larger shifts in health inequities. Such broadening of evidence with an eye toward health equity means moving beyond a singular focus on EBI repositories to include and evaluate existing, promising, community-defined evidence and interventions [92, 119, 120]. In expanding the evidence base with the goal of promoting health equity, there are significant opportunities to develop and deploy EBIs in sectors outside of health (e.g., schools, workplaces, social services agencies, juvenile justice settings) where, in many cases, the reach and impact can be greater than in the health sector. Additionally, as we expand this evidence base, it may be beneficial to prioritize the development and evaluation of interventions, practices, and policies that can reduce underlying structural and social factors (e.g., structural racism) and their downstream effects on health inequities.
Equity should be a core criterion for valuing evidence. This value statement should be reflected in the priorities of funders, how research questions are framed, how research resources and decision-making are distributed, and how studies are conducted, evaluated, and reviewed. Implementation science has a role in recognizing that a negative consequence of our social and economic systems is the concentration of resources and health. These systems create inequities, so when thinking about closing an implementation gap, we should recognize the context—such a gap is often an outgrowth of these systems, which must be addressed and transformed. Equity needs to be prioritized and made more explicit as part of engagement efforts, which includes consideration of power imbalances (who is and is not involved in making key decisions) and the timing of when and how partners are engaged (e.g., who is involved in EBI development and deployment, how communities are reflected in co-creating the evidence) [95, 120]. Reflection questions and step-by-step guidance can help guide study planning with an equity focus [102, 120].
Conduct more policy implementation research and evaluation
Health and social policies, in the form of laws, regulations, organizational practices, and funding priorities, have a substantial impact on the health and well-being of populations and create the conditions under which people can be healthy and thrive—or not [122, 123]. Clinical and public health guidelines inform policy implementation by providing the basis for legislation, informing covered services in health plans, and advancing policies that support health equity [124,125,126,127,128]. Policies often address the underlying social and structural conditions that shape health and inequities—this in turn provides opportunities for policy implementation to frame accountability for organizations and systems.
Policy implementation research, which has been conducted since the 1970s across multiple disciplines [129, 130], seeks to understand the complexities of the policy process and increase the likelihood that evidence reaches policymakers and influences their decisions so that the population health benefits of scientific progress are maximized. A key objective of policy implementation research is the enactment, enforcement, and evaluation of evidence-based policies to (1) understand approaches to enhance the likelihood of policy adoption (process); (2) identify specific policy elements likely to be effective (content); and (3) document the potential impact of policy (outcomes). Especially in the USA, policy implementation research is underdeveloped compared with other areas of implementation science. For example, a content analysis of all projects funded by the US National Institutes of Health through implementation research program announcements found that only 8% of funded grants were on policy implementation research. Few of these studies had an explicit focus on equity or social determinants of health.
Policy researchers have utilized a variety of designs, methods, and data sources to investigate the development processes, content, and outcomes of policies. Much more evidence is needed, including which policies work and which do not (for what outcomes, settings, and populations), how policies should be developed and implemented, the unintended consequences of policies, and the best ways to combine quantitative and qualitative methods for evaluating “upstream” factors that have important implications for health equity. There is also a pressing need for reliable and valid measures of policy implementation processes. These knowledge gaps are unlikely to be addressed by randomized designs and are more likely to be addressed using quasi-experimental designs, natural experiments, stakeholder-driven adaptations, systems science methods, citizen science, and participatory approaches [51, 66, 136,137,138,139].
Several other areas in policy implementation research need attention. First, policy makers often need information on a much shorter time frame than researchers can deliver—this calls for the use of tools such as rapid-cycle research and rapid realist reviews. Second, we need to better understand the spread of policies, including the reasons that ineffective policies spread, the role of social media, and ways to address mis- and dis-information in the policy process. Finally, more emphasis is needed on the reciprocal, often horizontal, interactions between organizations and the development of policy-relevant evidence. For this inter-organizational research, the role of policy intermediaries (those who work in between existing systems to achieve a policy goal) has gained attention due to their critical roles in policy implementation. Strategies and tools to address several of these issues are provided in recent reviews [146, 147] and in Table 4.
Pay greater attention to audience and stakeholder differences
There are multiple audiences of relevance for developing, applying, disseminating, and sustaining the evidence for implementation science. When seeking effective methods to generate, implement, and sustain EBIs, it is important to take into account the characteristics of each audience and stakeholder group, what they value, how to balance different viewpoints, and how to combine stakeholders’ experience with research evidence. Across these stakeholder groups, research evidence is only one of many factors influencing adoption, implementation, and sustainment of EBIs [6, 15, 40].
Key audience categories include researchers, practitioners, and policy makers (Table 5). Researchers are one core audience; these individuals typically have specialized training and may devote an entire career to studying a particular health issue. Another audience includes clinical and public health practitioners who seek practical information on the scope and quality of evidence for a range of EBIs and implementation strategies relevant to their setting. Practitioners in clinical settings (e.g., nurses, physicians) have specialized and standardized training, whereas the training of public health practitioners is highly variable (most lack a public health degree). A third group is policy makers at local, regional, state, national, and international levels. These individuals face macro-level decisions on how to allocate public resources. Policy makers seek out distributional consequences (i.e., who has to pay, how much, and who benefits), and in many policy settings, anecdotes are prioritized over empirical data. The category of policy makers also includes funders; these may be elected officials or “small p” policy makers (organizational leaders) who make funding decisions within their settings.
The relevance and usefulness of evidence vary by stakeholder type (Table 5). Research usefulness can be informed by audience segmentation, in which a product promotion strategy is targeted to the characteristics of a desired segment, a widely accepted principle in marketing. Audience segmentation can be informed by the process of user-centered design and decision-centered processes, in which the product (e.g., an implementation strategy) is guided in a systematic way by the end-users of the product [153–155].
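One way to picture audience segmentation is as a simple lookup that matches a dissemination product to what each stakeholder group values. The segments, values, and products below are illustrative assumptions in the spirit of Table 5, not categories taken from the article.

```python
# Hypothetical sketch: audience segmentation for dissemination.
# Segment names, values, and product choices are illustrative only.
SEGMENTS = {
    "researcher":   {"values": "rigor, generalizability",
                     "product": "peer-reviewed article"},
    "practitioner": {"values": "feasibility, fit with setting",
                     "product": "one-page action guide"},
    "policy maker": {"values": "cost, distributional impact",
                     "product": "policy brief with local data"},
}

def tailor(audience: str) -> str:
    """Return a dissemination product matched to an audience segment."""
    seg = SEGMENTS.get(audience)
    if seg is None:
        raise ValueError(f"No segment defined for {audience!r}")
    return f"{seg['product']} (emphasizing {seg['values']})"

print(tailor("policy maker"))
# → policy brief with local data (emphasizing cost, distributional impact)
```

A real segmentation would be derived empirically (e.g., through user-centered design sessions with each group) rather than hard-coded, but the data structure captures the principle: one evidence base, several tailored products.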
Framing is another important factor in considering audiences for D&I. Individuals interpret the same data in different ways depending on the mental model through which they perceive information. For example, policy makers often perceive risks and benefits not in scientific terms but in relation to (usually short-term) emotional, moral, financial, or political frameworks [157, 158]. In practical terms for implementation science, framing a particular health issue for a community member or patient might relate to the ability to raise healthy children, whereas framing for a policy maker might relate to cost savings from action or inaction. Cost and economic evaluation are key considerations for a range of stakeholders involved in implementation, yet too often the perspectives of diverse stakeholders are not well considered, acted upon, or reported.
Next steps for addressing gaps
The “how-to” for broadening the evidence base for implementation science will require several actions. First, we need to prioritize the evidence gaps and possible ways of filling them; many ideas are shown in Table 4. Next, resources and tools are needed to address evidence deficits (Table 6). All tools listed are available free of charge and provide enough background and instruction to be useful for a wide range of users, from beginners to experts. The tools cover multiple, overlapping domains: (1) engagement and partnerships; (2) study planning; (3) research proposals, articles, reporting, and guidelines; and (4) dissemination, scale-up, and sustainability. In addition to the resources in Table 6, many other portals provide valuable information and resources for implementation research across multiple domains (e.g., technical assistance, mentorship, conferences, archived slides, webinars) [160–168].
Capacity is a core element for building a stronger, more comprehensive, and equitable evidence base. Capacity can be developed in multiple ways, including supporting the “push” for implementation science, in which researchers are trained to develop the evidence for implementation and to build skills in evaluation. Evaluation skill building should take into account the principles of realist evaluation, a mixed-methods approach that accounts for multiple contextual variables. There are numerous implementation science training opportunities across countries [160, 170, 171], though few have an explicit focus on many of the issues we have highlighted (e.g., health equity, designing for dissemination, sustainability, policy implementation). There has also been inadequate training and too little emphasis on the “pull” for implementation science (e.g., training the practitioners/implementers) [170, 172]. This emphasis on “pull” should embrace the audience differences in Table 5. There is even less evidence on who should conduct capacity building and how, especially in low-resource settings [171, 173].
There are also macro-level actions that would facilitate a broader and more robust evidence base. For example, funders and guideline developers should adopt a more comprehensive definition of evidence, addressing many of the recommendations outlined in Table 4 and above. This could include an alternative or addition to GRADE that incorporates methods of appraising research that do not automatically elevate RCTs (particularly when answering policy-related research questions). Similarly, it is helpful for study sections to be oriented to a wide array of evidence, particularly type 3 evidence. This will require some learning as well as some unlearning; as an example, we need to broaden our understanding of contextual mediators and moderators of implementation, which are likely to differ from those identified in highly controlled experiments.
Over the past few decades, there has been substantial progress in defining evidence for clinical and public health practice, identifying evidence gaps, and making initial progress in filling certain gaps. Yet to solve the health challenges facing society, we need new and expanded thinking about evidence and commitment to context-based decision-making. This process begins with evidence—a foundation of implementation science. By critically examining and broadening current concepts of evidence, implementation science can better fulfill its vision of providing an explicit response to decades of scientific progress that has not translated into equitable and sustained improvements in population health.
Availability of data and materials
Abbreviations
APEASE: Acceptability, Practicability, Effectiveness, Affordability, Side-effects, and Equity
CONSORT: Consolidated Standards of Reporting Trials
D4D: Designing for dissemination
ERIC: Expert Recommendations for Implementing Change
FRAME: Framework for Reporting Adaptations and Modifications-Enhanced
FRAME-IS: Framework for Reporting Adaptations and Modifications to Evidence-based Implementation Strategies
GRADE: Grading Recommendations Assessment and Development Evidence
HIV: Human immunodeficiency virus
LMICs: Lower- and middle-income countries
MOST: Multiphase Optimization Strategy
MRC: UK Medical Research Council
PRECIS: PRagmatic Explanatory Continuum Indicator Summary
PRECIS-2 PS: PRagmatic Explanatory Continuum Indicator Summary for providers
RCT: Randomized controlled trial
StaRI: Standards for Reporting Implementation Studies
T-CaST: Theory, Model, and Framework Comparison and Selection Tool
References
Rimer BK, Glanz DK, Rasband G. Searching for evidence about health education and health behavior interventions. Health Educ Behav. 2001;28:231–48.
Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013;347:f6753.
Shelton RC, Lee M, Brotzman LE, Wolfenden L, Nathan N, Wainberg ML. What is dissemination and implementation science?: An Introduction and opportunities to advance behavioral medicine and public health globally. Int J Behav Med. 2020;27:3–20.
Rabin BA, Brownson RC, Kerner JF, Glasgow RE. Methodologic challenges in disseminating evidence-based interventions to promote physical activity. Am J Prev Med. 2006;31:S24–34.
McQueen DV. Strengthening the evidence base for health promotion. Health Promot Int. 2001;16:261–8.
Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health. 2009;30:175–201.
Brownson RC, Gurney JG, Land GH. Evidence-based decision making in public health. J Public Health Manag Pract. 1999;5:86–97.
Rychetnik L, Hawe P, Waters E, Barratt A, Frommer M. A glossary for evidence based public health. J Epidemiol Community Health. 2004;58:538–45.
Brownson RC, Baker EA, Deshpande AD, Gillespie KN. Evidence-Based Public Health. 3rd ed. New York: Oxford University Press; 2018.
Sackett DL. Evidence-based medicine. Semin Perinatol. 1997;21:3–5.
Kohatsu ND, Robinson JG, Torner JC. Evidence-based public health: an evolving concept. Am J Prev Med. 2004;27:417–21.
Baba V, HakemZadeh F. Toward a theory of evidence based decision making. Manag Decis. 2012;50:832–67.
Mackintosh J, Ciliska D, Tulloch K. Evidence-informed decision making in public health in action. Environ Health Rev. 2015;58:15–9.
Brownson RC, Fielding JE, Green LW. Building capacity for evidence-based public health: reconciling the pulls of practice and the push of research. Annu Rev Public Health. 2018;39:27–53.
Armstrong R, Pettman TL, Waters E. Shifting sands - from descriptions to solutions. Public Health. 2014;128:525–32.
Jacobs JA, Dodson EA, Baker EA, Deshpande AD, Brownson RC. Barriers to evidence-based decision making in public health: a national survey of chronic disease practitioners. Public Health Rep. 2010;125:736–42.
Newman M, Papadopoulos I, Sigsworth J. Barriers to evidence-based practice. Intensive Crit Care Nurs. 1998;14:231–8.
Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14:2.
Sadeghi-Bazargani H, Tabrizi JS, Azami-Aghdash S. Barriers to evidence-based medicine: a systematic review. J Eval Clin Pract. 2014;20:793–802.
Glasgow RE, Green LW, Taylor MV, Stange KC. An evidence integration triangle for aligning science with policy and practice. Am J Prev Med. 2012;42:646–54.
Braithwaite J, Churruca K, Long JC, Ellis LA, Herkes J. When complexity science meets implementation science: a theoretical and empirical analysis of systems change. BMC Med. 2018;16:63.
May CR, Johnson M, Finch T. Implementation, context and complexity. Implement Sci. 2016;11:141.
Nilsen P, Bernhardsson S. Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes. BMC Health Serv Res. 2019;19:189.
PCORI Methodology Standards [https://www.pcori.org/research/about-our-research/research-methodology/pcori-methodology-standards#Complex].
Selby JV, Beal AC, Frank L. The Patient-Centered Outcomes Research Institute (PCORI) national priorities for research and initial research agenda. JAMA. 2012;307:1583–4.
Skivington K, Matthews L, Simpson SA, Craig P, Baird J, Blazeby JM, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ. 2021;374:n2061.
Hawe P. Lessons from complex interventions to improve health. Annu Rev Public Health. 2015;36:307–23.
Hawe P, Shiell A, Riley T, Gold L. Methods for exploring implementation variation and local context within a cluster randomised community intervention trial. J Epidemiol Community Health. 2004;58:788–93.
May C. Towards a general theory of implementation. Implement Sci. 2013;8:18.
Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.
Allen J, Shelton D, Emmons K, Linnan L. Fidelity and its relationship to implementation effectiveness, adaptation and dissemination. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York: Oxford University Press; 2018. p. 267–84.
Baumann A, Cabassa L, Wiltsey Stirman S. Adaptation in dissemination and implementation science. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and Implementation Research in Health: Translating Science to Practice. 2nd ed. New York: Oxford University Press; 2018. p. 285–300.
Guyatt G, Cook D, Haynes B. Evidence based medicine has come a long way. BMJ. 2004;329:990–1.
Sackett DL, Rosenberg WMC. The need for evidence-based medicine. J R Soc Med. 1995;88:620–4.
Glasziou P, Longbottom H. Evidence-based public health practice. Aust N Z J Public Health. 1999;23:436–40.
Yost J, Dobbins M, Traynor R, DeCorby K, Workentine S, Greco L. Tools to support evidence-informed public health decision making. BMC Public Health. 2014;14:728.
Rycroft-Malone J. Evidence-informed practice: from individual to context. J Nurs Manag. 2008;16:404–8.
Viehbeck SM, Petticrew M, Cummins S. Old myths, new myths: challenging myths in public health. Am J Public Health. 2015;105:665–9.
Glasby J, Walshe K, Gill H. What counts as ‘evidence’ in ‘evidence-based practice’? Evid Policy. 2007;3:325–7.
Rycroft-Malone J, Seers K, Titchen A, Harvey G, Kitson A, McCormack B. What counts as evidence in evidence-based practice? J Adv Nurs. 2004;47:81–90.
Green LW. Making research relevant: if it is an evidence-based practice, whereʼs the practice-based evidence? Fam Pract. 2008;25(Suppl 1):i20–4.
Kothari A, Rudman D, Dobbins M, Rouse M, Sibbald S, Edwards N. The use of tacit and explicit knowledge in public health: a qualitative study. Implement Sci. 2012;7:20.
Youngblut JM, Brooten D. Evidence-based nursing practice: why is it important? AACN Clin Issues. 2001;12:468–76.
Swisher AK. Practice-based evidence. Cardiopulm Phys Ther J. 2010;21:4.
Akobeng AK. Understanding randomised controlled trials. Arch Dis Child. 2005;90:840–4.
Kessler R, Glasgow RE. A proposal to speed translation of healthcare research into practice: dramatic change is needed. Am J Prev Med. 2011;40:637–44.
Sanson-Fisher RW, Bonevski B, Green LW, D’Este C. Limitations of the randomized controlled trial in evaluating population-based health interventions. Am J Prev Med. 2007;33:155–61.
Weiss N, Koepsell T, Psaty B. Generalizability of the results of randomized trials. Arch Int Med. 2008;168:133–5.
Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015;350:h2147.
Norton WE, Loudon K, Chambers DA, Zwarenstein M. Designing provider-focused implementation trials with purpose and intent: introducing the PRECIS-2-PS tool. Implement Sci. 2021;16:7.
Leatherdale ST. Natural experiment methodology for research: a review of how different methods can support real-world research. Int J Social Res Methodol. 2019;22:19–35.
Petticrew M, Cummins S, Ferrell C, Findlay A, Higgins C, Hoy C, et al. Natural experiments: an underused tool for public health? Public Health. 2005;119:751–7.
Palinkas LA, Mendon SJ, Hamilton AB. Innovations in mixed methods evaluations. Annu Rev Public Health. 2019;40:423–42.
Ramsey AT, Proctor EK, Chambers DA, Garbutt JM, Malone S, Powderly WG, et al. Designing for Accelerated Translation (DART) of emerging innovations in health. J Clin Transl Sci. 2019;3:53–8.
Leeman J, Rohweder C, Lee M, Brenner A, Dwyer A, Ko LK, O’Leary MC, Ryan G, Vu T, Ramanadhan S. Aligning implementation science with improvement practice: a call to action. Implement Sci Commun. 2021;2:99.
Glidden DV, Mehrotra ML, Dunn DT, Geng EH. Mosaic effectiveness: measuring the impact of novel PrEP methods. Lancet HIV. 2019;6:e800–6.
Avellar SA, Thomas J, Kleinman R, Sama-Miller E, Woodruff SE, Coughlin R, et al. External validity: the next step for systematic reviews? Eval Rev. 2017;41:283–325.
Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Eval Health Prof. 2006;29:126–53.
Huebschmann AG, Leavitt IM, Glasgow RE. Making health research matter: a call to increase attention to external validity. Annu Rev Public Health. 2019;40:45–63.
Moore G, Campbell M, Copeland L, Craig P, Movsisyan A, Hoddinott P, et al. Adapting interventions to new contexts-the ADAPT guidance. BMJ. 2021;374:n1679.
Wingood GM, DiClemente RJ. The ADAPT-ITT model: a novel method of adapting evidence-based HIV Interventions. J Acquir Immune Defic Syndr. 2008;47(Suppl 1):S40–6.
Miller CJ, Barnett ML, Baumann AA, Gutner CA, Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare. Implement Sci. 2021;16:36.
Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14:58.
Perez Jolles M, Lengnick-Hall R, Mittman BS. Core functions and forms of complex health interventions: a patient-centered medical home illustration. J Gen Intern Med. 2019;34:1032–8.
Alvidrez J, Napoles AM, Bernal G, Lloyd J, Cargill V, Godette D, et al. Building the evidence base to inform planned intervention adaptations by practitioners serving health disparity populations. Am J Public Health. 2019;109:S94–S101.
Minkler M, Salvatore A, Chang C. Participatory approaches for study design and analysis in dissemination and implementation research. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York: Oxford University Press; 2018. p. 175–90.
Ramanadhan S, Davis MM, Armstrong R, Baquero B, Ko LK, Leng JC, et al. Participatory implementation science to increase the impact of evidence-based cancer prevention and control. Cancer Causes Control. 2018;29:363–9.
Escoffery C, Lebow-Skelley E, Haardoerfer R, Boing E, Udelson H, Wood R, et al. A systematic review of adaptations of evidence-based public health interventions globally. Implement Sci. 2018;13:125.
Wang Z, Norris SL, Bero L. The advantages and limitations of guideline adaptation frameworks. Implement Sci. 2018;13:72.
Brownson RC, Allen P, Jacob RR, Harris JK, Duggan K, Hipp PR, et al. Understanding mis-implementation in public health practice. Am J Prev Med. 2015;48:543–51.
Nilsen P, Ingvarsson S, Hasson H, von Thiele SU, Augustsson H. Theories, models, and frameworks for de-implementation of low-value care: a scoping review of the literature. Implement Res Pract. 2020;1:1–15.
Norton WE, Chambers DA. Unpacking the complexities of de-implementing inappropriate health interventions. Implement Sci. 2020;15:2.
Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9:1.
Anselmi L, Binyaruka P, Borghi J. Understanding causal pathways within health systems policy evaluation through mediation analysis: an application to payment for performance (P4P) in Tanzania. Implement Sci. 2017;12:10.
Mehrotra M, Petersen M, Zimmerman S, Glidden D, Geng E. Designing trials for transport: optimizing trials for translation to diverse. In: Society for Epidemiologic Research: Virtual; 2020.
Pearl J, Bareinboim E. External validity: from do-calculus to transportability across populations. Statist Sci. 2014;29:579–95.
Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38:4–23.
Pfadenhauer L, Rohwer A, Burns J, Booth A, Bakke Lysdahl K, Hofmann B, et al. Guidance for the assessment of context and implementation in health technology assessments (HTA) and systematic reviews of complex interventions: the context and implementation of complex interventions (CICI) framework. European Union; 2016.
Glasgow RE, Emmons KM. How can we increase translation of research into practice? Types of evidence needed. Annu Rev Public Health. 2007;28:413–33.
Shelton RC, Chambers DA, Glasgow RE. An extension of RE-AIM to enhance sustainability: addressing dynamic context and promoting health equity over time. Front Public Health. 2020;8:134.
Glasgow RE, Chambers D. Developing robust, sustainable, implementation systems using rigorous, rapid and relevant science. Clin Transl Sci. 2012;5:48–55.
Cabassa LJ, Baumann AA. A two-way street: bridging implementation science and cultural adaptations of mental health treatments. Implement Sci. 2013;8:90.
Miller AL, Krusky AM, Franzen S, Cochran S, Zimmerman MA. Partnering to translate evidence-based programs to community settings: bridging the gap between research and practice. Health Promot Pract. 2012;13:559–66.
Paparini S, Papoutsi C, Murdoch J, Green J, Petticrew M, Greenhalgh T, et al. Evaluating complex interventions in context: systematic, meta-narrative review of case study approaches. BMC Med Res Methodol. 2021;21:225.
Cook KE. Using critical ethnography to explore issues in health promotion. Qual Health Res. 2005;15:129–38.
Shaw RL, Larkin M, Flowers P. Expanding the evidence within evidence-based healthcare: thinking about the context, acceptability and feasibility of interventions. Evid Based Med. 2014;19:201–3.
Chinnock P, Siegfried N, Clarke M. Is evidence-based medicine relevant to the developing world? PLoS Med. 2005;2:e107.
Joshi R, Alim M, Kengne AP, Jan S, Maulik PK, Peiris D, et al. Task shifting for non-communicable disease management in low and middle income countries--a systematic review. PLoS One. 2014;9:e103754.
Yapa HM, Barnighausen T. Implementation science in resource-poor countries and communities. Implement Sci. 2018;13:154.
Brown CH, Ten Have TR, Jo B, Dagne G, Wyman PA, Muthen B, et al. Adaptive designs for randomized trials in public health. Annu Rev Public Health. 2009;30:1–25.
Brown C, Curran G, Palinkas L, et al. An overview of research and evaluation designs for dissemination and implementation. Annu Rev Public Health. 2017;38:1–22.
Brownson RC, Kumanyika SK, Kreuter MW, Haire-Joshu D. Implementation science should give higher priority to health equity. Implement Sci. 2021;16:28.
Mehrotra ML, Petersen ML, Geng EH. Understanding HIV program effects: a structural approach to context using the transportability framework. J Acquir Immune Defic Syndr. 2019;82(Suppl 3):S199–205.
Brownson RC, Jacobs JA, Tabak RG, Hoehner CM, Stamatakis KA. Designing for dissemination among public health researchers: findings from a national survey in the United States. Am J Public Health. 2013;103:1693–9.
Knoepke CE, Ingle MP, Matlock DD, Brownson RC, Glasgow RE. Dissemination and stakeholder engagement practices among dissemination & implementation scientists: results from an online survey. PLoS One. 2019;14:e0216971.
Kwan BM, Brownson RC, Glasgow RE, Morrato EH, Luke DA. Designing for dissemination and sustainability to promote equitable impacts on health. Annu Rev Public Health. 2022;43:331–53.
Woodward EN, Matthieu MM, Uchendu US, Rogal S, Kirchner JE. The health equity implementation framework: proposal and preliminary study of hepatitis C virus treatment. Implement Sci. 2019;14:26.
McNulty M, Smith JD, Villamar J, Burnett-Zeigler I, Vermeer W, Benbow N, et al. Implementation research methodologies for achieving scientific equity and health equity. Ethn Dis. 2019;29:83–92.
Baumann AA, Cabassa LJ. Reframing implementation science to address inequities in healthcare delivery. BMC Health Serv Res. 2020;20:190.
Shelton RC, Adsul P, Oh A. Recommendations for addressing structural racism in implementation science: a call to the field. Ethn Dis. 2021;31:357–64.
Yousefi Nooraie R, Kwan BM, Cohn E, AuYoung M, Clarke Roberts M, Adsul P, et al. Advancing health equity through CTSA programs: opportunities for interaction between health equity, dissemination and implementation, and translational science. J Clin Transl Sci. 2020;4:168–75.
Kerkhoff AD, Farrand E, Marquez C, Cattamanchi A, Handley MA. Addressing health disparities through implementation science-a need to integrate an equity lens from the outset. Implement Sci. 2022;17:13.
Kumanyika SK. Health equity is the issue we have been waiting for. J Public Health Manag Pract. 2016;22(Suppl 1):S8–S10.
Braveman P, Gruskin S. Defining equity in health. J Epidemiol Community Health. 2003;57:254–8.
Bach-Mortensen AM, Lange BCL, Montgomery P. Barriers and facilitators to implementing evidence-based interventions among third sector organisations: a systematic review. Implement Sci. 2018;13:103.
Fagan AA, Bumbarger BK, Barth RP, Bradshaw CP, Cooper BR, Supplee LH, et al. Scaling up evidence-based interventions in US public systems to prevent behavioral health problems: challenges and opportunities. Prev Sci. 2019;20:1147–68.
Andresen EM, Diehr PH, Luke DA. Public health surveillance of low-frequency populations. Annu Rev Public Health. 2004;25:25–52.
Korngiebel DM, Taualii M, Forquera R, Harris R, Buchwald D. Addressing the challenges of research with small populations. Am J Public Health. 2015;105:1744–7.
Bowleg L. The problem with the phrase women and minorities: intersectionality-an important theoretical framework for public health. Am J Public Health. 2012;102:1267–73.
Allen-Scott LK, Hatfield JM, McIntyre L. A scoping review of unintended harm associated with public health interventions: towards a typology and an understanding of underlying factors. Int J Public Health. 2014;59:3–14.
Lorenc T, Petticrew M, Welch V, Tugwell P. What types of interventions generate inequalities? Evidence from systematic reviews. J Epidemiol Community Health. 2013;67:190–3.
Thomson K, Hillier-Brown F, Todd A, McNamara C, Huijts T, Bambra C. The effects of public health policies on health inequalities in high-income countries: an umbrella review. BMC Public Health. 2018;18:869.
Hoffmann I. Transcending reductionism in nutrition research. Am J Clin Nutr. 2003;78:514S–6S.
Braveman P, Egerter S, Williams DR. The social determinants of health: coming of age. Annu Rev Public Health. 2011;32:381–98.
Donkin A, Goldblatt P, Allen J, Nathanson V, Marmot M. Global action on the social determinants of health. BMJ Glob Health. 2018;3:e000603.
Marmot M, Bell R, Goldblatt P. Action on the social determinants of health. Rev Epidemiol Sante Publique. 2013;61(Suppl 3):S127–32.
Williams DR, Lawrence JA, Davis BA. Racism and health: evidence and needed research. Annu Rev Public Health. 2019;40:105–25.
Griffith DM, Holliday CS, Enyia OK, Ellison JM, Jaeger EC. Using syndemics and intersectionality to explain the disproportionate COVID-19 mortality among black men. Public Health Rep. 2021;136:523–31.
Martinez K, Callejas L, Hernandez M. Community-defined evidence: a bottom-up behavioral health approach to measure what works in communities of color. Emotional Behav Disord Youth. 2010;10:11–6.
Shelton R, Adsul P, Oh A, Moise N, Griffith D. Application of an anti-racism lens in the field of implementation science: recommendations for Reframing Implementation Research with a Focus on Justice and Racial Equity. Implement Res Pract. 2021; in press.
Mazzucca S, Arredondo EM, Hoelscher DM, Haire-Joshu D, Tabak RG, Kumanyika SK, et al. Expanding implementation research to prevent chronic diseases in community settings. Annu Rev Public Health. 2021;42:135–58.
CDC. Ten great public health achievements--United States, 1900-1999. MMWR Morb Mortal Wkly Rep. 1999;48:241–3.
CDC. Ten great public health achievements--United States, 2001-2010. MMWR Morb Mortal Wkly Rep. 2011;60:619–23.
Ayres CG, Griffith HM. Consensus guidelines: improving the delivery of clinical preventive services. Health Care Manage Rev. 2008;33:300–7.
Briss PA, Brownson RC, Fielding JE, Zaza S. Developing and using the guide to community preventive services: lessons learned about evidence-based public health. Annu Rev Public Health. 2004;25:281–302.
Woolf SH, Atkins D. The evolving role of prevention in health care. Contributions of the U.S. Preventive Services Task Force. Am J Prev Med. 2001;20:13–20.
Woolf SH, DiGuiseppi CG, Atkins D, Kamerow DB. Developing evidence-based clinical practice guidelines: lessons learned by the US Preventive Services Task Force. Annu Rev Public Health. 1996;17:511–38.
Doubeni CA, Simon M, Krist AH. Addressing systemic racism through clinical preventive service recommendations from the US Preventive Services Task Force. JAMA. 2021;325:627–8.
Mugambwa J, Nabeta N, Ngoma M, Rudaheranwa N, Kaberuka W, Munene J. Policy implementation: conceptual foundations, accumulated wisdom and new directions. J Public Adm Governance. 2018;8:211–32.
Nilsen P, Stahl C, Roback K, Cairney P. Never the twain shall meet?--a comparison of implementation science and policy implementation research. Implement Sci. 2013;8:63.
Purtle J, Dodson E, Brownson R. Policy dissemination research. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and Implementation Research in Health: Translating Science to Practice. 2nd ed. New York: Oxford University Press; 2018. p. 433–47.
Brownson RC, Chriqui JF, Stamatakis KA. Understanding evidence-based public health policy. Am J Public Health. 2009;99:1576–83.
Purtle J, Peters R, Brownson RC. A review of policy dissemination and implementation research funded by the National Institutes of Health, 2007-2014. Implement Sci. 2016;11:1.
Emmons KM, Chambers DA. Policy implementation science - an unexplored strategy to address social determinants of health. Ethn Dis. 2021;31:133–8.
Allen P, Pilar M, Walsh-Bailey C, Hooley C, Mazzucca S, Lewis CC, et al. Quantitative measures of health policy implementation determinants and outcomes: a systematic review. Implement Sci. 2020;15:47.
Newcomer K, Hatry H, Wholey J, editors. Handbook of practical program evaluation. 4th ed. San Francisco: Jossey-Bass; 2015.
Hinckson E, Schneider M, Winter SJ, Stone E, Puhan M, Stathi A, et al. Citizen science applied to building healthier community environments: advancing the field through shared construct and measurement development. Int J Behav Nutr Phys Act. 2017;14:133.
Tengo M, Austin BJ, Danielsen F, Fernandez-Llamazares A. Creating synergies between citizen science and indigenous and local knowledge. Bioscience. 2021;71:503–18.
Wallerstein N, Duran B. Community-based participatory research contributions to intervention research: the intersection of science and practice to improve health equity. Am J Public Health. 2010;100(Suppl 1):S40–6.
Glasgow RE, Vinson C, Chambers D, Khoury MJ, Kaplan RM, Hunter C. National Institutes of Health approaches to dissemination and implementation science: current and future directions. Am J Public Health. 2012;102:1274–81.
Saul JE, Willis CD, Bitz J, Best A. A time-responsive tool for informing policy making: rapid realist review. Implement Sci. 2013;8:103.
Shipan C, Volden C. Why bad policies spread (and good ones don’t). Cambridge: Cambridge University Press; 2021.
Mheidly N, Fares J. Leveraging media and health communication strategies to overcome the COVID-19 infodemic. J Public Health Policy. 2020;41:410–20.
May C. Mobilising modern facts: health technology assessment and the politics of evidence. Sociol Health Illn. 2006;28:513–32.
Bullock HL, Lavis JN. Understanding the supports needed for policy implementation: a comparative analysis of the placement of intermediaries across three mental health systems. Health Res Policy Syst. 2019;17:82.
Ashcraft LE, Quinn DA, Brownson RC. Strategies for effective dissemination of research to United States policymakers: a systematic review. Implement Sci. 2020;15:89.
Bullock HL, Lavis JN, Wilson MG, Mulvale G, Miatello A. Understanding the implementation of evidence-informed policies and practices from a policy perspective: a critical interpretive synthesis. Implement Sci. 2021;16:18.
Brownson RC, Eyler AA, Harris JK, Moore JB, Tabak RG. Getting the word out: new approaches for disseminating public health science. J Public Health Manag Pract. 2018;24:102–11.
Institute of Medicine. Who will keep the public healthy? Educating public health professionals for the 21st century. Washington, D.C.: National Academies Press; 2003.
Sturm R. Evidence-based health policy versus evidence-based medicine. Psychiatr Serv. 2002;53:1499.
Kerner JF. Integrating research, practice, and policy: what we see depends on where we stand. J Public Health Manag Pract. 2008;14:193–8.
Slater MD. Theory and method in health audience segmentation. J Health Commun. 1996;1:267–83.
Dopp AR, Parisi KE, Munson SA, Lyon AR. Aligning implementation and user-centered design strategies to enhance the impact of health services: results from a concept mapping study. Implement Sci Commun. 2020;1:17.
Lyon AR, Koerner K. User-centered design for psychosocial intervention development and implementation. Clin Psychol (New York). 2016;23:180–200.
Schnittker R, Marshall SD, Horberry T, Young K. Decision-centred design in healthcare: the process of identifying a decision support tool for airway management. Appl Ergon. 2019;77:70–82.
Morgan M, Fischhoff B, Bostrom A, Atman C. Risk communication: a mental models approach. Cambridge: Cambridge University Press; 2002.
Choi BC, Pang T, Lin V, Puska P, Sherman G, Goddard M, et al. Can scientists and policy makers work together? J Epidemiol Community Health. 2005;59:632–7.
The Social Issues Research Centre. Guidelines for scientists on communicating with the media. Oxford: The Social Issues Research Centre; 2006.
Eisman AB, Quanbeck A, Bounthavong M, Panattoni L, Glasgow RE. Implementation science issues in understanding, collecting, and using cost estimates: a multi-stakeholder perspective. Implement Sci. 2021;16:75.
Darnell D, Dorsey CN, Melvin A, Chi J, Lyon AR, Lewis CC. A content analysis of dissemination and implementation science resource initiatives: what types of resources do they offer to advance the field? Implement Sci. 2017;12:137.
Implementation Science [https://cancercontrol.cancer.gov/is].
The Center for Implementation [https://thecenterforimplementation.com/about-us].
Dissemination and Implementation Science Program [https://medschool.cuanschutz.edu/accords/cores-and-programs/dissemination-implementation-science-program].
Implementation Science Exchange [https://impsci.tracs.unc.edu/].
Implementation Science Resource Hub [https://impsciuw.org/].
Dissemination & Implementation Research [https://implementationresearch.wustl.edu/].
Implementation Science Video Library [https://www.youtube.com/channel/UCJhGTpULmVIENeYHPDy-jLg/videos].
Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review: a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005;10(Suppl 1):21–34.
Chambers DA, Proctor EK, Brownson RC, Straus SE. Mapping training needs for dissemination and implementation research: lessons from a synthesis of existing D&I research training programs. Transl Behav Med. 2016;7:593–601.
Davis R, D'Lima D. Building capacity in dissemination and implementation science: a systematic review of the academic literature on teaching and training initiatives. Implement Sci. 2020;15:97.
Schultes MT, Aijaz M, Klug J, Fixsen DL. Competences for implementation science: what trainees need to learn and where they learn it. Adv Health Sci Educ Theory Pract. 2021;26:19–35.
Osanjo GO, Oyugi JO, Kibwage IO, Mwanda WO, Ngugi EN, Otieno FC, et al. Building capacity in implementation science research training at the University of Nairobi. Implement Sci. 2016;11:30.
The findings and conclusions in this paper are those of the authors and do not necessarily represent the official positions of the National Institutes of Health, the Centers for Disease Control and Prevention, or the American Cancer Society.
This work was supported in part by the National Cancer Institute (numbers P50CA244431, P50CA244688, P50CA244690, R01CA255382), the National Institute of Diabetes and Digestive and Kidney Diseases (numbers P30DK092950, P30DK056341, R25DK123008), the Centers for Disease Control and Prevention (number U48DP006395), the American Cancer Society (number RSG-17-156-01-CPPB), and the Foundation for Barnes-Jewish Hospital.
The authors declare that they have no competing interests.
Cite this article
Brownson, R.C., Shelton, R.C., Geng, E.H. et al. Revisiting concepts of evidence in implementation science. Implementation Sci 17, 26 (2022). https://doi.org/10.1186/s13012-022-01201-y