Revisiting concepts of evidence in implementation science

Abstract

Background

Evidence, in multiple forms, is a foundation of implementation science. For public health and clinical practice, evidence includes the following: type 1 evidence on etiology and burden; type 2 evidence on the effectiveness of interventions; and type 3 evidence on dissemination and implementation (D&I) within context. To support a vision for development and use of evidence in D&I science that is more comprehensive and equitable (particularly for type 3 evidence), this article aims to clarify concepts of evidence, summarize ongoing debates about evidence, and provide a set of recommendations and tools/resources for addressing the “how-to” of filling the evidence gaps most critical to advancing implementation science.

Main text

Because current conceptualizations of evidence have, in our opinion, been relatively narrow and insufficiently characterized, we identify and discuss challenges and debates about the uses, usefulness, and gaps in evidence for implementation science. A set of questions is proposed to assist in determining when evidence is sufficient for dissemination and implementation. Intersecting gaps include the need to (1) reconsider how the evidence base is determined, (2) improve understanding of contextual effects on implementation, (3) sharpen the focus on health equity in how we approach and build the evidence base, (4) conduct more policy implementation research and evaluation, and (5) learn from audience and stakeholder perspectives. We offer 15 recommendations to assist in filling these gaps and describe a set of tools for enhancing the evidence most needed in implementation science.

Conclusions

To address our recommendations, we see capacity as a necessary ingredient to shift the field’s approach to evidence. Capacity includes the “push” for implementation science, where researchers are trained to develop and evaluate evidence that is useful and feasible for implementers and reflects community or stakeholder priorities. Equally important, there has been inadequate training and too little emphasis on the “pull” for implementation science (e.g., training implementers, practice-based research). We suggest that funders and reviewers of research adopt and support a more robust definition of evidence. By critically examining the evolving nature of evidence, implementation science can better fulfill its vision of facilitating widespread and equitable adoption, delivery, and sustainment of scientific advances.

For every complex problem, there is a solution that is simple, neat, and wrong. — H. L. Mencken

Introduction

Evidence, often informed by a complex cycle of observation, theory, and experiment [1], is a foundation of implementation science [2, 3]. Evidence is central in part because dissemination and implementation (D&I) science is based on the notion that there are practices and policies that should be widely used because scientific research concludes that they would have widespread benefits. In this context, an evidence-based intervention (EBI) is defined broadly to include programs, practices, processes, policies, and guidelines with some level of effectiveness [4]. Many of the underlying sources of evidence were originally derived from legal settings, taking on multiple forms including witness accounts, police testimony, expert opinions, and forensic science [5]. Building on these origins, evidence for public health and clinical practice comes in many forms, across three broad domains [6,7,8]: type 1: evidence on etiology and burden; type 2: evidence on effectiveness of interventions; type 3: evidence on implementation within context (Table 1). These three types of evidence are often not linear, but interconnected, iterative, and overlapping—they shape one another (e.g., if we have limited type 2 evidence, then the ability to apply type 3 evidence is hampered). Across these three domains, we have by far the most type 1 evidence and the least type 3 evidence [6, 9].

Table 1 Selected terminology related to evidence and implementation science

Definitions of evidence and the associated processes (how evidence is used) vary by setting. In clinical settings, evidence-based medicine is “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” [10]. Evidence-based public health occurs across a range of community settings and is “the process of integrating science-based interventions with community preferences to improve the health of populations” [11]. Perhaps most relevant to implementation science, evidence-based decision-making is a multilevel process that involves collecting and implementing the best available evidence from research, practice, professional experience, and clinical or community partners [12,13,14,15]. A robust, equitable, and sustainable approach to evidence-based decision-making takes both challenges and strengths into account (e.g., skills, leadership priorities, resources [16,17,18,19]) and places scientific evidence and stakeholder engagement in the center of the decision-making process [20].

For all types of evidence, and particularly for type 3 evidence regarding D&I, complexity and context are essential elements [21,22,23]. Both PCORI [24, 25] and a recent update to the MRC guidance [26] offer excellent recommendations and resources for researching complex health interventions. We concur with most of these recommendations and add to their points in this article. The most effective approaches often rely on complex interventions embedded in complex systems (e.g., nonlinear, multilevel interventions), where the description of core intervention components and their relationships involves multiple settings, audiences, and approaches [26,27,28]. Type 3 evidence is also highly context-dependent—the context for implementation involves complex adaptive systems that form the dynamic environment(s) in which discrete interventions and interconnected implementation processes are situated [29]. For example, in models such as the Dynamic Sustainability Framework, the EBI is embedded in the context of multiple factors in a practice setting (e.g., staffing, organizational climate), which is in turn embedded in a broader ecological system with a complex set of variables (e.g., policy, regulations, population characteristics) [30]. This embeddedness should also take into account dynamism—an EBI may stay true to its original function but need to evolve in form over time to adapt to changing population needs, new evidence, and the “fit” of evidence with complex and changing contexts [30,31,32].

Much has been written about the terminology of evidence-based practice and policy. The most widely used term is “evidence-based” practice (often evidence-based medicine [33, 34] or evidence-based public health [7, 35]). Especially in Canada and Australia, the term “evidence-informed” decision-making is commonly used [15, 36]. The term “informed” is used to emphasize that public health decisions are based on research but also require consideration of individual preferences and political and organizational factors [37, 38]. Others have used the term “knowledge-based practice” or “practice-based evidence” or “practice-relevant evidence” to emphasize the importance of practice wisdom from frontline practitioners and lived experience of patients and community members [39,40,41,42,43]. To maximize the use of EBIs, research should inform practice and practice should inform research [44]. In our view, the most important issue is not which term to use, but rather that implementation decisions should be based on and informed by evaluation and research findings, while using rigorous methods to take into account a variety of contextual variables across multiple levels of influence (Table 2).

Table 2 Contextual variables for implementation across ecological levels

Fundamental issues for implementation science involve two questions: (1) Evidence on what, for whom, in what settings, and under what conditions? (2) When do we have enough evidence for D&I? While the answer to the latter question will always be “it depends,” there are related questions that are useful to consider (Table 3).

Table 3 Determining when evidence is sufficient for dissemination and implementation

To facilitate the development and delivery of more equitable and sustainable interventions, we need to expand our thinking about evidence, especially for but not limited to type 3 evidence. We discuss a set of five core interrelated issues about evidence, examining (1) how the evidence base is determined, (2) context, (3) health equity, (4) policy implementation, and (5) audience/stakeholder perspectives. All five areas concern some form of research or knowledge gap in D&I science. The evidence base discussion presents a broader perspective on what is considered evidence; the context, equity, and stakeholder sections cover neglected aspects of implementation science in need of more and higher-quality research; and the policy implementation section points to the most pressing gaps in policy-relevant research for D&I. Across these areas, we provide a series of recommendations along with tools and resources for speeding the translation of research to practice and policy.

Selected debates about evidence

Here, we describe ongoing discussions and debates about the uses, usefulness, and gaps in evidence for implementation science, which motivate our recommendations (Table 4). While this is not an exhaustive list, it illustrates the need for more reflection and clarity across five core areas with major unresolved issues about evidence.

Table 4 Recommendations to advance evidence and implementation science

Reconsider how the evidence base is determined

The evidence base for implementation science needs to be broadened to encompass a wider range of study designs, methods, stakeholders, and outcomes. For example, the decontextualized randomized controlled efficacy trial (RCT), which attempts to control for many potential confounding factors, is generally considered the gold standard for obtaining evidence on internal validity and contributing to the determination of causality of a given intervention, practice, or treatment [45]. A key property of the RCT is that, with large sample sizes, randomization tends to balance known and unknown confounders. Despite the value and conceptual simplicity of the traditional efficacy RCT, its limitations have been noted [46,47,48]. For example, randomization may be impractical, costly, or unethical for some interventions (e.g., community-based interventions where partners have concerns about withholding a program from the community) and for many policy interventions, where the independent variable (the “exposure”) cannot be randomized. Tools such as PRECIS-2 and the newer PRECIS-2-PS help enhance the real-world utility of RCTs (pragmatic trials) [49, 50]. For some settings and interventions, alternative, more rapid-cycle, and adaptive designs are needed to elucidate effects, including quasi-experiments, observational studies, iterative assessments and actions, natural experiments, and mixed-methods studies [51,52,53,54,55]. Often in implementation science, what we want to know is how one strategy adds to a range of strategies already being delivered within an existing environment, a concept called “mosaic effectiveness” [56].

For clinical and public health practice, the generalizability of an EBI’s effectiveness from one population and setting to another (and ideally across a diverse range of populations and settings)—the core concept of external validity—is an essential ingredient. Systematic reviews and practice guidelines, which are often the basis for an implementation study, are mainly focused on whether an intervention is effective on average (internal validity) and have commonly given limited attention to specifying the conditions (settings, populations, circumstances) under which a program is and is not effective [57,58,59]. For implementation science, there are many considerations and layers to the notion of whether an evidence-based practice applies in a particular setting or population [59]. Tools such as ADAPT [60] or process models like ADAPT-ITT [61] can be useful in transferring EBIs from one setting to another while taking contextual variables into account. Models such as FRAME and FRAME-IS are helpful for tracking and building the evidence base around what types of adaptations are associated with improved or decreased effectiveness or implementation outcomes, and for which settings and populations [62, 63].

The question of whether an EBI applies involves a set of scientific considerations that may differ from simply knowing average treatment effects. These include balancing fidelity to the original EBI functions with the adaptations needed for replication and scale-up [64], as well as judgments about when there may be a need to “start from scratch” in developing a new intervention as opposed to refining or adapting an existing one (e.g., when the nature of the evidence for an EBI does not fit the sociocultural or community context). There is a pressing need for research on the strengths and limitations of practitioner-driven and community-centered adaptation of EBIs, which is likely to enhance relevance, feasibility, and sociocultural appropriateness and acceptability, as well as fit with the implementation context [65,66,67]. There are also trade-offs to consider when adapting EBIs or implementation strategies (e.g., costs, resources needed, potential reduction in effectiveness) [63, 68, 69]. It has also been suggested that a greater emphasis is needed on both the functions of an intervention (its basic purposes and underlying theoretical premise) and its forms (the strategies and approaches used to meet each intervention function) [64], opening the door to inquiry about how fidelity to function may demand adaptations (or in some cases transformation or evolution) in form.

Additional evidence is needed on the interrelated concepts of null (ineffective) interventions, de-implementation, and mis-implementation [70,71,72]. From null intervention results, we can learn which parts of an EBI or implementation strategy need to be refined, adapted, or re-invented. Data on null interventions also inform for whom and under what conditions an EBI or implementation strategy is “evidence-based.” De-implementation is the process of stopping or abandoning practices that are not proven effective or are possibly harmful [73], whereas mis-implementation involves one or both of two processes: the discontinuation of effective programs and the continuation of ineffective practices in public health settings [70]. Many of the contextual variables in Table 2 strongly affect de-implementation and mis-implementation.

Emerging perspectives in data science and causal inference may help advance type 3 evidence. If contextual heterogeneity is the norm, then the scientific task in any one study population is to produce data that address relevance across diverse external settings. Useful methods to do so are becoming available and suggest that the more we know about mediators/mechanisms and modifiers of effects in implementation, the more interpretable findings can be in different settings and populations [74,75,76]. For example, consider the question of whether evidence from randomized trials of audit and feedback on the use of EBIs in HIV clinics in Boston could apply to HIV clinics in Nairobi, Kenya. Let us assume that in Boston, researchers learn that the credibility of the data is a key driver of successful implementation (e.g., clinicians who doubt the veracity of metrics from the electronic health record are less likely to respond). Given the widespread challenges of data accuracy in nascent electronic health records in this specific setting (and the extensive literature documenting this challenge), audit and feedback as an implementation strategy can be anticipated to have limited implementation relevance as well as effectiveness. Using data from Boston to draw inferences about Nairobi (in this case, that the strategy might not work) depends on knowing the critical mediators of audit and feedback in Boston (i.e., the credibility of data on provider performance). In some situations, a completely different implementation strategy may be needed that is better suited to local conditions. A further implication is that research efforts should be directed not only at finding effects in Boston, but at understanding how they came about (type 3 evidence).

Improve understanding of contextual effects on implementation

The complexity and dynamic nature of implementation necessitate continual attention to context (i.e., the active and unique factors that surround implementation and sustainability [77, 78]) [22, 79, 80]. When context is taken into account in research, the study findings are more likely to indicate the conditions under which evidence does or does not generalize to different populations, settings, and time periods [23]—yet too often context is inadequately described or not fully elucidated [81]. Contextual conditions also drive and inform the adaptation of EBIs to populations and settings that differ from those in which they were originally developed [82]. It is useful to consider contextual issues of relevance for implementation across the levels of a socio-ecological framework (individual, interpersonal, organizational, community, policy) (Table 2) [79].

The challenging scientific task of “unpacking” context requires three activities. First, contextual effects in any study setting or across settings and/or systems should be enumerated (e.g., using the set of variables in Table 2). Second, since one cannot measure everything, part of building the evidence base involves determining which aspects of context are most salient for implementation within and across settings. Third, implementation research should seek to measure the presence, distribution, and intensity of those contextual factors in target settings in which a research study is not being undertaken, but where one might want to apply evidence.

Within an implementation research project, context is dynamic and should be assessed across all stages of a study [83]. Too often, dynamic contexts are not fully understood or assessed [30]. In some cases, the context for delivery (e.g., a particular clinical setting) is relatively stable, but the target of the intervention (e.g., a particular pathophysiology; guidelines for cancer screening) is dynamic and emergent. In a more complex intervention trial, both context and targets are dynamic and emergent [22, 84].

During implementation planning, a needs and assets assessment (formative research) should account for historical, cultural, social, and system factors that may shape implementation and the implementation climate, including forms of structural or institutional racism (e.g., inequitable practices and policies), medical mistrust, institutional and provider norms that may create or reinforce biases or inequities, as well as community strengths and assets that may inform implementation efforts. Tools such as critical ethnography can be useful during needs assessment to understand interactions among the ensembles of actors, agencies, interventions, and other contextual variables [85]. When selecting EBIs to be tested in an implementation study, context may affect both internal validity and external validity. Systematic reviews, which are often the source of EBIs, use a relatively narrow hierarchy of evidence [86] and tend to strip out implementation context when trying to make a summary (often quantitative) judgment about the average effectiveness of an EBI (e.g., for most populations and settings). For many settings in which we are conducting implementation studies (e.g., lower- and middle-income countries [87]), we may not have a strong evidence base, guidelines, or interventions that have been tested through “gold-standard” RCTs; and when they have been, the trials were often not conducted under conditions similar to those in which the EBI will now be applied.

Context in global settings presents unique considerations, particularly in lower- and middle-income countries (LMICs) and other settings that have limited resources and face numerous structural barriers to health (e.g., in the USA, federally qualified health centers; donor-funded vertical health programs in LMICs). Among the considerations is the relevant evidence base for implementation when settings vary tremendously, particularly in social and political context and systems/organizational infrastructure: Do researchers and implementers need to start anew in building the evidence base for implementation, answering many of the questions in Table 3? There is some evidence that in settings with constrained resources, intervention and methods innovations may be fostered by the need for creativity and adaptation (e.g., task shifting [88]) when choices are restricted [89]. Adaptive designs (where interventions and strategies are modified in response to emerging data) may be particularly useful in LMICs since they may allow a team to begin with low-intensity/low-resource approaches and refine or intensify as needed [90,91,92].

Transportability theory has been applied to assess whether findings about the effects of an implementation strategy in one setting can support inferences in another setting, and if so, whether the strategy is likely to work there [93]. Context, when defined narrowly as the causes of an outcome that differ from one setting to another, points to two measurement tasks. In the initial context where a strategy is being tested, it is important to measure the steps that mediate or moderate the effects of the strategy on the outcome, as well as the factors that influence those steps. Hypotheses not only about effects but also about how and why they occur across diverse settings are important to inform the measurement architecture.
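
One compact way to state this, offered as an illustrative sketch drawing on the do-calculus transportability results cited here [76] rather than as the formalism of [93], is the transport formula. If $P$ denotes the source setting (e.g., Boston), $P^{*}$ the target setting (e.g., Nairobi), $x$ the implementation strategy, $y$ the outcome, and $Z$ a set of measured factors that accounts for all setting-to-setting differences (e.g., the credibility of performance data), then

$$P^{*}\big(y \mid do(x)\big) \;=\; \sum_{z} P\big(y \mid do(x), z\big)\, P^{*}(z),$$

that is, the $z$-specific effects measured in the source setting are reweighted by the target setting’s distribution of $Z$. The formula makes explicit why mediators and moderators must be measured in both settings before evidence can be transported.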

Context is also important during the process of broader dissemination of evidence-based approaches. There is a well-documented disconnect between how researchers disseminate their findings (including EBIs) and how practitioners and policy makers learn about the latest evidence [14]. Applying principles of designing for dissemination (D4D) allows researchers to better account for the needs, assets, priorities, and time frames of potential adopters and stakeholders [94, 95]. An active D4D process emphasizes the design phase of an implementation research project and anticipates dissemination of products (e.g., an evidence-based implementation strategy) by developing a dissemination plan that takes into account audience differences, product messaging, channels, and packaging [96]. In the future, this proactive D4D process could be usefully extended to address designing for equity and sustainment, as well as dissemination.

Sharpen the focus on health equity

Addressing health disparities and promoting health equity are becoming a more central and explicit focus of implementation science [92, 97,98,99,100,101,102]. Health equity is a framing that shifts from a deficits approach (disparities) to one focused on what society can achieve (equity) [103]. An equity focus also recognizes the unjust nature of inequities, naming root/structural causes [104]. This emphasis is documented in publication trends over the past two decades. Figure 1 shows trends in publications from January 1, 2000, to December 31, 2021, using two search strings in PubMed: (1) “health disparities” AND [“implementation science” OR “implementation research” OR “knowledge translation”] and (2) “health equity” AND [“implementation science” OR “implementation research” OR “knowledge translation”]. For most of the past two decades, research has been framed more often with a disparities focus than with an equity focus—disparity publications were two- to three-fold more common than equity articles from 2006 to 2014. However, in 2021, the number of equity-framed publications greatly exceeded the number of disparities-framed publications.

Fig. 1 Number of annual publications on health disparities and health equity
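
A minimal sketch of how such annual counts can be retrieved programmatically from PubMed, via the NCBI E-utilities esearch endpoint. The query strings follow the text above (parentheses, not the square brackets shown there, are PubMed’s grouping syntax); exact counts may differ slightly from Fig. 1 depending on query syntax and indexing date:

import time

import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Implementation science terms from Fig. 1, combined with each framing term.
IS_TERMS = '("implementation science" OR "implementation research" OR "knowledge translation")'
QUERIES = {
    "disparities": f'"health disparities" AND {IS_TERMS}',
    "equity": f'"health equity" AND {IS_TERMS}',
}


def annual_count(term: str, year: int) -> int:
    """Return the number of PubMed records matching `term` published in `year`."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",  # filter on publication date
        "mindate": str(year),
        "maxdate": str(year),
        "retmode": "json",
        "retmax": 0,  # only the count is needed, not the record IDs
    }
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])


for year in range(2000, 2022):  # January 1, 2000 through December 31, 2021
    counts = {name: annual_count(query, year) for name, query in QUERIES.items()}
    print(year, counts)
    time.sleep(0.4)  # stay below NCBI's ~3 requests/second guidance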

To move towards the goal of achieving health equity, it is critical that implementation science expands the quantity, quality, and types of evidence produced and prioritized, as well as who and what settings are (1) reflected in that evidence (representativeness) and (2) involved in its generation and interpretation (representation). For many health conditions and populations, we have adequate descriptive (type 1) data that can guide what to address (e.g., the size and nature of disparities). However, we often lack sufficient data on EBIs and strategies that are effective in reducing inequities and/or promoting equity [92]. Often, available EBIs inadequately address or account for many relevant social, cultural, structural, and contextual conditions that both shape health inequities and have implications for EBI implementation [92, 105, 106]. There are challenges in generating evidence on inequities, including potentially smaller sample sizes across the various social dimensions through which inequities exist, which may limit subgroup heterogeneity analyses (e.g., by race or ethnicity) [107, 108] (see Table 2). As we build the evidence base of EBIs to actively promote equity, there is a need to understand the core elements of equity-focused interventions and strategies, and to do so for the range of social dimensions through which health inequities may exist (e.g., race, immigration status, gender, sexual orientation, location) and their intersections [109].

A foundational challenge here is that many EBIs were not developed with or tested among settings or populations that experience inequities or with the goal of promoting health equity and may unintentionally contribute to or exacerbate inequities [110,111,112]. This results in part from the reductionist way in which EBIs are often developed, deployed (a linear, “cause and effect” approach), and tested [113], paying inadequate attention to the complex and interrelated social determinants of health and root causes of health inequities (e.g., structural racism, inequitable allocation of resources and opportunities) [114,115,116,117,118].

We need to engage a wider range of partners from lower-resource settings earlier, throughout the research process, and in meaningful ways to build a broader and more relevant array of equity-focused EBIs that are feasible, acceptable, and culturally appropriate, and that address root causes. We also need to expand what we “count” as EBIs in public health and clinical research, broadening the focus from a narrower view of individual, interpersonal, and organizational interventions to also include community, policy, and multi-sector interventions that have the potential to make larger shifts in health inequities. Broadening evidence with an eye towards health equity means moving beyond a singular focus on EBI repositories to include and evaluate existing, promising, community-defined evidence and interventions [92, 119, 120]. In expanding the evidence base with the goal of promoting health equity, there are significant opportunities to develop and deploy EBIs in sectors outside of health (e.g., schools, workplaces, social services agencies, juvenile justice settings) where, in many cases, the reach and impact can be greater than in the health sector [121]. Additionally, as we expand this evidence base, it may be beneficial to prioritize the development and evaluation of interventions, practices, and policies that can reduce underlying structural and social factors (e.g., structural racism) and their downstream effects on health inequities [120].

Equity should be a core criterion for valuing evidence. This value statement should be reflected in the priorities of funders, how research questions are framed, how research resources and decision-making are distributed, and how studies are conducted, evaluated, and reviewed. Implementation science has a role in recognizing that a negative consequence of our social and economic systems is the concentration of resources and health. These systems create inequities, so when thinking about closing an implementation gap, we should recognize the context—that such a gap is often an outgrowth of systems that must themselves be addressed and transformed. Equity needs to be prioritized and made more explicit as part of engagement efforts, which includes consideration of power imbalances (who is and is not involved in making key decisions) and the timing of when and how partners are engaged (e.g., who is involved in EBI development and deployment, how communities are reflected in co-creating the evidence) [95, 120]. Reflection questions and step-by-step guidance can help guide study planning with an equity focus [102, 120].

Conduct more policy implementation research and evaluation

Health and social policies, in the form of laws, regulations, organizational practices, and funding priorities, have a substantial impact on the health and well-being of populations and create the conditions under which people can be healthy and thrive, or not [122, 123]. Clinical and public health guidelines inform policy implementation by providing the basis for legislation, informing covered services in health plans, and advancing policies that support health equity [124,125,126,127,128]. Policies often address the underlying social and structural conditions that shape health and inequities—this in turn provides opportunities for policy implementation to frame accountability for organizations and systems.

Policy implementation research, which has been conducted since the 1970s across multiple disciplines [129, 130], seeks to understand the complexities of the policy process and increase the likelihood that evidence reaches policymakers and influences their decisions so that the population health benefits of scientific progress are maximized [131]. A key objective of policy implementation research is to study the enactment, enforcement, and evaluation of evidence-based policies in order to (1) understand approaches that enhance the likelihood of policy adoption (process); (2) identify specific policy elements likely to be effective (content); and (3) document the potential impact of policy (outcomes) [132]. Especially in the USA, policy implementation research is underdeveloped compared with other areas of implementation science. For example, a content analysis of all projects funded by the US National Institutes of Health through implementation research program announcements found that only 8% of funded grants were on policy implementation research [133]. Few of these studies had an explicit focus on equity or social determinants of health.

Policy researchers have utilized a variety of designs, methods, and data sources to investigate the development processes, content, and outcomes of policies. Much more evidence is needed, including which policies work and which do not (for what outcomes, settings, and populations), how policies should be developed and implemented, unintended consequences of policies, and the best ways to combine quantitative and qualitative methods for evaluation of “upstream” factors that have important implications for health equity [134]. There is also a pressing need for reliable and valid measures of policy implementation processes [135]. These knowledge gaps are unlikely to be addressed by randomized designs and are more likely to be addressed using quasi-experimental designs, natural experiments, stakeholder-driven adaptations, systems science methods, citizen science, and participatory approaches [51, 66, 136,137,138,139].

Several other areas in policy implementation research need attention. First, policy makers often need information on a much shorter time frame than researchers can deliver—this calls for the use of tools such as rapid-cycle research [140] and rapid realist reviews [141]. Second, we need to better understand the spread of policies, including the reasons that ineffective policies spread [142], the role of social media [131], and ways to address mis- and dis-information in the policy process [143]. Finally, more emphasis is needed on the reciprocal, often horizontal, interactions between organizations and the development of policy-relevant evidence [144]. For this inter-organizational research, policy intermediaries (those who work between existing systems to achieve a policy goal) have gained attention due to their critical role in policy implementation [145]. Strategies and tools to address several of these issues are provided in recent reviews [146, 147] and in Table 4.

Pay greater attention to audience and stakeholder differences

There are multiple audiences of relevance for developing, applying, disseminating, and sustaining the evidence for implementation science [148]. When seeking effective methods to generate, implement, and sustain EBIs, it is important to take into account the characteristics of each audience and stakeholder group, what they value, how to balance different viewpoints, and how to combine stakeholders’ experience with research evidence. Across these stakeholder groups, research evidence is only one of many factors influencing the adoption, implementation, and sustainment of EBIs [6, 15, 40].

Key audience categories include researchers, practitioners, and policy makers (Table 5). Researchers are one core audience; these individuals typically have specialized training and may devote an entire career to studying a particular health issue. Another audience includes clinical and public health practitioners who seek practical information on the scope and quality of evidence for the range of EBIs and implementation strategies relevant to their setting. Practitioners in clinical settings (e.g., nurses, physicians) have specialized and standardized training, whereas the training of public health practitioners is highly variable (most public health practitioners lack a public health degree [149]). A third group is policy makers at local, regional, state, national, and international levels. These individuals are faced with macro-level decisions on how to allocate public resources. Policy makers seek out distributional consequences (i.e., who has to pay, how much, and who benefits) [150], and in many policy settings, anecdotes are prioritized over empirical data [9]. The category of policy makers also includes funders, who may be elected officials or “small p” policy makers (organizational leaders) who make funding decisions within their settings.

Table 5 Differences in evidence-related characteristics and needs among audiences

The relevance and usefulness of evidence vary by stakeholder type (Table 5) [151]. Research usefulness can be informed by audience segmentation, where a product promotion strategy is targeted to the characteristics of a desired segment—a widely accepted principle in marketing [152]. Audience segmentation can be informed by the process of user-centered design and decision-centered processes, in which the product (e.g., an implementation strategy) is guided in a systematic way by the end-users of the product [153,154,155].

Framing is another important factor in considering audiences for D&I. Individuals interpret the same data in different ways depending on the mental model through which they perceive information [156]. For example, policy makers often perceive risks and benefits not in scientific terms but in relation to (usually short-term) emotional, moral, financial, or political frameworks [157, 158]. In practical terms for implementation science, framing of a particular health issue for a community member or patient might relate to the ability to raise healthy children, whereas framing for a policy maker might relate to cost savings from action or inaction. Cost and economic evaluation are key considerations for a range of stakeholders involved in implementation, yet too often the perspectives of diverse stakeholders are not well considered, acted upon, or reported [159].

Next steps for addressing gaps

The “how-to” for broadening the evidence base for implementation science will require several actions. First, we need to prioritize the evidence gaps and possible ways of filling them—many ideas are shown in Table 4. Next, resources and tools are needed to address evidence deficits (Table 6). All tools listed are available free of charge and provide enough background and instruction to make them useful for a wide range of users—from beginners to experts. The tools cover multiple, overlapping domains: (1) engagement and partnerships; (2) study planning; (3) research proposals, articles, reporting, and guidelines; and (4) dissemination, scale-up, and sustainability. In addition to the resources in Table 6, there are many other portals that provide valuable information and resources for implementation research across multiple domains (e.g., technical assistance, mentorship, conferences, archived slides, webinars) [160,161,162,163,164,165,166,167,168].

Table 6 Selected resources and tools to support practice and research on evidence-based dissemination and implementation

Capacity is a core element for building a stronger, more comprehensive, and equitable evidence base. Capacity can be developed in multiple ways, including supporting the “push” for implementation science, where researchers are trained to develop the evidence for implementation along with skills in evaluation. Evaluation skill building should incorporate the principles of realist evaluation, a mixed-methods approach that takes into account multiple contextual variables [169]. There are a significant number of implementation science training opportunities across countries [160, 170, 171], though few have an explicit focus on many of the issues we have highlighted (e.g., health equity, designing for dissemination, sustainability, policy implementation). There has also been inadequate training and too little emphasis on the “pull” for implementation science (e.g., training the practitioners/implementers) [170, 172]. This emphasis on “pull” should embrace the audience differences in Table 5. There is even less evidence on who should conduct capacity building and how, especially in low-resource settings [171, 173].

There are also macro-level actions that would facilitate a broader and more robust evidence base. For example, funders and guideline developers should adopt a more comprehensive definition of evidence, addressing many of the recommendations outlined in Table 4 and above. This could include an alternative or addition to GRADE, incorporating methods of appraising research that do not automatically elevate RCTs (particularly when answering policy-related research questions). Similarly, it is helpful for study sections to be oriented to a wide array of evidence, particularly type 3 evidence. This will require some learning as well as some unlearning—for example, we need to broaden our understanding of contextual mediators and moderators of implementation, which are likely to vary from those identified in highly controlled experiments.

Conclusion

Over the past few decades, there has been substantial progress in defining evidence for clinical and public health practice, identifying evidence gaps, and making initial progress in filling certain gaps. Yet to solve the health challenges facing society, we need new and expanded thinking about evidence and commitment to context-based decision-making. This process begins with evidence—a foundation of implementation science. By critically examining and broadening current concepts of evidence, implementation science can better fulfill its vision of providing an explicit response to decades of scientific progress that has not translated into equitable and sustained improvements in population health [92].

Availability of data and materials

Not applicable.

Abbreviations

APEASE: Acceptability, Practicability, Effectiveness, Affordability, Side-effects, and Equity

CONSORT: Consolidated Standards of Reporting Trials

D4D: Designing for dissemination

EBI: Evidence-based intervention

ERIC: Expert Recommendations for Implementing Change

FRAME: Framework for Reporting Adaptations and Modifications-Enhanced

FRAME-IS: Framework for Reporting Adaptations and Modifications to Evidence-based Implementation Strategies

GRADE: Grading of Recommendations Assessment, Development and Evaluation

HIV: Human immunodeficiency virus

LMICs: Lower- and middle-income countries

MOST: Multiphase Optimization Strategy

MRC: UK Medical Research Council

PRECIS-2: PRagmatic Explanatory Continuum Indicator Summary

PRECIS-2-PS: PRagmatic Explanatory Continuum Indicator Summary for providers

RCT: Randomized controlled trial

StaRI: Standards for Reporting Implementation Studies

TCaST: Theory, Model, and Framework Comparison and Selection Tool

References

  1. Rimer BK, Glanz K, Rasband G. Searching for evidence about health education and health behavior interventions. Health Educ Behav. 2001;28:231–48.

  2. Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013;347:f6753.

  3. Shelton RC, Lee M, Brotzman LE, Wolfenden L, Nathan N, Wainberg ML. What is dissemination and implementation science?: An Introduction and opportunities to advance behavioral medicine and public health globally. Int J Behav Med. 2020;27:3–20.

  4. Rabin BA, Brownson RC, Kerner JF, Glasgow RE. Methodologic challenges in disseminating evidence-based interventions to promote physical activity. Am J Prev Med. 2006;31:S24–34.

  5. McQueen DV. Strengthening the evidence base for health promotion. Health Promot Int. 2001;16:261–8.

  6. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health. 2009;30:175–201.

  7. Brownson RC, Gurney JG, Land GH. Evidence-based decision making in public health. J Public Health Manag Pract. 1999;5:86–97.

  8. Rychetnik L, Hawe P, Waters E, Barratt A, Frommer M. A glossary for evidence based public health. J Epidemiol Community Health. 2004;58:538–45.

  9. Brownson RC, Baker EA, Deshpande AD, Gillespie KN. Evidence-Based Public Health. 3rd ed. New York: Oxford University Press; 2018.

  10. Sackett DL. Evidence-based medicine. Semin Perinatol. 1997;21:3–5.

  11. Kohatsu ND, Robinson JG, Torner JC. Evidence-based public health: an evolving concept. Am J Prev Med. 2004;27:417–21.

  12. Baba V, HakemZadeh F. Toward a theory of evidence based decision making. Manag Decis. 2012;50:832–67.

  13. Mackintosh J, Ciliska D, Tulloch K. Evidence-informed decision making in public health in action. Environ Health Rev. 2015;58:15–9.

  14. Brownson RC, Fielding JE, Green LW. Building capacity for evidence-based public health: reconciling the pulls of practice and the push of research. Annu Rev Public Health. 2018;39:27–53.

  15. Armstrong R, Pettman TL, Waters E. Shifting sands - from descriptions to solutions. Public Health. 2014;128:525–32.

  16. Jacobs JA, Dodson EA, Baker EA, Deshpande AD, Brownson RC. Barriers to evidence-based decision making in public health: a national survey of chronic disease practitioners. Public Health Rep. 2010;125:736–42.

  17. Newman M, Papadopoulos I, Sigsworth J. Barriers to evidence-based practice. Intensive Crit Care Nurs. 1998;14:231–8.

  18. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14:2.

  19. Sadeghi-Bazargani H, Tabrizi JS, Azami-Aghdash S. Barriers to evidence-based medicine: a systematic review. J Eval Clin Pract. 2014;20:793–802.

  20. Glasgow RE, Green LW, Taylor MV, Stange KC. An evidence integration triangle for aligning science with policy and practice. Am J Prev Med. 2012;42:646–54.

  21. Braithwaite J, Churruca K, Long JC, Ellis LA, Herkes J. When complexity science meets implementation science: a theoretical and empirical analysis of systems change. BMC Med. 2018;16:63.

  22. May CR, Johnson M, Finch T. Implementation, context and complexity. Implement Sci. 2016;11:141.

  23. Nilsen P, Bernhardsson S. Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes. BMC Health Serv Res. 2019;19:189.

  24. PCORI Methodology Standards [https://www.pcori.org/research/about-our-research/research-methodology/pcori-methodology-standards#Complex].

  25. Selby JV, Beal AC, Frank L. The Patient-Centered Outcomes Research Institute (PCORI) national priorities for research and initial research agenda. JAMA. 2012;307:1583–4.

  26. Skivington K, Matthews L, Simpson SA, Craig P, Baird J, Blazeby JM, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ. 2021;374:n2061.

  27. Hawe P. Lessons from complex interventions to improve health. Annu Rev Public Health. 2015;36:307–23.

  28. Hawe P, Shiell A, Riley T, Gold L. Methods for exploring implementation variation and local context within a cluster randomised community intervention trial. J Epidemiol Community Health. 2004;58:788–93.

  29. May C. Towards a general theory of implementation. Implement Sci. 2013;8:18.

  30. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.

  31. Allen J, Shelton D, Emmons K, Linnan L. Fidelity and its relationship to implementation efffectiveness, adaptation and dissemination. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York: Oxford University Press; 2018. p. 267–84.

  32. Baumann A, Cabassa L, Wiltsey Stirman S. Adaptation in dissemination and implementation science. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and Implementation Research in Health: Translating Science to Practice. 2nd ed. New York: Oxford University Press; 2018. p. 285–300.

  33. Guyatt G, Cook D, Haynes B. Evidence based medicine has come a long way. BMJ. 2004;329:990–1.

  34. Sackett DL, Rosenberg WMC. The need for evidence-based medicine. J R Soc Med. 1995;88:620–4.

  35. Glasziou P, Longbottom H. Evidence-based public health practice. Aust N Z J Public Health. 1999;23:436–40.

  36. Yost J, Dobbins M, Traynor R, DeCorby K, Workentine S, Greco L. Tools to support evidence-informed public health decision making. BMC Public Health. 2014;14:728.

  37. Rycroft-Malone J. Evidence-informed practice: from individual to context. J Nurs Manag. 2008;16:404–8.

  38. Viehbeck SM, Petticrew M, Cummins S. Old myths, new myths: challenging myths in public health. Am J Public Health. 2015;105:665–9.

  39. Glasby J, Walshe K, Gill H. What counts as ‘evidence’ in ‘evidence-based practice’? Evid Policy. 2007;3:325–7.

  40. Rycroft-Malone J, Seers K, Titchen A, Harvey G, Kitson A, McCormack B. What counts as evidence in evidence-based practice? J Adv Nurs. 2004;47:81–90.

  41. Green LW. Making research relevant: if it is an evidence-based practice, where’s the practice-based evidence? Fam Pract. 2008;25(Suppl 1):i20–4.

  42. Kothari A, Rudman D, Dobbins M, Rouse M, Sibbald S, Edwards N. The use of tacit and explicit knowledge in public health: a qualitative study. Implement Sci. 2012;7:20.

  43. Youngblut JM, Brooten D. Evidence-based nursing practice: why is it important? AACN Clin Issues. 2001;12:468–76.

  44. Swisher AK. Practice-based evidence. Cardiopulm Phys Ther J. 2010;21:4.

  45. Akobeng AK. Understanding randomised controlled trials. Arch Dis Child. 2005;90:840–4.

  46. Kessler R, Glasgow RE. A proposal to speed translation of healthcare research into practice: dramatic change is needed. Am J Prev Med. 2011;40:637–44.

  47. Sanson-Fisher RW, Bonevski B, Green LW, D’Este C. Limitations of the randomized controlled trial in evaluating population-based health interventions. Am J Prev Med. 2007;33:155–61.

  48. Weiss N, Koepsell T, Psaty B. Generalizability of the results of randomized trials. Arch Int Med. 2008;168:133–5.

  49. Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015;350:h2147.

  50. Norton WE, Loudon K, Chambers DA, Zwarenstein M. Designing provider-focused implementation trials with purpose and intent: introducing the PRECIS-2-PS tool. Implement Sci. 2021;16:7.

  51. Leatherdale ST. Natural experiment methodology for research: a review of how different methods can support real-world research. Int J Social Res Methodol. 2019;22:19–35.

  52. Petticrew M, Cummins S, Ferrell C, Findlay A, Higgins C, Hoy C, et al. Natural experiments: an underused tool for public health? Public Health. 2005;119:751–7.

  53. Palinkas LA, Mendon SJ, Hamilton AB. Innovations in mixed methods evaluations. Annu Rev Public Health. 2019;40:423–42.

  54. Ramsey AT, Proctor EK, Chambers DA, Garbutt JM, Malone S, Powderly WG, et al. Designing for Accelerated Translation (DART) of emerging innovations in health. J Clin Transl Sci. 2019;3:53–8.

  55. Leeman J, Rohweder C, Lee M, Brenner A, Dwyer A, Ko LK, O’Leary MC, Ryan G, Vu T, Ramanadhan S. Aligning implementation science with improvement practice: a call to action. Implement Sci Commun. 2021;2:99.

  56. Glidden DV, Mehrotra ML, Dunn DT, Geng EH. Mosaic effectiveness: measuring the impact of novel PrEP methods. Lancet HIV. 2019;6:e800–6.

  57. Avellar SA, Thomas J, Kleinman R, Sama-Miller E, Woodruff SE, Coughlin R, et al. External validity: the next step for systematic reviews? Eval Rev. 2017;41:283–325.

  58. Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Eval Health Prof. 2006;29:126–53.

  59. Huebschmann AG, Leavitt IM, Glasgow RE. Making health research matter: a call to increase attention to external validity. Annu Rev Public Health. 2019;40:45–63.

  60. Moore G, Campbell M, Copeland L, Craig P, Movsisyan A, Hoddinott P, et al. Adapting interventions to new contexts-the ADAPT guidance. BMJ. 2021;374:n1679.

  61. Wingood GM, DiClemente RJ. The ADAPT-ITT model: a novel method of adapting evidence-based HIV Interventions. J Acquir Immune Defic Syndr. 2008;47(Suppl 1):S40–6.

  62. Miller CJ, Barnett ML, Baumann AA, Gutner CA, Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare. Implement Sci. 2021;16:36.

  63. Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14:58.

  64. Perez Jolles M, Lengnick-Hall R, Mittman BS. Core functions and forms of complex health interventions: a patient-centered medical home illustration. J Gen Intern Med. 2019;34:1032–8.

  65. Alvidrez J, Napoles AM, Bernal G, Lloyd J, Cargill V, Godette D, Horse Brave Heart MY, Das R, Farhat T, et al. Building the evidence base to inform planned intervention adaptations by practitioners serving health disparity populations. Am J Public Health. 2019;109:S94–S101.

  66. Minkler M, Salvatore A, Chang C. Participatory approaches for study design and analysis in dissemination and implementation research. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York: Oxford University Press; 2018. p. 175–90.

  67. Ramanadhan S, Davis MM, Armstrong R, Baquero B, Ko LK, Leng JC, et al. Participatory implementation science to increase the impact of evidence-based cancer prevention and control. Cancer Causes Control. 2018;29:363–9.

  68. Escoffery C, Lebow-Skelley E, Haardoerfer R, Boing E, Udelson H, Wood R, et al. A systematic review of adaptations of evidence-based public health interventions globally. Implement Sci. 2018;13:125.

  69. Wang Z, Norris SL, Bero L. The advantages and limitations of guideline adaptation frameworks. Implement Sci. 2018;13:72.

  70. Brownson RC, Allen P, Jacob RR, Harris JK, Duggan K, Hipp PR, et al. Understanding mis-implementation in public health practice. Am J Prev Med. 2015;48:543–51.

  71. Nilsen P, Ingvarsson S, Hasson H, von Thiele SU, Augustsson H. Theories, models, and frameworks for de-implementation of low-value care: a scoping review of the literature. Implement Res Pract. 2020;1:1–15.

  72. Norton WE, Chambers DA. Unpacking the complexities of de-implementing inappropriate health interventions. Implement Sci. 2020;15:2.

  73. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9:1.

  74. Anselmi L, Binyaruka P, Borghi J. Understanding causal pathways within health systems policy evaluation through mediation analysis: an application to payment for performance (P4P) in Tanzania. Implement Sci. 2017;12:10.

  75. Mehrotra M, Petersen M, Zimmerman S, Glidden D, Geng E. Designing trials for transport: optimizing trials for translation to diverse. In: Society for Epidemiologic Research: Virtual; 2020.

  76. Pearl J, Bareinboim E. External validity: from do-calculus to transportability across populations. Statist Sci. 2014;29:579–95.

    Article  Google Scholar 

  77. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38:4–23.

    Article  PubMed  Google Scholar 

  78. Pfadenhauer L, Rohwer A, Burns J, Booth A, Bakke Lysdahl K, Hfmann B, et al. Guidance for the assessment of context and implementation in health technology assessments (HTA) and systematic reviews of complex interventions: the context and implementation of complex interventions (CICI) framework. European Union; 2016.

    Google Scholar 

  79. Glasgow RE, Emmons KM. How can we increase translation of research into practice? Types of evidence needed. Annu Rev Public Health. 2007;28:413–33.

    Article  PubMed  Google Scholar 

  80. Shelton RC, Chambers DA, Glasgow RE. An extension of RE-AIM to enhance sustainability: addressing dynamic context and promoting health equity over time. Front Public Health. 2020;8:134.

    Article  PubMed  PubMed Central  Google Scholar 

  81. Glasgow RE, Chambers D. Developing robust, sustainable, implementation systems using rigorous, rapid and relevant science. Clin Transl Sci. 2012;5:48–55.

    Article  PubMed  PubMed Central  Google Scholar 

  82. Cabassa LJ, Baumann AA. A two-way street: bridging implementation science and cultural adaptations of mental health treatments. Implement Sci. 2013;8:90.

    Article  PubMed  PubMed Central  Google Scholar 

  83. Miller AL, Krusky AM, Franzen S, Cochran S, Zimmerman MA. Partnering to translate evidence-based programs to community settings: bridging the gap between research and practice. Health Promot Pract. 2012;13:559–66.

    Article  PubMed  PubMed Central  Google Scholar 

  84. Paparini S, Papoutsi C, Murdoch J, Green J, Petticrew M, Greenhalgh T, et al. Evaluating complex interventions in context: systematic, meta-narrative review of case study approaches. BMC Med Res Methodol. 2021;21:225.

    Article  PubMed  PubMed Central  Google Scholar 

  85. Cook KE. Using critical ethnography to explore issues in health promotion. Qual Health Res. 2005;15:129–38.

    Article  PubMed  Google Scholar 

  86. Shaw RL, Larkin M, Flowers P. Expanding the evidence within evidence-based healthcare: thinking about the context, acceptability and feasibility of interventions. Evid Based Med. 2014;19:201–3.

    Article  PubMed  Google Scholar 

  87. Chinnock P, Siegfried N, Clarke M. Is evidence-based medicine relevant to the developing world? PLoS Med. 2005;2:e107.

    Article  PubMed  PubMed Central  Google Scholar 

  88. Joshi R, Alim M, Kengne AP, Jan S, Maulik PK, Peiris D, et al. Task shifting for non-communicable disease management in low and middle income countries--a systematic review. PLoS One. 2014;9:e103754.

    Article  PubMed  PubMed Central  CAS  Google Scholar 

  89. Yapa HM, Barnighausen T. Implementation science in resource-poor countries and communities. Implement Sci. 2018;13:154.

    Article  PubMed  PubMed Central  Google Scholar 

  90. Brown CH, Ten Have TR, Jo B, Dagne G, Wyman PA, Muthen B, et al. Adaptive designs for randomized trials in public health. Annu Rev Public Health. 2009;30:1–25.

    Article  PubMed  PubMed Central  Google Scholar 

  91. Brown C, Curran G, Palinkas L, et al. An overview of research and evaluation designs for dissemination and implementation. Annu Rev Public Health. 2017;38:1–22.

    Article  PubMed  PubMed Central  Google Scholar 

  92. Brownson RC, Kumanyika SK, Kreuter MW, Haire-Joshu D. Implementation science should give higher priority to health equity. Implement Sci. 2021;16:28.

    Article  PubMed  PubMed Central  Google Scholar 

  93. Mehrotra ML, Petersen ML, Geng EH. Understanding HIV program effects: a structural approach to context using the transportability framework. J Acquir Immune Defic Syndr. 2019;82(Suppl 3):S199–205.

    Article  PubMed  PubMed Central  Google Scholar 

  94. Brownson RC, Jacobs JA, Tabak RG, Hoehner CM, Stamatakis KA. Designing for dissemination among public health researchers: findings from a national survey in the United States. Am J Public Health. 2013;103:1693–9.

    Article  PubMed  PubMed Central  Google Scholar 

  95. Knoepke CE, Ingle MP, Matlock DD, Brownson RC, Glasgow RE. Dissemination and stakeholder engagement practices among dissemination & implementation scientists: results from an online survey. PLoS One. 2019;14:e0216971.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  96. Kwan BM, Brownson RC, Glasgow RE, Morrato EH, Luke DA. Designing for dissemination and sustainability to promote equitable impacts on health. Annu Rev Public Health. 2022;43:331–53.

    Article  PubMed  Google Scholar 

  97. Woodward EN, Matthieu MM, Uchendu US, Rogal S, Kirchner JE. The health equity implementation framework: proposal and preliminary study of hepatitis C virus treatment. Implement Sci. 2019;14:26.

    Article  PubMed  PubMed Central  Google Scholar 

  98. McNulty M, Smith JD, Villamar J, Burnett-Zeigler I, Vermeer W, Benbow N, et al. Implementation research methodologies for achieving scientific equity and health equity. Ethn Dis. 2019;29:83–92.

    Article  PubMed  PubMed Central  Google Scholar 

  99. Baumann AA, Cabassa LJ. Reframing implementation science to address inequities in healthcare delivery. BMC Health Serv Res. 2020;20:190.

    Article  PubMed  PubMed Central  Google Scholar 

  100. Shelton RC, Adsul P, Oh A. Recommendations for addressing structural racism in implementation science: a call to the field. Ethn Dis. 2021;31:357–64.

    Article  PubMed  PubMed Central  Google Scholar 

  101. Yousefi Nooraie R, Kwan BM, Cohn E, AuYoung M, Clarke Roberts M, Adsul P, et al. Advancing health equity through CTSA programs: opportunities for interaction between health equity, dissemination and implementation, and translational science. J Clin Transl Sci. 2020;4:168–75.

    Article  PubMed  PubMed Central  Google Scholar 

  102. Kerkhoff AD, Farrand E, Marquez C, Cattamanchi A, Handley MA. Addressing health disparities through implementation science-a need to integrate an equity lens from the outset. Implement Sci. 2022;17:13.

    Article  PubMed  PubMed Central  Google Scholar 

  103. Kumanyika SK. Health equity is the issue we have been waiting for. J Public Health Manag Pract. 2016;22(Suppl 1):S8–S10.

    Article  PubMed  Google Scholar 

  104. Braveman P, Gruskin S. Defining equity in health. J Epidemiol Community Health. 2003;57:254–8.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  105. Bach-Mortensen AM, Lange BCL, Montgomery P. Barriers and facilitators to implementing evidence-based interventions among third sector organisations: a systematic review. Implement Sci. 2018;13:103.

    Article  PubMed  PubMed Central  Google Scholar 

  106. Fagan AA, Bumbarger BK, Barth RP, Bradshaw CP, Cooper BR, Supplee LH, et al. Scaling up evidence-based interventions in US public systems to prevent behavioral health problems: challenges and opportunities. Prev Sci. 2019;20:1147–68.

    Article  PubMed  PubMed Central  Google Scholar 

  107. Andresen EM, Diehr PH, Luke DA. Public health surveillance of low-frequency populations. Annu Rev Public Health. 2004;25:25–52.

    Article  PubMed  Google Scholar 

  108. Korngiebel DM, Taualii M, Forquera R, Harris R, Buchwald D. Addressing the challenges of research with small populations. Am J Public Health. 2015;105:1744–7.

    Article  PubMed  PubMed Central  Google Scholar 

  109. Bowleg L. The problem with the phrase women and minorities: intersectionality-an important theoretical framework for public health. Am J Public Health. 2012;102:1267–73.

    Article  PubMed  PubMed Central  Google Scholar 

  110. Allen-Scott LK, Hatfield JM, McIntyre L. A scoping review of unintended harm associated with public health interventions: towards a typology and an understanding of underlying factors. Int J Public Health. 2014;59:3–14.

    Article  CAS  PubMed  Google Scholar 

  111. Lorenc T, Petticrew M, Welch V, Tugwell P. What types of interventions generate inequalities? Evidence from systematic reviews. J Epidemiol Community Health. 2013;67:190–3.

    Article  PubMed  Google Scholar 

  112. Thomson K, Hillier-Brown F, Todd A, McNamara C, Huijts T, Bambra C. The effects of public health policies on health inequalities in high-income countries: an umbrella review. BMC Public Health. 2018;18:869.

    Article  PubMed  PubMed Central  Google Scholar 

  113. Hoffmann I. Transcending reductionism in nutrition research. Am J Clin Nutr. 2003;78:514S–6S.

    Article  CAS  PubMed  Google Scholar 

  114. Braveman P, Egerter S, Williams DR. The social determinants of health: coming of age. Annu Rev Public Health. 2011;32:381–98.

    Article  PubMed  Google Scholar 

  115. Donkin A, Goldblatt P, Allen J, Nathanson V, Marmot M. Global action on the social determinants of health. BMJ Glob Health. 2018;3:e000603.

    Article  PubMed  Google Scholar 

  116. Marmot M, Bell R, Goldblatt P. Action on the social determinants of health. Rev Epidemiol Sante Publique. 2013;61(Suppl 3):S127–32.

    Article  PubMed  Google Scholar 

  117. Williams DR, Lawrence JA, Davis BA. Racism and health: evidence and needed research. Annu Rev Public Health. 2019;40:105–25.

    Article  PubMed  PubMed Central  Google Scholar 

  118. Griffith DM, Holliday CS, Enyia OK, Ellison JM, Jaeger EC. Using syndemics and intersectionality to explain the disproportionate COVID-19 mortality among black men. Public Health Rep. 2021;136:523–31.

    Article  PubMed  Google Scholar 

  119. Martinez K, Callejas L, Hernandez M. Community-defined evidence: a bottom-up behavioral health approach to measure what works in communities of color. Emotional Behav Disord Youth. 2010;10:11–6.

    Google Scholar 

  120. Shelton R, Adsul P, Oh A, Moise N, Griffith D. Application of an anti-racism lens in the field of implementation science: recommendations for Reframing Implementation Research with a Focus on Justice and Racial Equity. Implement Res Pract. 2021; in press.

  121. Mazzucca S, Arredondo EM, Hoelscher DM, Haire-Joshu D, Tabak RG, Kumanyika SK, et al. Expanding implementation research to prevent chronic diseases in community settings. Annu Rev Public Health. 2021;42:135–58.

    Article  PubMed  Google Scholar 

  122. CDC. Ten great public health achievements--United States, 1900-1999. MMWR Morb Mortal Wkly Rep. 1999;48:241–3.

    Google Scholar 

  123. CDC. Ten great public health achievements--United States, 2001-2010. MMWR Morb Mortal Wkly Rep. 2011;60:619–23.

    Google Scholar 

  124. Ayres CG, Griffith HM. Consensus guidelines: improving the delivery of clinical preventive services. Health Care Manage Rev. 2008;33:300–7.

    Article  PubMed  Google Scholar 

  125. Briss PA, Brownson RC, Fielding JE, Zaza S. Developing and using the guide to community preventive services: lessons learned about evidence-based public health. Annu Rev Public Health. 2004;25:281–302.

    Article  PubMed  Google Scholar 

  126. Woolf SH, Atkins D. The evolving role of prevention in health care. Contributions of the U.S. Preventive Services Task Force. Am J Prev Med. 2001;20:13–20.

    Article  CAS  PubMed  Google Scholar 

  127. Woolf SH, DiGuiseppi CG, Atkins D, Kamerow DB. Developing evidence-based clinical practice guidelines: lessons learned by the US Preventive Services Task Force. Annu Rev Public Health. 1996;17:511–38.

    Article  CAS  PubMed  Google Scholar 

  128. Doubeni CA, Simon M, Krist AH. Addressing systemic racism through clinical preventive service recommendations from the US Preventive Services Task Force. JAMA. 2021;325:627–8.

    Article  PubMed  Google Scholar 

  129. Mugambwa J, Nabeta N, Ngoma M, Rudaheranwa N, Kaberuka W, Munene J. Policy implementation: conceptual foundations, accumulated wisdom and new directions. J Public Adm Governance. 2018;8:211–32.

    Article  Google Scholar 

  130. Nilsen P, Stahl C, Roback K, Cairney P. Never the twain shall meet?--a comparison of implementation science and policy implementation research. Implement Sci. 2013;8:63.

    Article  PubMed  PubMed Central  Google Scholar 

  131. Purtle J, Dodson E, Brownson R. Policy dissemination research. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and Implementation Research in Health: Translating Science to Practice. 2nd ed. New York: Oxford University Press; 2018. p. 433–47.

    Google Scholar 

  132. Brownson RC, Chriqui JF, Stamatakis KA. Understanding evidence-based public health policy. Am J Public Health. 2009;99:1576–83.

    Article  PubMed  PubMed Central  Google Scholar 

  133. Purtle J, Peters R, Brownson RC. A review of policy dissemination and implementation research funded by the National Institutes of Health, 2007-2014. Implement Sci. 2016;11:1.

    Article  PubMed  PubMed Central  Google Scholar 

  134. Emmons KM, Chambers DA. Policy implementation science - an unexplored strategy to address social determinants of health. Ethn Dis. 2021;31:133–8.

    Article  PubMed  PubMed Central  Google Scholar 

  135. Allen P, Pilar M, Walsh-Bailey C, Hooley C, Mazzucca S, Lewis CC, et al. Quantitative measures of health policy implementation determinants and outcomes: a systematic review. Implement Sci. 2020;15:47.

    Article  PubMed  PubMed Central  Google Scholar 

  136. Newcomer K, Hatry H, Wholey J, editors. Handbook of practical program evaluation. 4th ed. San Francisco: Jossey-Bass; 2015.

    Google Scholar 

  137. Hinckson E, Schneider M, Winter SJ, Stone E, Puhan M, Stathi A, et al. Citizen science applied to building healthier community environments: advancing the field through shared construct and measurement development. Int J Behav Nutr Phys Act. 2017;14:133.

    Article  PubMed  PubMed Central  Google Scholar 

  138. Tengo M, Austin BJ, Danielsen F, Fernandez-Llamazares A. Creating synergies between citizen science and indigenous and local knowledge. Bioscience. 2021;71:503–18.

    Article  PubMed  PubMed Central  Google Scholar 

  139. Wallerstein N, Duran B. Community-based participatory research contributions to intervention research: the intersection of science and practice to improve health equity. Am J Public Health. 2010;100(Suppl 1):S40–6.

    Article  PubMed  PubMed Central  Google Scholar 

  140. Glasgow RE, Vinson C, Chambers D, Khoury MJ, Kaplan RM, Hunter C. National Institutes of Health approaches to dissemination and implementation science: current and future directions. Am J Public Health. 2012;102:1274–81.

    Article  PubMed  PubMed Central  Google Scholar 

  141. Saul JE, Willis CD, Bitz J, Best A. A time-responsive tool for informing policy making: rapid realist review. Implement Sci. 2013;8:103.

    Article  PubMed  PubMed Central  Google Scholar 

  142. Shipan C, Volden C. Why bad policies spread (and good ones don’t): Cambridge University Press; 2021.

  143. Mheidly N, Fares J. Leveraging media and health communication strategies to overcome the COVID-19 infodemic. J Public Health Policy. 2020;41:410–20.

    Article  PubMed  PubMed Central  Google Scholar 

  144. May C. Mobilising modern facts: health technology assessment and the politics of evidence. Sociol Health Illn. 2006;28:513–32.

    Article  PubMed  Google Scholar 

  145. Bullock HL, Lavis JN. Understanding the supports needed for policy implementation: a comparative analysis of the placement of intermediaries across three mental health systems. Health Res Policy Syst. 2019;17:82.

    Article  PubMed  PubMed Central  Google Scholar 

  146. Ashcraft LE, Quinn DA, Brownson RC. Strategies for effective dissemination of research to United States policymakers: a systematic review. Implement Sci. 2020;15:89.

    Article  PubMed  PubMed Central  Google Scholar 

  147. Bullock HL, Lavis JN, Wilson MG, Mulvale G, Miatello A. Understanding the implementation of evidence-informed policies and practices from a policy perspective: a critical interpretive synthesis. Implement Sci. 2021;16:18.

    Article  PubMed  PubMed Central  Google Scholar 

  148. Brownson RC, Eyler AA, Harris JK, Moore JB, Tabak RG. Getting the word out: new approaches for disseminating public health science. J Public Health Manag Pract. 2018;24:102–11.

    Article  PubMed  PubMed Central  Google Scholar 

  149. Institute of Medicine. Who will keep the public healthy? Educating public health professionals for the 21st century. Washington, D.C.: National Academies Press; 2003.

    Google Scholar 

  150. Sturm R. Evidence-based health policy versus evidence-based medicine. Psychiatr Serv. 2002;53:1499.

    Article  PubMed  Google Scholar 

  151. Kerner JF. Integrating research, practice, and policy: what we see depends on where we stand. J Public Health Manag Pract. 2008;14:193–8.

    Article  PubMed  Google Scholar 

  152. Slater MD. Theory and method in health audience segmentation. J Health Commun. 1996;1:267–83.

    Article  CAS  PubMed  Google Scholar 

  153. Dopp AR, Parisi KE, Munson SA, Lyon AR. Aligning implementation and user-centered design strategies to enhance the impact of health services: results from a concept mapping study. Implement Sci Commun. 2020;1:17.

    Article  PubMed  PubMed Central  Google Scholar 

  154. Lyon AR, Koerner K. User-centered design for psychosocial intervention development and implementation. Clin Psychol (New York). 2016;23:180–200.

    Google Scholar 

  155. Schnittker R, Marshall SD, Horberry T, Young K. Decision-centred design in healthcare: the process of identifying a decision support tool for airway management. Appl Ergon. 2019;77:70–82.

    Article  CAS  PubMed  Google Scholar 

  156. Morgan M, Fischhoff B, Bostrom A, Atman C. Risk communication: a mental models approach. Cambridge: Cambridge University Press; 2002.

    Google Scholar 

  157. Choi BC, Pang T, Lin V, Puska P, Sherman G, Goddard M, et al. Can scientists and policy makers work together? J Epidemiol Community Health. 2005;59:632–7.

    Article  PubMed  PubMed Central  Google Scholar 

  158. The Social Issues Research Centre. Guidelines for scientists on communicating with the media. Oxford: The Social Issues Research Centre; 2006.

    Google Scholar 

  159. Eisman AB, Quanbeck A, Bounthavong M, Panattoni L, Glasgow RE. Implementation science issues in understanding, collecting, and using cost estimates: a multi-stakeholder perspective. Implement Sci. 2021;16:75.

    Article  PubMed  PubMed Central  Google Scholar 

  160. Darnell D, Dorsey CN, Melvin A, Chi J, Lyon AR, Lewis CC. A content analysis of dissemination and implementation science resource initiatives: what types of resources do they offer to advance the field? Implement Sci. 2017;12:137.

    Article  PubMed  PubMed Central  Google Scholar 

  161. Implementation Science [https://cancercontrol.cancer.gov/is].

  162. The Center for Implementation [https://thecenterforimplementation.com/about-us].

  163. Resources [https://ncois.org.au/resources/].

  164. Dissemination and Implementation Science Program [https://medschool.cuanschutz.edu/accords/cores-and-programs/dissemination-implementation-science-program].

  165. Implementation Science Exchange [https://impsci.tracs.unc.edu/].

  166. Implementation Science Resource Hub [https://impsciuw.org/].

  167. Dissemination & Implemetation Research [https://implementationresearch.wustl.edu/].

  168. Implementation Science Video Library [https://www.youtube.com/channel/UCJhGTpULmVIENeYHPDy-jLg/videos].

  169. Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review--a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005;10(Suppl 1):21–34.

    Article  PubMed  Google Scholar 

  170. Chambers DA, Proctor EK, Brownson RC, Straus SE. Mapping training needs for dissemination and implementation research: lessons from a synthesis of existing D&I research training programs. Transl Behav Med. 2016;7:593–601.

    Article  PubMed Central  Google Scholar 

  171. Davis R. DʼLima D: Building capacity in dissemination and implementation science: a systematic review of the academic literature on teaching and training initiatives. Implement Sci. 2020;15:97.

    Article  PubMed  PubMed Central  Google Scholar 

  172. Schultes MT, Aijaz M, Klug J, Fixsen DL. Competences for implementation science: what trainees need to learn and where they learn it. Adv Health Sci Educ Theory Pract. 2021;26:19–35.

    Article  PubMed  Google Scholar 

  173. Osanjo GO, Oyugi JO, Kibwage IO, Mwanda WO, Ngugi EN, Otieno FC, et al. Building capacity in implementation science research training at the University of Nairobi. Implement Sci. 2016;11:30.

    Article  PubMed  PubMed Central  Google Scholar 

Download references

Disclaimer

The findings and conclusions in this paper are those of the authors and do not necessarily represent the official positions of the National Institutes of Health, the Centers for Disease Control and Prevention, or the American Cancer Society.

Funding

This work was supported in part by the National Cancer Institute (numbers P50CA244431, P50CA244688, P50CA244690, R01CA255382), the National Institute of Diabetes and Digestive and Kidney Diseases (numbers P30DK092950, P30DK056341, R25DK123008), the Centers for Disease Control and Prevention (number U48DP006395), the American Cancer Society (number RSG-17-156-01-CPPB), and the Foundation for Barnes-Jewish Hospital.

Author information

Contributions

RCB conceptualized the original ideas and wrote the first draft of the paper. RCS, EHG, and REG provided input on the original outline, contributed text to the draft manuscript, and provided intellectual content to the manuscript. All authors provided critical edits on drafts of the article and approved the final version.

Corresponding author

Correspondence to Ross C. Brownson.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Brownson, R.C., Shelton, R.C., Geng, E.H. et al. Revisiting concepts of evidence in implementation science. Implementation Sci 17, 26 (2022). https://doi.org/10.1186/s13012-022-01201-y
