
The decision sampling framework: a methodological approach to investigate evidence use in policy and programmatic innovation

Abstract

Background

Calls have been made for greater application of the decision sciences to investigate and improve use of research evidence in mental health policy and practice. This article proposes a novel method, “decision sampling,” to improve the study of decision-making and research evidence use in policy and programmatic innovation. An illustrative case study applies the decision sampling framework to investigate the decisions made by mid-level administrators when developing system-wide interventions to identify and treat the trauma of children entering foster care.

Methods

Decision sampling grounds qualitative inquiry in decision analysis to elicit information about the decision-making process. Our case study engaged mid-level managers in public sector agencies (n = 32) from 12 states, anchoring responses on a recent index decision regarding universal trauma screening for children entering foster care. Qualitative semi-structured interviews posed questions aligned with key components of decision analysis, systematically collecting information on the index decisions, the choices considered, the information synthesized, the expertise accessed, and ultimately the values expressed when selecting among available alternatives.

Results

Findings resulted in identification of a case-specific decision set, gaps in available evidence across the decision set, and an understanding of the values that guided decision-making. Specifically, respondents described 14 inter-related decision points summarized in five domains for adoption of universal trauma screening protocols, including (1) reach of the screening protocol, (2) content of the screening tool, (3) threshold for referral, (4) resources for screening startup and sustainment, and (5) system capacity to respond to identified needs. Respondents engaged a continuum of information that ranged from anecdote to research evidence, synthesizing multiple types of knowledge with their expertise. Policy, clinical, and delivery system experts were consulted to help address gaps in available information, prioritize specific information, and assess “fit to context.” The role of values was revealed as participants evaluated potential trade-offs and selected among policy alternatives.

Conclusions

The decision sampling framework is a novel methodological approach to investigate the decision-making process and ultimately aims to inform the development of future dissemination and implementation strategies by identifying the evidence gaps and values expressed by the decision-makers, themselves.


Background

Implementation science has been defined as “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services” [1, 2]. As implementation science grows in mental health services research, increasing emphasis has been placed on promoting the use of research evidence, or “empirical findings derived from systematic research methods and analyses” [3], in system-wide interventions that aim to improve the emotional, behavioral, and mental health of large populations of children, adolescents, and their caregivers [1, 2]. Examples include the adoption of universal mental health screening programs [4], development of trauma-informed systems of care [5], and psychotropic prescription drug monitoring programs [6,7,8]. Emphasis on research evidence use to inform system-wide interventions is rooted in beliefs that the use of such evidence will improve effectiveness, optimize resource allocations, mitigate the role of subjective beliefs or politics, and enhance health equity [9]. At the same time, documented difficulties in using research evidence in mental health policy and programmatic innovation represent a sustained challenge for the implementation sciences.

To address these challenges, calls have been made for translational efforts to extend beyond simply promoting research evidence uptake (e.g., training the workforce to identify, critically appraise, and rapidly incorporate available research into decision-making [10,11,12]) and to instead consider the knowledge, skills, and infrastructure required to inform and embed research evidence use in decision-making [13]. In this paper, we argue for engaging the decision sciences to address the translational challenge of research evidence use in system-wide policy and programmatic innovations. System-wide innovations are studied far less frequently than medical treatments and often rely on cross-sectional or quasi-experimental study designs that lack a randomized control [10], adding to the uncertainty of the evidence base. Also, relevant decisions typically require considerations that extend beyond effectiveness alone, such as whether human, fiscal, and organizational capacities can accommodate implementation or provide redress to health inequity [13]. Accordingly, studies of research evidence use in system-wide policy and programmatic innovation increasingly emphasize that efforts to improve research evidence use require access not only to the research evidence, but also to the expertise needed to address the uncertainty of that evidence and consideration of how values influence the decision-making process [14, 15].

In response, this article proposes the decision sampling framework, a systematic approach to data collection and analysis that aims to make explicit the gaps in evidence available to decision-makers by elucidating the decisions confronted and the roles of research evidence, other types of information, and expertise. This study specifically contributes to the field of implementation science by proposing the decision sampling framework as a methodological approach to inform the development of future dissemination and implementation strategies that are responsive to these evidence gaps.

This article first presents our conceptualization of an optimal “evidence-informed decision” as an integration of (a) the best available evidence, (b) expertise to address scientific uncertainty in the application of that evidence, and (c) the values decision-makers prioritize to assess the trade-offs of potential options [14]. Then, we propose a specific methodological approach, “decision sampling,” that engages the theoretic tenets of the decision sciences to direct studies of evidence use in system-wide innovations to an investigation of the decision-making process itself. We conclude with a case study of decision sampling as applied to policy regarding trauma-informed screening for children in foster care.

Leveraging the decision sciences to define an “Evidence-informed Decision”

Foundational to the decision sciences is expected utility theory—a theory of rational decision-making whereby decision-makers maximize the expectation of benefit while minimizing risks, costs, and harms. While many theoretical frameworks have been developed to address its methodological limitations [16], expected utility theory nevertheless provides important insights into optimal decision-making when confronted with the real-world challenges of accessing relevant, high-quality, and timely research evidence for policy and programmatic decisions [17]. More specifically, expected utility theory demonstrates that evidence alone does not answer the question of “what to do;” instead, it suggests that decisions draw on evidence, where available, but also incorporate expertise to appraise that evidence, as well as articulated values and priorities that are weighed in considering the potential harms, costs, and benefits associated with any decision [14]. Although expected utility theory—unlike other frameworks in the decision sciences [18]—does not address the role of human biases, cognitive distortions, and political and financial priorities that may also influence decision-making in practice, it nevertheless offers a valuable normative theoretical model to guide how decisions might be made under optimal circumstances.
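For readers less familiar with the formalism, expected utility theory can be summarized in a single standard expression (a textbook formulation rather than one drawn from the cited sources):

EU(a) = \sum_{s \in S} p(s) \, U(o(a, s))

Here, a is a candidate policy alternative, S is the set of possible states of the world, p(s) is the probability of each state given the best available evidence, o(a, s) is the outcome of choosing a when state s occurs, and U is a utility function encoding the decision-maker's values; the normatively optimal choice is the alternative that maximizes EU(a). In this formulation, research evidence informs the probabilities, expertise adjusts them for local context and uncertainty, and values determine the utility assigned to each outcome.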

Moving from research evidence use to knowledge synthesis

Based on a systematic review of research evidence use in policy [17], Oliver and colleagues argue that “much of the research [on policymakers’ use of research evidence] is theoretically naïve, focusing on the uptake of research evidence as opposed to evidence defined more broadly” [19]. Accordingly, some have suggested that policymakers use research evidence in conjunction with other forms of knowledge [20, 21]. Recent calls for “more critically and theoretically informed studies of decision-making” highlight the important role that broader sources of knowledge and local sources of information play alongside research evidence when making local policy decisions [17]. Research evidence to inform policy may include information on prevalence, risk factors, screening parameters, and treatment efficacy, which may be limited in its applicability to large population groups. Evaluations based on prospective experimental designs are rarely appropriate or possible for policy innovations because they do not account for the complexity of implementation contexts. Even when available, findings from these types of studies are often limited in their generalizability or not responsive to the information needs of decision-makers (e.g., feasibility, costs, sustainability) [14]. Accordingly, the research evidence available to make policy decisions is rarely so definitive or robust as to rule out all alternatives, limiting the utility of research evidence alone. For this reason, local sources of information, such as administrative data, consumer and provider experience and opinion, and media stories, among others, are often valued because they may facilitate tailoring available research evidence to accommodate heterogeneity in the population, delivery system, or sociopolitical context [20, 22,23,24,25,26,27]. Optimal decisions thus can be characterized as a process of synthesizing a variety of types of knowledge beyond available research evidence alone.

The role of expertise

Expertise is also needed to assess whether available research evidence holds “fit to context” [28]. Local policymakers may review evidence of what works in a controlled research setting, but remain uncertain as to where and with whom it will work on the ground [29]. Such concerns are not only practical but also scientific. For example, evaluations of policy innovations may produce estimates of average impact that are “unbiased” with respect to available data yet still can produce inaccurate predictions if the impact varies across contexts, or the evaluation estimate is subject to sampling errors [29].

The role of values in assessing population-level policies and trade-offs

Values are often considered to be antithetical to research evidence because they introduce subjective feelings into an ideally neutral process. Yet, the decision sciences suggest that values are required alongside evidence and expertise to inform optimal decision-making [30]. Application of clinical trials to medical policy offers a useful example. Frequently, medical interventions are found to reduce mortality but also to diminish quality of life. Patients and their physicians often face “preference sensitive” decisions, such as whether women at high risk of cancer attributable to the BRCA-2 gene should choose to undergo a mastectomy (i.e., removal of one or both breasts) at some cost to quality of life, or forgo the invasive surgery and accept an increased risk of premature mortality [31]. To weigh the trade-offs between competing outcomes—in this case the trade-off between extended lifespan and decreased quality of life—analysts often rely on “quality adjusted life years,” commonly known as QALYs. Fundamentally, QALYs represent a quantitative approach to applying patients’ values and preferences to evidence regarding trade-offs in the likelihood and magnitude of different outcomes.
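As a brief illustration of how QALYs quantify such a trade-off (hypothetical numbers used purely for exposition, not drawn from the cited study), QALYs are computed as

QALYs = \sum_i t_i \times u_i

where t_i is the time spent in health state i and u_i is a utility weight between 0 (death) and 1 (perfect health). If mastectomy were expected to yield 20 years at a utility of 0.85 (17.0 QALYs) while forgoing surgery yielded 15 years at a utility of 0.95 (14.25 QALYs), the comparison makes explicit how much quality of life a patient would trade for added lifespan.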

Likewise, policies frequently are evaluated on multiple outcome criteria, such as effectiveness, return on investment, prioritization within competing commitments, and/or stakeholder satisfaction. Efforts to promote evidence use, therefore, require consideration of the values of decision-makers and relevant stakeholders. Federal funding mechanisms emphasizing engagement of stakeholders in determining research priorities and patient-centered outcomes [32,33,34] are an acknowledgement of the importance of values in optimal decision-making. Specifically, when multiple criteria are relevant (e.g., effectiveness, feasibility, acceptability, cost), and when evidence may support a positive impact for one criterion but be inconclusive or unsupportive of another, preferences and values are necessary to make an evidence-informed decision.

To address these limitations, we recommend a new methodology, the decision sampling framework, to improve the analysis of decision-making by capturing the varied and complicated inputs that are left out of current models. Rather than relying solely on investigation of research evidence uptake, this approach anchors the interview on the decision itself, focusing on respondents’ perceptions of how they synthesize available evidence, use judgment to apply that evidence, and apply values to weigh competing outcomes. By specifically orienting the interview on key dimensions of the dynamic decision-making process, this method offers a process that could improve implementation of research evidence by more accurately reflecting the decision-making process and its context.

Case study: trauma-informed care for children in foster care

In this case study, our methodological approach focused on sampling a single decision per participant regarding the identification and treatment of children in foster care who have experienced trauma [35]. In 2016, just over 425,000 U.S. children were in foster care at any one point in time [36]. Of these, approximately 90% presented with symptoms of trauma [37], defined as the simultaneous or sequential occurrence of threats to physical and emotional safety including psychological maltreatment, neglect, exposure to violence, and physical and sexual abuse [38]. Despite advances in the evidence available to identify and treat trauma for children in foster care [39], a national survey indicated that fewer than half of the forty-eight states used at least one screening or assessment tool with a rating of A or B from the California Evidence-Based Clearinghouse [4].

In response, the Child and Family Services Improvement and Innovation Act of 2011 (P.L. 112-34) required Title IV-B funded child welfare agencies to develop protocols “to monitor and treat the emotional trauma associated with a child’s maltreatment and removal from home” [40], and federal funding initiatives from the Administration for Children and Families, the Substance Abuse and Mental Health Services Administration (SAMHSA), and the Centers for Medicare & Medicaid Services incentivized the adoption of evidence-based, trauma-specific treatments for children in foster care [41]. These federal actions offered the opportunity to investigate the process by which states used available research evidence to inform policy decisions as they sought to implement trauma-informed protocols for children in foster care.

In this case study, we sampled decisions of mid-level managers seeking to develop protocols to identify and treat trauma among children and adolescents entering foster care. Protocols specifically included a screening tool or process intended to inform whether referral for additional assessment or services was needed. Systematic reviews and clearinghouses were available to identify the strength of evidence for potential trauma screening tools [42, 43]. This approach explicitly investigated the multiple inputs and the complex process used to make decisions.

Methods

Table 1 provides a summary of each step of the decision sampling method and a brief description of the approach taken for the illustrative case study. This study was reviewed and approved by the Institutional Review Board at Rutgers, The State University of New Jersey.

Table 1 Steps for the decision sampling framework and an illustrative case study

Sample

We posit that decision sampling requires the articulation of an innovation or policy of explicit interest. Drawing from a method known as “experience sampling” that engages systematic self-report to create an archival file of daily activities [45], “decision sampling” first identifies a sample of individuals working in a policy or programmatic domain, and then samples one or more relevant decisions for each individual. While experience sampling is typically used to acquire reports of emotion, “decision sampling” is intended to acquire a systematic self-report of the process for decision-making. Rationale for the number of key informants should meet relevant qualitative standards (e.g., theoretic saturation [46]).

In this case study, key informants were mid-level administrators working on public sector policy relevant to trauma-informed screening, selected purposively rather than at random [47]; we then sampled specific decisions to gain systematic insight into their decision-making processes, complex social organization, and use of research evidence, expertise, and values.

Interview guide

Rather than asking directly about evidence use, the decision sampling approach seeks to minimize response and desirability bias by anchoring the conversation on an important decision recently made. The approach thus (a) shifts the typical unit of analysis in qualitative work from the participant to the decision confronted by the participant(s) working on a policy decision and (b) leverages the decision sciences to investigate that decision. Consistent with expected utility theory and as depicted in Fig. 1, the framework for an interview guide aligns with the inputs of a decision tree, reflecting the choices, chances, and outcomes that are ideally considered when making a choice among two or more options. This framework then generates domains for theoretically informed inputs of the decision-making process and consideration of subsequent trade-offs.

Fig. 1 Interview Guide Domains: Mapping Interview Questions onto a Decision Analysis Framework

Accordingly, our semi-structured interview guide asked decision-makers to recall a recent and important decision regarding a system-wide intervention to identify and/or treat trauma of children in foster care. Questions and probes were then anchored to this decision and mapped onto the conceptual model. As articulated in Fig. 1, domains in our guide included (a) decision points, (b) choices considered, (c) evidence and expertise regarding chances and outcomes [20, 48], (d) outcomes prioritized (which implicitly reflect values), (e) the group process (including focus, formality, frequency, and function associated with the decision-making process), (f) explicit values as described by the respondent, and (g) trade-offs considered in the final decision [48]. By inquiring about explicit values, our guide probed beyond values as a means to weigh competing outcomes (as more narrowly considered in decision analysis) to include a broader definition of values that could apply to any aspect of decision-making. Relevant sections of the interview guide are available in the supplemental materials.
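To make the mapping in Fig. 1 concrete, the sketch below renders the interview domains as inputs to a simple decision analysis. It is an illustrative sketch with hypothetical labels and invented probabilities and utilities; it is not software used in the study.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Outcome:
        description: str    # outcome prioritized by the decision-maker (domain d)
        probability: float  # "chance" informed by evidence and expertise (domain c)
        utility: float      # value placed on the outcome (domains f and g)

    @dataclass
    class Choice:
        label: str                                   # choice considered (domain b)
        outcomes: List[Outcome] = field(default_factory=list)

        def expected_utility(self) -> float:
            # Sum of probability-weighted utilities across possible outcomes
            return sum(o.probability * o.utility for o in self.outcomes)

    @dataclass
    class DecisionPoint:
        question: str                                # the index decision (domain a)
        choices: List[Choice] = field(default_factory=list)

        def preferred_choice(self) -> Choice:
            # The normative model selects the alternative with the highest expected utility
            return max(self.choices, key=lambda c: c.expected_utility())

    # Hypothetical example: setting the referral threshold for a trauma screen
    decision = DecisionPoint(
        question="What cut-score triggers referral for further assessment?",
        choices=[
            Choice("score >= 3", [
                Outcome("identify most symptomatic children", 0.9, 0.8),
                Outcome("exceed system capacity to respond", 0.7, -0.6),
            ]),
            Choice("score >= 7", [
                Outcome("identify fewer symptomatic children", 0.6, 0.8),
                Outcome("exceed system capacity to respond", 0.2, -0.6),
            ]),
        ],
    )
    print(decision.preferred_choice().label)  # prints "score >= 7" with these invented values

The point of the sketch is not that respondents performed such calculations, but that the interview domains map onto the same structural inputs: decisions, choices, chances, outcomes, and values.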

Data analysis

Data from the semi-structured qualitative interviews were analyzed using a modified framework analysis involving seven steps [49]. First, recordings were transcribed verbatim, checked for accuracy, and de-identified. Second, analysts familiarized themselves with the data, listening to recordings and reading transcripts. Third, an interdisciplinary team reviewed transcripts employing emergent and a priori codes—in this case aligned with the decision sampling framework (see a–g above [20, 50]). The codebook was then tested for applicability and interrater reliability. Two or more trained investigators performed line-by-line coding of each transcript using the developed codebook, and codes were then reconciled line by line until consensus was reached. Fourth, analysts used software to facilitate data indexing and developed a matrix to index codes thematically, by interview transcript, to facilitate a summary of established codes for each decision. (The matrix used to index decisions in the present study is available in the supplement.) In steps five and six, the traditional approach to framework analysis was modified by summarizing the excerpted codes for each transcript into the decision-specific matrices with relevant quotes included. Our modification specifically aggregated data for each discrete decision point into a matrix, providing the opportunity for routine and systematic analysis of each decision according to core domains. In step seven, the data were systematically analyzed across each decision matrix. Finally, we performed data validation given the potential for biases introduced by the research team. As described in Table 1, our illustrative case study engaged member-checking group interviews (see the supplement for the group interview guide).
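As a minimal sketch of the matrix-based indexing described in steps four through six (hypothetical structure and example excerpts; the matrix actually used in the study is provided in the supplement), coded excerpts can be aggregated by decision point and framework domain:

    from collections import defaultdict

    # Framework domains (a)-(g) from the interview guide
    DOMAINS = ["decision_point", "choices", "evidence_and_expertise",
               "outcomes_prioritized", "group_process", "explicit_values", "trade_offs"]

    # Each coded excerpt: (transcript_id, decision_point, domain, quote) -- hypothetical examples
    coded_excerpts = [
        ("T01", "referral threshold", "evidence_and_expertise",
         "We really found no literature on feasibility to sustain the protocol..."),
        ("T01", "referral threshold", "trade_offs",
         "We looked at how many children that was, and we said we can't afford that."),
    ]

    # Build a matrix keyed by decision point, with one cell (a list of quotes) per domain
    matrix = defaultdict(lambda: {d: [] for d in DOMAINS})
    for transcript_id, decision, domain, quote in coded_excerpts:
        matrix[decision][domain].append((transcript_id, quote))

    # Summarize each decision across domains (analogous to steps five through seven)
    for decision, cells in matrix.items():
        print(decision, {d: len(quotes) for d, quotes in cells.items()})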

Results

Table 2 presents informant characteristics for the full sample (in the 3rd column) as well as the subsample (in the 2nd column) who reported decisions relevant to screening and assessment that are analyzed in detail below. The findings presented below demonstrate key products of the decision sampling framework, including the (a) set of inter-related decisions (i.e., the “decision set”) routinely (although not universally) confronted in developing protocols to screen and assess for trauma, (b) diverse array of information and knowledge accessed, (c) gaps in that information, (d) role of expertise and judgment in assessing “fit to context”, and ultimately (e) values to select between policy alternatives. We present findings for each of these themes in turn.

Table 2 Respondent characteristics

The inter-related decision set and choices considered

Policymakers articulated a set of specific and inter-related decisions required to develop system-wide trauma-focused approaches to screen and assess children entering foster care. Across interviews, analyses revealed the need for decisions in five domains, including (a) reach, (b) content of the endorsed screening tool, (c) the threshold or “cut-score” to be used, (d) the resources to start-up and sustain a protocol, and (e) the capacity of the service delivery system to respond to identified needs. Table 3 provides a summative list of choices articulated across all respondents who reported respective decision points within each of the five domains. All decision points presented more than one choice alternative, with respondents articulating as many as six potential alternatives for a single decision point (e.g., administrator credentials).

Table 3 The inter-related decision set for trauma screening, assessment, and treatment

Information continuum to inform understandings of options’ success and failure

To inform decisions, mid-level managers sought information from a variety of different sources ranging from research evidence (e.g., descriptive studies of trauma exposure, intervention or evaluation studies of practice-based and system-wide screening initiatives, meta-analyses of available screening and treatment options, and cost-effectiveness studies [3]) to anecdotes (see Fig. 2). Across this range, information was derived both from global contexts, defined as evidence drawn from populations and settings outside of the jurisdiction, and from local contexts, defined as evidence drawn from the affected populations and settings [50]. Notably, systematic research evidence most often came from global contexts, whereas ad hoc evidence most often came from local sources.

Fig. 2 Information from Research Evidence to Anecdote

Evidence gaps

Respondents reported that the amount and type of information available depended on the decision itself. Comparing the decision set with the types of information available facilitated identification of evidence gaps. For example, respondents reported far more research literature about where a tool’s threshold should be set (based on its psychometric properties) than about the resources required to initiate and sustain the protocol over time. As a result, ad hoc information was often used to address these gaps in research evidence. For example, respondents indicated greater reliance on information such as testimony and personal experience to choose between policy alternatives. In particular, respondents often noted gaps in generalizability of research evidence to their jurisdiction and relevant stakeholders. As one respondent at a mental health agency notes, “I found screening tools that were validated on 50 people in [another city, state]. Well, that's not the population we deal with here in [our state] in community mental health or not 50 kids in [another city]. I want a tool to reflect the population we have today.”

The decision sampling method revealed where evidence gaps were greatest, allowing for further analysis to identify whether these gaps were due to a lack of extant evidence or the accessibility of existing evidence. In Fig. 3, a heat map illustrates where evidence gaps were most frequently expressed. Notably, this heat map illustrates that published studies were available for each of the relevant domains except for the “capacity for delivery systems to respond.”

Fig. 3 Information Gaps across the Dynamic Decision Continuum
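A heat map of this kind can be produced by cross-tabulating, for each decision domain, how often respondents cited each type of information. The sketch below uses invented counts solely to illustrate the tabulation; the published figure reflects the coded interviews.

    # Rows: decision domains; columns: information types (research evidence -> anecdote)
    domains = ["reach", "content of screening tool", "threshold for referral",
               "resources for startup and sustainment", "system capacity to respond"]
    info_types = ["published studies", "administrative data", "expert consultation", "anecdote"]

    # Hypothetical tallies of respondents citing each information type per domain
    counts = {
        ("threshold for referral", "published studies"): 12,
        ("system capacity to respond", "published studies"): 0,  # the gap highlighted in Fig. 3
        ("system capacity to respond", "anecdote"): 9,
    }

    # Print a simple matrix; unlisted cells default to zero (sparse cells indicate evidence gaps)
    print("domain".ljust(40) + "".join(t.ljust(22) for t in info_types))
    for d in domains:
        print(d.ljust(40) + "".join(str(counts.get((d, t), 0)).ljust(22) for t in info_types))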

Role of expertise in assessing and contextualizing available research evidence

As articulated in Table 4, respondents facing decisions engaged not only with information directly, but also with professionals holding various kinds of clinical, policy, and/or public sector systems expertise. This served three purposes. First, respondents relied on experts to address information gaps. As one respondent illustrates, “We really found no literature on [feasibility to sustain the proposed screening protocol] … So, that was where a reliance in conversation with our child welfare partners was really the most critical place for those conversations.” Second, respondents relied on experts to help sift through large amounts of evidence and present alternatives. As one respondent in a child welfare agency indicates:

“I would say the committee was much more forthright… because of their knowledge of [a specific screening] tool. I helped to present some other ones … because I always want informed decision-making here, because they've looked at other things, you know...there were people on the committee that were familiar with [a specific screening tool]. And, you know, that's fine, as long as you're – you know, having choice is always good, but [the meeting convener] knew I'd bring the other ones to the committee.”

Table 4 Types of expertise across the decision set

Third, respondents reported reliance on experts to assess whether research evidence “fit to context” of the impacted sector or population. As one mental health professional indicated:

“It just seems to me when you're writing a policy that's going to impact a certain areas’ work, it's a really good idea to get [child welfare supervisor and caseworker] feedback and to see what roadblocks they are experiencing to see if there's anything from a policy standpoint that we can help them with…And then we said here's a policy that we're thinking of going forward with and typically they may say yes, that's correct or they'll say you know this isn't going work have you thought about this? And you kind of work with them to tweak it so that it becomes something that works well for everybody.”

Therefore, clinical, policy, or public sector systems expertise played at least three different roles—as a source of information in itself, as a broker that distilled or synthesized available research evidence, and also as a means to help apply (or assess the “fit” of) available research evidence to the local context.

The role of values in selecting policy alternatives

When considering choices, respondents focused on their associated benefits, challenges, and “trade-offs.” Ultimately, decision-makers revealed how values drove them to select one alternative over another, including commitments to (a) a trauma-informed approach, (b) aligning practice with the available evidence base, (c) aligning choices with available expert perspectives, and (d) determining the capacity of the system to implement and maintain.

For example, one common decision faced was where to set the “cut-off” score for trauma screening tools. Published research typically provides a cut-off score designed to optimize sensitivity (i.e., the likelihood of a true positive) and specificity (i.e., the likelihood of a true negative). One Medicaid respondent consulted the developer of a screening tool, who articulated that the evidence-based threshold for further assessment on the tool was a score of “3.” However, the public sector decision-maker chose the higher threshold of “7” because they anticipated the system lacked capacity to implement and respond to identified needs if the threshold were set at the lower score. This decision exemplifies a trade-off between responsiveness to the available evidence base and system capacity. As the Medicaid respondent articulated: “What we looked at is we said where we would we need to draw the line, literally draw the line, to be able to afford based on the available dollars we had within the waiver.” The respondent confronted a trade-off between maximizing the effectiveness of the screening tool, the resources to start up and sustain the protocol, and the capacity of the service delivery system to respond. The respondent displayed a clear understanding of these trade-offs in the following statement: “…children who have 3 or more identified areas of trauma screen are really showing clinical significance for PTSD, these are kids you should be assessing. We looked at how many children that was [in our administrative data], and we said we can’t afford that.” In this statement, consideration for effectiveness is weighed against the feasibility of implementation, including the resources to initiate and sustain the policy.
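The arithmetic behind this trade-off can be sketched with invented numbers (they are not the respondent's data and are used only to show why a higher cut-score reduces demand on the system at the expense of sensitivity):

    # Hypothetical cohort of children entering foster care in one year
    children_entering_care = 10_000
    true_prevalence = 0.60   # assumed proportion with clinically significant trauma symptoms

    # Hypothetical operating characteristics at two candidate cut-scores
    # (sensitivity = probability a symptomatic child screens positive;
    #  false positive rate = probability a non-symptomatic child screens positive)
    cut_scores = {
        3: {"sensitivity": 0.90, "false_positive_rate": 0.20},
        7: {"sensitivity": 0.55, "false_positive_rate": 0.05},
    }

    cost_per_referral = 400  # hypothetical cost of one follow-up assessment, in dollars

    for score, perf in cut_scores.items():
        symptomatic = children_entering_care * true_prevalence
        non_symptomatic = children_entering_care - symptomatic
        referred = (symptomatic * perf["sensitivity"]
                    + non_symptomatic * perf["false_positive_rate"])
        missed = symptomatic * (1 - perf["sensitivity"])
        print(f"cut-score {score}: {referred:,.0f} referrals, "
              f"{missed:,.0f} symptomatic children not referred, "
              f"${referred * cost_per_referral:,.0f} in assessment costs")

With these invented inputs, the lower cut-score refers roughly 6,200 children at a cost of about $2.5 million, while the higher cut-score refers 3,500 at about $1.4 million but misses many more symptomatic children; choosing between them is precisely the values-laden trade-off the respondent described.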

Discussion

By utilizing a qualitative method grounded in the decision sciences, the decision sampling framework provides a new methodological approach to improve the study and use of research evidence in real-world settings. As the case study demonstrates, the approach draws attention to how decision-makers confront sets of inter-related decisions when implementing system-wide interventions. Each decision presents multiple choices, and the extent and types of information available depend upon the specific decision point. Our study finds that many distinct decision points were routinely reported in the process of adopting, implementing, and sustaining a screening program, yet research evidence was available for only some of these decisions [51], and local evidence, expertise, and values were often applied in the decision-making process to address this gap. For example, decision-makers consulted experts to address evidence gaps, and they relied on both local information and expert judgment to assess “fit to context”—not unlike how researchers address external validity [52].

Concretely, the decision sampling framework seeks to produce insights that will advance the development of strategies to expedite research evidence use in system-wide innovations. Similar to approaches used to identify implementation strategies for clinical interventions (e.g., implementation mapping) [53], the decision sampling framework aims to provide evidence that can inform identification of the strategies required to expedite adoption of research findings into system-wide policies and programs. Notably, decision sampling may also identify evidence gaps, leading to the articulation of areas where new lines of research may be required. Whether informing implementation strategies or new areas for research production, decision sampling roots itself in understanding the experience and perspectives of decision-makers themselves, including the decision set confronted and the roles of various types of information, expertise, and values in the relevant decisions. Analytic attention to the decision-making process leads to at least three concrete benefits, as articulated below.

First, the decision sampling framework challenges whether we refer to a system-wide innovation as “evidence-based” rather than as “evidence-informed.” As illustrated in our case example, a system-wide innovation may involve some decisions that are “evidence-based” (e.g., selection of the screening tool) while other decisions fail to meet that criterion, whether because of a lack of access to a relevant evidence base or expertise, or because decision-makers held values that prioritized other considerations over the research evidence alone. For example, respondents in some cases adopted an evidence-based screening tool but then chose not to implement the recommended threshold; while thresholds are widely recommended in peer-reviewed publications, neither their psychometric properties nor the trade-offs they imply are typically reported in full. Thus, the degree to which screening tools are based on “evidence derived from applying systematic methods and analyses” is an open question [54, 55]. In this case, an element of scientific uncertainty may have driven the perceived need for adaptation of an “evidence-based” screening threshold.

Second, decision sampling contributes to theory on why studies of evidence use should address other sources of information beyond systematic research [56]. Our findings emphasize the importance of capturing the heterogeneity of information used by decision-makers to assess “fit to context” and address the scientific uncertainty that may characterize any one type of evidence alone [14]. Notably, knowledge synthesis across the information continuum (i.e., research evidence to ad hoc information) may facilitate the use of available research evidence, impede it, or complement it, specifically in areas where uncertainty or gaps in the available research evidence exist. Whether use of other types of information serves any of these particular purposes is an empirical question that requires further investigation in future studies.

Third, the decision sampling framework makes a methodological contribution by providing a new template to produce generalizable knowledge about quality gaps that impede research evidence use in system-wide innovations [17] as called for by Gitomer and Crouse [57]. By applying standards of qualitative research for sampling, measures development, and analysis to mitigate potential bias, this framework facilitates the production of knowledge that is generalizable beyond any one individual system, as prioritized in implementation sciences [1]. To begin, the decision sampling framework maps the qualitative semi-structured interview guide on tenets of the decision sciences (see Fig. 1). Integral to this approach is a concrete cross-walk between the theoretic tenets of decision sciences and the method of inquiry. Modifications are possible; for example, we engaged individual interviews for this study but other studies may warrant group interviews or focus groups to characterize collective decision-making processes. Finally, our measures align with key indicators in decision analysis, while other studies may benefit from crosswalks with other theories (e.g., cumulative prospect theory to systematically examine heuristics and cognitive biases) [58]. Specific cases may require particular attention to one or all of these domains. For example, an area where the information continuum is already well-documented may warrant greater attention to how expertise and values are drawn upon to influence the solution selected.

Fourth, decision sampling can help to promote the use of research evidence in policy and programmatic innovation. Our findings corroborate a growing literature in the implementation sciences that articulates the value of “knowledge experts” and opinion leaders to facilitate innovation diffusion and research evidence use [56, 59]. Notably, such efforts may require working with decision-makers to understand trade-offs and the role of values in prioritizing choices. Thus, dissemination efforts must move beyond simply providing “clearinghouses” that “grade” and disseminate the available research evidence and instead engage knowledge experts in assessing fit-to-context, as well as the applicability to specific decisions. To move beyond knowledge transfer, such efforts call for novel implementation strategies, such as group model building and simulation modeling, to assist in consideration of the trade-offs that various policy alternatives present [60]. Rather than attempting to flatten and equalize all contexts and information sources, this method recognizes that subjective assessments, values, and complex contextual concerns are always present in the translation of research evidence to implementation.

Notably, evidence gaps also demand greater attention from the research community itself. As traditionally conceptualized, research evidence originates in academic centers that design and test evidence-based practices prior to dissemination; this linear process has been termed the “science push” [60] to practice settings. In contrast, our findings support models of “linkage and exchange,” emphasizing the bi-directional process by which policymakers, service providers, researchers, and other stakeholders work together to create new knowledge and implement promising findings [60]. Such efforts are increasingly prioritized by organizations such as the Patient-Centered Outcomes Research Institute and the National Institutes of Health, which have called for every stage of research to be conducted with a diverse array of relevant stakeholders, including patients and the public, providers, purchasers, payers, policymakers, product makers, and principal investigators.

Limitations

Any methodological approach provides a limited perspective on the phenomenon of interest. As Moss and Hartel (2016) note, “both the methods we know and the methodologies that ground our understandings of disciplined inquiry shape the ways we frame problems, seek explanations, imagine solutions, and even perceive the world” (p. 127). We draw on the decision sciences to propose a decision sampling framework grounded in the concept of rational choice as an organizing framework. At the same time, we have no expectation that the machinations of the decision-making process align with the components required of a formal decision analysis. Rather, we think engaging a decision sampling framework facilitates characterizing and analyzing key dimensions of the inherent complexity of the decision-making process. Systematic collection and analysis of these dimensions is proposed to more accurately reflect the experiences of decision-makers and promote responsive translational efforts.

Characterizing the values underlying decision-making processes is a complex and multi-dimensional undertaking that is frequently omitted in studies on research evidence use [15]. Our methodological approach seeks to assess values both by asking explicit questions and by evaluating which options “won the day” as respondents indicated why specific policy alternatives were selected. Notably, values as articulated and values as realized in the decision-making process frequently did not align, offering insights into how we ascertain information about the values of complex decision-making processes.

Conclusions

Despite increased interest in research evidence use in system-wide innovations, little attention has been given to anchoring studies in the decisions, themselves. In this article, we offer a new methodological framework for “decision sampling” to facilitate systematic inquiry into the decision-making process. Although frameworks for evidence-informed decision-making originated in clinical practice, our findings suggest utility in adapting this model when evaluating decisions that are critical to the development and implementation of system-wide innovations. Implementation scientists, researchers, and intermediaries may benefit from an analysis of the decision-making process, itself, to identify and be responsive to evidence gaps in research production, implementation, and dissemination. As such, the decision sampling framework can offer a valuable approach to guide future translational efforts that aim to improve research evidence use in adoption and implementation of system-wide policy and programmatic innovation.

Availability of data and materials

Due to the detailed nature of the data and to ensure we maintain the confidentiality promised to study participants, the data generated and analyzed during the current study are not publicly available.

References

  1. Bauer MS, Damschroder L, Hagedorn H, Smith J, Kilbourne AM. An introduction to implementation science for the non-specialist. BMC Psychol. 2015;3(1):32.
  2. Eccles MP, Mittman BS. Welcome to implementation science. Implement Sci. 2006;1(1):1–3.
  3. Tseng V. The uses of research in policy and practice. Soc Policy Rep. 2008;26(2):3–16.
  4. Hayek M, Mackie T, Mulé C, Bellonci C, Hyde J, Bakan J, et al. A multi-state study on mental health evaluation for children entering foster care. Adm Policy Ment Health. 2013;41:1–16.
  5. Ko SJ, Ford JD, Kassam-Adams N, Berkowitz SJ, Wilson C, Wong M, et al. Creating trauma-informed systems: child welfare, education, first responders, health care, juvenile justice. Prof Psychol. 2008;39(4):396.
  6. Kelleher KJ, Rubin D, Hoagwood K. Policy and practice innovations to improve prescribing of psychoactive medications for children. Psychiatr Serv. 2020;71(7):706–12.
  7. Mackie TI, Hyde J, Palinkas LA, Niemi E, Leslie LK. Fostering psychotropic medication oversight for children in foster care: a national examination of states’ monitoring mechanisms. Adm Policy Ment Health. 2017;44(2):243–57.
  8. Mackie TI, Schaefer AJ, Karpman HE, Lee SM, Bellonci C, Larson J. Systematic review: system-wide interventions to monitor pediatric antipsychotic prescribing and promote best practice. J Am Acad Child Adolesc Psychiatry. 2021;60(1):76–104.e7.
  9. Cairney P, Oliver K. Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy? Health Res Policy Syst. 2017;15(1):35.
  10. Brownson RC, Chriqui JF, Stamatakis KA. Understanding evidence-based public health policy. Am J Public Health. 2009;99(9):1576–83.
  11. Brownson RC, Gurney JG, Land GH. Evidence-based decision making in public health. J Public Health Manag Pract. 1999;5:86–97.
  12. Dreisinger M, Leet TL, Baker EA, Gillespie KN, Haas B, Brownson RC. Improving the public health workforce: evaluation of a training course to enhance evidence-based decision making. J Public Health Manag Pract. 2008;14(2):138–43.
  13. DuMont K. Reframing evidence-based policy to align with the evidence. New York: W.T. Grant Foundation; 2019.
  14. Sheldrick CR, Hyde J, Leslie LK, Mackie T. The debate over rational decision making in evidence-based medicine: implications for evidence-informed policy. Evid Policy. 2020.
  15. Tanenbaum SJ. Evidence-based practice as mental health policy: three controversies and a caveat. Health Aff. 2005;24(1):163–73.
  16. Stojanović B. Daniel Kahneman: The riddle of thinking: Thinking, fast and slow. Penguin Books, London, 2012. Panoeconomicus. 2013;60(4):569–76.
  17. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14(1):2.
  18. Arnott D, Gao S. Behavioral economics for decision support systems researchers. Decis Support Syst. 2019;122:113063.
  19. Oliver S, Harden A, Rees R, Shepherd J, Brunton G, Garcia J, et al. An emerging framework for including different types of evidence in systematic reviews for public policy. Evaluation. 2005;11(4):428–46.
  20. Hyde JK, Mackie TI, Palinkas LA, Niemi E, Leslie LK. Evidence use in mental health policy making for children in foster care. Adm Policy Ment Health. 2015;43:1–15.
  21. Palinkas LA, Schoenwald SK, Hoagwood K, Landsverk J, Chorpita BF, Weisz JR, et al. An ethnographic study of implementation of evidence-based treatments in child mental health: first steps. Psychiatr Serv. 2008;59(7):738–46.
  22. Greenhalgh T, Russell J. Evidence-based policymaking: a critique. Perspect Biol Med. 2009;52(2):304–18.
  23. Lomas J, Brown AD. Research and advice giving: a functional view of evidence-informed policy advice in a Canadian Ministry of Health. Milbank Q. 2009;87(4):903–26.
  24. Marston G, Watts R. Tampering with the evidence: a critical appraisal of evidence-based policy-making. Drawing Board. 2003;3(3):143–63.
  25. Clancy CM, Cronin K. Evidence-based decision making: global evidence, local decisions. Health Aff. 2005;24(1):151–62.
  26. Taussig HN, Culhane SE. Impact of a mentoring and skills group program on mental health outcomes for maltreated children in foster care. Arch Pediatr Adolesc Med. 2010;164(8):739–46.
  27. Politi MC, Lewis CL, Frosch DL. Supporting shared decisions when clinical evidence is low. Med Care Res Rev. 2013;70(1 Suppl):113S–28S.
  28. Lomas J. The in-between world of knowledge brokering. BMJ. 2007;334(7585):129.
  29. Orr LL, Olsen RB, Bell SH, Schmid I, Shivji A, Stuart EA. Using the results from rigorous multisite evaluations to inform local policy decisions. J Policy Anal Manag. 2019;38(4):978–1003.
  30. Hunink MM, Weinstein MC, Wittenberg E, Drummond MF, Pliskin JS, Wong JB, et al. Decision making in health and medicine: integrating evidence and values. Cambridge: Cambridge University Press; 2014.
  31. Weitzel JN, McCaffrey SM, Nedelcu R, MacDonald DJ, Blazer KR, Cullinane CA. Effect of genetic cancer risk assessment on surgical decisions at breast cancer diagnosis. Arch Surg. 2003;138(12):1323–8.
  32. Concannon TW, Meissner P, Grunbaum JA, McElwee N, Guise JM, Santa J, et al. A new taxonomy for stakeholder engagement in patient-centered outcomes research. J Gen Intern Med. 2012;27:985–91.
  33. Concannon TW, Fuster M, Saunders T, Patel K, Wong JB, Leslie LK, et al. A systematic review of stakeholder engagement in comparative effectiveness and patient-centered outcomes research. J Gen Intern Med. 2014;29(12):1692–701.
  34. Frank L, Basch C, Selby JV. The PCORI perspective on patient-centered outcomes research. JAMA. 2014;312(15):1513–4.
  35. Substance Abuse and Mental Health Services Administration. Guidance on strategies to promote best practice in antipsychotic prescribing for children and adolescents. Rockville: Substance Abuse and Mental Health Services Administration, Office of Chief Medical Officer; 2019. Contract No.: HHS Pub. No. PEP19-ANTIPSYCHOTIC-BP.
  36. U.S. Department of Health and Human Services, Administration on Children, Youth and Families, Children’s Bureau. The Adoption and Foster Care Analysis and Reporting System report. Washington, DC: U.S. Department of Health and Human Services; 2017.
  37. Burns BJ, Phillips SD, Wagner HR, Barth RP, Kolko DJ, Campbell Y, et al. Mental health need and access to mental health services by youths involved with child welfare: a national survey. J Am Acad Child Adolesc Psychiatry. 2004;43(8):960–70.
  38. National Child Traumatic Stress Network. What is traumatic stress? 2003. Available from: http://www.nctsn.org/resources/topics/treatments-that-work/promisingpractices.pdf. Accessed 26 Jan 2021.
  39. Goldman Fraser J, Lloyd S, Murphy R, Crowson M, Casanueva C, Zolotor A, et al. Child exposure to trauma: comparative effectiveness of interventions addressing maltreatment. Rockville: Agency for Healthcare Research and Quality; 2013. Contract No.: AHRQ Publication No. 13-EHC002-EF.
  40. Child and Family Services Improvement and Innovation Act, Pub. L. No. 112-34, 42 U.S.C. § 1305 (2011).
  41. Administration for Children and Families. Information Memorandum ACYF-CB-IM-12-04: Promoting social and emotional well-being for children and youth receiving child welfare services. Washington, DC: Administration on Children, Youth, and Families; 2012.
  42. Eklund K, Rossen E, Koriakin T, Chafouleas SM, Resnick C. A systematic review of trauma screening measures for children and adolescents. School Psychol Quart. 2018;33(1):30.
  43. National Child Traumatic Stress Network. Measure reviews. Available from: https://www.nctsn.org/treatments-and-practices/screening-andassessments/measure-reviews/all-measure-reviews. Accessed 26 Jan 2021.
  44. Hanson JL, Balmer DF, Giardino AP. Qualitative research methods for medical educators. Acad Pediatr. 2011;11(5):375–86.
  45. Larson R, Csikszentmihalyi M. The experience sampling method. In: Flow and the foundations of positive psychology. Claremont: Springer; 2014. p. 21–34.
  46. Crabtree BF, Miller WL. Doing qualitative research. Newbury Park: Sage Publications; 1999.
  47. Gilchrist V. Key informant interviews. In: Crabtree BF, Miller WL, editors. Doing qualitative research. 3. Thousand Oaks: Sage; 1992.
  48. Palinkas LA, Fuentes D, Finno M, Garcia AR, Holloway IW, Chamberlain P. Inter-organizational collaboration in the implementation of evidence-based practices among public agencies serving abused and neglected youth. Adm Policy Ment Health. 2014;41(1):74–85.
  49. Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013;13(1):117.
  50. Palinkas LA, Aarons G, Chorpita BF, Hoagwood K, Landsverk J, Weisz JR. Cultural exchange and the implementation of evidence-based practice: two case studies. Res Soc Work Pract. 2009;19(5):602–12.
  51. Baumann DJ, Fluke JD, Dalgleish L, Kern H. The decision making ecology. In: Shlonsky A, Benbenishty R, editors. From evidence to outcomes in child welfare. New York: Oxford University Press. p. 24–40.
  52. Aarons GA, Sklar M, Mustanski B, Benbow N, Brown CH. “Scaling-out” evidence-based interventions to new populations or new health care delivery systems. Implement Sci. 2017;12(1):111.
  53. Fernandez ME, Ten Hoor GA, van Lieshout S, Rodriguez SA, Beidas RS, Parcel G, et al. Implementation mapping: using intervention mapping to develop implementation strategies. Front Public Health. 2019;7:158.
  54. Sheldrick RC, Benneyan JC, Kiss IG, Briggs-Gowan MJ, Copeland W, Carter AS. Thresholds and accuracy in screening tools for early detection of psychopathology. J Child Psychol Psychiatry. 2015;56(9):936–48.
  55. Sheldrick RC, Garfinkel D. Is a positive developmental-behavioral screening score sufficient to justify referral? A review of evidence and theory. Acad Pediatr. 2017;17(5):464–70.
  56. Valente TW, Davis RL. Accelerating the diffusion of innovations using opinion leaders. Ann Am Acad Pol Soc Sci. 1999;566(1):55.
  57. Gitomer DH, Crouse K. Studying the use of research evidence: a review of methods. New York City: W.T. Grant Foundation; 2019.
  58. Tversky A, Kahneman D. Advances in prospect theory: cumulative representation of uncertainty. J Risk Uncertain. 1992;5(4):297–323.
  59. Mackie TI, Hyde J, Palinkas LA, Niemi E, Leslie LK. Fostering psychotropic medication oversight for children in foster care: a national examination of states’ monitoring mechanisms. Adm Policy Ment Health. 2017;44(2):243–57.
  60. Belkhodja O, Amara N, Landry R, Ouimet M. The extent and organizational determinants of research utilization in Canadian health services organizations. Sci Commun. 2007;28(3):377–417.


Acknowledgements

This research was supported by a research grant in the Use of Research Evidence Portfolio of the W.T. Grant Foundation [PI: Mackie]. We extend our appreciation to the many key informants who gave generously of their time to facilitate this research study; we are deeply appreciative and inspired by their daily commitment to improve the well-being of children and adolescents.

Funding

This research was supported by a research grant from the W.T. Grant Foundation [PI: Mackie].

Author information

Authors and Affiliations

Authors

Contributions

TIM led the development of the study, directed all qualitative data collection and analysis, drafted sections of the manuscript, and directed the editing process. AJS assisted with the qualitative data collection and analyses, writing of the initial draft, and editing the final manuscript. JAH and LKL participated in the design of the study, interpretation of the data, and editing of the final manuscript. EB participated in the interpretation of the data analyses and editing of the final manuscript. BF participated in the qualitative data collection and analyses and editing of the final manuscript. RCS participated in the design of the study and data analysis, drafted the sections of the paper, and revised the final manuscript. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Thomas I. Mackie.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Ethics approval and consent to participate

This study was reviewed and approved by the Institutional Review Board at Rutgers, The State University of New Jersey.

Consent for publication

Not applicable

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Mackie, T.I., Schaefer, A.J., Hyde, J.K. et al. The decision sampling framework: a methodological approach to investigate evidence use in policy and programmatic innovation. Implementation Sci 16, 24 (2021). https://doi.org/10.1186/s13012-021-01084-5

