
Leveraging artificial intelligence to advance implementation science: potential opportunities and cautions



The field of implementation science was developed to address the significant time delay between establishing an evidence-based practice and its widespread use. Although implementation science has contributed much toward bridging this gap, the evidence-to-practice chasm remains a challenge. There are some key aspects of implementation science in which advances are needed, including speed and assessing causality and mechanisms. The increasing availability of artificial intelligence applications offers opportunities to help address specific issues faced by the field of implementation science and expand its methods.

Main text

This paper discusses the many ways artificial intelligence can address key challenges in applying implementation science methods while also considering potential pitfalls of using artificial intelligence. We answer the questions of “why” the field of implementation science should consider artificial intelligence, “for what” (the purpose and methods), and “so what” (the consequences and challenges). We describe specific ways artificial intelligence can address implementation science challenges related to (1) speed, (2) sustainability, (3) equity, (4) generalizability, (5) assessing context and context-outcome relationships, and (6) assessing causality and mechanisms. Examples are provided from global health systems, public health, and precision health that illustrate both potential advantages and hazards of integrating artificial intelligence applications into implementation science methods. We conclude by providing recommendations and resources for implementation researchers and practitioners to leverage artificial intelligence in their work responsibly.


Artificial intelligence holds promise to advance implementation science methods (“why”) and accelerate its goals of closing the evidence-to-practice gap (“purpose”). However, evaluation of artificial intelligence’s potential unintended consequences must be considered and proactively monitored. Given the technical nature of artificial intelligence applications as well as their potential impact on the field, transdisciplinary collaboration is needed and may suggest the need for a subset of implementation scientists cross-trained in both fields to ensure artificial intelligence is used optimally and ethically.



The Healthy People 2030 vision calls for a “society in which all people can achieve their full potential for health and well-being” [1]. This vision is aspirational and leaves much to be done. Among high-income countries, the average number of annual deaths that could be avoided altogether with preventive or treatment strategies ranges from 130 to more than 330 per 100,000 people [2]. Although the lag time from knowledge generation to translation varies by situation, recent estimates suggest the average is 15 years, a modest improvement over prior estimates [3, 4]. To move the needle and make real progress toward this vision, we need to do better faster, which includes producing and sustaining equitable results. We need more rapid knowledge generation and translation done in replicable, equitable, sustainable, locally relevant, and externally valid ways [5, 6]. Implementation science (IS) can play a key role in translating evidence into practice and policy.

IS methods and approaches can drive improvements in equity, sustainability, and the balance between local relevance and external validity needed to support translational science. When used by learning health systems or, more generally, in healthcare or public health settings, IS can iteratively support the continuum of knowledge generation to translation in many ways [7]. IS specifically focuses on feasibility and relevance to the local context while also considering principles of designing for dissemination, sustainability, and equity [8]. Importantly, equity is considered at each step in the continuum of knowledge generation to translation and promoted through the representation of partner perspectives and representativeness of outcomes [9,10,11,12,13,14,15,16]. However, IS has several limitations and challenges, notably the time and resources required to apply its methods and approaches. Such constraints can lead to reduced frequency, sample sizes, or representation of partner engagement and hamper other methods commonly used to assess context and outcomes [17]. Such limitations can dampen the potential for IS to enhance reach, equity, sustainability, and generalizability and ultimately impede its ability to close the evidence-to-practice gap.

As artificial intelligence (AI) gains prominence in the public health and healthcare sectors, it provides avenues to address some of the challenges to IS. AI algorithms such as machine learning (ML), deep learning (DL), and reinforcement learning (RL) serve as the foundation. Domain applications of these algorithms, such as natural language processing (NLP), are increasingly acknowledged as essential tools in the health sciences landscape [18]. See Table 1 for a description of key AI terms used in this paper. Their diverse applications range from predicting disease outbreaks, enhancing medical imaging, and refining patient communication via tools like chatbots to influencing behavior changes at patient, staff, organizational, or even community levels. Over the last decade, there has been a significant increase in the volume of scientific literature integrating AI into health research [19, 20]. This research incorporates a broad spectrum of AI models—from shallow ML algorithms, such as decision trees and k-means clustering, to deep neural networks. These AI models are applied to various data sources and types, such as clinical and observational data and data formats, including tabular, text, and images. The growth of large-scale, diverse health data, coupled with the emergence of new AI techniques, has led to significant change in the healthcare sector, improving our capabilities in diagnosis, disease prediction, patient care, and behavior modification [19,20,21,22,23]. AI technologies also afford opportunities to automate aspects of care delivery, quality improvement, and health services research processes that previously required human labor and thereby can increase speed and efficiency, including for implementation research and practice [24].

Table 1 Definitions of artificial intelligence terminology used

The potential of AI to enhance IS is evident, but there are also cautions to consider, including AI’s potential to exacerbate inequities if unchecked [25,26,27,28]. This paper aims to elucidate how AI can address current IS challenges while also shedding light on its potential pitfalls. We further provide specific examples from global health systems, public health, and precision health to illustrate both the advantages and precautions when integrating AI with IS. We conclude by providing recommendations and selected resources for implementation researchers and practitioners to leverage AI in their work. While there are extant primers on AI in healthcare and research and papers describing how IS can enhance AI [29,30,31], this paper focuses on ways AI can address challenges specific to IS and offers tangible guidance tailored to the IS community on how to apply AI to their work while being cognizant of and mitigating potential unintended consequences. We also discuss intellectual property rights related to the use of AI.

Main text

2A. Opportunities for integration of AI to optimize IS methods

Here, we outline “why” AI should be used in the field of IS by describing some of the key challenges facing IS as well as tangible examples of how AI can help overcome these challenges. The specific IS challenges addressed are (1) speed, (2) sustainability, (3) equity, (4) generalizability, (5) assessing context and context-outcome relationships, and (6) assessing causality and mechanisms. Table 2 summarizes these IS challenges and AI solutions. Table 3 provides examples from health systems and public health settings describing how AI can address the limitations of IS.

Table 2 Implementation science challenges and artificial intelligence solutions with caveats
Table 3 Examples of how AI can address IS challenges in health systems and public health settings

2A1. Speed as an IS challenge

Improving and measuring the speed of its methods and translation is a critical issue for IS [17, 40, 41]. Despite great promise and evolving methods to improve the speed of certain activities, IS methods require time, including time to conduct partner engagement, test implementation strategies, evaluate outcomes, and collect and analyze mixed methods data. The time required to carry out traditional IS approaches can slow knowledge generation and translation, and AI can expedite these steps. For example, AI-enabled chatbots can be trained to lead or moderate qualitative interviews or focus groups, allowing multiple sessions to run in parallel without the usual personnel constraints. Such chatbots are already used in the business sector for job candidate interviews [42] and could be adapted for IS applications. NLP and more advanced forms of AI can also collect and analyze data inductively or deductively, including unstructured data whose qualitative analysis has traditionally been manual, time-consuming, and slow [43]. The use of AI to conduct qualitative analyses is becoming more common, either as a standalone method or in a “human-assisted” mode in which researchers iteratively review AI outputs and provide redirection as needed [44,45,46,47]. Newer, rapid approaches to qualitative analysis in IS have already sped up this step [48], and these methods could be further augmented with AI to reduce person-time by an order of magnitude. Notably, AI-enabled software is readily available to assist with transcription [49]. Table 3 summarizes a study that compared NLP to traditional qualitative methods and found NLP effective at identifying major themes but less precise at more granular interpretations [33].
Chatbots can also be used to automate the creation and testing of tailored messaging as educational implementation strategies (e.g., behavioral nudges) for different target groups of patients, staff, or settings based on their unique characteristics to increase the speed of identifying contextually appropriate and effective strategies [50]. Most examples of leveraging AI to accelerate speed are currently outside the field of IS [42, 44,45,46,47].
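To make the deductive side of AI-assisted qualitative analysis concrete, the sketch below codes interview excerpts against a predefined codebook using simple keyword matching. The codebook, keywords, and excerpts are hypothetical illustrations, not a validated pipeline; production use would pair a trained language model with iterative human review, as described above.

```python
# Hypothetical sketch of deductive qualitative coding via keyword matching.
# The codes, keywords, and excerpts are invented for illustration only.
from collections import defaultdict

CODEBOOK = {  # code -> indicator keywords (assumed, not validated)
    "leadership_support": ["leadership", "champion", "director"],
    "staffing_barriers": ["turnover", "understaffed", "workload"],
    "training_needs": ["training", "education", "unsure how"],
}

def code_excerpts(excerpts):
    """Map each code to the indices of excerpts containing its keywords."""
    hits = defaultdict(list)
    for i, text in enumerate(excerpts):
        lowered = text.lower()
        for code, keywords in CODEBOOK.items():
            if any(kw in lowered for kw in keywords):
                hits[code].append(i)
    return dict(hits)

excerpts = [
    "Our director was a real champion for the new screening protocol.",
    "With the turnover we have had, the workload makes adoption hard.",
    "Staff said they were unsure how to document the new workflow.",
]
coded = code_excerpts(excerpts)
```

A human analyst would then review each coded excerpt, refine the codebook, and rerun, mirroring the “human-assisted” loop noted above.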

2A2. Sustainability as an IS challenge

Sustainability is a central tenet of IS and ideally requires iterative, ongoing progress assessments to identify intervention components and implementation strategies needing adaptation [51,52,53,54]. However, these ongoing evaluation methods can tax available resources, particularly human capital. By automating iterative evaluation cycles, AI may reduce the demand for human resources, which is often a bottleneck to sustainability methods. A study conducted by the Regional Social Health Agency in Italy (Table 3) demonstrates how AI can improve the efficiency of using health information and promote the sustainability of healthcare systems [35]. There are other avenues in which AI can contribute to sustainability. For instance, AI algorithms can work in tandem with chatbots and NLP tools to continuously monitor for subtle changes in outcomes that are difficult for traditional quantitative approaches to detect within the complex, large data sets used in healthcare. These algorithms can provide partners with real-time insights through integration with platforms like dashboards [55,56,57]. To date, there are examples of dashboards being used to make such sustainability methods feasible for IS projects [58], but few examples of AI-enabled approaches in the field of IS. Such use of AI with dashboards can be particularly useful when rapid decisions are required or when partners need to identify and understand complex or subtle patterns in data over time.

AI’s predictive analytics could also be employed to simulate or forecast the sustainability of certain initiatives and estimate the long-term viability of a project or implementation strategy [59]. For example, if an intervention is implemented in a healthcare setting, AI could analyze data on adherence rates, participant feedback, and other relevant metrics to project the likelihood of its continued success. IS frameworks could guide the systematic assessment of the complex and multilevel contextual factors (e.g., culture, strategic priorities, burnout rates, turnover) that influence sustainability which could be categorized into themes and used as input or predictor variables within the AI model. Such predictive capabilities could allow for proactive and iterative adjustments throughout the life of a project to maximize sustainability. This approach could also assist in identifying the optimal allocation of limited resources by knowing in advance which areas might falter or by potentially supplanting the need for a costly or time-consuming trial that is predicted to be unsustainable. The use of AI to predict sustainability is a potential future direction for the field of IS.
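A minimal sketch of the forecasting idea above, assuming hand-picked weights over three contextual factors; a real model would learn such weights from longitudinal implementation data, and all site names, factors, weights, and thresholds here are hypothetical.

```python
# Hypothetical sketch: score predicted sustainability from contextual
# factors (burnout, turnover, leadership priority). Weights and the
# alert threshold are illustrative assumptions, not estimated values.
def sustainability_score(site):
    """Return a 0-1 score; higher = better predicted sustainability."""
    score = 1.0
    score -= 0.8 * site["burnout_rate"]         # proportion of staff burned out
    score -= 0.3 * site["annual_turnover"]      # proportion leaving per year
    score += 0.2 * site["leadership_priority"]  # 0 (low) to 1 (high)
    return max(0.0, min(1.0, score))

def flag_at_risk(sites, threshold=0.6):
    """Return names of sites predicted to need proactive adaptation."""
    return [name for name, s in sites.items()
            if sustainability_score(s) < threshold]

sites = {
    "clinic_a": {"burnout_rate": 0.10, "annual_turnover": 0.05,
                 "leadership_priority": 0.9},
    "clinic_b": {"burnout_rate": 0.60, "annual_turnover": 0.40,
                 "leadership_priority": 0.2},
}
at_risk = flag_at_risk(sites)
```

Flagged sites would receive proactive adaptation support before sustainability falters, rather than after.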

2A3. Equity as an IS challenge

IS aims to promote equity, but some equity-enhancing activities can be challenging within resource constraints. Resources are often unavailable to (1) dismantle language barriers to participation in partner engagement activities and implementation studies; (2) create culturally appropriate implementation strategies that address issues such as mistrust; or (3) capture data that represent the spectrum of perspectives beyond the usual, including those of persons who have historically been marginalized and experienced disparities [60, 61].

IS can benefit from integrating AI to promote equity amidst resource constraints. AI-driven translation tools render text in different languages and can capture the essence and nuances across dialects and regional variations. Further, speech-to-text systems can convert spoken language into written form, facilitating participation for those who might be literate in their native tongue but not in the primary language of a study. For immediate interactions, real-time AI-enhanced software interpretation allows non-native speakers to understand and contribute actively. AI chatbots, tailored using user data and historical contexts, can resonate with local customs and beliefs, offering a culturally attuned interaction [62, 63]. Additionally, these AI systems can be trained to transparently provide resources that resonate with targeted communities and can be employed for cultural awareness training, ensuring researchers approach communities with heightened sensitivity [62, 63]. In terms of data, AI algorithms can increase diverse representation by pulling from a range of sources, inclusive of historically marginalized voices that are often omitted from traditional datasets because the data are in unstructured formats and/or too large and complex for traditional analytic methods [9].

In Table 3, we present an example of proactively using AI to identify clinical trials for patients from historically underrepresented populations [37]. These AI tools can also be configured to detect and rectify inherent biases in datasets and present complex data visually, aiding in identifying and correcting disparities. For recruitment, AI’s ability to analyze complex population data from diverse sources means that underrepresented groups can be pinpointed for more inclusive outreach [64]. Data sources could include social media platforms, online community forums, or leverage crowdsourcing techniques. AI-driven tools such as voice assistants and adaptive interfaces could also be used to make research platforms more navigable for those with disabilities or language barriers [65]. Finally, AI’s feedback mechanisms enable real-time adjustments to implementation strategies based on participant input, and sentiment analysis tools can gauge the emotional underpinnings of this feedback, illuminating areas of potential mistrust or dissatisfaction [66]. In harnessing these AI capabilities, IS can promote equity more effectively, ensuring historically marginalized communities are actively engaged in research and its applications. The use of AI to promote equity remains largely untapped, with most examples outside the field of IS [60, 61, 64,65,66].
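The sentiment-screening idea above can be sketched with a small lexicon-based pass over partner feedback. The lexicon and comments are hypothetical; a fielded system would use a trained sentiment model with human adjudication of flagged items.

```python
# Hypothetical sketch: lexicon-based screen for mistrust in feedback.
# Terms and comments are invented for illustration.
MISTRUST_TERMS = ("do not trust", "skeptical", "not listened to", "wary")
POSITIVE_TERMS = ("helpful", "welcome", "respected")

def screen_feedback(comments):
    """Tag each comment as 'mistrust', 'positive', or 'neutral'."""
    tags = []
    for comment in comments:
        lowered = comment.lower()
        if any(term in lowered for term in MISTRUST_TERMS):
            tags.append("mistrust")
        elif any(term in lowered for term in POSITIVE_TERMS):
            tags.append("positive")
        else:
            tags.append("neutral")
    return tags

comments = [
    "Honestly, I am skeptical the clinic will act on our input.",
    "The navigators were helpful and respected our time.",
    "The session ran long.",
]
tags = screen_feedback(comments)
```

Comments tagged “mistrust” could trigger follow-up engagement with the affected partners in near real time.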

2A4. Generalizability as an IS challenge

Although IS prioritizes generalizability and transportability [67], limitations of data and the human resources needed to conduct partner and participant engagement and collect data can threaten generalizability. Generalizability decreases if the breadth of perspectives considered is limited when designing, implementing, or evaluating a study [68]. While AI’s role in easing resource demands through chatbots and NLP has been acknowledged, AI’s potential to enhance generalizability stretches beyond that. Because AI can sift through large amounts of complex data, it can incorporate insights from non-traditional sources. For example, social media platforms, with their user-generated content, can provide rich insights into public sentiment, behavior, and preferences, which increases the representation of perspectives and assists in generalizing findings across diverse populations [69]. Researchers in the United Kingdom applied AI to Twitter and Facebook forums to evaluate adverse reactions and understand public sentiment toward COVID-19 vaccination; they found that common and rare adverse effects were discussed with relatively equal frequency and that vaccine perceptions were largely positive over time (Table 3) [70]. Additionally, AI can use crowdsourcing to increase the representation of diverse perspectives [71, 72]. Crowdsourcing has the potential to capture diverse insights from global audiences. AI has been used to coordinate and process data from large, crowdsourced projects, ensuring that perspectives are drawn from a cross-section of diverse individuals [73]. This means that studies can encompass views from varied geographical locations, socio-economic statuses, and cultural backgrounds while operating within existing resource confines [74]. As with traditional data sources, the selection of a social media or crowdsourcing data source must be aligned with the target population or issue at hand to ensure relevance, and inherent data biases, including misinformation and missingness, must be considered.

Moreover, the dynamic nature of the world means that generalizability is not static. Populations evolve, cultures shift, and societal priorities change. Here, AI can assist by automating continuous assessments of generalizability. Similar to approaches described above related to sustainability, AI can monitor for changes that might impact the external relevance or transportability of study findings and provide alerts or updates when shifts are detected. Such iterative assessments, automated by AI, can be used to strategically guide adaptations such that the work and findings remain generalizable throughout all stages of a study and over time [57, 60, 61].
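One way to automate the continuous generalizability checks described above is to compare sample composition against target-population benchmarks and raise alerts when a characteristic drifts past a tolerance. The characteristics, benchmark values, and tolerance below are hypothetical.

```python
# Hypothetical sketch: flag sample characteristics that have drifted
# from target-population benchmarks beyond a set tolerance.
def generalizability_alerts(sample_means, target_means, tolerance=0.10):
    """Return, sorted, the characteristics whose sample mean differs
    from the target mean by more than the absolute tolerance."""
    return sorted(
        trait for trait in target_means
        if abs(sample_means.get(trait, 0.0) - target_means[trait]) > tolerance
    )

target = {"age_over_65": 0.30, "rural": 0.25, "medicaid": 0.20}  # assumed benchmarks
sample = {"age_over_65": 0.32, "rural": 0.05, "medicaid": 0.45}  # current enrollees
alerts = generalizability_alerts(sample, target)
```

Run on a schedule, such alerts could prompt recruitment adaptations before the sample diverges too far from the population of interest.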

2A5. Assessing context and context-outcome relationships as an IS challenge

Traditional IS approaches to assessing context and outcomes are often limited to the “stated” or simple interpretations of the “realized.” While the stated (explicit declarations) often come from qualitative methods like partner engagement sessions or surveys with limited samples, the realized is generally garnered from quantitative data necessitating a predefined hypothesis or signal [75]. Although emergent configurational analysis techniques delve deeper into intricate relationships between context and outcomes [76], IS and traditional quantitative approaches often fall short of capturing non-linear interactions. AI algorithms present new opportunities to address these challenges, which are often inherent in complex data. AI can assimilate large and complex data repositories to discern non-linear relationships and detect patterns among context, implementation strategies, and outcomes even without predefined signals [77,78,79]. One study leveraged AI and electronic health record data to understand reasons for gaps in clinician prescribing for a clinical scenario that had already been well studied using traditional mixed methods [75]. This study identified a variety of contextual determinants, including some previously unrecognized, that were used to inform the design of an ongoing IS trial (Table 3) [75]. The versatility of AI means these algorithms are not merely static tools; they can be optimized to operate continuously in the background, evolving with the data they encounter. This becomes particularly crucial in dynamic landscapes such as healthcare, where relationships between context and outcomes can change rapidly. As AI iteratively processes this information, it can provide a pulse on emerging shifts, ensuring that IS remains responsive and adaptive to changing context.
While there are limited examples of AI being used to assess context for IS studies [75], there are additional avenues in which to explore how AI can be leveraged to assess changes in context, strategies, and outcomes.
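A toy version of the pattern-detection idea above: stratifying fabricated records by context reveals that an implementation strategy helps only where leadership support is high, an interaction that a single pooled estimate would blur. All values are fabricated; real analyses would apply learned models (e.g., tree ensembles) to large datasets.

```python
# Hypothetical sketch: stratified means expose a context-strategy
# interaction hidden in a pooled estimate. Records are fabricated.
def mean(values):
    return sum(values) / len(values)

def stratified_effect(records):
    """Mean outcome for each (context, strategy) cell."""
    cells = {}
    for record in records:
        key = (record["context"], record["strategy"])
        cells.setdefault(key, []).append(record["outcome"])
    return {key: mean(vals) for key, vals in cells.items()}

records = (
    [{"context": "high_support", "strategy": "facilitation", "outcome": 0.8}] * 5
    + [{"context": "high_support", "strategy": "none", "outcome": 0.4}] * 5
    + [{"context": "low_support", "strategy": "facilitation", "outcome": 0.4}] * 5
    + [{"context": "low_support", "strategy": "none", "outcome": 0.4}] * 5
)
effects = stratified_effect(records)
# Facilitation improves outcomes only under high leadership support.
```

At scale, AI models would search for such interactions across many contextual variables at once, without the analyst prespecifying which strata matter.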

2A6. Assessing causality and mechanism as an IS challenge

In IS, ascertaining causality and mechanism is difficult. While traditional quantitative tests for causality could assist [80, 81], as could more qualitative approaches such as mechanism mapping [82], deciphering the direct causal connections, rather than mere associative links, between interventions, implementation strategies, and outcomes, is difficult due to the complex interplay of confounding variables in real-world settings. Modern AI-driven causal inference and discovery mechanisms offer a path forward for IS [83,84,85]. Leveraging structured graphical models, techniques such as causal Bayesian networks adeptly delineate explicit cause-and-effect relationships, duly accounting for latent confounders. Consider, for example, a healthcare scenario aimed at curtailing hospital readmissions. Whereas traditional analytical frameworks might predominantly identify an associative link between an intervention and reduced readmissions, AI causal tools probe deeper, scrutinizing whether the intervention itself was the direct catalyst or if obscured variables intervened. In Table 3, we provide a precision health example of using causal AI methods to generate treatment predictions for patients with dementia [39]. Further enriching the AI toolkit are counterfactual neural networks [86, 87], which could be used by IS practitioners to simulate hypothetical outcomes that would have occurred without specific interventions. Another notable advancement is AI's deployment in analyzing potential or simulated outcomes, which elucidates the individual treatment effect, thereby shedding light on the distinct impact of interventions or implementation strategies on specific demographic or clinical subgroups. Such AI-based simulation models have the potential to save unnecessary resource expenditure (time, money) if a trial is predicted to produce a null effect. 
Through this AI-driven lens, IS could better assess causality and mechanism with heightened precision, fostering the design and deployment of increasingly effective and equitable programs. The use of AI to assess causality and mechanism is a largely unexplored methodologic area for the field of IS.
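To illustrate why the causal tools above matter, the sketch below contrasts a naive treated-versus-control comparison with a stratified (“backdoor-adjusted”) one on fabricated readmission counts, where illness severity confounds who receives a discharge intervention. All counts are invented; real causal-AI workflows would use dedicated libraries and validated causal graphs.

```python
# Hypothetical sketch: confounding by severity makes a helpful discharge
# intervention look harmful until we adjust by stratification.
# Counts are fabricated: (readmitted, total) by severity stratum and arm.
data = {
    ("high", True):  (30, 100),  # severe patients, intervention
    ("high", False): (20, 40),   # severe patients, usual care
    ("low", True):   (2, 40),    # mild patients, intervention
    ("low", False):  (10, 100),  # mild patients, usual care
}

def naive_difference(counts):
    """Pooled readmission-rate difference, intervention minus control."""
    t_events = sum(v[0] for k, v in counts.items() if k[1])
    t_total = sum(v[1] for k, v in counts.items() if k[1])
    c_events = sum(v[0] for k, v in counts.items() if not k[1])
    c_total = sum(v[1] for k, v in counts.items() if not k[1])
    return t_events / t_total - c_events / c_total

def adjusted_difference(counts):
    """Severity-stratified difference, weighted by stratum size."""
    total, weighted = 0, 0.0
    for stratum in {k[0] for k in counts}:
        t_ev, t_n = counts[(stratum, True)]
        c_ev, c_n = counts[(stratum, False)]
        size = t_n + c_n
        weighted += size * (t_ev / t_n - c_ev / c_n)
        total += size
    return weighted / total
```

Here the naive difference is positive (the intervention appears harmful) while the adjusted difference is negative, recovering the benefit; the same logic underlies backdoor adjustment in causal Bayesian networks.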

2B. Potential consequences of using AI in IS

The consequences of using AI can be both positive and negative. Thus far we have focused on those positive consequences that can be anticipated, but there are likely others that are unanticipated. For example, it is not yet known what the true potential of AI is, and AI-generated innovations could create solutions with benefits we cannot begin to predict. However, when using AI, important considerations and potential adverse unintended consequences need to be monitored and minimized [25,26,27,28]. Here we highlight potential cautions of using AI with examples of how AI has caused harm or gone awry. In Table 2, we explicitly relate these AI concerns with the AI solutions proposed above to address IS challenges. Across all of these counterarguments, proactive vigilance is required to identify and mitigate issues at each stage of the AI lifecycle, which includes (1) data creation, (2) data acquisition, (3) model development, (4) model evaluation, and (5) model deployment [88]. We provide examples showing how AI can lead to erroneous conclusions, inequities, biases, or harmful behaviors.

Unmonitored AI applications (e.g., AI algorithms, chatbots, NLP) can lead to erroneous messages or results. AI is restricted to the available data inputs and subject to all the biases of the data collection process, often referred to as “garbage in, garbage out.” For example, it is known that clinician diagnoses are biased by gender and race [25, 26], and models using such data will capture these biases. AI’s sentiment analysis can also incorrectly interpret data or be influenced more by counts or frequencies than a manual human-only process would be, and such errors can significantly influence results [43]. These issues can be hidden or exacerbated when using “black box” AI models that result in an effect or outcome but do not allow for explainability of the processes that produced the effect [89].

In one study, an AI algorithm was applied to 14,199 patients with pneumonia across 78 hospitals to risk-stratify the probability of death [90]. The model indicated that patients with asthma were at lower risk than those without asthma. This finding contradicted existing evidence, prompting the researchers to investigate further. They discovered that the data inputs had biased this finding. Specifically, the data did not capture the fact that patients with both asthma and pneumonia were commonly admitted directly for treatment and thus had better outcomes than patients who had pneumonia without asthma.

AI also has the potential to exacerbate or create new inequities. Reliance on data that underrepresent the population or that are subject to inherent biases stemming from sexism, racism, classism, or mistrust leads to inaccurate predictions or evaluations and could perpetuate inequities or misguide decision-making [26,27,28]. Misguided decision-making can be particularly apparent when AI is used to inform recommendations for tools such as clinical decision support within electronic health records.

Authors of another paper provide a use case of AI algorithms that schedule medical appointments to improve efficiency, illustrating how such algorithms can yield racially biased outcomes [91]. Such algorithms consider many factors, including characteristics of patients who arrive late to appointments or “no show.” However, historically, Black patients have had a higher likelihood of “no shows”; thus, the algorithm scheduled these patients into less desirable appointment times.

AI is beholden to its data inputs. Beyond inherent biases in how data are collected, data inputs are also subject to data drift (e.g., temporal changes in how and where data are documented) and can lead to biased interpretations if sample sizes are not representative or sufficiently large [92]. Traditional IS data sources have limited sample sizes, and data drift is common in the rapidly changing environments where public health and healthcare happen.
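The drift concern can be sketched as a simple monitoring rule: compare a recent window of a documented measure against a baseline window and alert on a large mean shift. The values and threshold below are hypothetical; production monitoring would apply formal drift statistics to live data.

```python
# Hypothetical sketch: crude mean-shift check for data drift.
# Windows and the z-style threshold are illustrative assumptions.
from statistics import mean, stdev

def drifted(baseline, recent, z_threshold=3.0):
    """True if the recent mean departs from the baseline mean by more
    than z_threshold baseline standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    return abs(mean(recent) - mu) / sigma > z_threshold

baseline = [0.50, 0.52, 0.48, 0.51, 0.49, 0.50]  # stable documentation rate
stable = [0.49, 0.51, 0.50]
shifted = [0.80, 0.82, 0.79]  # e.g., after an EHR form change
```

An alert like this does not diagnose the cause of the shift; it signals that the model's inputs, and hence its outputs, warrant human review.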

In 2009 it was announced that by using AI applications and publicly available data from Google search engine queries for “flu-like symptoms,” researchers could predict regional flu trends 1-2 weeks earlier than the Centers for Disease Control and Prevention [93]. Later, it was discovered that the prediction was no longer accurate, in part because of changes to search engines that prompted or suggested certain search terms to users, which changed the data inputs [94].

AI can tailor messages or nudges for specific populations in ways that prompt and facilitate good decision-making [95, 96]. However, such AI applications can also inadvertently promote harmful behaviors. For example, AI has been leveraged to create tailored messages or nudges to increase consumer uptake of unhealthy food and beverages [97]. AI can learn and adapt its messaging over time, thus posing the potential for messages originally well-intended to encourage inadvertent harm. Other ethical implications of nudges include situations in which certain options are forbidden and autonomy in decision-making is impaired [98,99,100].

2C. Intellectual property rights of using AI for IS

Intellectual property issues present unique challenges and opportunities when AI is used in IS for design, data analysis, or reporting [101]. Questions arise about rights to the knowledge generated from AI models, especially when that knowledge stems from data whose ownership is unclear. For example, AI models could leverage data sourced from public datasets or collaborative efforts in which data ownership is ambiguous. Furthermore, as AI aids in creating or optimizing interventions, the boundaries between human-generated property and machine-augmented contributions can become blurred. It is imperative for researchers and practitioners to proactively navigate these complexities, ensuring that while AI propels IS forward, it does so in a manner that respects and delineates intellectual property rights and contributions.

If relying on existing AI applications, it is important to identify and understand any potential intellectual property rights, which could require fees for use or impose restrictions on how the AI can be used or disseminated. Conversely, if creating de novo AI applications, it may be prudent to establish intellectual property rights to enforce responsible use and avoid the potential consequences outlined above. Intellectual property rights apply to any invention, such as EHR-based tools and decision aids, but in the case of AI, their intentional use to promote responsible AI may be novel and important to consider. While there are clear implications of intellectual property rights for AI applications themselves, there is less clarity regarding the rights to AI-generated products [102, 103]. The latter is a new and developing area currently handled on a country-by-country basis. Although allowable under the laws of some countries, such as the UK, the US stance is that AI-generated products are not eligible for intellectual property protection [103]. The fundamental question underlying the US decision was “How can a thing (not a human) own property?” The inability to predict or anticipate AI-generated products, compounded by limited means of regulation, is cause for increased caution and monitoring.

Discussion and future directions

IS and AI can complement each other and have the potential to work together to increase the speed of sustainable and equitable knowledge generation and translation to enhance healthcare and population health. We have focused on how AI can help address specific bottlenecks faced by the field of IS, while others have called attention to ways in which IS can augment AI, including making AI more relevant to local settings, scalable, and sustainable [29,30,31]. In summary, key ways AI can help address IS-specific challenges include increasing the speed with which data can be collected, analyzed, and acted on; automating and reducing the workforce required to conduct partner engagement and other IS methods; expanding the size and heterogeneity of available data sources and participant recruitment; and providing access to new methods for discovering contextual influences and complex interactions among context, implementation strategies, and outcomes. Even focusing solely on AI’s potential to automate many of IS’s traditional methods and processes, AI provides a path for IS researchers and practitioners to work more rapidly and achieve goals of sustainability, equity, and generalizability.

AI presents new opportunities for IS, and many potential AI applications remain largely unexplored or untapped by the IS community. As AI use assuredly increases, it needs to be monitored and used responsibly to avoid unintended consequences, especially in the face of limited regulations on AI. It is also important to note that the benefits and pitfalls of AI may not apply equally to all types of AI; it is beyond the scope of this paper to address each separately, but the reader should keep in mind that there are differences based on the specific AI model and application. Among AI’s potential harms, inequities have received much attention and may be one of the most challenging to monitor and mitigate. Inequities can surface over time, and their root causes include biased data inputs and data that do not represent the full spectrum of cultures or perspectives. In this paper, we describe ways AI can optimize equity, but every use of AI also requires careful and ongoing vigilance for potential effects on equity. Other potential pitfalls of AI discussed above include inaccurate predictions, recommendations, or interpretations of data. Another key challenge is the limited reproducibility, or “brittleness,” of AI algorithms across settings and over time [104]. The regulations, frameworks, and guidance currently being developed for AI [105, 106] need to include policies and procedures for systematic and proactive monitoring of unintended consequences and careful consideration of “black box” models [89]. With the widespread uptake and examination of ChatGPT (and other AI online and related tools), there is increasing awareness of AI’s potential for errors [107,108,109]. The full potential of using AI to enhance IS is not yet known, nor is its potential for errors and harm, which makes the development of regulations even more challenging [101].

IS should take full advantage of AI’s benefits while being mindful of its pitfalls. To do so, a transdisciplinary team science approach is optimal. Team science certainly extends beyond partnerships between AI and IS, but we focus on these two fields here. Historically, the fields of AI and IS have collaborated little and have had different foci (e.g., heavily quantitative and causal versus mixed methods and pragmatic effectiveness). Now, as each becomes essential to the vision of precision public health and learning health systems [7, 110,111,112,113,114,115], the two fields are progressively realizing each other’s value. Given that both fields are rapidly evolving and it is hard to anticipate what comes next, close collaboration, or perhaps a new generation of cross-trained scientists, is needed. Such cross-trained scientists may be particularly adept at keeping pace with the latest discoveries related to AI’s potential and at monitoring for and mitigating unanticipated consequences. To foster this budding partnership and cross-training between IS and AI, accessible expertise and resources are important. In Table 4, we provide a select sample of resources and tools, especially relevant for IS, to facilitate the use of AI.

Table 4 Select resources and key references especially relevant for IS to learn about artificial intelligence


We call for increased uptake of innovations in AI through transdisciplinary collaboration to overcome challenges to IS methods and to enhance public health and healthcare, while remaining vigilant for potential unintended consequences. We acknowledge that our paper is “first generation” in that it is one of the first to describe intersections between AI and IS; in doing so, we hope to spark future debate, scholarship, and enhancement of the concepts we have introduced. In Table 5, we provide concrete summary recommendations for beginning to use AI responsibly and optimally, including building a representative team. Application of AI is complex and uncertain, but it has the potential to make IS more efficient and to facilitate more in-depth and iterative contextual assessment, which in turn can lead to more rapid, sustainable, equitable, and generalizable generation and translation of knowledge into real-world settings.

Table 5 Recommendations to responsibly and optimally use AI in implementation research

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analyzed.



Abbreviations

AI: Artificial intelligence

DL: Deep learning

D&I: Dissemination and implementation

EBI: Evidence-based intervention

IS: Implementation science

ML: Machine learning

NLP: Natural language processing

RL: Reinforcement learning

TMF: Theories, models, and frameworks


  1. Healthy People 2030 Framework - Healthy People 2030 | US Department of Health and Human Services. Available from: Cited 2023 Aug 19.

  2. U.S. Health Care from a Global Perspective, 2022 | Commonwealth Fund. Available from: Cited 2023 Aug 19.

  3. Khan S, Chambers D, Neta G. Revisiting time to translation: implementation of evidence-based practices (EBPs) in cancer control. Cancer Causes and Control. 2021;32:221.


  4. Grant J, Green L, Mason B. Basic research and health: a reassessment of the scientific basis for the support of biomedical science. Res Eval. 2003;12:217.


  5. Green L, Glasgow R. Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Eval Health Prof. 2006;29:126–53.


  6. Trinkley KE, Fort MP, McNeal D, Green LG, Huebschmann AG. Furthering dissemination and implementation research: the need for more attention to external validity. In: Brownson RC, Colditz G, Proctor EK, editors. Dissemination and implementation research in health: Translating science to practice. 3rd ed. New York: Oxford University Press; 2024.


  7. Trinkley KE, Ho PM, Glasgow RE, Huebschmann AG. How dissemination and implementation science can contribute to the advancement of learning health systems. Acad Med. 2022;97:1447–58. Available from: Cited 2022 Nov 6.


  8. Kwan BM, Brownson RC, Glasgow RE, Morrato EH, Luke DA. Designing for dissemination and sustainability to promote equitable impacts on health. Annu Rev Public Health. 2022;43. Available from: Cited 2022 Jan 19.

  9. Fort MP, Manson SM, Glasgow RE. Applying an equity lens to assess context and implementation in public health and health services research and practice using the PRISM Framework. Front Health Serv. 3:31.

  10. Brownson RC, Kumanyika SK, Kreuter MW, Haire-Joshu D. Implementation science should give higher priority to health equity. Implement Sci. 2021;16. Available from: Cited 2022 Jan 19.

  11. Shelton RC, Chambers DA, Glasgow RE. An extension of RE-AIM to enhance sustainability: addressing dynamic context and promoting health equity over time. Front Public Health. 2020;8. Available from: Cited 2022 Jan 3.

  12. Baumann AA, Shelton RC, Kumanyika S, Haire-Joshu D. Advancing healthcare equity through dissemination and implementation science. Health Serv Res. 2023;58:327.


  13. Klepac Pogrmilovic B, Linke S, Craike M. Blending an implementation science framework with principles of proportionate universalism to support physical activity promotion in primary healthcare while addressing health inequities. Health Res Policy Syst. 2021;19:6.


  14. Paniagua-Avila A, Shelton RC, Guzman AL, Gutierrez L, Galdamez DH, Ramirez JM, Rodriguez J, Irazola V, Ramirez-Zea M, Fort MP. Assessing the implementation of a multi-component hypertension program in a Guatemalan under-resourced dynamic context: An application of the RE-AIM/PRISM extension for sustainability and health equity. Res Sq [Preprint].–2362741.

  15. Fort MP, Reid M, Russell J, Santos CJ, Running Bear U, Begay RL, et al. Diabetes prevention and care capacity at Urban Indian Health Organizations. Front Public Health. 2021;9:740946.


  16. Szefler SJ, Cicutto L, Brewer SE, Gleason M, McFarlane A, DeCamp LR, et al. Applying dissemination and implementation research methods to translate a school-based asthma program. J Allergy Clin Immunol. 2022;150:535.


  17. Beidas RS, Dorsey S, Lewis CC, Lyon AR, Powell BJ, Purtle J, et al. Promises and pitfalls in implementation science from the perspective of US-based researchers: learning from a pre-mortem. Implement Sci. 2022;17:1–15. Available from: Cited 2022 Nov 6.


  18. An R, Shen J, Xiao Y. Applications of artificial intelligence to obesity research: scoping review of methodologies. J Med Internet Res. 2022;24. Available from: Cited 2023 Aug 20.

  19. Mukherjee J, Sharma R, Dutta P, Bhunia B. Artificial intelligence in healthcare: a mastery. Biotechnol Genet Eng Rev. 2023. Available from: Cited 2023 Sep 11.

  20. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6:94. Available from: Cited 2023 Sep 11.


  21. Yasmin F, Shah SMI, Naeem A, Shujauddin SM, Jabeen A, Kazmi S, et al. Artificial intelligence in the diagnosis and detection of heart failure: the past, present, and future. Rev Cardiovasc Med. 2021;22:1095.


  22. Wang D, Li J, Sun Y, Ding X, Zhang X, Liu S, et al. A machine learning model for accurate prediction of sepsis in ICU patients. Front Public Health. 2021;9:754348.


  23. Peng S, Huang J, Liu X, Deng J, Sun C, Tang J, et al. Interpretable machine learning for 28-day all-cause in-hospital mortality prediction in critically ill patients with heart failure combined with hypertension: a retrospective cohort study based on medical information mart for intensive care database-IV and eICU databases. Front Cardiovasc Med. 2022;9:99435.


  24. Aung YYM, Wong DCS, Ting DSW. The promise of artificial intelligence: a review of the opportunities and challenges of artificial intelligence in healthcare. Br Med Bull. 2021;139:4.


  25. Straw I. The automation of bias in medical Artificial Intelligence (AI): Decoding the past to create a better future. Artif Intell Med. 2020;110. Available from: Cited 2023 Aug 18.

  26. Zou J, Schiebinger L. AI can be sexist and racist - it’s time to make it fair. Nature. 2018;559:324–6. Available from: Cited 2023 Aug 18 .


  27. Feehan M, Owen LA, McKinnon IM, Deangelis MM. Artificial intelligence, heuristic biases, and the optimization of health outcomes: cautionary optimism. J Clin Med. 2021;10. Available from: Cited 2023 Aug 18.

  28. Cabitza F, Rasoini R, Gensini GF. Unintended consequences of machine learning in medicine. JAMA. 2017;318:517–8. Available from: Cited 2023 Aug 18.


  29. Hogg HDJ, Al-Zubaidy M, Keane PA, Hughes G, Beyer FR, Maniatopoulos G. Evaluating the translation of implementation science to clinical artificial intelligence: a bibliometric study of qualitative research. Front Health Serv. 2023;3:1161822. Available from: Cited 2023 Aug 19.


  30. Nilsen P, Svedberg P, Nygren J, Frideros M, Johansson J, Schueller S. Accelerating the impact of artificial intelligence in mental healthcare through implementation science. Implement Res Pract. 2022;3:263348952211120. Available from: Cited 2023 Aug 19.


  31. Nilsen P, Reed J, Nair M, Savage C, Macrae C, Barlow J, et al. Realizing the potential of artificial intelligence in healthcare: Learning from intervention, innovation, implementation and improvement sciences. Frontiers in health services. 2022;2. Available from: Cited 2023 Aug 19.

  32. How ChatGPT can improve racial disparities in healthcare. Available from: Cited 2023 Aug 20.

  33. Guetterman TC, Chang T, DeJonckheere M, Basu T, Scruggs E, Vinod Vydiswaran VG. Augmenting qualitative text analysis with natural language processing: methodological study. J Med Internet Res. 2018;20. Available from: Cited 2023 Aug 24.

  34. Waughtal J, Luong P, Sandy L, Chavez C, Ho PM, Bull S. Nudge me: tailoring text messages for prescription adherence through N-of-1 interviews. Transl Behav Med. 2021;11:1832–8. Available from: Cited 2023 Aug 24.


  35. Dicuonzo G, Donofrio F, Fusco A, Dell’Atti V. Big data and artificial intelligence for health system sustainability: The case of Veneto Region. Management Control, FrancoAngeli Editore. 2021;(suppl. 1):31–52.

  36. Mane HY, Doig AC, Gutierrez FXM, Jasczynski M, Yue X, Srikanth NP, et al. Practical guidance for the development of rosie, a health education question-and-answer chatbot for new mothers. J Public Health Manag Pract. 2023;29:663–70. Available from: Cited 2023 Aug 24.


  37. Zhang K, Demner-Fushman D. Automated classification of eligibility criteria in clinical trials to facilitate patient-trial matching for specific patient populations. J Am Med Inform Assoc. 2017;24:781–7. Available from: Cited 2023 Aug 24.


  38. Hussain Z, Sheikh Z, Tahir A, Dashtipour K, Gogate M, Sheikh A, et al. Artificial intelligence-enabled social media analysis for pharmacovigilance of COVID-19 vaccinations in the United Kingdom: observational study. JMIR Public Health Surveill. 2022;8:e32543.


  39. Sanchez P, Voisey JP, Xia T, Watson HI, O’Neil AQ, Tsaftaris SA. Causal machine learning for healthcare and precision medicine. R Soc Open Sci. 2022;9. Available from: Cited 2023 Sep 10.

  40. Proctor E, Ramsey AT, Saldana L, Maddox TM, Chambers DA, Brownson RC. FAST: a framework to assess speed of translation of health innovations to practice and policy. Glob Implement Res Appl. 2022;2:107–19. Available from: Cited 2023 Sep 9.


  41. Norton WE, Kennedy AE, Mittman BS, Parry G, Srinivasan S, Tonorezos E, et al. Advancing rapid cycle research in cancer care delivery: a National Cancer Institute workshop report. J Natl Cancer Inst. 2023;115:498–504. Cited 2023 Oct 16.


  42. AI Chatbots Are The New Job Interviewers. Available from: Cited 2023 Aug 20.

  43. Anis S, French JA. Efficient, explicatory, and equitable: why qualitative researchers should embrace AI, but cautiously. 2023;62:1139–44. Available from: Cited 2023 Oct 5.


  44. Bastiaansen JAJ, Veldhuizen EE, De Schepper K, Scheepers FE. Experiences of siblings of children with neurodevelopmental disorders: comparing qualitative analysis and machine learning to study narratives. Front Psychiatry. 2022;13. Available from: Cited 2023 Oct 5.

  45. Palacios G, Noreña A, Londero A. Assessing the heterogeneity of complaints related to tinnitus and hyperacusis from an unsupervised machine learning approach: an exploratory study. Audiol Neurootol. 2020;25. Available from: Cited 2023 Oct 5.

  46. Criss S, Nguyen TT, Michaels EK, Gee GC, Kiang MV, Nguyen QC, et al. Solidarity and strife after the Atlanta spa shootings: A mixed methods study characterizing Twitter discussions by qualitative analysis and machine learning. Front Public Health. 2023;11. Available from: Cited 2023 Oct 5.

  47. Towler L, Bondaronek P, Papakonstantinou T, Amlôt R, Chadborn T, Ainsworth B, et al. Applying machine-learning to rapidly analyse large qualitative text datasets to inform the COVID-19 pandemic response: Comparing human and machine-assisted topic analysis techniques. medRxiv. 2022;2022.05.12.22274993. Available from: Cited 2023 Oct 6.

  48. Palinkas LA, Mendon SJ, Hamilton AB. Innovations in Mixed Methods Evaluations. Annu Rev Public Health. 2019;40:423–42. Available from: Cited 2023 Aug 20.


  49. - Voice Meeting Notes & Real-time Transcription. Available from: Cited 2023 Aug 20.

  50. Singh B, Olds T, Brinsley J, Dumuid D, Virgara R, Matricciani L, et al. Systematic review and meta-analysis of the effectiveness of chatbots on lifestyle behaviours. NPJ Dig Med. 2023;6:1–10. Available from: Cited 2023 Aug 20.


  51. Chambers DA, Norton WE. The adaptome: advancing the science of intervention adaptation. Am J Prev Med. 2016;51:S124–31.


  52. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.


  53. Glasgow RE, Battaglia C, McCreight M, Ayele RA, Rabin BA. Making implementation science more rapid: use of the RE-AIM framework for mid-course adaptations across five health services research projects in the veterans health administration. Front Public Health. 2020;8:194.


  54. Maw AM, Morris MA, Glasgow RE, Barnard J, Ho PM, Ortiz-Lopez C, et al. Using Iterative RE-AIM to enhance hospitalist adoption of lung ultrasound in the management of patients with COVID-19: an implementation pilot study. Implement Sci Commun. 2022;3:89.


  55. Wong CK, Ho DTY, Tam AR, Zhou M, Lau YM, Tang MOY, et al. Artificial intelligence mobile health platform for early detection of COVID-19 in quarantine subjects using a wearable biosensor: protocol for a randomised controlled trial. BMJ Open. 2020;10:e038555.


  56. Yousefi S, Elze T, Pasquale LR, Saeedi O, Wang M, Shen LQ, et al. Monitoring Glaucomatous Functional Loss Using an Artificial Intelligence-Enabled Dashboard. Ophthalmology. 2020;127:1170–8.


  57. Tsai W-C, Liu C-F, Lin H-J, Hsu C-C, Ma Y-S, Chen C-J, et al. Design and Implementation of a Comprehensive AI Dashboard for Real-Time Prediction of Adverse Prognosis of ED Patients. Healthcare (Basel). 2022;10:1498.


  58. Maw AM, Morris MA, Glasgow RE, Barnard J, Ho PM, Ortiz-Lopez C, et al. Using Iterative RE-AIM to enhance hospitalist adoption of lung ultrasound in the management of patients with COVID-19: an implementation pilot study. Implement Sci Commun. 2022;3:89.


  59. Cho B, Geng E, Arvind V, Valliani AA, Tang JE, Schwartz J, et al. Understanding artificial intelligence and predictive analytics: a clinically focused review of machine learning techniques. JBJS Rev. 2022;10. Available from: Cited 2023 Oct 16.

  60. Shellhaas RA, Lemmon ME, Gosselin BN, Sturza J, Franck LS, Glass HC, et al. Toward equity in research participation: association of financial impact with in-person study participation. Pediatr Neurol. 2023;144:107–9.


  61. Washington V, Franklin JB, Huang ES, Mega JL, Abernethy AP. Diversity, equity, and inclusion in clinical research: a path toward precision health for everyone. Clin Pharmacol Ther. 2023;113:575–84.


  62. Adewole S, Gharavi E, Shpringer B, Bolger M, Sharma V, Yang SM, et al. Dialogue-based simulation for cultural awareness training. 2020.


  63. Mane HY, Channell Doig A, Marin Gutierrez FX, Jasczynski M, Yue X, Srikanth NP, et al. Practical guidance for the development of rosie, a health education question-and-answer chatbot for new mothers. J Public Health Manag Pract. 2023;29:663–70.


  64. de Graaf MMA, et al. Inclusive HRI: equity and diversity in design, application, methods, and community. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Sapporo, Japan; 2022. p. 1247–49.

  65. Lea C, Huang Z, Jain D, Tooley L, Liaghat Z, Thelapurath S, et al. Nonverbal sound detection for disordered speech. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Virtual and Singapore; 2022.

  66. Cambria E. Affective Computing and Sentiment Analysis. In IEEE Intelligent Systems. 2016;31(2):102–7.


  67. Geng EH, Mody A, Powell BJ. On-the-go adaptation of implementation approaches and strategies in health: emerging perspectives and research opportunities. Annu Rev Public Health. 2023;44:21–36. Available from: Cited 2023 Oct 16.


  68. Green L, Nasser M. Furthering dissemination and implementation research: the need for more attention to external validity. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York: Oxford University Press; 2018. p. 301–16.


  69. Korjian S, Gibson CM. Digital technologies and the democratization of clinical research: social media, wearables, and artificial intelligence. Contemp Clin Trials. 2022;117:106767.


  70. Hussain Z, Sheikh Z, Tahir A, Dashtipour K, Gogate M, Sheikh A, et al. Artificial intelligence-enabled social media analysis for pharmacovigilance of COVID-19 vaccinations in the United Kingdom: observational study. JMIR Public Health Surveill. 2022;8:e32543.


  71. Chen D. Open data: implications on privacy in healthcare research. Blockchain Healthc Today. 2020;3. Available from: Cited 2023 Oct 6.

  72. van der Scheer JW, Woodward M, Ansari A, Draycott T, Winter C, Martin G, et al. How to specify healthcare process improvements collaboratively using rapid, remote consensus-building: a framework and a case study of its application. BMC Med Res Methodol. 2021;21:103.


  73. Ho M, Bull S. Personalized patient data and behavioral nudges to improve adherence to chronic cardiovascular medications: results from the nudge study. Grand Rounds November 17, 2023: NIH Pragmatic Trials Collaboratory; 2023.

  74. Gimpel H, Graf-Seyfried V, Laubacher R, Meindl O. Towards artificial intelligence augmenting facilitation: AI affordances in macro-task crowdsourcing. Group Decis Negot. 2023;32:75–124.


  75. Kim R, Suresh K, Rosenberg MA, Tan MS, Malone DC, Allen LA, et al. A machine learning evaluation of patient characteristics associated with prescribing of guideline-directed medical therapy for heart failure. Front Cardiovasc Med. 2023;10:1169574.


  76. Thiem A. Conducting configurational comparative research with qualitative comparative analysis. Am J Eval. 2017;38:420–33.


  77. Lo-Ciganic W-H, Huang JL, Zhang HH, Weiss JC, Wu Y, Kwoh CK, et al. Evaluation of machine-learning algorithms for predicting opioid overdose risk among medicare beneficiaries with opioid prescriptions. JAMA Netw Open. 2019;2:e190968.


  78. Thottakkara P, Ozrazgat-Baslanti T, Hupf BB, Rashidi P, Pardalos P, Momcilovic P, et al. Application of machine learning techniques to high-dimensional clinical data to forecast postoperative complications. PLoS One. 2016;11:e0155705.


  79. Angraal S, Mortazavi BJ, Gupta A, Khera R, Ahmad T, Desai NR, et al. Machine learning prediction of mortality and hospitalization in heart failure with preserved ejection fraction. JACC Heart Fail. 2020;8:12–21.


  80. Glass TA, Goodman SN, Hernán MA, Samet JM. Causal inference in public health. Annu Rev Public Health. 2013;34:61.


  81. Hernán MA, Hsu J, Healy B. A second chance to get causal inference right: a classification of data science tasks. Chance. 2019;32:42.


  82. Geng EH, Baumann AA, Powell BJ. Mechanism mapping to advance research on implementation strategies. PLoS Med. 2022;19:e1003918.


  83. Xiong M. Artificial Intelligence and Causal Inference. 1st ed. New York: Chapman and Hall/CRC; 2022.

  84. Molak A. Causal inference and discovery in Python: unlock the secrets of modern causal machine learning... with dowhy, econml, pytorch and more. 2023. Available from: Cited 2023 Sep 11.

  85. Causal AI. Available from: Cited 2023 Sep 11.

  86. Jia Y, Kaul C, Lawton T, Murray-Smith R, Habli I. Prediction of weaning from mechanical ventilation using convolutional neural networks. Artif Intell Med. 2021;117:102087.


  87. Nguyen TM, Quinn TP, Nguyen T, Tran T. Explaining black box drug target prediction through model agnostic counterfactual samples. IEEE/ACM Trans Comput Biol Bioinform. 2023;20:1020.


  88. Ng MY, Kapur S, Blizinsky KD, Hernandez-Boussard T. The AI life cycle: a holistic approach to creating ethical AI for health decisions. Nat Med. 2022;28:2247–9. Available from: Cited 2023 Aug 18.


  89. Rossi P, Lipsey M, Henry G. Evaluation: a systematic approach. 8th ed. Thousand Oaks, CA: Sage Publications; 2018.


  90. Caruana R, Lou Y, Gehrke J, Koch P, Sturm M, Elhadad N. Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2015.


  91. Shanklin R, Samorani M, Harris S, Santoro MA. Ethical redress of racial inequities in AI: lessons from decoupling machine learning from optimization in medical appointment scheduling. Philos Technol. 2022;35. Available from: Cited 2023 Aug 18.

  92. Vela D, Sharp A, Zhang R, Nguyen T, Hoang A, Pianykh OS. Temporal quality degradation in AI models. Sci Rep. 2022;12. Available from: Cited 2023 Aug 20.

  93. Ginsberg J, Mohebbi MH, Patel RS, Brammer L, Smolinski MS, Brilliant L. Detecting influenza epidemics using search engine query data. Nature. 2009;457:1012–4. Available from: Cited 2023 Aug 18.


  94. Lazer D, Kennedy R, King G, Vespignani A. Big data. The parable of Google Flu: traps in big data analysis. Science. 2014;343:1203–5. Available from: Cited 2023 Aug 18.


  95. Sadasivam RS, Borglund EM, Adams R, Marlin BM, Houston TK. Impact of a collective intelligence tailored messaging system on smoking cessation: the perspect randomized experiment. J Med Internet Res. 2016;18:e285.


  96. Belli HM, Chokshi SK, Hegde R, Troxel AB, Blecker S, Testa PA, et al. Implementation of a Behavioral Economics Electronic Health Record (BE-EHR) module to reduce overtreatment of diabetes in older adults. J Gen Intern Med. 2020;35:3254.


  97. Brooks R, Nguyen D, Bhatti A, Allender S, Johnstone M, Lim CP, et al. Use of artificial intelligence to enable dark nudges by transnational food and beverage companies: analysis of company documents. Public Health Nutr. 2022;25:1291–9. Available from: Cited 2023 Aug 18.


  98. Soled D. Public health nudges: Weighing individual liberty and population health benefits. J Med Ethics. 2020. Available from: Cited 2021 Jan 29.

  99. Harrison JD, Patel MS. Medicine and society: Designing nudges for success in health care. AMA J Ethics. 2020;22:796–801. Available from: Cited 2021 Jan 29.


  100. Adkisson R. Nudge: Improving Decisions About Health, Wealth and Happiness, R.H. Thaler, C.R. Sunstein. Yale University Press, New Haven (2008), 293 pp. Soc Sci J. 2008;45:700–701.

  101. Minssen T, Vayena E, Cohen IG. The challenges for regulating medical use of ChatGPT and other large language models. JAMA. 2023;330:315–6. Available from: Cited 2023 Oct 5.


  102. Gervais D. Avoid patenting AI-generated inventions. Nature. 2023;622:31–31. Available from: Cited 2023 Oct 5.


  103. Abbott R. Allow patents on AI-generated inventions — for the good of science. Nature. 2023;620:699.


  104. Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019;17. Available from: Cited 2023 Aug 20.

  105. Gama F, Tyskbo D, Nygren J, Barlow J, Reed J, Svedberg P. Implementation frameworks for artificial intelligence translation into health care practice: scoping review. J Med Internet Res. 2022;24. Available from: Cited 2023 Aug 20.

  106. EU AI Act: first regulation on artificial intelligence | News | European Parliament. Available from: Cited 2023 Aug 20.

  107. Srivastav S, Chandrakar R, Gupta S, Babhulkar V, Agrawal S, Jaiswal A, et al. ChatGPT in radiology: the advantages and limitations of artificial intelligence for medical imaging diagnosis. Cureus. 2023;15:e41435. Available from: Cited 2023 Aug 20.


  108. Currie GM. Academic integrity and artificial intelligence: is ChatGPT hype, hero or heresy? Semin Nucl Med. 2023;53:719.


  109. Vaishya R, Misra A, Vaish A. ChatGPT: Is this version good for healthcare and research? Diabetes Metab Syndr. 2023;17:102744.


  110. Williams SM, Moore JH. Genetics and precision health: the ecological fallacy and artificial intelligence solutions. BioData Min. 2023;16. Available from: Cited 2023 Aug 20.

  111. Walsh CG, Chaudhry B, Dua P, Goodman KW, Kaplan B, Kavuluru R, et al. Stigma, biomarkers, and algorithmic bias: recommendations for precision behavioral health with artificial intelligence. JAMIA Open. 2020;3:9–15. Available from: Cited 2023 Aug 20.


  112. Cohoon TJ, Bhavnani SP. Toward precision health: applying artificial intelligence analytics to digital health biometric datasets. Per Med. 2020;17:307–16. Available from: Cited 2023 Aug 20.


  113. Keim-Malpass J, Moorman LP, Monfredi OJ, Clark MT, Bourque JM. Beyond prediction: Off-target uses of artificial intelligence-based predictive analytics in a learning health system. Learn Health Syst. 2022;7. Available from: Cited 2023 Aug 20.

  114. Atkins D, Makridis CA, Alterovitz G, Ramoni R, Clancy C. Developing and Implementing Predictive Models in a Learning Healthcare System: Traditional and Artificial Intelligence Approaches in the Veterans Health Administration. Annu Rev Biomed Data Sci. 2022;5:393–413. Available from: Cited 2023 Aug 20.


  115. Chambers DA, Feero WG, Khoury MJ. Convergence of implementation science, precision medicine, and the learning health care system: a new model for biomedical research. JAMA. 2016;315:1941–2. Available from: Cited 2021 Jun 17.


  116. Shane J. You look like a thing and I love you : how artificial intelligence works and why it’s making the world a weirder place. New York, NY: Voracious/Little, Brown and Company; 2019. Print.

  117. Chang T, DeJonckheere M, Vydiswaran VGV, Li J, Buis LR, Guetterman TC. Accelerating mixed methods research with natural language processing of big text data. 2021;15:398–412.


  118. Lennon RP, Fraleigh R, van Scoy LJ, Keshaviah A, Hu XC, Snyder BL, et al. Developing and testing an automated qualitative assistant (AQUA) to support qualitative analysis. Fam Med Community Health. 2021;9.



Acknowledgements

Not applicable.


Disclaimer

The findings and conclusions in this paper are those of the authors and do not necessarily represent the official positions of the National Institutes of Health or the Centers for Disease Control and Prevention.


Funding

This work was supported in part by the National Cancer Institute (numbers P50CA244688, P50CA244431), the National Institute of Diabetes and Digestive and Kidney Diseases at the National Institutes of Health (numbers P30DK092950, P30DK056341, R25DK123008), the Centers for Disease Control and Prevention (number U48DP006395), and the Foundation for Barnes-Jewish Hospital. Dr. Trinkley’s time was supported in part by the NHLBI (number 1K23HL161352).

Author information




Contributions

KET conceptualized the original ideas and wrote the first draft of the outline and paper. RA, AMM, REG, and RCB provided input on the original outline, contributed text to the draft manuscript, and provided intellectual content to the paper. All authors provided critical edits to the paper and approved the final version.

Corresponding author

Correspondence to Katy E. Trinkley.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

AMM is a consultant for Ultrasight. All other authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Trinkley, K.E., An, R., Maw, A.M. et al. Leveraging artificial intelligence to advance implementation science: potential opportunities and cautions. Implementation Sci 19, 17 (2024).
