This study used a quasi-experimental design (pre/post with a comparison group) to examine the relationship between participation in the MT-DIRC training program and subsequent scholarly output, measured as D&I-related publications and US federal funding secured for research. Our main research question was whether applicants who participated in the program differed from nonparticipants in their likelihood of producing these research outputs.
Training program summary
The MT-DIRC program was an R25 mentored training program funded by the National Cancer Institute (PAR-12-049) to increase capacity for D&I research across the spectrum of cancer control (etiology to survivorship). A full description of the program, with preliminary results on skill building, has been published previously. Here, we provide a brief summary of the program.
Eligibility requirements included a doctoral degree and a full-time appointment in a research setting. Recruitment through various listservs and advertisements at D&I-related events and conferences focused mostly on early-career researchers or mid-to-late-career researchers seeking to shift the focus of their work to D&I research. To apply, researchers submitted an informational cover page, a concept paper for a D&I study to be developed during the program, a biosketch, and two letters of reference. Program faculty rated each application on several criteria, including overall application quality, commitment to D&I research, experience working in transdisciplinary networks, research support, likelihood of career development, appropriateness of the methods and topic in the concept paper, and potential impact of the proposed work.
Selected applicants participated in two Summer Institutes (5-day trainings) in St. Louis, MO, USA, each June. Trainings focused primarily on competencies for D&I research, with in-person mentoring and interactive sessions in which fellows worked on, and received feedback on, research proposals and/or other projects in progress. Ongoing evidence-informed mentoring, often in the form of regularly scheduled calls, continued for 2 years. Fostering collaboration among the program's network of fellows and mentors was a concerted focus of the program. All current and past fellows and mentors were invited to participate in quarterly webinars and to attend annual meet-up events at the D&I science conference in Washington, DC.
Data collection and processing
In total, 105 researchers applied to the program between 2014 and 2017. Three selected applicants later dropped out for various career or personal reasons and were excluded from this study. The total sample of program applicants included in this study was therefore 102: 55 who were selected into and participated in the MT-DIRC program ("fellows") and 47 unselected applicants ("nonfellows") who served as the comparison group. Demographic data were collected from each application.
For information on scholarly output, two main sources of publicly available data were used. To gather all published works, we used Scopus (www.scopus.com), a comprehensive citation database with over 75 million records. All applicants to the MT-DIRC program were searched in the Scopus database, and after verifying academic affiliation, their citations and accompanying abstracts were extracted through Scopus's BibTeX export tool in July 2019 and further processed in R using the "bibliometrix" package (2.3.2). A total of 5189 publication citations were extracted. Initially, 11% (N = 565) were missing abstracts. Upon closer examination, certain article types accounted for much of the missing abstracts and were excluded (errata = 30, notes = 208, letters = 129). For the remaining citations with missing abstracts (N = 258), a member of the research team searched each citation to confirm that the abstract was truly unavailable (e.g., some journals do not require abstracts) or to extract the abstract if found. An additional 208 abstracts were located, and the remaining citations without abstracts (N = 50) were excluded. The final dataset contained 4772 citations and included only citation ID number, applicant ID number, title, and abstract for further de-identified coding.
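As a minimal sketch of this processing step (the file name and document-type labels below are illustrative, not the exact values used in the study), a Scopus BibTeX export can be read and filtered in R as follows:

    library(bibliometrix)  # v2.3.2 was used in this study
    library(dplyr)

    # Convert the Scopus BibTeX export into a data frame of citation records
    M <- convert2df("scopus_export.bib", dbsource = "scopus", format = "bibtex")

    # Drop document types that rarely carry abstracts (errata, notes, letters),
    # then flag records still missing an abstract for manual follow-up
    M_clean <- M %>%
      filter(!DT %in% c("ERRATUM", "NOTE", "LETTER")) %>%
      mutate(missing_abstract = is.na(AB) | AB == "")

    # Keep only the fields needed for de-identified coding (TI = title, AB = abstract)
    coding_set <- M_clean %>% select(TI, AB)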
To gather grant funding, we used the National Institutes of Health's (NIH) Research Portfolio Online Reporting Tools (RePORTER Tool Manual, 2018). RePORTER is an electronic tool to search the US federal repository of intramural and extramural NIH-funded research projects dating back to 1985. The repository also includes projects funded by the Administration for Children and Families (ACF), the Agency for Healthcare Research and Quality (AHRQ), the Centers for Disease Control and Prevention (CDC), the Health Resources and Services Administration (HRSA), the Food and Drug Administration (FDA), and the Department of Veterans Affairs (VA). The R package "fedreporter" (0.2.1) was used to extract grant funding information, including abstracts, for all applicants from RePORTER's application programming interface (API) in September 2019. We included records where the applicant was listed as either a principal investigator or a coinvestigator. A total of N = 271 funding records were extracted. After keeping only the first fiscal year of duplicate entries (e.g., multi-year awards), the total sample of unique US federally funded projects was N = 97. Funding records were reduced to grant ID, applicant ID, title, and abstract for further de-identified coding.
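A minimal sketch of this extraction and deduplication step is below. The investigator name is hypothetical, and the pi_names argument and the shape of the parsed response are our assumptions based on the Federal RePORTER API's query fields, not a verified transcription of the fedreporter interface:

    library(fedreporter)  # v0.2.1 was used in this study
    library(dplyr)

    # Query Federal RePORTER for projects by a (hypothetical) investigator;
    # the argument name and response structure shown here are assumptions
    res <- fe_projects_search(pi_names = "DOE, JANE")
    projects <- res$content$items

    # Keep one record per award, retaining the first fiscal year of
    # multi-year entries (field names mirror the RePORTER schema)
    unique_projects <- projects %>%
      arrange(fy) %>%
      distinct(projectNumber, .keep_all = TRUE)

    # Reduce to the fields needed for de-identified coding
    coding_set <- unique_projects %>% select(projectNumber, title, abstract)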
For each publication and grant abstract, D&I focus was coded as "yes" or "no" following criteria similar to those of Baumann and colleagues. Abstracts coded "yes" included an "implementation-centered hypothesis, design, or framework, focused on assessing the implementation climate of an organization, or described the implementation processes for a particular intervention." For example, abstracts that explicitly examined implementation outcomes were coded as "yes," and those that focused only on intervention outcomes were coded as "no." In addition, all publications in the journal Implementation Science were coded as D&I "yes" based on the journal's inherent D&I focus and scope. Two project staff (RRJ, AG) coded the publication and grant abstracts. For publications, an initial random selection of citations (N = 20) was double coded by both staff and discussed to reach agreement on the inclusion and exclusion criteria before one coder (AG) single coded the remaining citations. A random sample of 10% was then double coded and showed 93% agreement. Because the number of grants was far smaller than the number of publications, all grant abstracts were double coded, with any discrepancies reconciled to reach 100% consensus.
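For the double-coded reliability check, percent agreement can be computed directly from the two coders' labels; a minimal sketch with hypothetical label vectors is:

    # D&I-focus codes ("yes"/"no") from two coders for the same abstracts
    coder1 <- c("yes", "no", "yes", "yes", "no")  # hypothetical labels
    coder2 <- c("yes", "no", "no",  "yes", "no")

    # Simple percent agreement (the study reports 93% on a 10% double-coded sample)
    agreement <- mean(coder1 == coder2)
    sprintf("Percent agreement: %.0f%%", 100 * agreement)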