This study identified 43 D&I research competencies, assigned each to one of three experience levels, and sorted them into four topical domains. Two domains (“Definition, Background, and Rationale” and “Theory and Approaches”) contained no “Advanced”-level competencies, and expertise levels were not distributed evenly across the four domains. The heavier emphasis on intermediate competencies may indicate that the field is still growing and that researchers remain unclear about what constitutes advanced-level D&I knowledge. Nevertheless, the arrangement of competencies by level gives training programs guidance on how to sequence content within a curriculum.
While the overall response rate of 41 % (n = 123) may seem low for this activity, the card sorting literature suggests that saturation for card sorts begins around n = 30 participants [23, 26]. More important is that participants have characteristics relevant to the intended use of the card sort’s outcome [26]. In this case, it was important to recruit participants who had previously been users of D&I training programs in order to generate robust results.
Card sorts have previously allowed researchers to identify the needs of particular groups when developing evidence-based trainings [27]; however, they had not been used in the context of D&I training curriculum development. Though Straus’s adaptation of Graham and colleagues’ Knowledge to Action (KTA) Cycle provides one basis for D&I training competencies, the KTA framework does not identify the specific content that should be covered in training programs [11, 12]. In future work, educators should focus on understanding how this card sort work and the KTA framework relate to each other.
Many of the participants in this study, from the initial list gathering through the final card sort activity, have been previous attendees (trainees or faculty) of D&I training programs. While previous development of training program curricula has drawn on an accumulation of experienced researchers’ knowledge [5, 6], our integration of attendees’ and trainees’ feedback helps provide insight into perceived training needs [18]. While the expertise of advanced-level D&I experts is critical, it is also important to gauge how these competencies are viewed by individuals at lower levels of D&I expertise. Since many of these trainees are newer to the field of D&I, their perception of the competencies may be critical to the overall success of how future training programs are framed [25]. By using ANOVA to assess differences between groups, we were also able to determine that there were no significant differences in the way that competencies were sorted based on self-reported expertise levels. As a result, we consider the ratings by individuals at lower levels of expertise just as useful as those from individuals considered more advanced within the field of D&I science [25].
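The group comparison described above can be sketched with a one-way ANOVA. The implementation and data below are purely illustrative (the study’s actual scores and software are not reproduced here); the sketch only shows how an F statistic compares variation between expertise groups to variation within them:

```python
# Minimal one-way ANOVA sketch. Group data are hypothetical, not the
# study's actual card sort scores.

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA across lists of scores."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    group_means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares: group means vs. grand mean
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    # Within-group sum of squares: scores vs. their own group mean
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical placement scores (1 = Beginner ... 3 = Advanced) for one
# competency, rated by respondents at three self-reported expertise levels:
beginner = [1, 2, 2, 3]
intermediate = [2, 2, 3, 2]
advanced = [2, 3, 2, 2]
f_stat = one_way_anova_f([beginner, intermediate, advanced])
```

A small F statistic (relative to the critical F for the degrees of freedom involved) is what supports the conclusion that sorting did not differ significantly by expertise level.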
Implications for training programs
The University of California, San Francisco has laid out a framework for a curriculum used in its concentration and certificate degree programs [9]. Because of the timing of their publication relative to the initiation of this project, their curriculum did not inform ours; nevertheless, there are many overlapping themes and concepts between the two competency lists [9]. In response to their call to expand the work they started, these competencies begin to address that expansion. To our knowledge, this list of D&I competencies is the first to be systematically developed by a wide audience of current researchers within the field.
Participants identified, based on their conceptualization of the competencies, where and when these particular skills should be addressed in the progression of a curriculum. While the majority of participants held terminal degrees, the card sort activity was presented in a way that allowed the competencies to be adapted for any level of learner. These competencies may equally inform the development of entry-level and mid-level courses likely to be offered in public health or other training programs. By assessing the learning level of each competency, priorities can be set on specific content depending on the target audience of the different types of training (e.g., masters-level students, doctoral-level students, post-doctoral trainees).
In MT-DIRC, we have adopted this set of competencies to guide the MT-DIRC training, and we are assessing how well trainees progress according to these competencies. Upon the inaugural cohort’s entry into the program, fellows were given a pre-assessment asking them to rate “How skilled do you currently feel in the following D&I competencies?” for each competency on a 5-point scale. The MT-DIRC fellows were given a follow-up post-assessment 6 months after attending their first institute and will be given another post-assessment at 18 months post-initial institute. We plan to conduct these same assessments with subsequent cohorts. We have used the assessment to gauge how the agenda of the institute should be structured to best fit the training needs of the fellows.
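The pre/post design above lends itself to a simple per-competency change score. The sketch below is hypothetical (the ratings and the scoring rule are illustrative assumptions, not MT-DIRC’s actual instrument or analysis):

```python
# Illustrative scoring of a pre/post competency self-assessment.
# Ratings are on the 5-point scale described in the text; values are made up.

def mean_change(pre, post):
    """Mean post-minus-pre rating change for one competency across fellows."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

pre_ratings = [2, 3, 2, 1]   # hypothetical baseline self-ratings
post_ratings = [3, 4, 3, 3]  # hypothetical 6-month follow-up self-ratings
delta = mean_change(pre_ratings, post_ratings)
```

Computing such a change score for every competency would flag the content areas where an institute agenda should concentrate.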
While previous programs have used publications, presentations, and grant submissions as metrics of program success [5], these do not allow fine-grained measurement of knowledge acquisition by trainees. Using competencies as an assessment tool also makes it possible to customize training programs based on trainees’ assessed needs prior to entry (via pre- and post-test assessments) and to track overall performance. Competency assessments still need to be developed and validated, but our contribution provides an initial starting point for this work.
Study limitations
The use of tertile rankings to establish cutoffs for the competencies may not be the ideal grouping method. However, given the nature of the card sort software and our inability to provide the in-person direction and feedback common to traditional card sorts [24], tertile groupings were our best alternative. This method did not account for the frequency distribution within each competency’s scores, hence the need for manual adjustments when a competency was weighted noticeably more heavily in one expertise level than another. Because the card sort activity was conducted virtually rather than in person, the team could not ask participants why they placed heavier emphasis on the “Intermediate” category. Previous card sorts have shown that participant feedback and a post-activity debrief session provide additional insight into participants’ thought processes and help identify mediating factors [24, 27].
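The tertile approach can be sketched as a rank-and-split over mean placement scores. Everything below is an illustrative assumption (competency names, scores, and the exact cut rule are invented), and it reproduces the limitation noted above: the split looks only at rank order, not at how responses were distributed within each score:

```python
# Hypothetical tertile grouping: rank competencies by mean placement score
# (1 = Beginner ... 3 = Advanced) and split the ranked list into three
# roughly equal groups. Names and scores are illustrative only.

def tertile_cutoffs(mean_scores):
    """Split competencies into three expertise tertiles by ranked mean score."""
    ranked = sorted(mean_scores.items(), key=lambda kv: kv[1])
    n = len(ranked)
    first, second = n // 3, 2 * n // 3   # cut points; remainder falls upward
    return {
        "Beginner": [name for name, _ in ranked[:first]],
        "Intermediate": [name for name, _ in ranked[first:second]],
        "Advanced": [name for name, _ in ranked[second:]],
    }

scores = {"C1": 1.2, "C2": 1.4, "C3": 1.9, "C4": 2.0, "C5": 2.4, "C6": 2.7}
groups = tertile_cutoffs(scores)
```

Because ties and skewed response distributions are invisible to a pure rank split, manual adjustment of borderline competencies (as described above) remains necessary.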
It is also possible that, because the field of D&I research is still relatively new, the concept of “Advanced” competencies may be unclear to the experts. The literature suggests that participants’ expertise levels affect how they categorize concepts [25]. Despite this evidence, we found that expertise level did not have an overall significant effect on how the competency statements were grouped. We did find an uneven distribution of self-identified expertise levels: “Advanced” participants were the smallest group (24 %) and “Intermediate” participants (47 %) the largest, which may be a byproduct of the newness of the field or of the sampling methods used for this activity.
Finally, the lack of participation by D&I researchers outside the USA poses a limitation. While our team attempted to obtain international participation in the card sort, only 14 % of respondents indicated they were from countries outside the USA. With many countries actively pursuing the field of D&I, a larger response from non-US respondents would have been useful, as they could provide insight into D&I training from other cultural and social perspectives. The fields of dissemination and implementation science are emerging along somewhat parallel paths in different countries, and a unified set of competencies and recommendations for training would ideally include expert guidance from across the globe. It is important to note that most D&I research occurs in higher-income countries, and there is a need to engage low- and middle-income countries to provide a more globally relevant view of competencies.
Next steps
The current results provide a foundation for future training programs; however, these competencies require further rigorous testing. Competency sets are not a rigid structure but rather a fluid compass [17]. As new technologies and methodologies are developed, training competencies should be reexamined periodically to keep pace with the ever-evolving social and research climate. Stakeholder and practitioner perspectives should also be incorporated into the further development and refinement of these competencies, since this study examined D&I competencies only from the perspective of researchers. Because the aim of D&I science is to better translate research into real-world practice, it is vital that we also explore the training of researchers from the perspective of a broad array of stakeholders [6].
These competencies also need to be tested on a more global scale. Given the long establishment of Knowledge Translation (KT) Canada’s Summer Institute, and the strong emphasis on KT and implementation science in countries such as the UK, the Netherlands, and Australia, it is imperative to analyze how these competencies measure up against those communities’ current perspectives on D&I skills [11]. As the research climate becomes more transnational, standards within training programs should aim to reach a wider audience, including low- and middle-income countries, to ensure that the competencies are relevant to this broader audience. This need appears to be growing: one of the largest US-based D&I training programs, TIDIRH, has seen its number of applicants from low- and middle-income countries more than double since its inception (from 5 in 2011 to 13 in the 2014 application cycle).