
Table 5 Approaches for re-evaluating an adapted intervention

From: Adapting evidence-informed complex population health interventions for new contexts: a systematic review of guidance

Formative evaluation^a [47]
Rationale:
- Identify factors affecting intervention design, success, and sustainability (e.g. community resources, population characteristics)
- Inform adaptation
Specific methods:
- Formative research
- Input from stakeholders prior to or during adaptation

Pilot testing [31, 35, 45, 52, 58, 63]
Rationale:
- “Dress rehearsal” to inform revisions
- Identify difficulties with implementation and sources of non-fit
- Identify anticipated immediate outcomes
- Provide adaptation data for other researchers
- Assess satisfaction with and acceptability of the intervention
Specific methods:
- Process-oriented qualitative data collected through in-depth interviews and focus groups with key stakeholders
- Short-term/small-scale trials
- Assessment of engagement constructs and comparison with similar data from published studies

Process evaluation [30, 34, 38, 41, 45, 46, 47, 49, 66]
Rationale:
- Identify context-specific factors affecting intervention effectiveness in a new context (i.e. context-specific mediators and moderators)
- Document implementation and adaptation processes (e.g. activities implemented, how and with whom)
- Identify factors affecting intervention implementation
- Determine the intervention reach
- Determine acceptability of and satisfaction with the intervention
- Identify suggested improvements
- Determine the usefulness of the adapted interventions
- Document successes and barriers to inform future adaptations
Specific methods:
- Self-reported measures
- Qualitative methods (e.g. interviews, notes, site visits by intervention developers or the adaptation team, and case videotapes sent to intervention developers)
- Quantitative methods (e.g. weekly session ratings)
- Mixed-methods approaches

Fidelity assessment/monitoring [29, 41, 50, 54, 57, 58]
Rationale:
- Ensure true replication by assessing the degree of adherence to the intended intervention delivery (e.g. whether the core elements have been successfully implemented)
- Assess the adapter’s competence in delivering the intervention
- Ensure that intervention quality is maintained
Specific methods:
- A phased approach: assessment of process documentation forms, discussion with a group of developers, and refinement of the assessment through discussions with implementers
- Fidelity monitoring tool/checklist (e.g. using direct observations and ratings)
- Qualitative interviews
- Assessment of notes and client reports

Large-scale implementation evaluation [30]
Rationale:
- Assess impact on the mediating variables
- Make inferences about changes in distal outcomes
Specific methods:
- Assessment of proxy or indirect measures of the key RE-AIM^b components

Core component mediational analysis (also termed mechanisms evaluation) [35, 39, 58, 66]
Rationale:
- Determine which components of an intervention most influence intervention effectiveness
- Inform the need for further adaptations
- Inform the need for a larger-scale dissemination trial
Specific methods:
- Experimental dismantling designs (e.g. a three-arm effectiveness trial comparing (1) a minimally adapted version of the intervention, (2) a fully adapted version of the intervention, and (3) treatment as usual)

Outcome evaluation (also termed summative evaluation) [30, 34, 35, 38, 41, 46, 47, 49, 50, 52]
Rationale:
- Assess the effectiveness of interventions in new contexts/with new populations
- Verify achievement of expected outcomes (proxy, short-term, and distal outcomes)
- Inform future implementation and dissemination efforts
- Gather evidence on vulnerable populations underrepresented in clinical/efficacy research
Specific methods:
- Use of a control condition and random assignment
- Type 2 hybrid trial testing both effectiveness and implementation (baseline survey, process measures, and a follow-up assessment at least 3 months post-intervention)
- Small-scale or cluster randomised controlled trials (RCTs)
- Alternatives to RCTs that are more context-specific^c (e.g. propensity score matching, interrupted time series)
- Collection of data on community-level outcomes (e.g. social networks, resources, and community capacity levels)
- Pretest/posttest designs and comparison with the literature

Comparison evaluation [35]
Rationale:
- Assess the superiority of the adapted intervention over standard interventions
Specific methods:
- A large RCT comparing the adapted intervention with a standard intervention

Cost-benefit assessment [41, 66]
Rationale:
- Assess whether the extra costs of intervention adaptation are justified
- Support the case for the intervention adaptation to stakeholders
Specific methods:
- Cost-benefit analysis

a. Procedures conducted while the intervention is still forming (i.e. in progress)
b. The RE-AIM framework components include Reach, Effectiveness, Adoption, Implementation, and Maintenance
c. RCTs may not be feasible in community settings because researchers have less control over intervention delivery, use of usual-care control groups may be unethical, contamination may be an issue, and resistance to randomisation may be heightened in racial/ethnic minority communities