Table 1 Challenges and recommendations for delivery-system research

From: Methods and metrics challenges of delivery-system research

Issue: Modeling intervention context

Challenges:

• A delivery-system intervention may be mediated by a range of contextual features (e.g., human, sociocultural, and organizational factors) that can accentuate or attenuate its effect on patient care outcomes

Recommendations:

• Contextualize through detailed description and informed reflection on the role that context plays in influencing the meaning, variation, and relationships among the variables under study

• Use multilevel modeling to assess the influence of context on processes at lower levels (e.g., state, organization, team)

Example:

Brooks et al. [11] conducted a comparative study using qualitative and case-study data-collection methods, including semistructured interviews with key stakeholders and follow-up telephone interviews over a one-year period, to identify contextual influences inhibiting or promoting the acceptance and integration of innovations in mental health services in both National Health Service (NHS) and community settings.
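
The multilevel-modeling recommendation above can be sketched with a random-intercept model, where an organization-level random effect captures contextual clustering. This is a minimal illustration on synthetic data; the variable names (`outcome`, `intervention`, `org`) and effect sizes are assumptions for demonstration, not from the studies cited.

```python
# Minimal sketch: a mixed-effects (multilevel) model in which organizational
# context enters as a random intercept, so the intervention's fixed effect is
# estimated net of organization-level clustering. All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_orgs, n_patients = 40, 25
org = np.repeat(np.arange(n_orgs), n_patients)          # patients nested in organizations
org_effect = rng.normal(0, 1.0, n_orgs)[org]            # organization-level context
intervention = rng.integers(0, 2, n_orgs)[org]          # intervention assigned per organization
outcome = 1.5 * intervention + org_effect + rng.normal(0, 1.0, len(org))

df = pd.DataFrame({"org": org, "intervention": intervention, "outcome": outcome})

# Random intercept per organization; fixed effect = average intervention effect.
model = smf.mixedlm("outcome ~ intervention", df, groups=df["org"]).fit()
est = model.params["intervention"]   # should land near the true effect of 1.5
```

Ignoring the organizational level (e.g., ordinary least squares on pooled patients) would understate the uncertainty in `est`, which is one reason the row recommends multilevel modeling.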

Issue: Readiness for change

Challenges:

• Not all organizations or providers are willing or able to undertake change

• Capacity for change is easily confounded with readiness for change

Recommendations:

• Systematically assess readiness for change before evaluating delivery-system change

• Develop separate measures and assessments of readiness for change and capacity for change

Example:

Based on survey data from 249 drug treatment units, Fuller et al. [12] assessed four aspects of readiness for change (motivation for change, institutional resources, staff attributes that influence organizational change, and organizational climate) and found that units with higher levels of readiness for change had workers with more positive opinions about the use of evidence-based treatment.
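
The recommendation to measure readiness separately from capacity implies scoring each construct from its own survey subscales. The sketch below scores a readiness composite from the four aspects Fuller et al. assessed; the items, response scale (1-5 Likert), and equal weighting are illustrative assumptions, not the instrument actually used.

```python
# Minimal sketch: score a readiness-for-change composite from four subscales,
# kept separate from any capacity-for-change measure. Items are illustrative.
def subscale_mean(responses):
    """Mean of 1-5 Likert items for one subscale."""
    return sum(responses) / len(responses)

def readiness_score(motivation, resources, staff_attributes, climate):
    """Equal-weight average of the four readiness subscales."""
    subscales = [motivation, resources, staff_attributes, climate]
    return sum(subscale_mean(s) for s in subscales) / len(subscales)

unit = {
    "motivation": [4, 5, 4],
    "resources": [3, 3, 2],
    "staff_attributes": [4, 4, 5],
    "climate": [3, 4, 3],
}
score = readiness_score(**unit)   # composite on the same 1-5 scale
```

Keeping capacity in a separate function/score (rather than folding it into this composite) is what lets an evaluation test the two constructs' distinct contributions.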

Issue: Assessing intervention fidelity and sustainability

Challenges:

• A dynamic social context increases the risk that a delivery-system intervention will deviate from its intended form

• Cross-sectional designs and short study durations make long-term effects and changes difficult to determine

• Changes may experience entropy and/or revert to established routines and practices

Recommendations:

• Monitor implementation to assess the degree to which new structures and practices have been deployed

• Focus on group, organizational, or external factors rather than the more commonly studied individual attitudes

• Design for and measure the multiple factors that may influence intervention implementation (e.g., resources, prior experience with similar changes)

• Longitudinally assess key program elements

Example:

Orwin [13] used quarterly reporting forms, site visits, and bimonthly telephone calls to construct implementation histories of a substance abuse program. These histories were then compared with the logic model of the original, proposed program to assess whether interventions were implemented as planned. The study also used surveys of program participants to construct a fidelity measure assessing the degree to which services intended for all or most participants were in fact delivered.
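
A fidelity measure of the kind described for Orwin [13] can be computed as the share of intended core services each participant actually received, averaged across participants. The service names and records below are illustrative assumptions, not data from that study.

```python
# Minimal sketch: per-participant fidelity = intended services actually
# delivered / intended services; program fidelity = mean across participants.
# Service names and delivery records are illustrative.
intended_services = {"intake_assessment", "counseling", "case_management"}

participants = [
    {"intake_assessment", "counseling", "case_management"},  # all three delivered
    {"intake_assessment", "counseling"},                     # one missing
    {"intake_assessment"},                                   # two missing
]

def fidelity(delivered, intended):
    """Share of intended services actually delivered to one participant."""
    return len(delivered & intended) / len(intended)

program_fidelity = sum(fidelity(p, intended_services)
                       for p in participants) / len(participants)
# per-participant fidelity: 1.0, 2/3, 1/3 -> program mean 2/3
```

Tracking this measure longitudinally (e.g., per quarter) is what turns a one-time fidelity check into the sustainability assessment the row recommends.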

Issue: Assessing complex, multicomponent interventions

Challenges:

• Difficult to parse out the effects of individual intervention components and to determine whether some components matter more than others

• Measures of intervention effects are often assumed to be linear and additive

Recommendations:

• Complement traditional quantitative methods with qualitative methods (i.e., multimethod designs) to assess the dynamic, multifaceted aspects of complex delivery-system interventions

Example:

English et al. [14] used a multimethod design (interviews, group discussions, field notes, and detailed longitudinal quantitative data) to examine why an intervention intended to improve essential pediatric hospital services in Kenya did (or did not) produce its desired effects.

Issue: Incorporating time as an analytic variable in delivery-system research

Challenges:

• Short evaluation periods make long-term effects and changes difficult to determine

• Patients and organizations may experience different rates and directions of change over time

• The effects of an intervention may differ over time

Recommendations:

• Identify and longitudinally monitor key elements of the intervention

• Incorporate temporal aspects of the intervention-patient outcome relationship into conceptual and empirical models

• Identify temporal patterns in the data

• Directly assess time by including time-varying predictors

• Examine interactions among intervention variables and patient growth trajectories

Example:

Brekke et al. [15] used linear growth models to assess whether prospective client outcomes over a 36-month period followed different trajectories across three types of community-based psychosocial rehabilitation programs for individuals with chronic mental illness.
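
A linear growth model in the spirit of Brekke et al. [15] gives each client a random intercept and slope over time, with a program-by-time interaction testing whether average trajectories differ by program type. Everything below is synthetic and illustrative (client counts, wave spacing, effect sizes), not the original data or model specification.

```python
# Minimal sketch: linear growth model with client-level random intercepts and
# slopes; the program-by-time interaction tests for diverging trajectories
# across three program types. All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_clients, n_waves = 90, 7                      # e.g., semiannual assessments over 36 months
client = np.repeat(np.arange(n_clients), n_waves)
time = np.tile(np.arange(n_waves), n_clients)
program = (np.arange(n_clients) % 3)[client]    # three program types
true_slope = np.array([0.2, 0.5, 0.8])[program] # programs differ in growth rate
y = ((true_slope + rng.normal(0, 0.1, n_clients)[client]) * time  # client-specific slopes
     + rng.normal(0, 0.5, n_clients)[client]                      # client intercepts
     + rng.normal(0, 1.0, len(client)))                           # wave-level noise

df = pd.DataFrame({"client": client, "time": time,
                   "program": program, "outcome": y})

# re_formula="~time" adds a random slope, letting each client follow their own
# trajectory; time:C(program) asks whether average trajectories diverge.
growth = smf.mixedlm("outcome ~ time * C(program)", df,
                     groups=df["client"], re_formula="~time").fit()
```

The coefficient on `time:C(program)[T.2]` estimates how much faster program type 2 improves than the reference program, which is the trajectory comparison the row describes.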