Table 3 Challenges and limitations reported in the implementation of realist evaluation

From: Using realist evaluation to open the black box of knowledge translation: a state-of-the-art review

Thematic challenges

Details of challenges reported

Time and resource intensive

• Data collection placed demands on participant time and resources (feasibility issue); collection was revised so that survey administration, interviews and discussions were distributed to alleviate demand [36].

• Identification of an outcome that would demonstrate impact of a COP was difficult [36].

• Resource intensive. Only one cycle of data collection was possible; more than one would have allowed more refinement of CMO configurations and possible resolution of difficulties defining mechanisms [40].

• Refinement required flexibility and a continual, iterative process of checking back and forth between configurations and data. Ample time should be allowed for a process of discussion and debate [40].

• Flexibility is required to adapt to the needs of the teams within their specific contexts [42].

• Difficult to conduct this type of research within available resources [42].

• Resource and time constraints limit how long the intervention may run and how long is available for evaluation and analysis; the authors attempted to include three distinct evaluation points and an ongoing cycle of CMO evaluation and refinement [43].

• Uncertainty regarding the best time to begin evaluation and refinement of the conjectured CMOs; influenced by the nature of the intervention but also by the resources available to the project [43].

Lack of previous/existing information or evidence to inform the development of CMO configurations

• Existing evidence to inform the development of conjectured CMOs was scarce (particularly in team environments). Previous programs did not attempt to make their underlying theory explicit [33].

• Identification of an outcome was difficult given the lack of existing evidence [36].

• Development of initial configurations limited by the amount and quality of available evidence [40].

• Difficult to define 'mechanism' and sometimes to distinguish mechanisms from contextual factors. In addition, simultaneously functioning mechanisms were difficult to interpret; defining CMOs as clearly as possible, as early in the process as possible, might make refinement easier [40].

Defining contextual factors and/or mechanisms

• Difficulty in identifying potential mechanisms and outcomes; required lengthy discussions and many iterations [36].

• No clear steps to guide process; operationalization challenging, requiring trial and error [40].

• Other teams might not produce the same refinements; a clear and transparent audit trail was produced so that others may understand how the findings were reached [40].

Defining and assessing outcomes

• Increased and more complex demands for assessment; a means of measuring effects must be designed that is sensitive to team culture and values as well as service delivery (feasibility issue). Existing data collection will be used where possible [33].

• Results cannot be used to predict outcomes in the future [42].

• Difficult to identify adequate indicators of program effectiveness; may only be able to address how the program worked, not if it worked in an effective manner [43].