Table 4 An account, based on non-participant observation notes, of knowledge production during project meetings and the perceived influence of the operational researcher’s role in this process

From: Improving the production of applied health research findings: insights from a qualitative study of operational research

Non-participant observation of key project meetings and workshops provided insight into the practices through which knowledge was produced and the ways in which OR influenced this.

Physical context: Large conference room. Two parallel screens, lots of PCs [computers] on either side of the conference room. Chairs organised roughly in rows, like an auditorium. Some participants in rows, others sitting on one side. Informal, relatively quiet, measured responses and interactions. Two hours into the meeting, a tin of biscuits appears and is circulated. [Non-participant observation notes, November 2013]

Early meetings tended to focus on defining key terms and categories used in the study (e.g. grouping diagnoses for the purpose of the study). Participants asked questions, raised queries and made comments based on their “in the moment” reading of the information presented, drawing on their own clinical experience or other research data, or responding to contributions made by others during the meeting. In particular, imagined audiences for the study’s findings were periodically invoked (including parents and the media) in order to reflect critically on what the study might show and to improve its perceived quality and robustness (e.g. “might get criticised for that” and “last chance to prove how clever we are”). The to and fro of verbal contributions during the meeting either led the group towards consensus (e.g. the chair was able to state “main thing we have to do - done it, which is a miracle”) or towards recognition that further work was still needed to satisfy the needs of the project (e.g. one participant stated “not sure yet”, to which another responded, “we’re both not sure”).

The graphic or visual aspect of the information presented by the operational researcher appeared to be well received. For instance, during a lengthy, rather circular discussion, one participant referred back to a graphic presented earlier by the operational researcher in order to move the conversation on: “[she] gave beautiful graphic of defining index, shall we look at it again?”

The operational researcher contributed to the discussions by asking clarifying questions (e.g. “what do we mean by baseline anyway?”) or suggesting points of consensus (e.g. “[are we] saying all would benefit from similar interventions?”).

There appeared to be a political aspect to the ways in which the operational researcher contributed to the study. For instance, the operational researcher’s contributions were often aligned with the chair’s views or supportive of the chair (e.g. the operational researcher stated: “as [chair] said, she’s spoken with number of people on categorising”). Working closely with the chair appeared to help legitimise the operational researcher’s role in the study and, in particular, to gain favour for their approach to gathering and presenting data to inform the study’s findings.

The end-of-study workshop, which was led and facilitated by the operational researcher, highlighted the importance of soft skills (or what I might term “non-academic” skills) for leading participants through the translation of the evidence collected during the study into a set of recommendations. The observations suggested that such mobilisation of knowledge demanded leadership and facilitation skills, e.g. facilitating workshops, encouraging decision-making around the data, showing participants where they might take the findings in order to develop recommendations, and marshalling people in different ways, e.g. by assigning action points.

The operational researcher led the meeting from the start (as chair), taking on a leadership role in ensuring that the data collected during the study were translated into useful findings and tangible recommendations that could inform policy and practice. A key aspect of this was ensuring that outputs were data-driven (and not just based on participants’ own experiences). The operational researcher was able to move the participants on from discussing the data, to what it might mean for practice, and to how people in the health system might act on it. Enabling this type of discussion included pushing participants to reflect on the analysis of the data collected, e.g. in relation to the CART diagram, asking participants “are these patient groups recognisable to you?” It also seemed to require active facilitation in order to challenge participants’ perspectives at times. For example, where participants referred to their own experience in a particular heart centre and used this to challenge the analysis of the data presented, the operational researcher acknowledged and welcomed the contribution, but also asked for broader views on the evidence presented that went beyond personal experience. Leadership was also needed to encourage participants to take forward pieces of work outside the meeting, e.g. by being firm about assigning action points. [Reflections on end-of-study workshop, October 2014]