Applying GRADE-CERQual to qualitative evidence synthesis findings–paper 7: understanding the potential impacts of dissemination bias
Implementation Science volume 13, Article number: 12 (2018)
The GRADE-CERQual (Confidence in Evidence from Reviews of Qualitative research) approach was developed by the GRADE (Grading of Recommendations Assessment, Development and Evaluation) Working Group to support the use of findings from qualitative evidence syntheses in decision-making, including guideline development and policy formulation.
CERQual includes four components for assessing how much confidence to place in findings from reviews of qualitative research (also referred to as qualitative evidence syntheses): (1) methodological limitations, (2) coherence, (3) adequacy of data and (4) relevance. This paper is part of a series providing guidance on how to apply CERQual and focuses on a probable fifth component, dissemination bias. Given its exploratory nature, we are not yet able to provide guidance on applying this potential component of the CERQual approach. Instead, we focus on how dissemination bias might be conceptualised in the context of qualitative research and the potential impact dissemination bias might have on an overall assessment of confidence in a review finding. We also set out a proposed research agenda in this area.
We developed this paper by gathering feedback from relevant research communities, searching MEDLINE and Web of Science to identify and characterise the existing literature discussing or assessing dissemination bias in qualitative research and its wider implications, developing consensus through project group meetings, and conducting an online survey of the extent, awareness and perceptions of dissemination bias in qualitative research.
We have defined dissemination bias in qualitative research as a systematic distortion of the phenomenon of interest due to selective dissemination of studies or individual study findings. Dissemination bias is important for qualitative evidence syntheses as the selective dissemination of qualitative studies and/or study findings may distort our understanding of the phenomena that these syntheses aim to explore and thereby undermine our confidence in these findings.
Dissemination bias has been extensively examined in the context of randomised controlled trials and systematic reviews of such studies. The effects of potential dissemination bias are formally considered, as publication bias, within the GRADE approach. However, the issue has received almost no attention in the context of qualitative research. Because of very limited understanding of dissemination bias and its potential impact on review findings in the context of qualitative evidence syntheses, this component is currently not included in the GRADE-CERQual approach.
Further research is needed to establish the extent and impacts of dissemination bias in qualitative research and the extent to which dissemination bias needs to be taken into account when we assess how much confidence we have in findings from qualitative evidence syntheses.
The GRADE-CERQual (Confidence in Evidence from Reviews of Qualitative research) approach has been developed by the GRADE (Grading of Recommendations Assessment, Development and Evaluation) Working Group. GRADE-CERQual (hereafter referred to as CERQual) currently includes four components for assessing how much confidence to place in findings from reviews of qualitative research (also referred to as qualitative evidence syntheses): (1) methodological limitations, (2) coherence, (3) adequacy of data and (4) relevance. This paper is part of a series (see Fig. 1) and discusses a probable fifth component, dissemination bias, and its potential impact on an overall assessment of confidence in a review finding.
The aims of this paper are to discuss a definition of dissemination bias in qualitative research and consider how and to what extent it might occur; to explain why dissemination bias may be important in relation to the process and findings of qualitative evidence syntheses; to discuss how dissemination bias might impact on assessments of confidence in findings from qualitative evidence syntheses; and to outline an agenda for future research. A complementary paper takes a broader look at dissemination bias in qualitative research and potential lessons from available evidence in the quantitative research arena to inform an understanding of the causes and consequences of dissemination bias in qualitative research. Key definitions for the series are provided in Additional file 1.
How CERQual was developed
This paper has been developed in collaboration with the GRADE Working Group (www.gradeworkinggroup.org). The initial stages of the process for developing CERQual, which started in 2010, are outlined elsewhere. Since then, we have further refined the current definitions of each component and the principles for application of the overall approach using a number of methods. We used a pragmatic approach to develop our ideas on dissemination bias by consulting the literature on this topic, including searching MEDLINE and Web of Science to identify and characterise the existing literature discussing or assessing dissemination bias in qualitative research and its wider implications (Nyakang’o SB, Booth A, Meerpohl JJ, Glenton C, Lewin S, Berg RC, Munthe-Kaas HM, Toews I, for the GRADE-CERQual DissQuS Subgroup: Describing non-dissemination and dissemination bias in qualitative research: a mapping review, In preparation); talking to experts in dissemination bias and qualitative evidence synthesis in a number of workshops; and developing consensus through multiple face-to-face CERQual Project Group meetings and teleconferences. We also undertook an online survey of researchers, journal editors and peer reviewers within the qualitative research domain on the extent, awareness and perceptions of dissemination bias in qualitative research. The methods used to develop CERQual are described in more detail in the first paper in this series.
Dissemination bias and qualitative research
Dissemination bias (which encompasses publication bias) has been studied and discussed extensively in the context of randomised controlled trials and other effectiveness studies. The impacts of dissemination bias on the findings of systematic reviews of the effects of interventions have also received considerable attention [5,6,7]. Acknowledging this empirical evidence base, the GRADE for effectiveness approach includes dissemination bias, under the label of ‘publication bias’, as one of the five domains considered when assessing the certainty of evidence, noting that, ‘Even if individual studies are perfectly designed and executed, syntheses of studies may provide biased estimates because systematic review authors or guideline developers fail to identify studies’ (p. 1278). Non-identification of studies may occur, for example, because effectiveness studies with negative findings have a lower chance of being disseminated than studies that report positive findings.
The issue of dissemination bias has received little attention in the context of qualitative research. This leaves a major gap in our understanding of how dissemination bias might impact on the findings of qualitative evidence syntheses and on assessments of confidence in these findings. Because of our limited understanding of the issue, dissemination bias is not currently included in the CERQual approach. This paper therefore differs from the others in this series in that we do not provide guidance on applying this potential component of the CERQual approach. Rather, we focus here on how dissemination bias in the context of qualitative research might be conceptualised and why it might be important to assess its potential impact within qualitative evidence syntheses. As discussed in the first paper in this series, we have adopted the ‘subtle realist’ position in our approach to qualitative evidence synthesis and the development of CERQual. Viewed from this perspective, the systematic omission of individual findings or whole studies, and the potential threat that this poses to both the richness and completeness of our understanding of a phenomenon, is a methodological challenge that we need to address rather than an insurmountable obstacle to qualitative evidence synthesis.
Some readers may be surprised by our use of the term ‘bias’ in the context of qualitative research. Indeed, this was the subject of considerable discussion within our group given the term’s association with the positivist paradigm. We would argue that ‘bias’ is sufficiently established within the qualitative, interpretivist paradigm to be a useable term in this context. In their text on qualitative methods, Bloor and Wood define bias as, ‘Any influence that distorts the results of a research study’. They go on to note that, ‘Bias may derive either from a conscious or unconscious tendency on the behalf of the researcher to collect data or interpret them in such a way as to produce erroneous conclusions that favour their own beliefs or commitments’ (p. 21). We use the term bias in a similar way, but rather than applying it to the conduct of qualitative studies, we focus on the dissemination of the findings of qualitative studies.
What is dissemination bias in the context of qualitative research?
We have defined dissemination bias in the context of qualitative research as ‘a systematic distortion of the phenomenon of interest due to selective dissemination of qualitative studies or the findings of qualitative studies’. There are several important elements in this definition: firstly, the term ‘phenomenon of interest’ refers to the issue that is the focus of qualitative inquiry. The phenomenon of interest may relate to an intervention, a condition/situation or an issue, and is often outlined in the question or scope underlying the primary qualitative study or qualitative evidence synthesis.
Secondly, we use the term ‘systematic distortion’ to indicate that we are concerned with a distortion of our understanding of the phenomenon of interest that occurs because certain groups of study findings are systematically less easily accessible or available (rather than study findings not being accessible or available in a random way). These groups of study findings may be less accessible or available in part or in their entirety. For instance, if studies with findings regarding a particularly sensitive aspect of the phenomenon are seldom submitted for publication, that aspect of the phenomenon will be poorly understood. As a consequence, our understanding of the phenomenon as a whole will be incomplete. Of course, the findings of many qualitative studies are never disseminated, or are only disseminated in part [3, 12]. While this is unethical and leads to research waste, it will not result in bias if the non-dissemination is random (and thus will not distort our understanding of the phenomenon of interest in a systematic or consistent way).
Another way of looking at this is that the importance of non-dissemination depends on the extent to which the study findings that have been disseminated regarding a phenomenon encompass the full range of findings from those studies. If the range of study findings disseminated is similar to all of the findings identified in the studies, systematic distortion is unlikely. However, if the findings that have been disseminated are consistently different from the full universe of findings that have been identified from primary research, systematic distortion of the phenomenon of interest is likely to occur.
Thirdly, we use the term ‘dissemination bias’ rather than ‘publication bias’ to acknowledge the wide range of ways to disseminate the findings of qualitative studies beyond publication in an indexed journal. ‘Publication’ is also increasingly difficult to define given the variety of electronic and alternative formats through which the findings of qualitative studies can be made available, such as institutional websites, registries of studies and book chapters. We are therefore more concerned with the non-availability or non-accessibility of qualitative study findings than with whether they have been formally published or not. If study findings are not disseminated in an accessible way, then dissemination bias might result. In addition, our definition of dissemination bias does not extend to differential ‘uptake’ of the findings of qualitative studies, which relates to the behaviours of users rather than to those of the evidence producers.
Fourth, we mention both qualitative studies and the findings of qualitative studies in our definition of dissemination bias. This is to indicate that we are concerned both with the selective dissemination of entire studies and of particular findings from studies. While the selective dissemination of entire studies is more widely discussed in the scientific literature, multiple factors may explain why the study findings themselves may be disseminated selectively. For example, particular study findings that are unpalatable to governments, research commissioners or research funders may not be disseminated (p. 22). Alternatively, researchers may be asked to earmark available space within their manuscript to study findings that are considered more newsworthy, by implication ‘truncating’, or even omitting, dissemination of other aspects of the phenomenon of interest (p. 22). Similarly, qualitative study findings that run counter to a mainstream understanding of a phenomenon, or ways of describing a phenomenon, may be removed from a paper on the request of the peer reviewers or editors and, consequently, may not be disseminated.
Our definition of dissemination bias is compatible both with recent broader work to develop a consistent and comprehensive approach to defining the non-dissemination of research findings and with the definitions of publication bias used by the GRADE for effectiveness approach and Cochrane (Table 1).
When might dissemination bias arise in the process of disseminating the findings of qualitative studies?
Decisions made at numerous points in the process of disseminating the findings of qualitative studies may lead to selective dissemination which may, in turn, result in dissemination bias. Table 2 illustrates some of the decision points, each of which could be unpacked in more detail through examining the contributing decisions. This table simply seeks to describe possible decisions impacting on dissemination without exploring underlying mechanisms or the contexts under which these decisions may be considered more or less appropriate. Decisions taken by the authors of primary studies or qualitative evidence syntheses also impact, for example, on which primary research is prioritised, how this research is conducted, which types of studies are included in qualitative evidence syntheses and which interpretations are favoured in the synthesis process. However, in the context of assessing how much confidence to place in the findings of qualitative evidence syntheses, we are primarily concerned with decisions made in the process of disseminating individual study findings. It is these decisions that may result in dissemination bias and, consequently, in systematic distortion of the phenomenon of interest.
Why might dissemination bias be important in a CERQual assessment?
In the CERQual approach, all review findings start off as ‘high confidence’ and this assessment is then modified if there are concerns regarding any of the CERQual components. This starting point of ‘high confidence’ reflects a view that each review finding should be seen as a reasonable representation of the phenomenon of interest unless concerns are identified to weaken this assumption. Dissemination bias is one concern that may weaken this assumption as the synthesis findings regarding the phenomenon of interest may be distorted if the findings from the group of available and included studies are systematically unrepresentative of the full body of research that has been conducted. As with the existing four CERQual components, the intention is not to exclude potentially valuable insights from studies on the basis of an individual CERQual component judgement or the overall CERQual assessment. Rather, the intention is simply to take into account considerations that impact on confidence in the review findings.
What is the extent of dissemination bias in qualitative research?
Empirical evidence on the extent of dissemination bias in qualitative research, and how it varies across the different fields in which qualitative research is undertaken, is very limited. To our knowledge, only one study has explored empirically the extent of non-dissemination of qualitative research. This study of a cohort of 224 abstracts examined publications emerging from qualitative studies presented at a medical sociology conference. It reported an overall publication rate of only 44.2%, a figure similar to that for quantitative biomedical research presented at conferences [19, 20]. The authors observed that non-publication appeared to be related to the quality of reporting, including whether the research question was outlined and the methods and findings described. This suggests a mechanism by which qualitative studies that do not show ‘clear, or striking, or easily described findings simply disappear from view’ (p. 552), with the implication that qualitative evidence syntheses that rely only on published papers may be subject to ‘qualitative publication bias’ (p. 552).
The GRADE-CERQual Project Group has conducted two research projects to widen our understanding of the nature and extent of non-dissemination and dissemination bias in this field. The first study, a mapping review, aims to identify and document the existing literature discussing dissemination bias and related effects in qualitative research. The second study, a cross-sectional survey, aimed to explore stakeholders’ views and experiences of, and reasons for, the non-dissemination of qualitative research studies and individual study findings. The survey findings suggest that the proportion of unpublished qualitative studies and individual findings is substantial and comparable to the extent of non-dissemination of studies using quantitative methods. Considerable further research is needed on both the extent of dissemination bias in qualitative research, including partial reporting of research findings, and the factors that affect this; we discuss this research agenda in more detail below and in a complementary paper.
When might one suspect that dissemination bias may be present?
At present, there is no methodological guidance available on how to assess the possibility and impacts of dissemination bias in the context of a qualitative evidence synthesis. Observations that may lead a review team to suspect dissemination bias include:
Evidence that primary research has been carried out in relation to the synthesis question (for example, evidence that studies have been funded or presented at conferences, the availability of a protocol, or details reported in the methods section of a study) but the full set of study findings are not available (for instance, as a journal article or report)
Findings from available studies reflect only a limited range of participants, settings, time periods, aspects of the phenomena of interest or conceptual or theoretical perspectives when it is likely that a wider range of contexts, time periods, phenomena or perspectives have been considered in research in the area
Findings are available in languages that are not accessible to the review team
Available studies all indicate strong formative input from funders of qualitative research, editors of journals publishing qualitative research or other stakeholders with particular interests in particular types of study findings
Differences in completeness or emphasis that are revealed when comparing findings published in a journal with a corresponding fuller account, such as a thesis or book chapter
Methods need to be developed for exploring whether the findings of a synthesis have been distorted systematically by dissemination bias, and this is a key focus for further research by the GRADE-CERQual Project Group. However, there are numerous reasons why it may be difficult to identify the effects of dissemination bias within qualitative evidence syntheses. Firstly, the contribution of an individual qualitative study to a particular interpretation cannot easily be discerned. Secondly, the occurrence of a finding from a single study in isolation is not in itself an indicator of the presence of bias as it may simply reflect a divergent or disconfirming case. Thirdly, unlike in quantitative research, procedures for estimating or projecting the total population of relevant studies have not yet been developed. Finally, reflexivity (that is, looking critically at the impacts of the review authors on all aspects of a synthesis) is usually encouraged within a frame of what has been included in a synthesis and not in terms of what may have been omitted.
A growing number of qualitative evidence syntheses are reporting consideration of the impacts of dissemination bias on the studies identified for the synthesis and, to some extent, on the review findings as a whole [25,26,27,28,29,30]. This is an important first step in relation to documenting possible dissemination bias and identifying examples of its potential impacts. In Table 3, we outline a research agenda for exploring dissemination bias in qualitative research.
Important goals of the GRADE-CERQual Project Group are to improve understanding of how dissemination bias might occur in qualitative research, of its likely impact on the degree of confidence that can be placed in the findings of qualitative evidence syntheses, and of whether and how to include dissemination bias as a fifth component of the CERQual approach. We also hope that an improved understanding of dissemination bias may, in the longer term, lead study authors, journal editors, peer reviewers and other stakeholders to devise strategies to minimise the impact of such biases.
Open peer review
Peer review reports for this article are available in Additional file 2.
Toews I, Booth A, Berg RC, Lewin S, Glenton C, Munthe-Kaas HM, Noyes J, Schroter S, Meerpohl JJ. Further exploration of dissemination bias in qualitative research required to facilitate assessment within qualitative evidence syntheses. J Clin Epidemiol. 2017;88:133–9.
Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gulmezoglu M, Noyes J, Booth A, Garside R, Rashidian A. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Med. 2015;12(10):e1001895.
Toews I, Glenton C, Lewin S, Berg RC, Noyes J, Booth A, Marusic A, Malicki M, Munthe-Kaas HM, Meerpohl JJ. Extent, awareness and perception of dissemination bias in qualitative research: an explorative survey. PLoS One. 2016;11(8):e0159290.
Lewin S, Booth A, Glenton C, Munthe-Kaas HM, Rashidian A, Wainwright M, Bohren MA, Tuncalp Ö, Colvin CJ, Garside R, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series. Implement Sci. 2018;13(Suppl 1): https://doi.org/10.1186/s13012-017-0688-3.
Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, Decullier E, Easterbrook PJ, Von Elm E, Gamble C, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One. 2008;3(8):e3081.
Dwan K, Altman DG, Clarke M, Gamble C, Higgins JP, Sterne JA, Williamson PR, Kirkham JJ. Evidence for the selective reporting of analyses and discrepancies in clinical trials: a systematic review of cohort studies of clinical trials. PLoS Med. 2014;11(6):e1001666.
Schmucker C, Schell LK, Portalupi S, Oeller P, Cabrera L, Bassler D, Schwarzer G, Scherer RW, Antes G, von Elm E, et al. Extent of non-publication in cohorts of studies approved by research ethics committees or included in trial registries. PLoS One. 2014;9(12):e114023.
Guyatt GH, Oxman AD, Montori V, Vist G, Kunz R, Brozek J, Alonso-Coello P, Djulbegovic B, Atkins D, Falck-Ytter Y, et al. GRADE guidelines: 5. Rating the quality of evidence––publication bias. J Clin Epidemiol. 2011;64(12):1277–82.
Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, Hing C, Kwok CS, Pang C, Harvey I. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess. 2010;14(8):iii, ix–xi, 1–193.
Hammersley M. What’s wrong with ethnography? ––methodological explorations. London: Routledge; 1992.
Bloor M, Wood F. Keywords in qualitative methods. London: Sage; 2006.
Petticrew M, Egan M, Thomson H, Hamilton V, Kunkler R, Roberts H. Publication bias in qualitative research: what becomes of qualitative research presented at conferences? J Epidemiol Community Health. 2008;62(6):552–4.
World Medical Association. Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. 2013. Available from: https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/. Accessed 20 Nov 2017.
Macleod MR, Michie S, Roberts I, Dirnagl U, Chalmers I, Ioannidis JP, Al-Shahi Salman R, Chan AW, Glasziou P. Biomedical research: increasing value, reducing waste. Lancet. 2014;383(9912):101–4.
Campbell R, Pound P, Morgan M, Daker-White G, Britten N, Pill R, Yardley L, Pope C, Donovan J. Evaluating meta-ethnography: systematic analysis and synthesis of qualitative research. Health Technol Assess. 2011;15(43):1–164.
Dickson-Swift V, James EL, Kippen S, Liamputtong P. Researching sensitive topics: qualitative research as emotion work. Qual Res. 2009;9(1):61–79.
Bassler D, Mueller KF, Briel M, Kleijnen J, Marusic A, Wager E, Antes G, von Elm E, Altman DG, Meerpohl JJ, et al. Bias in dissemination of clinical research findings: structured OPEN framework of what, who and why, based on literature review and expert consensus. BMJ Open. 2016;6(1):e010024.
Sterne JAC, Egger M, Moher D. Chapter 10: addressing reporting biases. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011). The Cochrane Collaboration; 2011. Available from http://www.cochrane-handbook.org/. Accessed 20 Nov 2017.
Scherer RW, Langenberg P, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database Syst Rev. 2007;2:MR000005.
Scherer RW, Ugarte-Gil C, Schmucker C, Meerpohl JJ. Authors report lack of time as main reason for unpublished research presented at biomedical conferences: a systematic review. J Clin Epidemiol. 2015;68(7):803–10.
Dixon-Woods M, Bonas S, Booth A, Jones DR, Miller T, Sutton AJ, Shaw RL, Smith JA, Young B. How can systematic reviews incorporate qualitative research? A critical perspective. Qual Res. 2006;6(1):27–44.
Booth A, Carroll C, Ilott I, Low LL, Cooper K. Desperately seeking dissonance: identifying the disconfirming case in qualitative evidence synthesis. Qual Health Res. 2013;23(1):126–41.
Atkins S, Lewin S, Smith H, Engel M, Fretheim A, Volmink J. Conducting a meta-ethnography of qualitative literature: lessons learnt. BMC Med Res Methodol. 2008;8:21.
Newton BJ, Rothlingova Z, Gutteridge R, LeMarchand K, Raphael JH. No room for reflexivity? Critical reflections following a systematic review of qualitative research. J Health Psychol. 2012;17(6):866–85.
Ferrer HB, Trotter C, Hickman M, Audrey S. Barriers and facilitators to HPV vaccination of young women in high-income countries: a qualitative systematic review and evidence synthesis. BMC Public Health. 2014;14:700.
Cosco TD, Prina AM, Perales J, Stephan BC, Brayne C. Lay perspectives of successful ageing: a systematic review and meta-ethnography. BMJ Open. 2013;3(6).
Fu Y, McNichol E, Marczewski K, Closs SJ. Patient-professional partnerships and chronic back pain self-management: a qualitative systematic review and synthesis. Health Soc Care Community. 2015.
Gandhi G. Charting the evolution of approaches employed by the Global Alliance for Vaccines and Immunizations (GAVI) to address inequities in access to immunization: a systematic qualitative review of GAVI policies, strategies and resource allocation mechanisms through an equity lens (1999–2014). BMC Public Health. 2015;15:1198.
Mills EJ, Montori VM, Ross CP, Shea B, Wilson K, Guyatt GH. Systematically reviewing qualitative studies complements survey design: an exploratory study of barriers to paediatric immunisations. J Clin Epidemiol. 2005;58(11):1101–8.
Satink T, Cup EH, Ilott I, Prins J, de Swart BJ, Nijhuis-van der Sanden MW. Patients’ views on the impact of stroke on their roles and self: a thematic synthesis of qualitative studies. Arch Phys Med Rehabil. 2013;94(6):1171–83.
Schunemann HJ, Brozek J, Guyatt G, Oxman AD, editors. Handbook for grading the quality of evidence and the strength of recommendations using the GRADE approach. 2013. Available at: http://gdt.guidelinedevelopment.org/app/handbook/handbook.html. Accessed 20 Nov 2017.
Our thanks for their feedback to those who participated in the GRADE-CERQual Project Group meetings in January 2014 or June 2015: Elie Akl, Heather Ames, Zhenggang Bai, Rigmor Berg, Jackie Chandler, Karen Daniels, Hans de Beer, Kenny Finlayson, Bela Ganatra, Susan Munabi-Babigumira, Andy Oxman, Tomas Pantoja, Vicky Pileggi, Kent Ranson, Rebecca Rees, Holger Schünemann, Elham Shakibazadeh, Birte Snilstveit, James Thomas, Hilary Thomson, Judith Thornton and Josh Vogel. Thanks also to Sarah Rosenbaum for developing the figures used in this series of papers; to Rebecca Rees and Sara Schroter for their contributions to discussions in this area; to Jackie Chandler, Signe Flottorp and Joe Tucker for their helpful comments on an earlier draft of this paper; and to members of the GRADE Working Group for their input.
The GRADE-CERQual Coordinating Team includes Meghan A. Bohren, UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction, WHO, Geneva, Switzerland; Benedicte Carlsen, Uni Research Rokkan Centre, Bergen, Norway; Christopher J. Colvin, Division of Social and Behavioural Sciences, School of Public Health and Family Medicine, University of Cape Town, Cape Town, South Africa; Ruth Garside, European Centre for Environment and Human Health, University of Exeter Medical School, Exeter, United Kingdom; Özge Tunçalp, UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction, WHO, Geneva, Switzerland; and Megan Wainwright, Division of Social and Behavioural Sciences, School of Public Health and Family Medicine, University of Cape Town, Cape Town, South Africa.
This work, including the publication charge for this article, was supported by funding from the Alliance for Health Policy and Systems Research, WHO (http://www.who.int/alliance-hpsr/en/). Additional funding was provided by the Department of Reproductive Health and Research, WHO (www.who.int/reproductivehealth/en/); Norad (Norwegian Agency for Development Cooperation: www.norad.no), the Research Council of Norway (www.forskningsradet.no); and the Cochrane Methods Innovation Fund. SL is supported by funding from the South African Medical Research Council (www.mrc.ac.za). The funders had no role in study design, data collection and analysis, preparation of the manuscript or the decision to publish.
Availability of materials
Additional materials are available on the GRADE-CERQual website (www.cerqual.org).
To join the CERQual Project Group and our mailing list, please visit our website: http://www.cerqual.org/contact/. Developments in CERQual are also made available via our Twitter feed: @CERQualNet.
About this supplement
This article has been published as part of Implementation Science Volume 13 Supplement 1, 2018: Applying GRADE-CERQual to Qualitative Evidence Synthesis Findings. The full contents of the supplement are available online at https://implementationscience.biomedcentral.com/articles/supplements/volume-13-supplement-1.
Ethics approval and consent to participate
Not applicable. This study did not undertake any formal data collection involving humans or animals.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Cite this article
Booth, A., Lewin, S., Glenton, C. et al. Applying GRADE-CERQual to qualitative evidence synthesis findings–paper 7: understanding the potential impacts of dissemination bias. Implementation Sci 13 (Suppl 1), 12 (2018). https://doi.org/10.1186/s13012-017-0694-5