Evidence-informed health policy 2 – Survey of organizations that support the use of research evidence

Abstract

Background

Previous surveys of organizations that support the development of evidence-informed health policies have focused on organizations that produce clinical practice guidelines (CPGs) or undertake health technology assessments (HTAs). Only rarely have surveys focused at least in part on units that directly support the use of research evidence in developing health policy on an international, national, and state or provincial level (i.e., government support units, or GSUs) that are in some way successful or innovative or that support the use of research evidence in low- and middle-income countries (LMICs).

Methods

We drew on many people and organizations around the world, including our project reference group, to generate a list of organizations to survey. We modified a questionnaire that had been developed originally by the Appraisal of Guidelines for Research and Evaluation (AGREE) collaboration and adapted one version of the questionnaire for organizations producing CPGs and HTAs, and another for GSUs. We sent the questionnaire by email to 176 organizations and followed up periodically with non-responders by email and telephone.

Results

We received completed questionnaires from 152 (86%) organizations. More than one-half of the organizations (and particularly HTA agencies) reported that examples from other countries were helpful in establishing their organization. A higher proportion of GSUs than CPG- or HTA-producing organizations involved target users in the selection of topics or the services undertaken. Most organizations have few (five or fewer) full-time equivalent (FTE) staff. More than four-fifths of organizations reported providing panels with or using systematic reviews. GSUs tended to use a wide variety of explicit valuation processes for the research evidence, but none with the frequency that organizations producing CPGs, HTAs, or both prioritized evidence by its quality. Between one-half and two-thirds of organizations do not collect data systematically about uptake, and roughly the same proportions do not systematically evaluate their usefulness or impact in other ways.

Conclusion

The findings from our survey, the most broadly based of its kind, extend or clarify the applicability of the messages arising from previous surveys and related documentary analyses, such as how the 'principles of evidence-based medicine dominate current guideline programs' and the importance of collaborating with other organizations. The survey also provides a description of the history, structure, processes, outputs, and perceived strengths and weaknesses of existing organizations from which those establishing or leading similar organizations can draw.

Background

Organizations that support the use of research evidence in developing health policy can do so in many ways. Some produce clinical practice guidelines (CPGs) or more generally guidance for clinicians and public health practitioners. Others undertake health technology assessments (HTAs) with a focus on informing managerial and policy decisions about purchasing, coverage, or reimbursement. Still others directly support the use of research evidence in developing health policy on an international, national, and state or provincial level (hereafter called government support units, or GSUs). As we argued in the introductory article in the series, a review of the experiences of such organizations, especially those based in low- and middle-income countries (LMICs) and that are in some way successful or innovative, can reduce the need to 'reinvent the wheel' and inform decisions about how best to organize support for evidence-informed health policy development processes, particularly in LMICs [1].

We focus here on describing the methods and findings from the first phase of a three-phase, multi-method study (Table 1) [2]. In this phase we surveyed a senior staff member (the director or his or her nominee) of CPG-producing organizations, HTA agencies, and GSUs about their history, structure, processes, outputs, and perceived strengths and weaknesses. Previous surveys of organizations that support the development of evidence-informed health policies have focused on organizations that produce CPGs [3–10], or undertake HTAs [11–14]. Only rarely have surveys focused at least in part on GSUs [15], or on organizations that are in some way successful or innovative [9], and to our knowledge surveys have never focused at least in part on organizations that support the use of research evidence in LMICs. In the following two articles in the series, we provide more detail about the methods and findings from the interview and case descriptions phases of the study [16, 17].

Table 1 Overview of the four-article series

Methods

We drew on many people and organizations around the world, including our project reference group, to generate a list of organizations to survey [2]. We modified a questionnaire that had been developed originally by the Appraisal of Guidelines for Research and Evaluation (AGREE) collaboration, adapted one version of the questionnaire for organizations producing CPGs and HTAs and another for GSUs, piloted both versions of the questionnaire, and made a small number of final modifications to both versions of the questionnaire. We sent the questionnaire by email to 176 organizations and followed up periodically with non-responders by email and telephone.

Study population

Eligible CPG-producing organizations, HTA agencies, and GSUs had to perform at least one of the following functions (or a closely related function): 1) produce systematic reviews, HTAs, or other types of syntheses of research evidence in response to requests from decision-makers (i.e., clinicians, health system managers, and public policymakers); 2) identify and contextualize research evidence in response to requests from decision-makers; and/or 3) plan, commission, or carry out evaluations of health policies in response to requests from decision-makers. The GSUs could include units located within a health system, government or international organization, units hosted within a university or other research-intensive organization, and independent units with a mandate to directly support evidence-informed health policy (including health care policy, public health policy, and healthy public policy). We excluded organizations that receive core funding from industry (e.g., pharmaceutical companies) or that only produce or provide health or healthcare utilization data.

While we included all eligible organizations from LMICs, for high-income countries we included: 1) established CPG-producing organizations that are members of the Guidelines International Network (GIN) and select other organizations that are known to produce CPGs in particularly innovative or successful ways; 2) established HTA agencies that are members of the International Network of Agencies for Health Technology Assessment (INAHTA) and select other HTA agencies that are known to produce HTAs in particularly innovative or successful ways; and 3) any units that directly support the use of research evidence in developing health policy. We drew on members of both formal and informal international networks to identify particularly innovative or successful CPG-producing organizations and HTA agencies and to identify GSUs. The formal networks included the Appraisal of Guidelines for Research and Evaluation (AGREE) collaboration, the Cochrane Collaboration, GIN, GRADE Working Group, International Clinical Epidemiology Network (INCLEN) Knowledge Management Program, and INAHTA. The informal networks included our project reference group, staff at WHO headquarters and regional offices, and personal networks.

Survey development and administration

We drew on a questionnaire developed and used by the AGREE collaboration [9], and we modified questions as necessary given our focus on LMICs. The questions covered seven domains: 1) organization; 2) why and how the organization was established; 3) focus; 4) people involved; 5) methodology employed; 6) products and implementation; and 7) evaluation and update procedures. We also included a final group of additional questions. About two-fifths of the questions were open-ended. Two of the questions were changed for the version of the questionnaire administered to GSUs; this questionnaire had 48 questions instead of 49. We piloted the questionnaire with three organizations in each category (and received responses from five organizations). See 'Additional file 1: Questionnaire – CPG & HTA' for the questionnaire for units producing CPGs or HTAs, and see 'Additional file 2: Questionnaire – GSU' for the questionnaire for units supporting health policy.

We sent the questionnaire by email to the director (or another appropriate person) of each eligible organization with three options for responding: by answering questions in the body of our email message and returning it; by answering questions in a Word version of our questionnaire attached to our email message and returning it; or by printing a PDF version of our questionnaire, completing it by hand, and mailing it. We sent three reminders if we did not receive a response (at roughly 2, 8 and 10 weeks after the original contact for most organizations, and at roughly 1, 2.5 and 4 weeks for the organizations for which we had difficulty tracking down contact information), each time offering to re-send the questionnaire upon request. We used additional mechanisms to increase the response rate, including an endorsement letter and personal contacts [18].

Data management and analysis

Quantitative data were entered manually and summarized using simple descriptive statistics. Written comments were grouped by question, and one member of the team (RM) identified themes using a constant comparative method of analysis. The findings were then independently reviewed by two members of the research team (AO and JL).

The principal investigator for the overall project (AO), who is based in Norway, confirmed that, in accordance with the country's act on ethics and integrity in research, this study did not require ethics approval from one of the country's four regional committees for medical and health research ethics. In keeping with usual conventions in survey research, we took the voluntary completion and return of the survey as indicating consent. We did not mention either treating participants' responses as confidential data or safeguarding participants' anonymity in our initial request to participate in the study or in the questionnaire itself. Nevertheless, we present only aggregated data and take care to ensure that no individuals or organizations can be identified. We shared a report on our findings with participants, and none of them requested any changes to how we present the data.

Results

We sent 176 questionnaires, and 152 (86%) completed questionnaires were returned. Ninety-five organizations produce CPGs, HTAs, or both and 57 units support government policymaking (i.e., are what we call GSUs) (Table 2). Twenty-nine organizations were identified through the GIN membership list, 26 through INAHTA, and 82 through personal contacts, including 49 of the 57 GSUs.

Table 2 Description of the units

Although we intentionally sought out organizations in LMICs, 56% (n = 85) were from high-income countries, 13% (n = 19) from upper middle-income countries, 24% (n = 36) from lower middle-income countries and 5% (n = 8) from low-income countries. Over one-half of the organizations (54%) that produced CPGs and HTAs were identified through GIN and INAHTA (51/95), and 68% (n = 65) were from high-income countries compared to 35% (20/57) of GSUs. Although we aimed to identify organizations throughout the world, the included organizations were not spread evenly across different regions. Sixty-seven percent (64/95) of the organizations that produce CPGs and HTAs were located in Western Europe (n = 40), North America (n = 17), or Australia and New Zealand (n = 7), compared with 33% of GSUs (19/57). We identified few organizations in Eastern Europe (n = 1), India (n = 2), the Middle East (n = 3) or China (n = 4) that met our inclusion criteria, and only three international organizations were included.

Quantitative results

Organization and establishment

A high proportion of organizations that produce CPGs, HTAs, or both also support government policymaking in other ways, whereas the reverse (GSUs producing CPGs or HTAs) was much less common (Table 3). Among the array of services undertaken in response to requests from public policymakers, GSUs are most likely to convene expert meetings to discuss available research (82%) and undertake short-term research projects (79%). Organizations that produce CPGs were often based in professional associations (45%) whereas organizations that produce HTAs, or both CPGs and HTAs, were often based in government agencies (63% and 49%, respectively). GSUs were also often based in academic institutions (37%) and government agencies (39%). HTA agencies were particularly likely to receive funding from government sources (95%), whereas the other types of organizations did not have such a commonly shared revenue source. More than one-half of the organizations (and particularly HTA agencies) reported that examples from other countries were helpful in establishing their organization.

Table 3 Organization and establishment

Age, budget and production profile

The organizations' ages, budgets, and production profiles varied dramatically (Table 4). The median age was 7 to 10 years depending on the type of organization; however, one organization had just begun directly supporting the use of research evidence in developing health policy and another had a 94-year history. The median annual budget was lowest for CPG-producing organizations and highest for HTA-producing organizations. The median number of CPGs or HTAs produced per year ranged from three to seven, and the median time spent to produce a CPG or HTA ranged from 10 to 15 months.

Table 4 Age, budget and production profile

Focus

Organizations producing CPGs were more often focused on health care (65–84%) than on public health (45%) or healthy public policy (26%), whereas GSUs were more focused on public health (88%) and to a lesser extent on primary healthcare (72%) and healthy public policy (67%) (Table 5). A high proportion of GSUs provided service on many facets of policy issues: characterizing problems (74%), identifying potential solutions (82%), fitting solutions into health systems (75%), and bringing about change in health systems (88%). Organizations producing CPGs were more focused on physicians (100%) and to a lesser extent other types of healthcare providers (77%) as their target users, whereas HTA agencies were more focused on health system managers (95%) and public policymakers (100%). GSUs were most focused on public policymakers in health departments, followed by public policymakers in central agencies (77%), stakeholders (79%), and public policymakers in other departments (63%). A higher proportion of GSUs involved target users in the selection of topics or the services undertaken than CPG- or HTA-producing organizations.

Table 5 Focus

People involved in producing a product or delivering a service

Most organizations have a small number of full-time equivalent (FTE) staff (Table 6). For example, more than one-half of organizations producing CPGs, HTAs, or both have between one and five FTE staff. More than one-half of all organizations always involved an expert in information/library science, and more than two-thirds of CPG- and HTA-producing organizations always involved an expert in clinical epidemiology. More than one-half of all HTA agencies also always involved a health economist and (only if necessary) involved experts in biostatistics, other types of social scientists, and a consumer representative. More than two-thirds of organizations producing CPGs or both CPGs and HTAs involve target users by inviting them to participate in the development group or to review the draft product. A higher proportion of GSUs than other types of organizations involve consumers in product development or service delivery. For example, 44% of GSUs invite consumers to participate in the development group and 54% survey their views/preferences. More than two-thirds of organizations producing CPGs consider geographic balance in expert or target user selection, but a lower proportion of other types of organizations use this criterion.

Table 6 People involved in producing a product or delivering a service

Methods used in producing a product or delivering a service

Organizations draw on a wide variety of types of information (Table 7). More than four-fifths (84 to 100%) of organizations reported providing panels with or using systematic reviews. Organizations producing CPGs, HTAs, or both tended to use an explicit valuation process for the research evidence (89 to 97% prioritized evidence by its quality), but used one less often for outcomes (52 to 61% prioritized outcomes by their importance to those affected), and still less often for groups (0 to 26% prioritized groups by their importance to achieving equity objectives). GSUs tended to use a wide variety of explicit valuation processes, but none with the frequency that organizations producing CPGs, HTAs, or both prioritized evidence by its quality. A higher proportion of organizations producing CPGs, HTAs, or both graded recommendations according to the quality of the evidence and/or the strength of the recommendation than used other methods to formulate recommendations. Roughly one-half of GSUs used each of subjective review, consensus, and grading to formulate recommendations. A higher proportion of organizations producing CPGs, HTAs, or both explicitly assessed the quality of evidence in formulating recommendations than explicitly assessed the trade-offs between benefits and harms, costs or equity. Almost one-half of GSUs explicitly assessed equity in formulating recommendations. A higher proportion of organizations used internal review or external review by experts than other review processes.

Table 7 Methods used in producing a product or delivering a service

Products and implementation

All or almost all organizations producing CPGs, HTAs, or both produced a full version of their final product with references, whereas only HTA agencies uniformly produced both the full version and an executive summary (Table 8). Less than one-half of all organizations provided a summary of take-home messages as part of their products. More than two-thirds of organizations producing CPGs, HTAs, or both posted to a website accessed by target users, and more than two-thirds of organizations producing HTAs or both CPGs and HTAs mailed or emailed products to target users. Only 14% of GSUs submitted products to any form of clearinghouse. More than one-half of organizations were involved in different strategies to develop the capacity of target users to acquire, assess, and use their products or services. Almost two-thirds of GSUs involved target users in an implementation group, whereas lower proportions of other types of organizations involved target users in implementation through this or another approach.

Table 8 Products and implementation

Evaluation and update procedures

Between one-half and two-thirds of organizations do not collect data systematically about uptake, and roughly the same proportions do not systematically evaluate their usefulness or impact in other ways (Table 9). A little over one-half (52%) of organizations producing CPGs update their products regularly whereas less than one-half (45%) update them irregularly. A higher proportion of other types of organizations update their products and services irregularly (49 to 63%) than regularly (11 to 37%).

Table 9 Evaluation and update procedures

Qualitative results

See 'Additional file 3: Qualitative data' for the qualitative data from the survey of organizations that support the use of research evidence.

Discussion

Principal findings from the survey

A high proportion of organizations that produce CPGs, HTAs, or both also support government policymaking in other ways, whereas the reverse (GSUs producing CPGs or HTAs) was much less common. More than one-half of the organizations (and particularly HTA agencies) reported that examples from other countries were helpful in establishing their organization. The organizations' ages, budgets and production profiles varied dramatically. A higher proportion of GSUs than CPG- or HTA-producing organizations involved target users in the selection of topics or the services undertaken. Most organizations have a small number of FTE staff (e.g., five or fewer FTEs for CPG- and HTA-producing organizations). More than one-half of all organizations always involved an expert in information/library science, and more than two-thirds of CPG- and HTA-producing organizations always involved an expert in clinical epidemiology. More than four-fifths of organizations reported providing panels with or using systematic reviews. GSUs tended to use a wide variety of explicit valuation processes for the research evidence, but none with the frequency that organizations producing CPGs, HTAs, or both prioritized evidence by its quality. Less than one-half of all organizations provided a summary of take-home messages as part of their products. Almost two-thirds of GSUs involved target users in an implementation group, whereas lower proportions of other types of organizations involved target users in implementation through this or another approach. Between one-half and two-thirds of organizations do not collect data systematically about uptake, and roughly the same proportions do not systematically evaluate their usefulness or impact in other ways.

For CPG- and HTA-producing organizations, specifically: 1) when they were being established, many conducted a focused review of one particular organization that they then emulated or a broad review of a variety of organizational models; 2) independence is by far the most commonly cited strength in how they are organized and a lack of resources, both financial and human, the most commonly cited weakness; 3) an evidence-based approach is the most commonly cited strength of the methods they use and their methods' time-consuming and labour-intensive nature the most commonly cited weakness; 4) the brand recognition that was perceived to flow from their evidence-based approach, and much less commonly from their strict conflict-of-interest guidelines, is the main strength of their outputs, while the most commonly cited weaknesses were the lack of dissemination and implementation strategies for the outputs and the lack of monitoring and evaluation of impact; 5) the individuals, groups, and organizations who have worked with them or who have benefited from their outputs are their strongest advocates, and the pharmaceutical industry and clinicians who are closely associated with them their strongest critics; and 6) a facilitating role in coordination efforts (in order to avoid duplication) and in local adaptation efforts (in order to enhance local applicability) are their most frequently offered suggestions for WHO and other international agencies and networks.

For GSUs: 1) focusing on the need for secure funding when establishing a GSU was their most commonly offered advice; 2) working within national networks and, more generally, collaborating rather than competing with other bodies, was a commonly cited strength in how these units are organized; 3) government health departments are their strongest advocates; and 4) helping to adapt global evidence to local contexts or at least supporting such processes are their most frequently offered suggestions for WHO and other international agencies and networks. No themes emerged with any consistency among the diverse weaknesses identified in how the units were organized, strengths and weaknesses identified in their methods and outputs, or critics cited.

Strengths and weaknesses of the survey

The survey has four main strengths: 1) we surveyed the directors of three types of organizations that support evidence-informed policymaking, not just the two types of organizations that are usually studied (i.e., we surveyed GSUs as well as CPG- and HTA-producing organizations); 2) we adapted a widely used questionnaire; 3) we drew on a regionally diverse project reference group to ensure that our draft protocol, study population, and questionnaire were fit for purpose; and 4) we achieved a high response rate (86%). The study has two main weaknesses: 1) despite significant efforts to identify organizations in LMICs, just over one-half (56%) of the organizations we surveyed were drawn from high-income countries; and 2) despite efforts to ask questions in neutral ways, many organizations may have been motivated by a desire to tell us what they thought we wanted to hear (i.e., there may be a social desirability bias in their responses).

What the survey adds

The findings from our survey, the most broadly based of its kind, both extend or clarify the applicability (to HTA agencies, GSUs or both) of the messages arising from previous surveys and related documentary analyses and add several new messages. First, our findings concur with the conclusion of the most recent and comprehensive survey of CPG-producing organizations that 'principles of evidence-based medicine dominate current guideline programs' [9], although this was less true of the broad variety of GSUs that we surveyed. Our findings also concur with their conclusion that 'international collaboration should be encouraged to improve guideline methodology and to globalize the collection and analysis of evidence needed for guideline development', which appears equally germane to the HTA agencies and the GSUs we surveyed. Second, our findings concur with the most recent survey of HTA agencies, which focused on the risks to HTA programs (grouped into the categories of formulation of HTA questions, preparation of HTA products, dissemination, and contracting) and the management of those risks, specifically the conclusions related to establishing strong links with policymakers and involving stakeholders in the work, using good methods and being transparent in the work, and being attentive to implementation considerations [14]. Third, our findings concur with the conclusions of several reviews of CPG and HTA documents, particularly the importance of using good methods and being transparent, collaborating with other organizations, and building capacity [19–24]. Our survey also provides a description of the history, structure, processes, outputs, and perceived strengths and weaknesses of GSUs as well as CPG- and HTA-producers, which can be a useful resource for those establishing or leading similar organizations in LMICs.

Implications for policymakers and for international organizations and networks

Both policymakers and international organizations and networks can play an important facilitating role in coordination efforts (in order to avoid duplication) and in local adaptation efforts (in order to enhance local applicability). They also have an important advocacy role to play in calling for coordination and local adaptation. International organizations and networks can play several additional facilitation roles, particularly in the areas of sharing robust methodologies and where necessary improving existing methodologies (which could include establishing a common framework for evaluations of their impact in order to facilitate cross-organization and cross-jurisdiction learning), collecting and analyzing 'global' research evidence and making it available as an input to 'local' processes, and engaging more organizations based in LMICs and providing training and support for their continued development. Select international organizations, such as the Alliance for Health Policy and Systems Research, may have a particular role to play in sponsoring the development of an international organization of GSUs, which can be difficult to identify, let alone support.

Implications for future research

The survey should be repeated in a few years on an augmented sample of organizations, including organizations that have self-identified as partners of the Alliance for Health Policy and Systems Research (many of which may be GSUs). Also, as suggested above, there is a need for improving some of the existing methodologies used by the organizations and for establishing a common framework for evaluations of their impact.

References

  1. 1.

    Lavis JN, Oxman AD, Moynihan R, Paulsen EJ: Evidence-informed health policy 1 – Synthesis of findings from a multi-method study of organizations that support the use of research evidence. Implementation Science. 2008, 3: 53-10.1186/1748-5908-3-53.

  2. 2.

    Moynihan R, Oxman AD, Lavis JN, Paulsen E: Evidence-Informed Health Policy: Using Research to Make Health Systems Healthier – Report from the Kunnskapssenteret (Norwegian Knowledge Centre for the Health Services), No. 1–2008. 2008, Oslo: Norwegian Knowledge Centre for the Health Services

  3. 3.

    Audet A, Greenfield S, Field M: Medical practice guidelines: Current activities and future directions. Annals of Internal Medicine. 1990, 113: 709-714.

  4. 4.

    McGlynn EA, Kosecoff J, Brook RH: Format and conduct of consensus development conferences. International Journal of Technology Assessment in Health Care. 1990, 6: 450-469.

  5. 5.

    Grol R, Eccles M, Maisonneuve H, Woolf S: Developing clinical practice guidelines: The European experience. Disease Management and Health Outcomes. 1998, 4: 355-366. 10.2165/00115677-199804050-00002.

  6. 6.

    Engelbrecht R, Courte-Wienecke S: A Survey on the Current State of Development, Dissemination and Implementation of Guidelines of Clinical Practice in European countries. 1999, Neuherberg: GSF – National Research Center for Environment and Health

  7. 7.

    Woolf SH, Grol RP, Hutchinson A, Eccles M, Grimshaw JM: Potential benefits, limitations, and harms of clinical guidelines. British Medical Journal. 1999, 318: 527-530.

  8. 8.

    The Appraisal of Guidelines, Research and Evaluation in Europe (AGREE) Collaborative Group: Guideline development in Europe: An international comparison. International Journal of Technology Assessment in Health Care. 2000, 16: 1039-1049. 10.1017/S0266462300103101.

  9. 9.

    Burgers JS, Grol R, Klazinga NS, Makela M, Zaat J, AGREE Collaboration: Towards evidence-based clinical practice: An international survey of 18 clinical guideline programs. International Journal for Quality in Health Care. 2003, 15: 31-45. 10.1093/intqhc/15.1.31.

  10. 10.

    Graham ID, Beardall S, Carter AO, Tetroe J, Davies B: The state of the science and art of practice guidelines development, dissemination and evaluation in Canada. Journal of Evaluation in Clinical Practice. 2003, 9: 195-202. 10.1046/j.1365-2753.2003.00385.x.

  11. 11.

    Perry S, Gardner E, Thamer M: The status of health technology assessment worldwide: Results of an international survey. International Journal of Technology Assessment in Health Care. 1997, 13: 81-98.

  12. 12.

    Perry S, Thamer M: Health technology assessment: Decentralized and fragmented in the US compared to other countries. Health Policy. 1997, 40: 177-198. 10.1016/S0168-8510(97)00897-X.

  13. 13.

    Sassi F: The European way to health technology assessment. Lessons from an evaluation of EUR-ASSESS. International Journal of Technology Assessment in Health Care. 2000, 16: 282-290. 10.1017/S0266462300161240.

  14. 14.

    Hastings J, Adams EJ: Joint project of the international network of agencies for health technology assessment–Part 1: Survey results on diffusion, assessment, and clinical use of positron emission tomography. International Journal of Technology Assessment in Health Care. 2006, 22: 143-148. 10.1017/S026646230605094X.

  15.

    Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J: How can research organizations more effectively transfer research knowledge to decision makers?. Milbank Quarterly. 2003, 81: 221-248. 10.1111/1468-0009.t01-1-00052.

  16.

    Lavis JN, Oxman AD, Moynihan R, Paulsen EJ: Evidence-informed health policy 3 – Interviews with the directors of organizations that support the use of research evidence. Implementation Science. 2008, 3: 55-10.1186/1748-5908-3-55.

  17.

    Lavis JN, Moynihan R, Oxman AD, Paulsen EJ: Evidence-informed health policy 4 – Case descriptions of organizations that support the use of research evidence. Implementation Science. 2008, 3: 56-10.1186/1748-5908-3-56.

  18.

    Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, Kwan I, Cooper R: Methods to increase response rates to postal questionnaires. Cochrane Database of Systematic Reviews. 2007, (2): MR000008.

  19.

    Shaneyfelt TM, Mayo-Smith MF, Rothwangl J: Are guidelines following guidelines? The methodological quality of clinical practice guidelines in the peer-reviewed medical literature. Journal of the American Medical Association. 1999, 281: 1900-1905. 10.1001/jama.281.20.1900.

  20.

    Grilli R, Magrini N, Penna A, Mura G, Liberati A: Practice guidelines developed by specialty societies: The need for a critical appraisal. Lancet. 2000, 355: 103-106. 10.1016/S0140-6736(99)02171-6.

  21.

    Menon D, Topfer L-A: Health technology assessment in Canada: A decade in review. International Journal of Technology Assessment in Health Care. 2000, 16: 896-902. 10.1017/S0266462300102168.

  22.

    Burgers JS, Cluzeau FA, Hanna SE, Hunt C, Grol R: Characteristics of high-quality guidelines: Evaluation of 86 clinical guidelines developed in ten European countries and Canada. International Journal of Technology Assessment in Health Care. 2003, 19: 148-157. 10.1017/S026646230300014X.

  23.

    Garcia-Altes A, Ondategui-Parra S, Neumann PJ: Cross-national comparison of technology assessment processes. International Journal of Technology Assessment in Health Care. 2004, 20: 300-310. 10.1017/S0266462304001126.

  24.

    Lehoux P, Tailliez S, Denis J-L, Hivon M: Redefining health technology assessment in Canada: Diversification of producers and contextualization of findings. International Journal of Technology Assessment in Health Care. 2004, 20: 325-336. 10.1017/S026646230400114X.

Acknowledgements

The study was funded by the Norwegian Knowledge Centre for the Health Services, Oslo, Norway. JL receives salary support as the Canada Research Chair in Knowledge Transfer and Exchange. These funders played no role in study design, in data collection, analysis and interpretation, in writing and revising the article or in the decision to submit the manuscript for publication.

We thank the members of the project reference group for their input: Atle Fretheim (Norway), Don de Savigny (Switzerland), Finn Borlum Kristensen (Denmark), Francisco Becerra Posada (Mexico), Jean Slutsky (USA), Jimmy Volmink (South Africa), Judith Whitworth (WHO ACHR), Marjukka Makela (Finland), Mary Ann Lansang (Philippines), Mike Kelly (United Kingdom), Peter Tugwell (Canada), Rodrigo Salinas (Chile), Sue Hill (WHO), Suwit Wibulpolprasert (Thailand), Suzanne Fletcher (United States), Tikki Pang (WHO), and Ulysses Panisset (WHO). We thank Jako Burgers (Netherlands), Mary Ann Lansang (Philippines), Nelson Sewankambo (Uganda), and Zulma Ortiz (Argentina) for providing a detailed review of the final report on which this article is based. We also thank the survey respondents for sharing their views and experiences with us.

Author information

Correspondence to John N Lavis.

Additional information

Competing interests

The authors declare that they have no financial competing interests. The study reported herein, which is the first phase of a larger three-phase study, is in turn part of a broader suite of projects undertaken to support the work of the WHO Advisory Committee on Health Research (ACHR). Both JL and AO are members of the ACHR. JL is also President of the ACHR for the Pan American Health Organization (WHO's regional office for the Americas). The Chair of the WHO ACHR, a member of the PAHO ACHR, and several WHO staff members were members of the project reference group and, as such, played an advisory role in study design. Two of these individuals provided feedback on the penultimate draft of the report on which the article is based. The authors had complete independence, however, in all final decisions about study design, in data collection, analysis and interpretation, in writing and revising the article, and in the decision to submit the manuscript for publication.

Authors' contributions

JL participated in the design of the study, participated in analyzing the qualitative data and deciding how to present the quantitative data, and drafted the article and the report on which it is based. AO conceived of the study, led its design and coordination, participated in analyzing the qualitative data, and contributed to drafting the article. RM participated in the design of the study, led the analysis of the qualitative data, and contributed to drafting the article. EP led the data collection for the study and led the analysis of the quantitative data. All authors read and approved the final manuscript.

Electronic supplementary material

Additional file 1: Questionnaire for units producing clinical practice guidelines or health technology assessments. This questionnaire is designed to be completed by units or departments that primarily produce clinical practice guidelines (CPGs), and/or produce health technology assessments (HTAs). (DOC 84 KB)

Additional file 2: Questionnaire for units supporting health policy. This questionnaire is designed to be completed by units or departments that primarily provide research evidence and other support for organisations or policymakers developing health policy. (DOC 84 KB)

Additional file 3: Qualitative data from the survey of organizations that support the use of research evidence. (DOC 85 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Keywords

  • Research Evidence
  • Target User
  • Health System Manager
  • Public Policymaker
  • Healthy Public Policy