Predicting research use in a public health policy environment: results of a logistic regression analysis
Implementation Science volume 9, Article number: 142 (2014)
Use of research evidence in public health policy decision-making is affected by a range of contextual factors operating at the individual, organisational and external levels. Context-specific research is needed to target and tailor the design and implementation of research translation interventions, ensuring that the factors affecting research use in a specific context are addressed. Whilst such research is increasing, there remain relatively few studies that have quantitatively assessed the factors that predict research use in specific public health policy environments.
A quantitative survey was designed and implemented within two public health policy agencies in the Australian state of Victoria. Binary logistic regression analyses were conducted on survey data provided by 372 participants. Univariate logistic regression analyses of 49 factors revealed 26 that significantly predicted research use when tested independently. The 26 factors were then tested in a single model, and five emerged as significant predictors of research use over and above all other factors.
The five key factors that significantly predicted research use were the following: relevance of research to day-to-day decision-making, skills for research use, internal prompts for use of research, intention to use research within the next 12 months and the agency for which the individual worked.
These findings suggest that individual- and organisational-level factors are the critical factors to target in the design of interventions aiming to increase research use in this context. In particular, relevance of research and skills for research use are necessary targets: the likelihood of research use increased 11- and 4-fold, respectively, for those who rated highly on these factors. This study builds on previous research and contributes to the currently limited number of quantitative studies that examine use of research evidence in a large sample of public health policy and program decision-makers within a specific context. The survey used in this study is likely to be relevant for use in other public health policy contexts.
Internationally and in Australia, governments are investing in efforts to increase evidence-informed public health policy and program decision-making through the establishment of specific research centres and funding to support the translation of research. There is also an increasing push from government funders to require academics to demonstrate that investments in research deliver a positive impact for the community. Whilst this activity demonstrates commitment to driving increased use of research, significant challenges remain. Much is known about the factors that can act as barriers and facilitators to use of research by public health policy and program decision-makers. However, there remains a lack of evidence on how to effectively address these factors in intervention design.
Use of research evidence is context-dependent. The factors that affect research use in one context will not necessarily be the same factors affecting research use in another. Context-specific research on factors affecting research use is needed to inform the design and implementation of research translation interventions so that they can be targeted and tailored to the needs of a specific context. Targeting intervention design to the needs of a specific context is expected to enhance effectiveness. Context is affected by individual, organisational and external or societal-level factors. To develop a detailed understanding of context, there is a need to understand the factors that operate at these different levels.
A recent study demonstrated that individual factors can significantly affect contextual factors operating at the organisational level. These findings emphasise the importance of examining factors at the individual, organisational and external levels in studies seeking to understand research use in specific public health policy contexts. Studies of the use of research evidence in specific public health policy and program contexts are growing rapidly. However, most studies examining the use of research in public health policy environments to date have been qualitative in nature. Fewer studies have quantified the factors that affect research use in specific contexts in large samples of policy and program decision-makers; however, such research is growing.
This study sought to identify factors that predict use of research evidence in a specific public health policy and program decision-making context using quantitative analysis methods based on survey data.
The two government public health agencies studied, WorkSafe Victoria and the Transport Accident Commission (referred to from here on as Agency 1 and Agency 2, respectively), are responsible for workplace and transport injury prevention and rehabilitation compensation for the Australian state of Victoria, which had a population of over 5.5 million people as of September 2012. These organisations undertake prevention activity and provide financial compensation for the costs of workplace and transport injury and illness rehabilitation treatment and services. They have a very similar mandate but are structured and operate differently. In 2009, they partnered with Monash University to establish a research institute dedicated to workplace and transport injury prevention and rehabilitation compensation research.
There are currently no ‘gold standard’ validated surveys assessing the use of research evidence by government policy workers. To measure individual-level factors that affect research use and to quantify the type, extent and purpose of use of information, including research evidence, a quantitative survey was developed. A detailed description of the survey is provided in the paper Zardo and Collie, Use of information in a public health policy environment: Type, frequency and purpose, submitted 2014. A brief overview of the survey is outlined here.
Survey development was informed by and based on the following: existing literature regarding use of research in health policy decision-making environments, qualitative interviews with employees from Agencies 1 and 2, Michie et al.'s domain framework and questions from a survey developed by Campbell et al.
The survey was comprised of four main parts. Part 1 measured use of information types, frequency of use of information and purpose of use of information (Additional file 1). The information types asked about in the survey included the following: internal data and reports; policy, legislation and legal information; medical and clinical evidence; experience, expertise and advice; information collected online; and academic research evidence. This paper will only focus on the results regarding the use of research evidence. Academic research evidence was defined as follows: peer-reviewed journal articles, reports of academic/scientific research and academic conference abstracts and papers.
Part 2 of the survey was comprised of five questions focused on sources of academic research evidence and engagement in research exchange activities such as attending research forums and collaborating in research projects; the latter reflects Campbell et al.'s study. Part 3 was comprised of 16 questions related to factors that can affect the use of research. Questions were developed in relation to the domains of Michie et al.'s framework to ensure all domains relevant to attitudes and behaviour regarding research use were covered. There were also questions asking about barriers and facilitators to use of research. Part 4 covered demographic questions related to the individual and to their role within their organisation.
Questions for all parts of the survey were focused at the individual level. Questions were framed as follows: ‘How do you use…’, ‘What is your view…’ and ‘In your day-to-day work…’; the emphasis was on the individual's view and experience about their own behaviour, not that of others. All questions were closed-ended and were compulsory. Answers to all questions were self-reported; therefore, the responses are participants' perceptions and beliefs regarding their own information use.
A previous paper described results from Part 1 [Zardo and Collie. Use of information in a public health policy environment: Type, frequency and purpose, submitted 2014]. This study reports on analyses of data regarding factors affecting the use of research evidence collected in Part 2 and Part 3 of the survey.
A total of eight individuals piloted the survey. Piloting resulted in revision of the information-type definitions to provide more detail and greater clarity, and in a fuller description of the survey purpose in the survey introduction and invitation email.
Face validity of the survey was confirmed via feedback from those who piloted the survey and from the agencies' key contacts for this project, as well as via review of the survey by the authors. The survey also covers key concepts addressed in other quantitative surveys in this field, expands on these, and addresses key concepts outlined in the research translation literature. This suggests that content validity is present; however, further review by field experts is required.
To test the reliability of the survey, the split-half method was used, as a measure of internal consistency, to calculate Cronbach's alpha for 78 valid cases, representing 21% of all cases. Only ordinal items were tested; options for reliability testing of categorical survey items are limited. Cronbach's alpha for the first random split half was 0.61 and for the second 0.56. The Spearman-Brown coefficient for both equal and unequal halves was 0.52. The Guttman split-half coefficient, a conservative measure of split-half reliability, was 0.51. Streiner and others argue that for a survey in an early stage of development, a Cronbach's alpha of 0.50 to 0.60 is acceptable. As there are no ‘gold standard’ surveys published in this field, and as this is the first version of this survey developed and tested, these are positive results. Further analyses and revisions will be undertaken on this survey prior to further use.
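For readers unfamiliar with these reliability statistics, the calculations can be sketched as follows. This is a minimal illustration using only the Python standard library; the 78-case sample size mirrors the text, but the eight items and their ordinal scores are synthetic inventions, not the survey's data.

```python
import random
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(statistics.pvariance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var_sum / statistics.pvariance(totals))

def pearson(xs, ys):
    """Pearson correlation between two score lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def spearman_brown(half_a, half_b):
    """Spearman-Brown prophecy: full-test reliability estimated from the
    correlation between the two half-test totals."""
    r = pearson(half_a, half_b)
    return 2 * r / (1 + r)

# Synthetic ordinal responses (1-5) for 78 cases on 8 invented items,
# sharing a latent component so that the items correlate.
random.seed(0)
cases = []
for _ in range(78):
    latent = random.gauss(3, 1)
    cases.append([min(5, max(1, round(latent + random.gauss(0, 1))))
                  for _ in range(8)])

items = [list(col) for col in zip(*cases)]   # transpose to item columns
alpha = cronbach_alpha(items)

# Split-half reliability: totals over two disjoint halves of the items.
half_a = [sum(row[:4]) for row in cases]
half_b = [sum(row[4:]) for row in cases]
sb = spearman_brown(half_a, half_b)
print(f"alpha = {alpha:.2f}, Spearman-Brown = {sb:.2f}")
```

In practice these statistics are computed by SPSS (as in the study); the sketch only makes explicit what the split-half procedure measures.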
Selection of participants
Potential participants were identified by a review of organisational charts provided by project key contacts. Areas involved in the development, implementation or evaluation of policies, programs and projects were selected and confirmed by key contacts. All employees from the selected areas (N = 1,278) were included in the participant pool.
Potential participants were invited to participate via emails sent by the heads of the selected areas on behalf of the research team. Participants were offered a prize for completing the survey; all participants who fully completed the survey entered the draw.
Invitations were emailed between 11 and 23 November 2012. Agency 2 sent reminder emails to all selected areas on 10 December 2012. The survey closed on 21 December 2012. Participants self-selected to take part in the study. The program Qualtrics was used to distribute the survey and collect responses.
Survey data were extracted from the Qualtrics program as a Statistical Package for the Social Sciences version 20 (SPSS) data file and analysed in SPSS. All survey data were categorical. The demographic data were first analysed using crosstabs and chi-square tests to broadly describe the data related to use of research and to identify demographics associated with research use. Binary logistic regression was undertaken to determine which factors predict use of academic research evidence. Predictors entered into the regression models included all demographic factors and all factors related to research use from Parts 2 and 3 of the survey, including the following: barriers and facilitators, research exchange activities, skills, perceived research relevance, difficulty in accessing research, perceived value of research, management support, internal prompts for research use, and presentation and communication of research, among others. Ethics approval for the study was received from the Monash University Human Research Ethics Committee.
Results and discussion
Of the 1,278 people invited to participate, 405 fully completed the survey (response rate =31.7%). Thirty-three participants whose main role focus was administration and assistance were excluded, as the research question relates to those who are involved in policy and program development and implementation. This resulted in a sample of 372 participants (Table 1).
Use of academic research evidence
There were 145 participants who indicated that they had used research evidence to inform their decision-making in the last 12 months. Most of those who used research evidence used it monthly (52; 35.9%) or quarterly or less (51; 35.2%). A further 23.4% of participants (34) used research evidence weekly, and eight participants (5.5%) used academic research daily.
Determinants of use of academic research evidence
In total, 49 independent factors were tested using univariate binary logistic regression to identify significant predictors of use of research including demographic factors. Twenty-six of the 49 independent variables significantly predicted the use of research when tested individually.
The 26 factors that significantly predicted research use in univariate analyses included a broad range of factors known to affect research use, including the following: skills; access; relevance; value; management encouragement; need to increase research use; time; communication; competing interests; lack of actionable messages; research exchange factors, such as ‘attended a research forum’, ‘participated in a research project’, ‘acted as an advisor on a research project’ and ‘invited a researcher to give a perspective on a policy issue’; and individual and organisational demographic factors such as agency, time in agency, time in role and level of education.
These 26 factors were then tested in one block in a multivariate binary logistic regression. Only five factors were found to significantly predict use of research when the effect of all other factors was controlled for in the analysis: skills for use of research, relevance of research, intention to use research, internal prompts for use of research and agency. The five significant predictors of research use were then tested again in one block in a final model.
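The two-stage procedure, univariate screening of each factor followed by a joint model of the survivors, can be illustrated with a minimal logistic regression fitted by gradient descent. The data below are synthetic (a single invented binary factor, ‘rated research as relevant’, with an assumed effect size), so the sketch shows only the mechanics of fitting a univariate model and converting its coefficient to an odds ratio, not the study's actual models.

```python
import math
import random

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a binary logistic regression by batch gradient descent.
    Returns coefficients [intercept, b1, ..., bk]."""
    n, k = len(X), len(X[0])
    w = [0.0] * (k + 1)
    for _ in range(epochs):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1 / (1 + math.exp(-z))          # predicted probability
            grad[0] += p - yi
            for j, xj in enumerate(xi):
                grad[j + 1] += (p - yi) * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

# Synthetic data: one invented binary factor ("rated research as relevant")
# and a binary outcome ("used research in the last 12 months"), with an
# assumed effect size -- purely illustrative, not the study's data.
random.seed(1)
X, y = [], []
for _ in range(372):
    relevant = 1.0 if random.random() < 0.4 else 0.0
    p_use = 0.7 if relevant else 0.2
    X.append([relevant])
    y.append(1 if random.random() < p_use else 0)

# Univariate screen: fit the factor alone, then read off its odds ratio.
w = fit_logistic(X, y)
odds_ratio = math.exp(w[1])   # exp(coefficient) = odds ratio
print(f"coefficient = {w[1]:.2f}, odds ratio = {odds_ratio:.2f}")
```

In the study's second stage, the factors surviving this screen are entered together so that each coefficient is adjusted for the others; with real data, significance would be judged from Wald or likelihood-ratio tests rather than the point estimate alone.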
The final model containing the five predictors was statistically significant (χ²(5, N = 372) = 139.28, p < 0.001), indicating that the model was able to distinguish between participants who did and did not use research. The factors in the model showed standard errors lower than 2.0, indicating that there was no issue with multicollinearity. The number of positive cases of use of evidence (N = 145) is also appropriate for the model, in that there are at least ten positive cases per variable.
The model overall explained between 31.2% (Cox and Snell R square) and 42.4% (Nagelkerke R square) of the variance in research use and correctly classified 78.2% of cases. As evident in Table 2 below, all five independent factors made a unique statistically significant contribution to the model; only the ‘yes’ category of the internal prompts factor was found not to be significant. Perceived relevance of research was the strongest predictor of use of research, with an odds ratio of 11.87 for ‘relevant’ and an odds ratio of over 5.5 for ‘somewhat relevant’.
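The two pseudo-R² values can be recovered from the reported model χ² and sample size: Cox and Snell R² = 1 − exp(−χ²/n), and Nagelkerke R² rescales this by its maximum attainable value, computed here from the intercept-only model implied by the 145/372 base rate of research use. As a check on internal consistency, the arithmetic below reproduces the reported 31.2% and 42.4% up to rounding.

```python
import math

# Reported model statistics from the text.
chi2 = 139.28    # model chi-square = 2 * (lnL_model - lnL_null)
n = 372          # sample size
n_pos = 145      # participants who used research

# Log-likelihood of the intercept-only (null) model from the base rate.
p = n_pos / n
ln_l0 = n_pos * math.log(p) + (n - n_pos) * math.log(1 - p)

# Cox & Snell R^2 = 1 - exp(-chi2 / n)
cox_snell = 1 - math.exp(-chi2 / n)

# Nagelkerke R^2 rescales Cox & Snell by its maximum attainable value,
# 1 - exp(2 * lnL_null / n).
nagelkerke = cox_snell / (1 - math.exp(2 * ln_l0 / n))

print(f"Cox & Snell = {cox_snell:.3f}, Nagelkerke = {nagelkerke:.3f}")
```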
This study of 372 decision-makers identified five factors that significantly affect use of academic research evidence in policy and program decision-making in injury prevention and compensation settings in the Australian state of Victoria. These included the following: relevance of research to day-to-day decision-making, skills for using research to inform decision-making, intention to use research in the next 12 months, agency worked with and internal prompts for use of research. This is one of few Australian studies to quantify use of academic research evidence by government policy and program decision-makers on a large scale and to identify specific predictors of research use. These results highlight areas to focus on in the development of research translation interventions in the context studied.
The five significant predictors of research use identified in this context relate to the individual and organisational levels. Other than relevance, the significant predictors identified in this study are factors that could be addressed within the agencies themselves or by the research institute that the agencies jointly fund. It has been argued that research relevance can be enhanced by increased exchange and deliberation between researchers and decision-makers. However, it is difficult for researchers to engage in exchange and communication with decision-makers, or to address factors such as skills through the development and delivery of training, if decision-makers are not receptive to research or researchers. Internal leadership, senior management commitment and provision of adequate resources are all factors that have been found to affect decision-makers' receptivity to and engagement with research. As these factors are in the control of the agencies, and not in the direct control of researchers, we suggest that research translation interventions developed for these agencies will need to focus on the policy setting as the primary target for intervention rather than the academic setting.
Whilst it has been identified that researchers can have a critical role to play in increasing and supporting policy and program decision-makers' use of research, the findings of this study and of systematic reviews of factors affecting research use suggest that in this context there is first a need to build organisational receptivity to research. Organisational receptivity refers to an organisation's interest in, and capacity for, utilising research evidence to inform decision-making. The need to build organisational receptivity for the use of research in Australian government departments generally, and in public health policy decision-making environments specifically, has previously been identified. It is expected that supporting the agencies to take ownership and leadership of increasing their organisational receptivity to and capacity for research use will enable increased engagement and interaction between decision-makers and researchers in this context.
Despite there being many questions in the survey regarding decision-makers' interaction and exchange with researchers and research forums external to the agency, these were not found to be significant in the final model. When tested individually (in univariate regression analyses), 2 of the 13 options from the section of the survey focused on exchange and communication, ‘sought evidence from a researcher on a regular basis’ and ‘co-authored a publication’, were significant predictors of research use. However, once entered into the model with the other 24 individual predictors, they did not remain significant over and above the effect of the five significant variables. This finding is particularly important, as increasing opportunities for exchange is frequently recommended as an effective means of driving research use. It is possible that factors such as communication, interaction and engagement are related to other factors, such as relevance. Factor analysis undertaken as part of future survey refinement will explore this issue. This also highlights that undertaking context-specific research such as this is critical to improving intervention effectiveness, as the factors affecting research use are context-dependent and cannot be assumed. It is increasingly argued that any one research translation activity may not be sufficient to increase use of research in policy decision-making. Organisations seeking to increase their capacity for use of research will likely need to build research translation platforms or infrastructure involving a mix of research push, pull, facilitating-pull and exchange activities targeted to the needs of their specific context.
Research relevance has been consistently identified in individual studies and systematic reviews as a critical factor affecting research use. Perceived relevance of research was the strongest predictor of research use identified in this study. Participants who viewed research as ‘relevant’ to their role were 11 times more likely to have used research than participants who viewed research as ‘not relevant’. Even participants who viewed research as ‘somewhat relevant’ were five times more likely to use research than those who viewed it as ‘not relevant’. Despite being consistently identified in systematic reviews as a key factor affecting research use for well over a decade, relevance remains a complex issue to understand and address. Notions of relevance held by decision-makers are affected by actual experiences of using research, as well as by perceptions and assumptions. Studies by Carol Weiss and others undertaken in the 1980s demonstrated that decision-makers' perceptions and experiences of the relevance and utility of research are more complex and nuanced than many researchers appreciate.
Weiss and Weiss have also highlighted that any self-reported measures regarding research use are difficult to interpret, because it is difficult to determine whether what one says will affect one's use of research actually does affect it. To explore this issue, Weiss and Weiss examined how individual decision-makers' self-reported views of research usefulness co-varied with their reports of research utility in relation to practical research case studies. Whilst they did find that decision-makers' self-reports tended to understate the importance of certain factors expected to affect research usefulness, such as ‘applicability of research to existing programs’ or ‘consistency with existing knowledge’, they concluded that ‘decision-makers self-reports about the research characteristics they find useful are for the most part confirmed by their judgments of the usefulness of actual studies’. It was also interesting to note that researchers' and decision-makers' assessments of the usefulness and quality of studies were generally comparable.
Weiss and Bucuvalas showed that decision-makers consider research through three main frames of reference: relevance, truth and utility. Relevance referred to the relevance of the research to decision-makers' roles and responsibilities. Truth was composed of two main components, ‘research quality’ and ‘conformity to user experience’, and utility was comprised of the components ‘action orientation’ and ‘challenge to the status quo’. They found that quality was most important when research did not conform to user (decision-maker) experience. However, even when research did conform to user experience, quality remained important. This suggests that, contrary to researcher expectations, decision-makers remain concerned about quality even when presented with research findings that confirm their expectations. Further, and most interestingly, the study showed that decision-makers are interested in and value research that ‘challenges the status quo’, even if they are not able to use it immediately.
This research highlights that perceptions of relevance are complex. Even if decision-makers' perceptions of relevance may be enhanced by findings that align with their existing practices and expectations, decision-makers may also expect that research will yield results that challenge the policy ‘status quo’. Kingdon's research has also highlighted that decision-makers can be open to research and interested in changing direction and views on policy issues but are constrained in their ability to act by a range of contextual factors that are not in their direct control. Weiss's research has also highlighted that research is often used conceptually, to inform thinking and debate without resulting in immediate action, and argues that this constitutes a valid and valuable use of research. This links with Kingdon's notion that to create policy change decision-makers must wait for the rare occasion when the ‘streams’ of ‘policy’, ‘problems’ and ‘politics’ align to create a ‘window of opportunity’ for new policy development or change. This suggests that an initial conceptual use of research by decision-makers can lead to instrumental research use when the right political opportunity or climate presents itself.
In our study, decision-makers were not asked to make an assumption or assertion about what they expected would increase their research use. Decision-makers were asked both if they had used research in the last 12 months and also how relevant they thought research was to their day-to-day work. Their rating of the relevance of research predicted the likelihood of their use of research based on their actual self-reported use of research. Because relevance is a complex concept to interpret, the utility of this finding for informing intervention design would be enhanced by qualitative exploration of what ‘relevance’ means for decision-makers in this specific context.
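One caveat when reading figures such as ‘11 times more likely’: 11.87 is an odds ratio, which equals the corresponding risk ratio only when the baseline probability of the outcome is small. A short sketch of the standard conversion, using illustrative baseline risks for the ‘not relevant’ group (assumed for the example; the survey does not report this baseline):

```python
def or_to_rr(odds_ratio, baseline_risk):
    """Convert an odds ratio to a risk ratio at a given baseline risk
    (the standard Zhang-Yu conversion)."""
    return odds_ratio / (1 - baseline_risk + baseline_risk * odds_ratio)

# Illustrative baseline risks (assumed, not reported in the survey): the
# rarer the outcome at baseline, the closer the risk ratio is to the
# odds ratio.
for p0 in (0.05, 0.20, 0.40):
    print(f"baseline {p0:.2f}: risk ratio = {or_to_rr(11.87, p0):.2f}")
```

At a baseline risk of 0.20, for example, an odds ratio of 11.87 corresponds to roughly a 3.7-fold increase in the probability of use, so ‘times more likely’ is best read in terms of odds unless the baseline is known to be low.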
Increasing skills for use of research has also been identified in systematic reviews as a key strategy for enhancing the use of research. Participants in this study who rated their skills for research use as ‘high’ to ‘very high’ were four times as likely to use research as those who rated their skills as ‘low’, and participants who rated their skills as ‘medium’ were twice as likely to use research. Providing training to improve research skills, including how to access and how to critically appraise research, has been identified as an organisational-level facilitator of research use. It has also been identified as an indicator of organisational receptivity for the use of research. Intervention studies have indicated that provision of training to increase research skills can improve the ability to appraise research and can also improve perceptions of the potential value of research. To date, however, such training has not been linked to increases in research use.
It is interesting that in this study participants' self-rated skills for research use predicted their use of research, when this has not been found previously. The two intervention studies included in Moore's systematic review sought to increase research skills through the provision of training. The types of training they tested were quite different: one was a half-day training session; the other involved learning modules completed over a 2-year period. Outcomes of both studies focused on whether self-reported ratings of skills for different aspects of research appraisal and use, such as skills for assessing quality, skills in accessing research or evidence-seeking behaviour, improved after training was completed. As mentioned, increases in use of research were not directly measured. Taylor's study found improvement in only one aspect of the research skills measured, the ability to appraise systematic review results. Denis' study, which took place over a 2-year period, showed that participants improved on a broader range of research skills.
We did not collect data in our survey on whether, or in what kind of, research skills training participants may have been involved. Whilst the findings of the studies by Taylor and Denis, as well as of our study, suggest that training may be useful in increasing research use, there remains limited knowledge about the types of training and training delivery that are most likely to support increased research use. The findings of Taylor's and Denis' works suggest that more intensive programs are potentially more valuable than short, once-off training sessions. However, the current body of knowledge regarding research skills is limited and somewhat conflicting, and there is a need for further research on the types of training and education that would be useful for increasing research skills and research use.
‘Intention to use research in the next 12 months’ and ‘internal prompts for use of research’ were both significant predictors of research use in this particular decision-making context. Participants who indicated that they intended to use research in the next 12 months were 3.75 times more likely to already be using research. This study also revealed that participants who were unsure about whether there were internal prompts for use of research were not likely to use research. Internal prompts, such as sections in project, strategy and business planning templates requiring a review of academic research, reminders in team meetings or weekly newsletters, or recognition of or accountability for use of research in performance reviews, could provide a practical means of making research use part of ‘business as usual’ and of supporting the development of a receptive research climate.
That ‘intention to use research in the next 12 months’ predicted research use suggests that encouraging initial use of research by those who have not previously used research may have the potential to increase their continued or future use. Over time this could increase the overall number of decision-makers engaging with research. Research by Ajzen (best known for the development of the theory of planned behaviour) has shown that both implicit and explicit intentions can predict behaviour. This demonstrates that intentions are powerful predictors of behaviour regardless of whether the intention is made in a general sense, such as ‘I will use research in my work’, or specifically, for example ‘I will use research to conduct a literature review to inform the development of project X’. Internal prompts for use of research, such as those outlined above, potentially provide a practical means of both implicitly and explicitly encouraging both first-time and continued use of research.
The finding that intentions regarding research use predicted use also highlights that decision-makers who are already using research, and intend to continue using it, may require different intervention activity or targeting than those who have not previously used research. This is supported by research by Dobbins, which found that knowledge brokering as an intervention to increase research use was more effective for organisations with a low research culture than for those rated as having a high research culture. Together, these findings highlight that intervention design seeking to increase decision-makers' research use must take into account both the organisational context and receptivity and decision-makers' individual research capability.
Participants from Agency 1 were more likely to use research evidence than Agency 2. This supports previous research that has found that use of research varies across government agencies ,,. It is particularly interesting that use can vary significantly between two agencies that have very similar mandates, operate in a homogenous external political environment and have the same primary research provider. This highlights that whilst these agencies are similar, it cannot be assumed that their approach to the use of research and the support needed to drive increased research use are going to be the same. A key difference in the agencies and the survey sample is that Agency 2 employs claims managers and Agency 1 outsources claims management, which is predominately an operational role and is not focused on policy and program development. However, there were participants in the sample from both agencies who identified that their role was focused on operations.
Shine and Bartley have described their very different experiences of working with two very similar policy agencies in very similar contexts. One of the agencies was mandated to use research evidence and the other was not. Interestingly, the agency that was mandated to use research had lower receptivity to research use. They found that the agency that willingly collaborated on research use tended to be positively reinforced by the experience of using research and that, conversely, their efforts to collaborate with the agency mandated to use research tended to lead to negative reinforcement, and receptivity to research remained low. They identified ownership, receptivity and values as key ‘underlying dynamics shaping collaborative efforts’. They suggest that where it is identified that there is no genuine intention, or no receptivity, to accepting or utilising research, and that ‘collaborative efforts and public sector reforms are effectively used as a smokescreen to justify "business as usual"’, research translation efforts may not be appropriate and could even be damaging.
Taken together, the analysis by Shine and Bartley and the research by Weiss and others discussed above reinforce the critical need for researchers to have a deep understanding of the factors affecting and motivating decision-makers' use of research. They also highlight that such knowledge is required prior to determining whether a research translation intervention is appropriate and, if it is, to ensure that the intervention is targeted to the needs of the specific decision-makers and organisational context in which the intervention is to be implemented. The findings of this survey and previous research indicate that further research is needed to explore the difference found between the agencies studied and that any research translation intervention strategies will need to be specifically tailored to the needs of each agency.
Whilst the survey has yielded interesting insights, there are limitations. The survey was based on self-report, which can be subject to social desirability bias. As the survey was emailed to decision-makers by their head of department, we were careful to ensure that the email invitation and participant information sheet clearly stated that information provided would be kept completely confidential, that reporting would be de-identified and that only the research team had access to the data. The email invitation was sent exactly as we crafted it, with no alteration by the heads of department (the lead researcher was cc'd on all email invitations sent). In all cases, the email was sent by the assistant of the department head on their behalf. It was also made clear that any queries were to be directed to the researchers only, that neither the department heads nor their assistants were to be contacted, and that the email was not to be replied to.
Further, this is a new survey, and whilst tested for validity and reliability, further testing and refinement is needed. However, development of the survey was based on existing frameworks specifically designed to inform research on decision-makers' use of research evidence, developed by leading experts in the field. Further, these frameworks were the result of extensive development utilising systematic reviews of research, as well as systematic reviews of literature and of social and psychological theory regarding motivation and behaviour change, that took into account factors at the individual, organisational and environmental levels. Whilst other similar surveys are being developed and utilised to measure research use, there currently remains no gold standard survey available for use in this field.
One of the strengths of this study is that the survey measured use of research in the last 12 months. This provided a practical dependent variable to utilise in the regression analysis. The survey was able to ask decision-makers about factors known to affect research use and examine these in relation to the decision-maker's actual research use. This meant that decision-makers were not required to make assumptions or state what they expected would increase their use of research; they were rating themselves on factors that are known to affect research use, and these ratings were tested against their actual self-reported use of research. Another key strength of this study is that research evidence was explicitly defined, and defined in relation to other information types. The lack of clear definitions of ‘research’ and ‘evidence’ has been identified in a recent systematic review as a key issue in the quality of studies conducted to date. The findings of this survey also point to interesting areas for further analysis of the data arising from it. Further planned analyses include testing whether the predictive factors also predict use of the other information types collected in this survey, as well as factor analysis to assist further refinement of the survey. Finally, the utility of the findings of the survey in terms of informing research translation intervention design would be substantially advanced by qualitative research that explored the findings further, paying particular attention to the issues raised in this discussion.
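The analytic approach described here (a binary dependent variable regressed on self-rated factors, with results interpretable as odds ratios) can be sketched as follows. This is a minimal illustration only, using hypothetical ratings and a hand-rolled maximum-likelihood fit; the factor name, data and parameter values are invented and do not reproduce the study's actual analysis.

```python
import math

def fit_logistic(xs, ys, lr=0.1, iters=5000):
    """Fit y ~ intercept + beta * x by gradient ascent on the log-likelihood."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))  # predicted probability
            g0 += (y - p)        # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical 1-5 self-ratings of 'relevance of research to day-to-day
# decision-making' and binary self-reported research use in the last 12 months.
relevance = [1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 5]
used      = [0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1]

b0, b1 = fit_logistic(relevance, used)
odds_ratio = math.exp(b1)  # factor by which the odds of use multiply per rating point
print(f"beta = {b1:.2f}, odds ratio = {odds_ratio:.2f}")
```

In practice each of the 49 factors would first be screened in its own univariate model like this one, with the significant factors then entered together in a single multivariable model, as described in the Methods.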
This study has revealed that relevance, skills, intention to use research, internal prompts and agency are significant predictors of use of research in the injury prevention and rehabilitation compensation policy and program context in the Australian state of Victoria. These findings suggest that individual- and organisational-level factors are the critical factors to target in the design of interventions aiming to increase research use in this context. External factors, such as engagement and interaction with researchers, were not found to affect research use in this study once the effect of other factors was taken into account. The individual and organisational factors that significantly affected research use in this context broadly aligned with previous research in the field. This study builds on previous research and contributes to the currently limited number of quantitative studies examining the use of research evidence in a large sample of Australian public health policy and program decision-makers.
PZ led and undertook the research design, data collection and analysis, and the writing of the manuscript. AC informed the design and informed and reviewed the writing of the manuscript. Both authors read and approved the final manuscript.
Moore G, Redman S, Haines M, Todd A: What works to increase the use of research in population health policy and programmes: a review. Evid Pol. 2011, 7 (3): 277-305. 10.1332/174426411X579199.
Davies P: The state of evidence-based policy evaluation and its role in policy formation. National Institute Economic Review. 2012, 219 (1): R41-R52. 10.1177/002795011221900105.
National Health and Medical Research Council: Research Translation Faculty. 2013. Available from: http://www.nhmrc.gov.au/research-translation/research-translation-faculty.
Rymer L: Measuring The Impact Of Research—The Context For Metric Development. 2011, The Group of Eight, Canberra
Lavis J, Ross S, McLeod C, Gildiner A: Measuring the impact of health research. J Health Serv Res Policy. 2003, 8 (3): 165-170. 10.1258/135581903322029520.
Orton L, Lloyd-Williams F, Taylor-Robinson D, O'Flaherty M, Capewell S: The use of research evidence in public health decision making processes: systematic review. PLoS One. 2011, 6 (7): e21704-10.1371/journal.pone.0021704.
Grimshaw J, Eccles M, Lavis J, Hill S, Squires J: Knowledge translation of research findings. Implement Sci. 2012, 7 (1): 50-10.1186/1748-5908-7-50.
Glasgow RE, Emmons KM: How can we increase translation of research into practice? Types of evidence needed. Annu Rev Public Health. 2007, 28: 413-433. 10.1146/annurev.publhealth.28.021406.144145.
Pawson R, Greenhalgh T, Harvey G, Walshe K: Realist review—a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005, 10: 21-34. 10.1258/1355819054308530.
Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O: Diffusion of innovations in service organisations: systematic review and recommendations. Milbank Q. 2004, 82 (4): 581-629. 10.1111/j.0887-378X.2004.00325.x.
Bowen S, Zwi AB: Pathways to "evidence-informed" policy and practice: a framework for action. PLoS Med. 2005, 2 (7): e166-10.1371/journal.pmed.0020166.
Lavis JN, Røttingen JA, Bosch-Capblanch X, Atun R, El-Jardali F, Gilson L, Lewin S, Oliver S, Ongolo-Zogo P, Haines A: Guidance for evidence-informed policies about health systems: linking guidance development to policy development. PLoS Med. 2012, 9 (3): e1001186-10.1371/journal.pmed.1001186.
Rycroft-Malone J, Seers K, Chandler J, Hawkes C, Crichton N, Allen C, Bullock I, Strunin L: The role of evidence, context, and facilitation in an implementation trial: implications for the development of the PARIHS framework. Implement Sci. 2013, 8 (1): 28-10.1186/1748-5908-8-28.
Chagnon F, Pouliot L, Malo C, Gervais M-J, Pigeon M-E: Comparison of determinants of research knowledge utilization by practitioners and administrators in the field of child and family social services. Implement Sci. 2010, 5 (1): 41-10.1186/1748-5908-5-41.
Stetler CB, Ritchie JA, Rycroft-Malone J, Schultz AA, Charns MP: Institutionalizing evidence-based practice: an organizational case study using a model of strategic change. Implement Sci. 2009, 4 (1): 78-10.1186/1748-5908-4-78.
Head B, Ferguson M, Cherney A, Boreham P: Are policy-makers interested in social research? Exploring the sources and uses of valued information among public servants in Australia. Pol Soc. 2014, 33 (2): 89-101. 10.1016/j.polsoc.2014.04.004.
Australian Bureau of Statistics: Australian Demographic Statistics. 2012, Commonwealth Government, Canberra
Kothari A, Edwards N, Hamel N, Judd M: Is research working for you? Validating a tool to examine the capacity of health organizations to use research. Implement Sci. 2009, 4 (1): 46-10.1186/1748-5908-4-46.
Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A: Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005, 14: 26-33. 10.1136/qshc.2004.011155.
Campbell DM, Redman S, Jorm L, Cooke M, Zwi AB, Rychetnik L: Increasing the use of evidence in health policy: practice and views of policy makers and researchers. Aust New Zealand Health Policy. 2009, 6: 21.
Amara N, Ouimet M, Landry R: New evidence on instrumental, conceptual, and symbolic utilization of university research in government agencies. Sci Comm. 2004, 26 (1): 75-106. 10.1177/1075547004267491.
Mitton C, Adair CE, McKenzie E, Patten SB, Waye PB: Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q. 2007, 85 (4): 729-768. 10.1111/j.1468-0009.2007.00506.x. Epub 2007/12/12
Streiner D: Starting at the beginning: an introduction to coefficient alpha and internal consistency. J Pers Assess. 2003, 80 (1): 99-103. 10.1207/S15327752JPA8001_18.
DeVellis RF: Scale Development Theory and Applications. 2012, Sage, Thousand Oaks
Hosmer D, Lemeshow S: Applied Logistic Regression. 2000, 2nd edition, John Wiley and Sons, Hoboken, N.J.
Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J: A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014, 14 (1): 2-10.1186/1472-6963-14-2.
Haynes AS, Derrick GE, Chapman S, Redman S, Hall WD, Gillespie J, Sturk H: From "our world" to the "real world": exploring the views and behaviour of policy-influential Australian public health researchers. Soc Sci Med. 2011, 72 (7): 1047-1055. 10.1016/j.socscimed.2011.02.004.
Haynes AS, Gillespie JA, Derrick GE, Hall WD, Redman S, Chapman S, Sturk H: Galvanizers, guides, champions, and shields: the many ways that policymakers use public health researchers. Milbank Q. 2011, 89 (4): 564-598. 10.1111/j.1468-0009.2011.00643.x.
Cherney A, Head B, Boreham P, Povey J, Ferguson M: Perspectives of academic social scientists on knowledge transfer and research collaborations: a cross-sectional survey of Australian academics. Evid Pol. 2012, 8 (4): 433-453. 10.1332/174426412X660098.
Shine KT, Bartley B: Whose evidence base? The dynamic effects of ownership, receptivity and values on collaborative evidence-informed policy making. Evid Pol. 2011, 7 (4): 511-530. 10.1332/174426411X603489.
Banks G: Challenges of Evidence-Based Policy-Making. Edited by Commission APS. 2009, Commonweath of Australia, Canberra
Lomas J: Master Class: Diffusion, Spread and Sustainability of Innovation Australian Primary Health Care Research Institute Lecture Series. Australian National University; 2011.
Ritter A: How do drug policy makers access research evidence?. Int J Drug Policy. 2009, 20 (1): 70-75. 10.1016/j.drugpo.2007.11.017.
Flitcroft K, Gillespie J, Salkeld G, Carter S, Trevena L: Getting evidence into policy: the need for deliberative strategies?. Soc Sci Med. 2011, 72 (7): 1039-1046. 10.1016/j.socscimed.2011.01.034.
Ouimet M, Bédard PO, Turgeon J, Lavis JN, Gélineau F, Gagnon F, Dallaire C: Correlates of consulting research evidence among policy analysts in government ministries: a cross-sectional survey. Evid Pol. 2010, 6 (4): 433-460. 10.1332/174426410X535846.
Ellen ME, Lavis JN, Ouimet M, Grimshaw J, Bédard P-O: Determining research knowledge infrastructure for healthcare systems: a qualitative study. Implement Sci. 2011, 6 (1): 60-10.1186/1748-5908-6-60.
Lavis JN, Lomas J, Hamid M, Sewankambo NK: Assessing country-level efforts to link research to action. Bull World Health Organ. 2006, 84 (8): 620-628. 10.2471/BLT.06.030312. Epub 2006/08/19
Lavis J: How can we support the use of systematic reviews in policymaking?. PLoS Med. 2009, 6 (11): e1000141-10.1371/journal.pmed.1000141.
Landry R, Lamari M, Amara N: The extent and determinants of the utilization of university research in government agencies. Public Adm Rev. 2003, 63 (2): 192-205. 10.1111/1540-6210.00279.
Innvær S, Vist G, Trommald M, Oxman A: Health policy-makers' perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002, 7 (4): 239-244. 10.1258/135581902320432778.
Weiss CH, Bucuvalas MJ: Truth tests and utility tests: decision-makers' frames of reference for social science research. Am Sociol Rev. 1980, 45 (2): 302-313. 10.2307/2095127.
Weiss JA, Weiss CH: Social scientists and decision makers look at the usefulness of mental health research. Am Psychol. 1981, 36 (8): 837-847. 10.1037/0003-066X.36.8.837.
Kingdon JW: Agendas, Alternatives, and Public Policies. 1995, HarperCollins College Publishers, New York
Weiss CH: The many meanings of research utilization. Public Adm Rev. 1979, 39 (5): 426-431. 10.2307/3109916.
Taylor RS, Reeves BC, Ewings PE, Taylor RJ: Critical appraisal skills training for health care professionals: a randomized controlled trial [ISRCTN46272378]. BMC Med Educ. 2004, 4 (1): 30-10.1186/1472-6920-4-30.
Denis J-L, Lomas J, Stipich N: Creating receptor capacity for research in the health system: the Executive Training for Research Application (EXTRA) program in Canada. J Health Serv Res Policy. 2008, 13 (Suppl 1): 1-7. 10.1258/jhsrp.2007.007123.
Ajzen I, Czasch C: From intentions to behavior: implementation intention, commitment, and conscientiousness. J Appl Soc Psychol. 2009, 39 (6): 1356-1372.
Maio GR, Haddock G: Contemporary Perspectives on The Psychology of Attitudes, Volume xviii. Hove, U.K.; New York: Psychology Press; 2004:469.
Dobbins M, Hanna SE, Ciliska D, Manske S, Cameron R, Mercer SL, O'Mara L, DeCorby K, Robeson P: A randomized controlled trial evaluating the impact of knowledge translation and exchange strategies. Implement Sci. 2009, 4 (1): 61.
Australasian Cochrane Centre: Centre for Informing Policy in Health with Evidence from Research. 2013. Available from: http://acc.cochrane.org/cipher
The authors declare that they have no competing interests.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Zardo, P., Collie, A. Predicting research use in a public health policy environment: results of a logistic regression analysis. Implementation Sci 9, 142 (2014). https://doi.org/10.1186/s13012-014-0142-8
Keywords: Research use; Research translation