“Push” versus “Pull” for mobilizing pain evidence into practice across different health professions: A protocol for a randomized trial
© MacDermid et al.; licensee BioMed Central Ltd. 2012
Received: 21 August 2012
Accepted: 15 October 2012
Published: 24 November 2012
Optimizing pain care requires ready access and use of best evidence within and across different disciplines and settings. The purpose of this randomized trial is to determine whether a technology-based “push” of new, high-quality pain research to physicians, nurses, and rehabilitation and psychology professionals results in better knowledge and clinical decision making around pain, when offered in addition to traditional “pull” evidence technology. A secondary objective is to identify disciplinary variations in response to evidence and differences in the patterns of accessing research evidence.
Physicians, nurses, occupational/physical therapists, and psychologists (n = 670) will be randomly allocated in a crossover design to receive a pain evidence resource in one of two different ways. Evidence is extracted from medical, nursing, psychology, and rehabilitation journals; appraised for quality/relevance; and either sent out (PUSH) to clinicians by email alerts or made available for searches of the accumulated database (PULL). Participants are allocated to either PULL or PUSH + PULL in a randomized crossover design. The PULL intervention has a similar interface but does not send alerts; clinicians can only go to the site and enter search terms to retrieve evidence from the cumulative and continuously updated online database. Upon entry to the trial, there is three months of access to PULL, then random allocation. After six months, crossover takes place. The study ends with a final three months of access to PUSH + PULL. The primary outcomes are uptake and application of evidence. Uptake will be determined by embedded tracking of what research is accessed during use of the intervention. A random subset of 30 participants/discipline will undergo chart-stimulated recall to assess the nature and depth of evidence utilization in actual case management at baseline and 9 months. A different random subset of 30 participants/discipline will be tested for their skills in accessing evidence using a standardized simulation test (final 3 months). Secondary outcomes include usage and self-reported evidence-based practice attitudes and behaviors measured at baseline, 3, 9, 15 and 18 months.
The trial will inform our understanding of information preferences and behaviors across disciplines/practice settings. If this intervention is effective, sustained support will be sought from professional/health system initiatives with an interest in optimizing pain management.
Registered as NCT01348802 on clinicaltrials.gov.
Acute and chronic pain affects a large number of Canadians. The prevalence of chronic pain in Canadian adults has been estimated to range from 18–29% [1–4], with 60% experiencing major losses of occupation or function. The 2005 Canadian Community Health Survey found that chronic pain affected 27% of community seniors. In Canada, current healthcare spending to address this problem is $6 billion/year and, with an aging population, is expected to rise to $10 billion/year by 2025.
There is considerable variation in practice and widespread patient dissatisfaction with the way pain is managed across the spectrum, from relatively benign conditions to terminal illness. For example, in low back pain, unresolved pain is a common reason for repeat visits. Inadequately managed pain in serious illness is the most common reason for requesting assisted suicide. Conversely, there are societal concerns about the use of narcotic medications for pain management and the potential impacts of misuse.
Many clinicians are not aware of recent pain research, and misconceptions are common [9, 10]. In fact, health professionals may begin clinical practice with insufficient knowledge about pain, as illustrated by a survey of entry-level curricula for physicians, nurses, and rehabilitation professionals. The majority (68%) of programs did not specify designated hours for pain education (only 32.5% of respondents could identify specific hours allotted for pain course content), and the total amount of time allocated to pain varied from 13 to 41 hours. Of interest, veterinarians spent twice as much time studying pain. Studies that cross different types of pain and health professions indicate a large gap between evidence and practice, reflected in a high prevalence of low competence/confidence in pain management [12–19].
Pain management is central to health disciplines. Physicians, nurses, and rehabilitation professionals are commonly involved in managing acute and chronic pain. Pain is one of the most common reasons for patients to consult these professions. While scope of practice and focus may vary across professions, pain management provides an important context to study professional differences in use of evidence since pain is a high priority to all and an area where interprofessional practice is supported by evidence [22–26].
Evidence-based practice (EBP) can narrow the gap between research knowledge and practice. Studies suggest that an EBP approach is used in nursing, medical, and rehabilitation practice [30–32] with variable success. Increased use of evidence-based approaches or supports such as guidelines or decision aids is linked to improved clinical decision making and patient care [34, 35], but the effects are not consistent [36, 37] and, thus, cannot be assumed to work in all contexts.
Professional associations and individual clinicians in rehabilitation are highly supportive of EBP [30, 32, 38–45]. Despite positive attitudes and motivations about EBP, many studies have reported barriers to effective implementation of research evidence in clinical practice. Several studies indicate that clinicians experience time as an obstacle to searching for research evidence [47, 48]. Clinicians also admit that they lack the skills required to navigate literature databases and to appraise medical literature effectively [32, 49, 50]. A lack of time [40–42] combined with inadequate searching and appraisal skills [20, 21, 43–45] are the primary barriers reported. Answering a clinical question using a traditional EBP approach has been estimated to take 53 minutes on average, with database searches (39 minutes) and obtaining the articles (25 minutes) as the major components.
Traditional approaches to knowledge translation (KT) in EBP have focused on providing knowledge and technical EBP “skills” to practitioners, leaving them with the burden of searching and appraising evidence [40, 52–56]. Studies suggest these interventions change knowledge about EBP, but not behavior [57, 58]. Practitioners have limited success “pulling out” relevant quality evidence because they lack the required searching, filtering, and appraisal skills [40–43, 51, 54, 59, 60]. Even with training, the time demands are substantial [51, 56]. Current tools provide primitive alerting services that do not target specific practitioners, but send information by “dumping” procedures. As a result, practitioners are overwhelmed with a large volume of information, most of which does not pertain to their practice area. Thus, either these services are not used or alerts are ignored. A basic flaw in both the training and dumping-out KT approaches is that they fail to resolve the essential barriers of time pressures and lack of appraisal/filtering skills.
Evidence-based decision making originated within medicine and spread to other professions. Limited studies, mostly in the United Kingdom, have addressed differences in attitude or awareness across professions [61, 62]. Familiarity with evidence resources is less than might be expected: in 2009, 44% of urologists were unaware of the PubMed search engine, and only 14% used it regularly.
Systematic reviews suggest technology-based EBP can enhance care [34, 64, 65], although evidence is sporadic. Working directly with the Health Information Research Unit (HIRU) at McMaster University, we developed an approach to push out targeted, clinically relevant, high-quality research evidence to practitioners. This Premium LiteratUre Service (PLUS) [66, 67] removes the burden of appraising research evidence, as technical experts perform basic critical appraisal and then filter the high-quality studies and systematic reviews, with ratings of relevance and interest from expert clinicians in the field, to send only what is clinically relevant and new to the individual clinician. We know this approach increases uptake in physicians and is valued by nurses and rehabilitation users (pilot data). In a cluster randomized trial funded by the Canadian Institutes of Health Research, with physicians in Northern Ontario, PLUS was shown to increase the use of medical literature by 57% and substantially increase the depth of use of evidence-based resources. One of the limitations in KT research on this topic to date has been the use of surrogate outcome measures of knowledge utilization. It can be challenging to measure how new knowledge is integrated into clinical decision making, particularly in areas where complex reasoning is required. This study is designed to address some of those deficiencies.
The primary objective of this randomized crossover trial is to evaluate the incremental effect of a PUSH evidence resource (Pain PLUS) across professions (physicians, nursing, rehabilitation, psychologists) as compared to a standard (PULL) approach. The primary outcomes are uptake and application of evidence, and they will be assessed at baseline, 3, 9, 15 and 18 months.
Use of technology (structural evaluation): Embedded tracking will record the retrieval behavior (frequency of access; number of pain studies retrieved).
Perceived usefulness (subjective evaluation): Users will respond to randomly embedded judgments about the usefulness of their session experience.
Skills/behaviors in accessing information: An observational test of information-searching behaviors measuring how different professions access evidence for specific clinical questions and the type, quality, and usefulness of evidence retrieved.
Application (evidence-based decision making): Participants’ EBP behaviors and decision making will be measured by a standardized self-report scale, and chart-stimulated recall will measure competency in using an evidence-based approach to address problems within participants’ own clinical practice.
Our secondary objective is to evaluate professional differences in access/use of research. Therefore, we will evaluate whether the primary outcome measures—either at baseline or in response to the intervention—are modified by professional group. Further, we will perform a structured classification of the types of studies accessed by different professions to look at variations in the type of evidence valued.
We expect that (1) retrieval and application of pain evidence and satisfaction with evidence resources will improve more with the push-out intervention and (2) there will be differences between professions in attitudes towards EBP, response to evidence tools/resources, and types of evidence most accessed.
PULL intervention
Users will “see” a similar format when they access the PULL intervention, but this “placebo control” website will not include any individualized information, nor will users receive alert emails. This group can access new research but must search for it.
Computerized randomization will allocate individuals to intervention group at three months; crossover is automatic at nine months.
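Because crossover is automatic, only the initial arm assignment needs to be randomized. The sketch below illustrates this 1:1 allocation structure; it is not the trial's actual computerized system, and the function name, arm labels, and fixed seed are assumptions for illustration.

```python
import random

def allocate_crossover(participant_ids, seed=2012):
    """Randomly allocate participants 1:1 to the arm they receive first.

    One arm gets PUSH + PULL from months 3-9 and PULL from months 9-15;
    the other arm gets the reverse order. The crossover itself requires
    no further randomization.
    """
    rng = random.Random(seed)              # fixed seed gives a reproducible list
    ids = list(participant_ids)
    rng.shuffle(ids)
    first_push = set(ids[: len(ids) // 2])
    return {
        pid: ("PUSH+PULL", "PULL") if pid in first_push else ("PULL", "PUSH+PULL")
        for pid in participant_ids
    }

# Batch allocation for the protocol's target sample of 670 practitioners.
schedule = allocate_crossover(range(670))
```

In practice, allocation would be concealed and run per participant at the three-month mark; this batch version only shows the balanced-arms and automatic-crossover structure.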
Study population and recruitment
Eligible practitioners must (1) be physicians, nurses, occupational therapists (OTs), physical therapists (PTs), or psychologists who are currently working in clinical practice at least one day/week; (2) be fluent in English; (3) have access to a computer at home or at work that has unrestricted access to the World Wide Web; and (4) have an active email account. Clinicians who meet the eligibility criteria are enrolled in the study by the research assistant.
Overview of Study Assessments

The following assessments are applied throughout the trial:

- Technological functionality and design: (1) embedded quick questions on functionality; (2) number of articles accessed.
- Perceived usefulness of intervention: embedded questions on usefulness at randomly selected logins.
- Attitudes towards the use of evidence in practice: (1) attitudes about EBP, from the Knowledge/Attitude/Behaviour Questionnaire; (2) types of evidence valued, coded using a taxonomy that addresses type of pain, type of intervention, and type of research to examine preferences for information across disciplines.
- Behaviour in applying evidence in practice: (1) self-reported EBP behaviour, from the Knowledge/Attitude/Behaviour Questionnaire; (2) research access skills: skill at retrieving useful evidence when presented with a clinical question, assessed by a structured online performance-based test of information-access behavior; (3) application of research evidence in clinical decision-making, assessed by chart-stimulated recall.
Structural usability analysis will have some embedded questions on functionality but will primarily be based on tracking of how the technology is used. We will have data on the number of logins and number of pain and other (specialty) research studies in the database that are accessed. These data will be summarized by the average frequency of use per month and mean number of studies accessed. Other utilization outcomes that will be tracked by the PLUS software include the change in the average frequency of use, average number of keystrokes per month, and the average number of times practitioners use certain specific electronic resources. Collection of these outcomes will be software regulated and, thus, not require any participant burden.
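As an illustration of how such tracking data can be summarized, the following pandas sketch computes logins per participant per month and the mean number of studies accessed per login. The log schema, column names, and data are invented for illustration, not the PLUS software's actual output.

```python
import pandas as pd

# Hypothetical access log: one row per article-retrieval event.
log = pd.DataFrame({
    "participant": ["p1", "p1", "p1", "p2", "p2"],
    "login_id":    [1, 1, 2, 3, 4],
    "month":       ["2012-01", "2012-01", "2012-02", "2012-01", "2012-02"],
    "article_id":  ["a10", "a11", "a12", "a10", "a13"],
})

# Frequency of use: distinct logins per participant per month.
logins = (log.groupby(["participant", "month"])["login_id"]
             .nunique()
             .rename("logins"))

# Depth of use: mean number of distinct studies accessed per login.
studies_per_login = (log.groupby(["participant", "login_id"])["article_id"]
                        .nunique()
                        .groupby("participant")
                        .mean())
```

Summaries like these can be computed entirely from the embedded tracking, consistent with the protocol's goal of zero participant burden for usage outcomes.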
The subjective elements will focus on perceived usefulness of individual sessions. Participants will be asked at the end of each use to answer a single question with a 7-point scale, indicating the extent to which the service provided useful information. If some users fail to log off the system and invoke the final question, we will send them a query by email.
There are few well-validated outcome assessments of EBP, knowledge acquisition, or attitude/behavior change. In this study, cognitive elements will be assessed using a previously validated scale containing cognitive and behavioral questions specifically related to EBP. This questionnaire has a validated factor structure and construct validity and has demonstrated responsiveness to EBP interventions. We have added supplemental questions tailored specifically to Pain PLUS, adapted from the Evidence Updates trial follow-up evaluations, for comparison purposes.
Behavior change is the ultimate goal of KT. It will be assessed through self-report, performance tests, and observer-based competency assessments. EBP behaviors will be self-reported using the Behavior Subscale of the Knowledge Attitudes and Behaviour Questionnaire (KABQ) validated in different professions. Similar questions have demonstrated reliability and validity in nurses.
Skill at accessing research evidence
We will directly measure respondents’ skill in accessing information by providing a clinical scenario where a clinical question needs to be addressed and asking participants to search for research information to address that request. We will monitor their approach to searching and their satisfaction and intention to act upon the information acquired. This test will replicate previous methods used to compare information retrieval by physicians on Clinical Queries versus PubMed. A subset of volunteers (30/profession) will search for information to address two posed clinical questions: one scenario will be a uniform multidisciplinary question (provided to all participants) and the other will be profession specific. Using our tracking software within Pain PLUS, we will be able to identify what search terms are used, what studies are located, the efficiency of their search strategies (e.g., percent of relevant articles, time to find), and effectiveness of their search (quality of the relevant articles located). Volunteers will rate the usefulness of the research located. Our previous study determined that Clinical Queries was more efficient for physicians. In this substudy, we will randomly allocate half of the participants to access through PubMed traditional search versus Clinical Queries, so we can assess whether Clinical Queries is more useful across professions. This subanalysis will provide more detailed and direct data on professional differences in accessing information.
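The efficiency measures named above (percent of relevant articles, time to find) can be derived from session logs along the following lines. This is a hypothetical sketch; the function, data structures, and example values are assumptions rather than the tracking software's actual interface.

```python
def search_efficiency(retrieved, relevant, timestamps):
    """Summarize one participant's search session.

    retrieved  -- ordered list of article ids the participant opened
    relevant   -- set of article ids judged relevant for the scenario
    timestamps -- seconds from session start at which each article was opened
    Returns (percent relevant, time in seconds to the first relevant
    article, or None if no relevant article was found).
    """
    hit_times = [t for a, t in zip(retrieved, timestamps) if a in relevant]
    pct_relevant = 100.0 * sum(a in relevant for a in retrieved) / len(retrieved)
    time_to_first = min(hit_times) if hit_times else None
    return pct_relevant, time_to_first

# Example session: two of four opened articles were judged relevant.
pct, ttf = search_efficiency(
    retrieved=["a1", "a2", "a3", "a4"],
    relevant={"a2", "a4"},
    timestamps=[30, 75, 120, 200],
)
```

Effectiveness (quality of the relevant articles located) would layer the appraisal ratings already stored in the database on top of these per-session metrics.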
Application of research evidence
We will obtain scores and a more detailed description of actual application of pain research evidence use before and after the intervention by performing chart-stimulated recall (CSR). This approach yields a rich and deep understanding of how evidence is used in practice based on semistructured interviews that probe how/why clinicians make choices about cases retrieved from their own records. A predetermined set of competencies is probed during the interview, and performance is rated on a 7-point Likert scale. The score provides a quantitative estimate of competency in using evidence to manage pain in the clinician’s own practice. We have successfully coded previous CSR interviews using content-analysis techniques. We have successfully used CSR in a KT study to evaluate competency in clinician decision making, and others have indicated CSR is reliable and valid for assessing different types of practice competency in physicians, nurses, and rehabilitation practitioners [73–75, 77, 78].
Timing of measures
The KABQ will be administered at baseline, 3, 9, 15, and 18 months. Other measures are embedded and distributed over time. CSR will be performed in a subset of 30/group during the first randomization cycle (at baseline and nine months). The skill assessment will be conducted in the final three months of the trial to allow maximum skill development to occur so that our comparisons of professional differences will be less affected by learning curves.
Mediators of use of evidence/intervention response
It is important to monitor and potentially adjust for important covariates that influence study outcomes. We will measure the following variables for each clinician: baseline knowledge/attitudes towards EBP, years of practice, highest degree, and comfort with technology. Organizational variables will also be collected at baseline, including rural/urban location, clinic size, access to online journals, peer support, administrative support, and management support. These variables will be entered into an analysis of covariance examining pre- and post-intervention effects. Our sample size is sufficiently large that we will be able to assess the primary effects of these variables, as well as interactions between organizational variables and participant variables.
To determine the appropriate sample size for this trial, the following factors were considered: (1) the primary and secondary outcomes in the trial, (2) clinically significant differences for the outcomes based on our previous RCT, (3) Type I (alpha) and Type II (beta) errors, (4) estimates of variation from our pilot data, (5) study design, and (6) dropouts. These calculations represent sample size for the overall comparisons between the four professional groups and two intervention groups. Sample-size estimation was based on detecting an effect size of 0.40 between groups on any of the three aspects of outcome (usability, usefulness, and knowledge/attitude/behavior). Assuming a Type I error of 0.05 (two-tailed), power of 0.80 (Type II error = 0.20), and an effect size of 0.40, the sample size required per group is 80. The sample size required for two comparison groups across four professions = 80 × 2 × 4 = 640. To account for our dropout rate (8/431), we have conservatively added 30 to recruit a sample of 670 practitioners. For analysis of response mediators, an R² of 30% corresponds to an effect size of 0.42; 600 practitioners provides more than 99% power to detect this effect size with 12 predictors.
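For orientation, a textbook independent-samples calculation under the stated alpha, power, and effect size looks like the sketch below. It yields a somewhat larger n than the protocol's 80 per group, which is expected: this simple formula ignores the efficiency crossover designs gain from within-subject comparisons, so it is only a rough upper-bound check, not a reproduction of the protocol's calculation.

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    independent-samples comparison of means (Cohen's d effect size)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05, two-tailed
    z_beta = norm.ppf(power)            # 0.84 for power = 0.80
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

n = n_per_group(0.40)   # roughly 98 per group under these assumptions
```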
This study is organized locally but recruits participants internationally. Our multimodal recruitment strategy will use professional associations, websites/conferences, and social media to inform practitioners of the study and the potential for free access to evidence resources.
Proposed methods for protecting against other sources of bias
Control over eligibility violations
Participants will complete a baseline screen with the study research assistant, which will be independently reviewed by a trainee for violations. These will be reported (as a protocol diligence indicator), but no participants will be removed. No further rechecks of eligibility or exclusions will be included after this screening.
Control of biases through blinding
We have attempted to create a blinded control by creating a similar front-face to the website and blinding participants to the nature of the difference in the two resources (push versus pull), although we acknowledge differences will become apparent to users. Since usage data are embedded and automated, they are not influenced by data collectors or social bias. Research assistants and data analysts will be blinded to group allocation for the duration of the study and data analysis. After crossover, subjects will be aware of the difference and, therefore, no longer be blinded. Thus, the overall potential for bias due to lack of blinding has been minimized.
Control of contamination and co-intervention
Inside the study, participant crossovers are impossible since we control access to the intervention. Contamination through participants signing up under an alias for the open-access physician service is possible but improbable. As part of enrollment, we will ask participants to refrain from using any new evidence access tools/services or EBP training during the study trial and will re-evaluate their reports of services used at the end of each intervention cycle. Co-intervention will be discouraged, including attendance at intensive EBP training (although we expect that routine exposure in settings is neither controllable nor a concern since these pre-existing efforts have not been effective).
Ensuring reproducibility and reliability of measurements of outcome
We will use reliable and valid measures of attitudes to EBP that have been used with the studied clinical disciplines [44, 76, 80, 81]. CSR will be performed by two independent raters and checked for initial reliability.
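Initial agreement between the two CSR raters could be checked along these lines. This is a generic sketch with invented ratings; since CSR scores are ordinal 7-point ratings, a weighted kappa or an intraclass correlation coefficient would usually be preferred to the unweighted Cohen's kappa shown here.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters scoring the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal score frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical 7-point CSR competency ratings for ten interviews.
a = [5, 6, 4, 7, 5, 6, 6, 4, 5, 7]
b = [5, 6, 4, 6, 5, 6, 7, 4, 5, 7]
kappa = cohens_kappa(a, b)
```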
Control of biases relating to (loss to) follow-up
We will use the following measures to ensure minimal loss to follow-up: (1) enrollment/consent will emphasize committing to the entire trial period and will exclude clinicians who cannot commit to the study for 18 months, (2) contact information will be updated at each study interval, and (3) response burden has been kept low. Burden is reduced since brief measures are embedded in the intervention; the total time required for extra questionnaires is relatively low and will be time distributed. The system will initiate regular reminders until the data collection is complete (or the participant withdraws consent).
Compliance is one of our monitored variables. When participants fail to use an intervention, they are classified in some studies as noncompliant. We view lack of use as a failure of the intervention to meet participant needs. Thus, participants who do not use this service will not be classified as noncompliant or dropouts. Dropouts will be defined as participants who inform us that they no longer wish to participate in the study follow-up process. These participants will not be denied access to the intervention, and we will request permission to continue to use data from the embedded tracking of usage. Each participant will receive a $45 gift card upon completion of at least three survey assessments: baseline, one follow-up (at 3, 9, or 15 months), and the final assessment. Participants who complete the CSR or the search skills assessment will receive an additional $20 gift card as an honorarium for time volunteered.
Planned data analyses
Quality checking of primary and secondary outcome data will be performed by re-entering 100 cases and checking concordance. Descriptive statistics, including frequencies, means, and standard deviations, will be calculated and data displayed graphically for all study variables as a means of exploring data distributions. The data for all outcome measures will be analyzed to ensure that they meet the statistical assumptions necessary to use parametric statistics. Transformations may be considered if necessary to meet test assumptions. To test our main research question, we will use general linear mixed-effects models for repeated measurements over time to explore the relationship among the use/behavior scores, comparing the effects between professional groups and intervention types. The models will be fitted for the outcome scores over time as a function of the between-participant (profession/intervention) and within-participant (time) variables. A mixed-model approach allows us to account for the dependence between outcome measurements taken over time from the same participant. The structure of the variance-covariance matrix, which dictates the dependence between measurements in the fitting of the model, will be chosen based on standard statistical criteria. Post hoc pairwise comparisons of the two treatment arms at different time points will be made using multiple comparisons with adjustment of the Type I error rate. Type I error will be set to 5% (i.e., α = 0.05) for all confidence-interval calculations and hypothesis tests, and adjusted accordingly for multiple comparisons. Standard diagnostic tools will be used to assess model fit. An intent-to-treat analysis will be employed: those who do not participate will be included in the analysis in their original group.
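A minimal sketch of such a repeated-measures mixed model, using statsmodels on simulated data, follows. The variable names, the random-intercept-only covariance structure, and the simulated effect sizes are assumptions for illustration, not the trial's analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate 40 clinicians measured at the protocol's five time points.
rng = np.random.default_rng(0)
rows = []
for s in range(40):
    subj_eff = rng.normal(0, 0.5)                 # clinician-level random intercept
    arm = "push_pull" if s % 2 == 0 else "pull"
    slope = 0.08 if arm == "push_pull" else 0.02  # assumed improvement per month
    for t in [0, 3, 9, 15, 18]:
        rows.append({"subject": s, "arm": arm, "time": t,
                     "score": 3.0 + slope * t + subj_eff + rng.normal(0, 0.3)})
data = pd.DataFrame(rows)

# Random-intercept model: does the score trajectory differ between arms?
fit = smf.mixedlm("score ~ time * arm", data, groups=data["subject"]).fit()
interaction = fit.params["time:arm[T.push_pull]"]   # arm-by-time effect estimate
```

The `time:arm` interaction is the term of interest here: it estimates how much faster scores improve under PUSH + PULL than under PULL, while the random intercept absorbs the within-clinician correlation across repeated measurements.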
No interim analyses are planned since there are no safety issues and we wish to ensure full power for both primary and secondary analyses.
We also plan to analyze differences in professional attitudes, evidence access, and use. We have powered our study to allow this analysis. Our subgroup analyses will also describe differences in the types of evidence accessed. Our group previously determined that physicians prefer systematic reviews to primary studies and that access was better through peer-reviewed journals than through the Cochrane Collaboration; we will now extend this type of analysis to look at evidence accessed across professions by comparing the types of primary studies (clinical, basic science; quantitative/qualitative), content (prognosis, diagnosis, clinical measurement, treatment effectiveness/outcomes, economic, etiology), and journals (disciplinary/interdisciplinary/other discipline).
The Steering Committee will consist of the four investigators. The study coordinator will prepare a monthly report, detailing the accrual rate, loss to follow-up, withdrawals, and compliance with study protocols. Since this trial does not directly involve patients, there is no need for a Data Safety Monitoring Committee.
We have enrolled more than 70% of the sample target. More than 60% of the participants have reached the nine-month milestone (crossover).
This trial should provide unique information on evidence-access skills and professional differences in use of evidence. We expect to demonstrate the effectiveness of push-out targeted evidence (Pain PLUS) and to move the intervention into open access upon trial completion. We expect that some differences in evidence access and application will occur across professional groups, both in terms of types of pain interventions and types of research that are accessed.
It is our intent to ensure this intervention becomes open access so that it has ongoing impact. We believe this trial will inform our understanding of how best to deliver evidence to clinicians across a variety of professions dealing with pain. Sustainability of KT interventions is a critical issue, and we will address it as our findings emerge. Since evidence updates have been created for specific disciplines, including physicians, nurses, and rehabilitation professionals, folding a pain resource into these existing resources may be the optimal method to ensure sustainability. Conversely, pain stakeholders may prefer a customized resource. A variety of pain resources are being developed, and strategies to share these across stakeholders are just emerging in stakeholder discussions. This may form an opportunity to tailor the knowledge from this study to emerging resources. Knowledge about what pain evidence exists, and how it is accessed and valued across professions, will allow us to develop a pain knowledge strategy designed to maximize long-term uptake of pain knowledge.
One of the benefits of this study is the ability to compare the evidence accessed by different professions when dealing with the same issue. We will use a descriptive approach, classifying the type of pain, the type of clinical questions, and the type of research design of the studies accessed, to determine whether different types of evidence are valued by different professions. We anticipate, for example, that nurses and therapists might access more qualitative research, whereas physicians might access more RCTs on drug interventions. Psychologists are being included for the first time in this type of intervention, and their information behaviors are relatively unstudied. We expect that this information might help us understand whether evidence scanning and filtering needs to be adjusted across professions. For example, decisions about how to filter qualitative studies may be more challenging than in quantitative research.
We also acknowledge that the information resources that are available to knowledge users are rapidly expanding. Resources vary in their delivery, quality, customization, source content, and format of information. Knowledge users may become overwhelmed with information resources or stick with ones they have become comfortable with, irrespective of the quality of the information. It may not be clear to end users that the evidence-filtering process used in Pain PLUS protects them against low-quality information. Further, as clinicians often value advice about clinical decision making in actual cases, or implementation issues, they may be drawn to information resources that focus on these rather than the primary or synthesized research available. Since information is always in competition with other information, we cannot be certain that Pain PLUS will win in this competition.
This study is funded by the Canadian Institutes of Health Research (CIHR). The authors thank Margaret Lomotan for assistance with the manuscript preparation and study coordination.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.