
“Push” versus “Pull” for mobilizing pain evidence into practice across different health professions: A protocol for a randomized trial



Optimizing pain care requires ready access to and use of the best evidence within and across different disciplines and settings. The purpose of this randomized trial is to determine whether a technology-based “push” of new, high-quality pain research to physicians, nurses, and rehabilitation and psychology professionals results in better knowledge and clinical decision making around pain, when offered in addition to traditional “pull” evidence technology. A secondary objective is to identify disciplinary variations in response to evidence and differences in the patterns of accessing research evidence.


Physicians, nurses, occupational/physical therapists, and psychologists (n = 670) will be randomly allocated in a crossover design to receive a pain evidence resource in one of two different ways. Evidence is extracted from medical, nursing, psychology, and rehabilitation journals; appraised for quality and relevance; and either sent out (PUSHed) to clinicians by email alerts or made available for searches of the accumulated database (PULL). Participants are allocated to either PULL or PUSH + PULL in a randomized crossover design. The PULL intervention has a similar interface but does not send alerts; clinicians must go to the site and enter search terms to retrieve evidence from the cumulative, continuously updated online database. Upon entry to the trial, participants have three months of access to PULL, followed by random allocation. After six months, crossover takes place. The study ends with a final three months of access to PUSH + PULL. The primary outcomes are uptake and application of evidence. Uptake will be determined by embedded tracking of what research is accessed during use of the intervention. A random subset of 30 participants/discipline will undergo chart-stimulated recall to assess the nature and depth of evidence utilization in actual case management at baseline and 9 months. A different random subset of 30 participants/discipline will be tested for their skills in accessing evidence using a standardized simulation test (final 3 months). Secondary outcomes include usage and self-reported evidence-based practice attitudes and behaviors measured at baseline, 3, 9, 15, and 18 months.


The trial will inform our understanding of information preferences and behaviors across disciplines/practice settings. If this intervention is effective, sustained support will be sought from professional/health system initiatives with an interest in optimizing pain management.

Trial registration

Registered as NCT01348802.



Acute and chronic pain affects a large number of Canadians. The prevalence of chronic pain in Canadian adults has been estimated to range from 18% to 29% [1–4], with 60% experiencing major losses of occupation or function [5]. The 2005 Canadian Community Health Survey found that chronic pain affected 27% of community seniors [2]. In Canada, current healthcare spending to address this problem is $6 billion/year [5] and, with an aging population, is expected to rise to $10 billion/year by 2025.

There is considerable variation in practice and widespread patient dissatisfaction with the way pain is managed across the spectrum, from relatively benign conditions to terminal illness. For example, in low back pain [6], unresolved pain is a common reason for repeat visits. Inadequately managed pain in serious illness is the most common reason for requesting assisted suicide [7]. Conversely, there are societal concerns about the use of narcotic medications for pain management and the potential impacts of their misuse [8].

Many clinicians are not aware of recent pain research, and misconceptions are common [9, 10]. In fact, health professionals may begin clinical practice with insufficient knowledge about pain, as illustrated by a survey of entry-level curricula for physicians, nurses, and rehabilitation professionals. The majority (68%) of programs did not specify designated hours for pain education (only 32.5% of respondents could identify specific hours allotted for pain course content), and the total amount of time allocated to pain varied from 13 to 41 hours. Of interest, veterinary students spent twice as much time studying pain [11]. Studies spanning different types of pain and health professions indicate a large gap between evidence and practice, reflected in a high prevalence of low competence/confidence in pain management [12–19].

Pain management is central to health disciplines. Physicians, nurses, and rehabilitation professionals are commonly involved in managing acute and chronic pain [20]. Pain is one of the most common reasons for patients to consult these professions [21]. While scope of practice and focus may vary across professions, pain management provides an important context to study professional differences in use of evidence, since pain is a high priority for all and an area where interprofessional practice is supported by evidence [22–26].

Evidence-based practice (EBP) can narrow the gap between research knowledge and practice [27]. Studies suggest that an EBP approach is used in nursing [28], medical [29], and rehabilitation practice [30–32] with variable success. Increased use of evidence-based approaches [33] or supports such as guidelines or decision aids is linked to improved clinical decision making and patient care [34, 35], but the effects are not consistent [36, 37] and, thus, cannot be assumed to work in all contexts.

Professional associations and individual clinicians in rehabilitation are highly supportive of EBP [30, 32, 38–45]. Despite positive attitudes and motivation, many studies have reported barriers to effective implementation of research evidence in clinical practice [46]. Several studies indicate that clinicians experience time as an obstacle to searching for research evidence [47, 48], and clinicians also admit that they lack the skills required to navigate literature databases and to appraise medical literature effectively [32, 49, 50]. A lack of time [40–42] combined with inadequate searching and appraisal skills [20, 21, 43–45] are the primary barriers reported. On average, answering a clinical question using a traditional EBP approach has been estimated to take 53 minutes, with database searches reported to take 39 minutes and obtaining the articles 25 minutes [51].

Traditional approaches to knowledge translation (KT) in EBP have focused on providing knowledge and technical EBP “skills” to practitioners, leaving them with the burden of searching and appraising evidence [40, 52–56]. Studies suggest these interventions change knowledge about EBP, but not behavior [57, 58]. Practitioners have limited success “pulling out” relevant, high-quality evidence because they lack the required searching, filtering, and appraisal skills [40–43, 51, 54, 59, 60]. Even with training, the time demands are substantial [51, 56]. Current tools provide primitive alerting services that do not target specific practitioners but instead send information by “dumping” procedures. As a result, practitioners are overwhelmed with a large volume of information, most of which does not pertain to their practice area; either these services are not used or their alerts are ignored. A basic flaw in both the training and dumping-out KT approaches is that they fail to resolve the essential barriers of time pressure and lack of appraisal/filtering skills.

Evidence-based decision making originated within medicine and has since spread to other professions. Limited studies, mostly in the United Kingdom, have addressed differences in attitude or awareness across professions [61, 62]. Familiarity with evidence resources is less than might be expected. In 2009, only 44% of urologists were aware of the PubMed search engine, and only 14% used it regularly [63].

Systematic reviews suggest technology-based EBP can enhance care [34, 64, 65], although the evidence is sporadic. Working directly with the Health Information Research Unit (HIRU) at McMaster University, we developed an approach to push out targeted, clinically relevant, high-quality research evidence to practitioners. This Premium LiteratUre Service (PLUS) [66, 67] removes the burden of appraising research evidence: technical experts perform basic critical appraisal, and the high-quality studies and systematic reviews are then filtered using ratings of relevance and interest from expert clinicians in the field, so that only what is clinically relevant and new reaches the individual clinician. We know this approach increases uptake among physicians [66] and is valued by nurses [28] and rehabilitation users (pilot data). In a cluster randomized trial funded by the Canadian Institutes of Health Research involving physicians in Northern Ontario, PLUS was shown to increase the use of medical literature by 57% and to substantially increase the depth of use of evidence-based resources [57]. One of the limitations of KT research on this topic to date has been the use of surrogate outcome measures of knowledge utilization. It can be challenging to measure how new knowledge is integrated into clinical decision making, particularly where complex reasoning is required. This study is designed to address some of those deficiencies.

The primary objective of this randomized crossover trial is to evaluate the incremental effect of a PUSH evidence resource (Pain PLUS) across professions (medicine, nursing, rehabilitation, psychology) as compared to a standard (PULL) approach. The primary outcomes are uptake and application of evidence, assessed at baseline, 3, 9, 15, and 18 months.


The trial is a crossover randomized controlled trial (RCT) with repeated baseline and follow-up measurements (Figure 1). The interventions are two different types of access to pain research evidence for four different types of professionals involved in pain management. The trial will begin with a three-month (repeated) baseline, during which average participant use of the standard PULL resource will be monitored. Participants will then be randomly allocated to either receive PUSH + PULL or continue to use the PULL resource. After six months, participants will cross over to the alternate intervention for an additional six months. To complete the trial, both groups will finish with three months of PUSH + PULL access. The repeated baseline period provides more stable estimates of participants’ access prior to randomization. The rationale for the crossover design is that it maximizes statistical efficiency, increasing the precision of effect estimates; this is particularly important for our subgroup (profession) estimates, and it also provides better control of unknown potential confounders. The main potential drawback of a crossover design occurs if the “wash-out” is incomplete. While we control intervention delivery, prior exposure to email alerts may affect subsequent use of the PULL resource; therefore, we will test for order effects. The study received ethics approval from the McMaster University Research Ethics Board.

Figure 1. Study Design.

Traditional RCT methodology supports a single primary outcome measure, but our outcome evaluation is grounded in a conceptual framework for technology-based KT [68, 69]. The indicators of intervention effectiveness for this study are as follows:

  1. Use of technology (structural evaluation): Embedded tracking will record retrieval behavior (frequency of access; number of pain studies retrieved).

  2. Perceived usefulness (subjective evaluation): Users will respond to randomly embedded judgments about the usefulness of their session experience.

  3. Skills/behaviors in accessing information: An observational test of information-searching behaviors measuring how different professions access evidence for specific clinical questions and the type, quality, and usefulness of evidence retrieved.

  4. Application: Evidence-based decision making

     a. Participants’ EBP behaviors and decision making will be measured by a standardized self-report scale.

     b. Chart-stimulated recall will measure competency for using an evidence-based approach to address problems within participants’ own clinical practice.

Our secondary objective is to evaluate professional differences in access/use of research. Therefore, we will evaluate whether the primary outcome measures—either at baseline or in response to the intervention—are modified by professional group. Further, we will perform a structured classification of the types of studies accessed by different professions to look at variations in the type of evidence valued.

We expect that (1) retrieval and application of pain evidence and satisfaction with evidence resources will improve more with push-out intervention and (2) there will be differences between professions in attitudes towards EBP, response to evidence tools/resources, and types of evidence most accessed.


All participants will receive unique IDs and passwords and will be able to log into an evidence resource, Pain PLUS (Figure 2), to search for recent studies and systematic reviews of pain management (PULL). Once signed up, all practitioners will receive more specific instructions on using the evidence repository, which holds a cumulative collection of research evidence that has evolved over time: first for medicine, then nursing, and then rehabilitation and psychology. It contains a searchable database with links to PubMed abstracts and direct links to any free full-text articles.

Figure 2. Pain PLUS Intervention: screenshot of the Pain PLUS main webpage.

PUSH + PULL intervention

Pain PLUS is a customized, personalized, evidence-based alerting and look-up service that links clinicians to the new research findings most relevant to their clinical practice. The service is based on ongoing hand searches of over 120 clinical journals. Trained, experienced research staff determine a scientific merit/quality rating. Discipline-specific clinical ratings of relevance and newsworthiness are obtained from volunteer raters who agree to screen abstracts; raters are in active clinical practice and are recruited from the intervention target group communities (“educational influentials”). All study participants will be signed into the pain content of the service but can adjust the email frequency and the relevance and quality cut-offs to control how much evidence they receive and how often. Their experience will be a customized notification of new research (Figure 3).

Figure 3. Pain PLUS Intervention: screenshot of an example Pain PLUS email alert.

PULL intervention

Users will “see” a similar format when they access the PULL intervention, but this “placebo control” website will not include any individualized information, nor will users receive alert emails. This group can access new research but must search for it.


Computerized randomization will allocate individuals to an intervention group at three months; crossover is automatic at nine months.
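The allocation step can be sketched as a stratified randomization that assigns each profession’s participants evenly to the two crossover sequences. This is an illustrative sketch only; the participant structure, sequence labels, and `allocate` helper are hypothetical and not the trial’s actual software:

```python
import random

PROFESSIONS = ["physician", "nurse", "OT/PT", "psychologist"]
SEQUENCES = ["PULL then PUSH+PULL", "PUSH+PULL then PULL"]

def allocate(participants, seed=2011):
    """Stratified randomization: within each profession, shuffle the IDs
    and assign half to each crossover sequence.

    `participants` maps a participant ID to a profession (hypothetical format).
    The fixed seed is shown only to make the sketch reproducible.
    """
    rng = random.Random(seed)
    allocation = {}
    for prof in PROFESSIONS:
        ids = [pid for pid, p in sorted(participants.items()) if p == prof]
        rng.shuffle(ids)
        half = len(ids) // 2
        for pid in ids[:half]:
            allocation[pid] = SEQUENCES[0]
        for pid in ids[half:]:
            allocation[pid] = SEQUENCES[1]
    return allocation
```

Stratifying by profession keeps the two sequences balanced within each discipline, which matters here because the profession subgroups are themselves an analysis target.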

Study population and recruitment

Eligible practitioners must (1) be physicians, nurses, occupational therapists (OTs), physical therapists (PTs), or psychologists who are currently working in clinical practice at least one day/week; (2) be fluent in English; (3) have access to a computer at home or at work that has unrestricted access to the World Wide Web; and (4) have an active email account. Clinicians who meet the eligibility criteria are enrolled in the study by the research assistant.

Outcome measures

We will use an evaluation framework developed specifically for technology-enabled knowledge translation [68–70] to evaluate our primary research question on KT effectiveness, as outlined below and in Table 1. When evaluating technology, it is important to determine whether the technology performs as expected (structural) and in a way that users find useful (subjective) in order to understand impacts on higher-level outcomes (cognitive and behavioral).

Table 1 Overview of Study Assessments


Structural usability analysis will include some embedded questions on functionality but will primarily be based on tracking of how the technology is used. We will have data on the number of logins and the number of pain and other (specialty) research studies accessed in the database. These data will be summarized by the average frequency of use per month and the mean number of studies accessed. Other utilization outcomes tracked by the PLUS software include the change in average frequency of use, average number of keystrokes per month, and the average number of times practitioners use specific electronic resources. Collection of these outcomes is software regulated and thus imposes no burden on participants.
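The per-month summarization described above can be sketched as follows; the event-tuple format and `monthly_usage` helper are hypothetical illustrations, not the PLUS software’s actual schema:

```python
from collections import defaultdict
from datetime import date

def monthly_usage(events):
    """Tally logins and studies accessed per participant per calendar month.

    `events` is a list of (participant_id, date, kind) tuples, where kind
    is "login" or "study_accessed" (hypothetical field names).
    """
    summary = defaultdict(lambda: {"login": 0, "study_accessed": 0})
    for pid, day, kind in events:
        # Key on (participant, year, month) so averages per month fall out easily
        summary[(pid, day.year, day.month)][kind] += 1
    return dict(summary)

events = [
    (1, date(2012, 1, 5), "login"),
    (1, date(2012, 1, 5), "study_accessed"),
    (1, date(2012, 2, 1), "login"),
]
print(monthly_usage(events)[(1, 2012, 1)])  # → {'login': 1, 'study_accessed': 1}
```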


The subjective elements will focus on perceived usefulness of individual sessions. Participants will be asked at the end of each use to answer a single question with a 7-point scale, indicating the extent to which the service provided useful information. If some users fail to log off the system and invoke the final question, we will send them a query by email.



There are few well-validated outcome assessments of EBP, knowledge acquisition, or attitude/behavior change. In this study, cognitive elements will be assessed using a previously validated scale containing cognitive and behavioral questions specifically related to EBP [71]. This questionnaire has a validated factor structure and construct validity and has demonstrated responsiveness to EBP interventions [71]. We have added supplemental questions tailored specifically to Pain PLUS, adapted from the Evidence Updates trial [66] follow-up evaluations, for comparison purposes.


Behavior change is the ultimate goal of KT. It will be assessed through self-report, performance tests, and observer-based competency assessments. EBP behaviors will be self-reported using the Behavior Subscale of the Knowledge Attitudes and Behaviour Questionnaire (KABQ) validated in different professions [71]. Similar questions have demonstrated reliability and validity in nurses [76].

Skill at accessing research evidence

We will directly measure respondents’ skill in accessing information by providing a clinical scenario where a clinical question needs to be addressed and asking participants to search for research information to address that request. We will monitor their approach to searching and their satisfaction and intention to act upon the information acquired. This test will replicate previous methods used to compare information retrieval by physicians on Clinical Queries versus PubMed [72]. A subset of volunteers (30/profession) will search for information to address two posed clinical questions: one scenario will be a uniform multidisciplinary question (provided to all participants) and the other will be profession specific. Using our tracking software within Pain PLUS, we will be able to identify what search terms are used, what studies are located, the efficiency of their search strategies (e.g., percent of relevant articles, time to find), and effectiveness of their search (quality of the relevant articles located). Volunteers will rate the usefulness of the research located. Our previous study determined that Clinical Queries was more efficient for physicians [72]. In this substudy, we will randomly allocate half of the participants to access through PubMed traditional search versus Clinical Queries, so we can assess whether Clinical Queries is more useful across professions. This subanalysis will provide more detailed and direct data on professional differences in accessing information.
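The efficiency metric described above (the percentage of retrieved articles judged relevant) can be sketched as follows; the session fields are hypothetical illustrations of what tracking software might record, not the study’s actual data structures:

```python
from dataclasses import dataclass

@dataclass
class SearchSession:
    """One participant's logged search (illustrative fields, not a real schema)."""
    terms: list           # search terms entered
    retrieved: list       # article IDs located by the search
    relevant: set         # IDs judged relevant by raters
    seconds_to_find: float

def percent_relevant(session: SearchSession) -> float:
    """Search efficiency: percent of retrieved articles that were relevant."""
    if not session.retrieved:
        return 0.0
    hits = sum(1 for a in session.retrieved if a in session.relevant)
    return 100.0 * hits / len(session.retrieved)

s = SearchSession(["chronic low back pain"], ["a1", "a2", "a3", "a4"], {"a1", "a3"}, 42.0)
print(percent_relevant(s))  # → 50.0
```

Effectiveness (quality of the relevant articles located) and time-to-find would be computed analogously from the same logged sessions.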

Application of research evidence

We will obtain scores and a more detailed description of the actual application of pain research evidence before and after the intervention by performing chart-stimulated recall (CSR). This approach yields a rich and deep understanding of how evidence is used in practice, based on semistructured interviews that probe how and why clinicians make choices about cases retrieved from their own records. A predetermined set of competencies is probed during the interview, and performance is rated on a 7-point Likert scale. The score provides a quantitative estimate of competency in using evidence to manage pain in the participant’s own practice. We have successfully coded previous CSR interviews using content-analysis techniques. We have successfully used CSR in a KT study [73] to evaluate competency in clinician decision making, and others have indicated CSR is reliable and valid for assessing different types of practice competency in physicians, nurses, and rehabilitation practitioners [73–75, 77, 78].

Timing of measures

The KABQ will be administered at baseline, 3, 9, 15, and 18 months. Other measures are embedded and distributed over time. CSR will be performed in a subset of 30/group during the first randomization cycle (at baseline and nine months). The skill assessment will be conducted in the final three months of the trial to allow maximum skill development to occur so that our comparisons of professional differences will be less affected by learning curves.

Mediators of use of evidence/intervention response

It is important to monitor and potentially adjust for important covariates that influence study outcomes. We will measure the following variables for each clinician: baseline knowledge/attitudes towards EBP, years of practice, highest degree, and comfort with technology. Organizational variables will also be collected at baseline, including rural/urban location, clinic size, access to online journals, peer support, administrative support, and management support. These variables will be entered into an analysis of covariance examining pre- and post-intervention effects. Our sample size is sufficiently large that we will be able to assess the primary effects of these variables, as well as interactions between organizational variables and participant variables.

Sample-size calculations

To determine the appropriate sample size for this trial, the following factors were considered: (1) the primary and secondary outcomes in the trial, (2) clinically significant differences for the outcomes based on our previous RCT, (3) Type I (alpha) and Type II (beta) errors, (4) estimates of variation from our pilot data, (5) study design, and (6) dropouts. These calculations represent the sample size for the overall comparisons between the four professional groups and two intervention groups. Sample-size estimation was based on detecting an effect size of 0.40 between groups on any of the three aspects of outcome (usability, usefulness, and knowledge/attitude/behavior). Assuming a Type I error of 0.05 (two-tailed), power of 0.80 (Type II error = 0.20), and an effect size of 0.40, the sample size required per group is 80. The sample size required for two comparison groups across four professions = 80 × 2 × 4 = 640. To account for our dropout rate (8/431), we have conservatively added 30 to recruit a sample of 670 practitioners. For the analysis of response mediators, an R² of 30% corresponds to an effect size of 0.42; 600 practitioners provides more than 99% power to detect this effect size with 12 predictors [79].
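For readers who want to reproduce the per-group figure, the standard normal-approximation formula for comparing two group means can be computed with the Python standard library. This is an illustrative check only, not the protocol’s original calculation, which may have incorporated additional design assumptions (e.g., the within-subject correlation of the crossover design or a one-sided alternative):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80, two_sided=True):
    """Normal-approximation sample size per group for comparing two means:
    n = 2 * ((z_alpha + z_beta) / d) ** 2, rounded up."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / (2 if two_sided else 1))
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(0.40))                   # two-sided → 99 per group
print(n_per_group(0.40, two_sided=False))  # one-sided → 78 per group
```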

This study is organized locally but recruits participants internationally. Our multimodal recruitment strategy will use professional associations, websites/conferences, and social media to inform practitioners of the study and of the potential for free access to evidence resources.

Proposed methods for protecting against other sources of bias

Control over eligibility violations

Participants will complete a baseline screen with the study research assistant, which will be independently reviewed by a trainee for violations. These will be reported (as a protocol diligence indicator), but no participants will be removed. No further rechecks of eligibility or exclusions will be included after this screening.

Control of biases through blinding

We have attempted to create a blinded control by giving the two resources a similar front face and by blinding participants to the nature of the difference between them (push versus pull), although we acknowledge the differences will become apparent to users. Since usage data are embedded and automated, they are not influenced by data collectors or social bias. Research assistants and data analysts will be blinded to group allocation for the duration of the study and data analysis. After crossover, subjects will be aware of the difference and, therefore, no longer blinded. Thus, the overall potential for bias due to lack of blinding has been minimized.

Control of contamination and co-intervention

Inside the study, participant crossovers are impossible since we control access to the intervention. Contamination through participants signing up under an alias to access the open-access physician content of the service is possible, but not probable. As part of enrollment, we will ask participants to refrain from using any new evidence access tools/services or EBP training during the trial and will re-evaluate their reports of services used at the end of each intervention cycle. Co-intervention will be discouraged, including attendance at intensive EBP training (although we expect that routine exposure in practice settings is neither controllable nor a concern, since these pre-existing efforts have not been effective).

Ensuring reproducibility and reliability of measurements of outcome

We will use reliable and valid measures of attitudes to EBP that have been used with the studied clinical disciplines [44, 76, 80, 81]. CSR will be performed by two independent raters and checked for initial reliability.

Control of biases relating to (loss to) follow-up

We will use the following measures to ensure minimal loss to follow-up: (1) enrollment/consent will emphasize committing to the entire trial period and will exclude clinicians who cannot commit to the study for 18 months, (2) contact information will be updated at each study interval, and (3) response burden has been kept low. Burden is reduced since brief measures are embedded in the intervention; the total time required for extra questionnaires is relatively low and will be time distributed. The system will initiate regular reminders until the data collection is complete (or the participant withdraws consent).


Compliance is one of our monitored variables. When participants fail to use an intervention, some studies classify them as noncompliant; we view lack of use as a failure of the intervention to meet participant needs. Thus, participants who do not use the service will not be classified as noncompliant or as dropouts. Dropouts will be defined as participants who inform us that they no longer wish to participate in the study follow-up process. These participants will not be denied access to the intervention, and we will request permission to continue using data from the embedded tracking of usage. Each participant will receive a $45 gift card upon completion of at least three survey assessments: baseline, one follow-up (at 3, 9, or 15 months), and the final assessment. Participants who complete the CSR or the search skills assessment will receive an additional $20 gift card as an honorarium for the time volunteered.

Planned data analyses

Quality checking of primary and secondary outcome data will be performed by re-entering 100 cases and checking concordance. Descriptive statistics, including frequencies, means, and standard deviations, will be calculated, and data will be displayed graphically for all study variables as a means of exploring data distributions. The data for all outcome measures will be analyzed to ensure that they meet the statistical assumptions necessary for parametric statistics; transformations may be considered if necessary to meet test assumptions. To test our main research question, we will use general linear mixed-effects models [82] for repeated measurements over time to explore the relationships among the use/behavior scores, comparing effects between professional groups and intervention types. The models will be fitted for the outcome scores over time as a function of the between-participant (profession/intervention) and within-participant (time) variables. A mixed-model approach allows us to account for the dependence between outcome measurements taken over time from the same participant. The structure of the variance-covariance matrix, which dictates the dependence between measurements in the fitting of the model, will be chosen based on standard statistical criteria. Post hoc pairwise comparisons of the two treatment arms at different time points will be done using multiple comparisons with adjustment of the Type I error rate. Type I error will be set to 5% (i.e., α = 0.05) for all confidence-interval calculations and hypothesis-testing procedures, and will be adjusted accordingly for multiple comparisons. Standard diagnostic tools will be used to assess model fit. An intent-to-treat analysis will be employed: those who do not participate will be included in the analysis in their original group.
No interim analyses are planned since there are no safety issues and we wish to ensure full power for both primary and secondary analyses.

We also plan to analyze differences in professional attitudes, evidence access, and use; we have powered our study to allow this analysis. Our subgroup analyses will also describe differences in the types of evidence accessed. Our group previously determined that physicians prefer systematic reviews to primary studies and that access was better through peer-reviewed journals than through the Cochrane Collaboration [83]; we will now extend this type of analysis to look at evidence accessed across professions by comparing the types of primary studies (clinical, basic science; quantitative/qualitative), content (prognosis, diagnosis, clinical measurement, treatment effectiveness/outcomes, economic, etiology), and journals (disciplinary/interdisciplinary/other discipline).

Steering committee

The Steering Committee will consist of the four investigators. The study coordinator will prepare a monthly report, detailing the accrual rate, loss to follow-up, withdrawals, and compliance with study protocols. Since this trial does not directly involve patients, there is no need for a Data Safety Monitoring Committee.

Trial status

We have enrolled more than 70% of the sample target. More than 60% of the participants have reached the nine-month milestone (crossover).


This trial should provide unique information on evidence-access skills and professional differences in use of evidence. We expect to demonstrate the effectiveness of push-out targeted evidence (Pain PLUS) and to move the intervention into open access upon trial completion. We expect that some differences in evidence access and application will occur across professional groups, both in terms of types of pain interventions and types of research that are accessed.

It is our intent to ensure this intervention becomes open access so that it has ongoing impact. We believe this trial will inform our understanding of how best to deliver evidence to clinicians across a variety of professions dealing with pain. Sustainability of KT interventions is a critical issue, and as our findings emerge, we will address it. Since evidence-update services have been created for specific disciplines, including physicians, nurses, and rehabilitation professionals, folding a pain resource into these existing services may be the optimal route to sustainability. Conversely, pain stakeholders may prefer a customized resource. A variety of pain resources are being developed, and strategies to share these across stakeholders are just emerging in stakeholder discussions; this may present an opportunity to tailor the knowledge from this study to emerging resources. Knowledge of what pain evidence exists, and of what is accessed and valued across professions, will allow us to develop a sustainability plan designed to maximize uptake of pain knowledge.

One of the benefits of this study is the ability to compare the evidence accessed by different professions when dealing with the same issue. We will take a descriptive approach, classifying the accessed studies by type of pain, type of clinical question, and research design, to determine whether different professions value different types of evidence. We anticipate, for example, that nurses and therapists might access more qualitative research, whereas physicians might access more RCTs on drug interventions. Psychologists are being included for the first time in this type of intervention, and their information behaviors are relatively unstudied. This information may help us understand whether evidence scanning and filtering needs to be adjusted across professions; for example, decisions about how to filter qualitative studies may be more challenging than the corresponding decisions for quantitative research.
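The descriptive comparison above amounts to cross-tabulating access records by profession and study characteristic. A minimal sketch of that tally is shown below; the log entries, profession labels, and function name are invented for illustration and are not part of the trial's actual tracking system.

```python
# Illustrative sketch: tally accessed study designs per profession.
# All data and names below are hypothetical examples.
from collections import Counter
from typing import Dict, List, Tuple

# Each tracked access: (profession, research design of the accessed study)
access_log: List[Tuple[str, str]] = [
    ("nurse", "qualitative"),
    ("nurse", "RCT"),
    ("physician", "RCT"),
    ("physician", "RCT"),
    ("physiotherapist", "qualitative"),
    ("psychologist", "cohort"),
]

def tally_by_profession(log: List[Tuple[str, str]]) -> Dict[str, Counter]:
    """Count accessed study designs separately for each profession."""
    tallies: Dict[str, Counter] = {}
    for profession, design in log:
        tallies.setdefault(profession, Counter())[design] += 1
    return tallies

tallies = tally_by_profession(access_log)
print(tallies["physician"].most_common(1))  # → [('RCT', 2)]
```

The same tally generalizes directly to the other planned classifications (type of pain, type of clinical question) by changing the second element of each log entry.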

We also acknowledge that the information resources available to knowledge users are expanding rapidly. Resources vary in their delivery, quality, customization, source content, and format. Knowledge users may become overwhelmed by the number of resources, or stick with those they are comfortable with regardless of the quality of the information. It may not be clear to end users that the evidence-filtering process used in Pain PLUS protects them against low-quality information. Further, since clinicians often value advice about clinical decision making in actual cases, or about implementation issues, they may be drawn to resources that focus on these rather than on the primary or synthesized research. Because information always competes with other information, we cannot be certain that Pain PLUS will win this competition.


  1. Gummesson C, Atroshi I, Ekdahl C, Johnsson R, Ornstein E: Chronic upper extremity pain and co-occurring symptoms in a general population. Arthritis Rheum. 2003, 49: 697-702. 10.1002/art.11386.

    Article  PubMed  Google Scholar 

  2. Ramage-Morin PL: Chronic pain in Canadian seniors. Health Rep. 2008, 19: 37-52.

    PubMed  Google Scholar 

  3. Verhaak PF, Kerssens JJ, Dekker J, Sorbi MJ, Bensing JM: Prevalence of chronic benign pain disorder among adults: a review of the literature. Pain. 1998, 77: 231-239. 10.1016/S0304-3959(98)00117-1.

    Article  CAS  PubMed  Google Scholar 

  4. Von Korff M, Crane P, Lane M, Miglioretti DL, Simon G, Saunders K: Chronic spinal pain and physical-mental comorbidity in the United States: results from the national comorbidity survey replication. Pain. 2005, 113: 331-339. 10.1016/j.pain.2004.11.010.

    Article  PubMed  Google Scholar 

  5. Phillips CJ, Schopflocher D: The economics of Chronic Pain. Chronic Pain: A Health Policy Perspective. 2008, Wiley-Blackwell, Weinham

    Google Scholar 

  6. McPhillips-Tangum CA, Cherkin DC, Rhodes LA, Markham C: Reasons for repeated medical visits among patients with chronic back pain. J Gen Intern Med. 1998, 13: 289-295. 10.1046/j.1525-1497.1998.00093.x.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  7. Fischer S, Huber CA, Furter M, Imhof L, Mahrer IR, Schwarzenegger C: Reasons why people in Switzerland seek assisted suicide: the view of patients and physicians. Swiss Med Wkly. 2009, 139: 333-338.

    PubMed  Google Scholar 

  8. Webster LR, Bath B, Medve RA: Opioid formulations in development designed to curtail abuse: who is the target?. Expert Opin Investig Drugs. 2009, 18: 255-263. 10.1517/13543780902751622.

    Article  CAS  PubMed  Google Scholar 

  9. Wolfert MZ, Gilson AM, Dahl JL, Cleary JF: Opioid analgesics for pain control: Wisconsin physicians’ knowledge, beliefs, attitudes, and prescribing practices. Pain Med. 2010, 11: 425-434. 10.1111/j.1526-4637.2009.00761.x.

    Article  PubMed  Google Scholar 

  10. Veale DJ, Woolf AD, Carr AJ: Chronic musculoskeletal pain and arthritis: impact, attitudes and perceptions. Ir Med J. 2008, 101: 208-210.

    CAS  PubMed  Google Scholar 

  11. Watt-Watson J, McGillion M, Hunter J, Choiniere M, Clark AJ, Dewar A: A survey of prelicensure pain curricula in health science faculties in Canadian universities. Pain Res Manag. 2009, 14: 439-444.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  12. Douglass MA, Sanchez GM, Alford DP, Wilkes G, Greenwald JL: Physicians’ pain management confidence versus competence. J Opioid Manag. 2009, 5: 169-174.

    PubMed  Google Scholar 

  13. Finestone AS, Raveh A, Mirovsky Y, Lahad A, Milgrom C: Orthopaedists’ and family practitioners’ knowledge of simple low back pain management. Spine (Phila Pa 1976). 2009, 34: 1600-1603. 10.1097/BRS.0b013e3181a96622.

    Article  Google Scholar 

  14. Bishop A, Foster NE, Thomas E, Hay EM: How does the self-reported clinical management of patients with low back pain relate to the attitudes and beliefs of health care practitioners? A survey of UK general practitioners and physiotherapists. Pain. 2008, 135: 187-195. 10.1016/j.pain.2007.11.010.

    Article  PubMed  PubMed Central  Google Scholar 

  15. Todd KH, Sloan EP, Chen C, Eder S, Wamstad K: Survey of pain etiology, management practices and patient satisfaction in two urban emergency departments. CJEM. 2002, 4: 252-256.

    PubMed  Google Scholar 

  16. Zenz M, Zenz T, Tryba M, Strumpf M: Severe undertreatment of cancer pain: a 3-year survey of the German situation. J Pain Symptom Manage. 1995, 10: 187-191. 10.1016/0885-3924(94)00122-2.

    Article  CAS  PubMed  Google Scholar 

  17. Plaisance L, Logan C: Nursing students’ knowledge and attitudes regarding pain. Pain Manag Nurs. 2006, 7: 167-175. 10.1016/j.pmn.2006.09.003.

    Article  PubMed  Google Scholar 

  18. Byrd PJ, Gonzales I, Parsons V: Exploring barriers to pain management in newborn intensive care units: a pilot survey of NICU nurses. Adv Neonatal Care. 2009, 9: 299-306. 10.1097/ANC.0b013e3181c1ff9c.

    Article  PubMed  Google Scholar 

  19. Valente SM: Research dissemination and utilization improving care at the bedside. J Nurs Care Qual. 2003, 18: 114-121. 10.1097/00001786-200304000-00004.

    Article  PubMed  Google Scholar 

  20. Peng P, Stinson JN, Choiniere M, Dion D, Intrater H, LeFort S: Role of health care professionals in multidisciplinary pain treatment facilities in Canada. Pain Res Manag. 2008, 13: 484-488.

    Article  PubMed  PubMed Central  Google Scholar 

  21. Zailinawati AH, Teng CL, Kamil MA, Achike FI, Koh CN: Pain morbidity in primary care - preliminary observations from two different primary care settings. Med J Malaysia. 2006, 61: 162-167.

    CAS  PubMed  Google Scholar 

  22. Scascighini L, Toma V, Dober-Spielmann S, Sprott H: Multidisciplinary treatment for chronic pain: a systematic review of interventions and outcomes. Rheumatology (Oxford). 2008, 47: 670-678. 10.1093/rheumatology/ken021.

    Article  CAS  Google Scholar 

  23. Thomsen AB, Sorensen J, Sjogren P, Eriksen J: Economic evaluation of multidisciplinary pain management in chronic pain patients: a qualitative systematic review. J Pain Symptom Manage. 2001, 22: 688-698. 10.1016/S0885-3924(01)00326-8.

    Article  CAS  PubMed  Google Scholar 

  24. Karjalainen K, Malmivaara A, van Tulder M, Roine R, Jauhiainen M, Hurri H: Multidisciplinary biopsychosocial rehabilitation for subacute low back pain among working age adults. Cochrane Database Syst Rev. 2003, CD002193-2

  25. Karjalainen K, Malmivaara A, van Tulder M, Roine R, Jauhiainen M, Hurri H: Multidisciplinary biopsychosocial rehabilitation for neck and shoulder pain among working age adults. Cochrane Database of Systematic Reviews. 2003, CD002194-update of Cochrane Database Syst Rev. 2000;(3):CD002194; PMID: 10908529]. [Review] [23 refs

    Google Scholar 

  26. Guzman J, Esmail R, Karjalainen K, Malmivaara A, Irvin E, Bombardier C: Multidisciplinary rehabilitation for chronic low back pain: systematic review. BMJ. 2001, 322: 1511-1516. 10.1136/bmj.322.7301.1511.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  27. MacDermid JC, Graham ID: Knowledge translation: putting the “practice” in evidence-based practice. Hand Clin. 2009, 25: 125-143. 10.1016/j.hcl.2008.10.003.

    Article  PubMed  Google Scholar 

  28. Doran DM, Haynes RB, Kushniruk A, Straus S, Grimshaw J, Hall LM: Supporting Evidence-Based Practice for Nurses through Information Technologies. Worldviews Evid Based Nurs. 2010, 7: 4-15. 10.1111/j.1741-6787.2009.00179.x.

    Article  PubMed  Google Scholar 

  29. Dorsch JL, Aiyer MK, Meyer LE: Impact of an evidence-based medicine curriculum on medical students’ attitudes and skills. J Med Libr Assoc. 2004, 92: 397-406.

    PubMed  PubMed Central  Google Scholar 

  30. Bennett S, Hoffmann T, McCluskey A, McKenna K, Strong J, Tooth L: Evidence-based practice forum - Introducing OTseeker - (Occupational therapy systematic evaluation of evidence): a new evidence database for occupational therapists. Am J Occup Ther. 2003, 57: 635-638. 10.5014/ajot.57.6.635.

    Article  PubMed  Google Scholar 

  31. Dysart AM, Tomlin GS: Factors related to evidence-based practice among U.S. occupational therapy clinicians. Am J Occup Ther. 2002, 56: 275-284. 10.5014/ajot.56.3.275.

    Article  PubMed  Google Scholar 

  32. Jette DU, Bacon K, Batty C, Carlson M, Ferland A, Hemingway RD: Evidence-based practice: beliefs, attitudes, knowledge, and behaviors of physical therapists. Phys Ther. 2003, 83: 786-805.

    PubMed  Google Scholar 

  33. Denmark D: The evidence-based advantage. The proper use of EBM in CPOE can enhance physician decision-making and improve patient outcomes. Health Manag Technol. 2008, 29: 22-24–22, 25

    PubMed  Google Scholar 

  34. Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J: Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005, 293: 1223-1238. 10.1001/jama.293.10.1223.

    Article  CAS  PubMed  Google Scholar 

  35. Gilmore AS, Zhao Y, Kang N, Ryskina KL, Legorreta AP, Taira DA: Patient outcomes and evidence-based medicine in a preferred provider organization setting: a six-year evaluation of a physician pay-for-performance program. Health Serv Res. 2007, 42: 2140-2159. 10.1111/j.1475-6773.2007.00725.x.

    Article  PubMed  PubMed Central  Google Scholar 

  36. Heselmans A, Van de Velde S, Donceel P, Aertgeerts B, Ramaekers D: Effectiveness of electronic guideline-based implementation systems in ambulatory care settings - a systematic review. Implement Sci. 2009, 4: 82. 10.1186/1748-5908-4-82.

    Article  PubMed  PubMed Central  Google Scholar 

  37. Lugtenberg M, Zegers-van Schaick JM, Westert GP, Burgers JS: Why don’t physicians adhere to guideline recommendations in practice? An analysis of barriers among Dutch general practitioners. Implement Sci. 2009, 4: 54. 10.1186/1748-5908-4-54.

    Article  PubMed  PubMed Central  Google Scholar 

  38. Dubouloz CJ, Egan M, Vallerand J, von Zweck C: Occupational therapists’ perceptions of evidence-based practice. Am J Occup Ther. 1999, 53: 445-453. 10.5014/ajot.53.5.445.

    Article  CAS  PubMed  Google Scholar 

  39. Goldhahn S, Audige L, Helfet DL, Hanson B: Pathways to evidence-based knowledge in orthopaedic surgery: an international survey of AO course participants. Int Orthop. 2005, 29: 59-64. 10.1007/s00264-004-0617-3.

    Article  PubMed  PubMed Central  Google Scholar 

  40. Judd MGP: Examining the use of evidence-based practice resources among physiotherapists in Canada. 2004, University of Ottawa, Ottawa, Ontario, MSc

    Google Scholar 

  41. Kamwendo K: What do Swedish physiotherapists feel about research? A survey of perceptions, attitudes, intentions and engagement. Physiother Res Int. 2002, 7: 23-34. 10.1002/pri.238.

    Article  PubMed  Google Scholar 

  42. Palfreyman S, Tod A, Doyle J: Comparing evidence-based practice of nurses and physiotherapists. Br J Nurs. 2003, 12: 246-253.

    Article  PubMed  Google Scholar 

  43. Schaafsma F, Hulshof C, van Dijk F, Verbeek J: Information demands of occupational health physicians and their attitude towards evidence-based medicine. Scand J Work Environ Health. 2004, 30: 327-330. 10.5271/sjweh.802.

    Article  PubMed  Google Scholar 

  44. Thiel L, Ghosh Y: Determining registered nurses’ readiness for evidence-based practice. Worldviews Evid Based Nurs. 2008, 5: 182-192. 10.1111/j.1741-6787.2008.00137.x.

    Article  PubMed  Google Scholar 

  45. Ulvenes LV, Aasland O, Nylenna M, Kristiansen IS: Norwegian physicians’ knowledge of and opinions about evidence-based medicine: cross-sectional study. PLoS One. 2009, 4: e7828. 10.1371/journal.pone.0007828.

    Article  PubMed  PubMed Central  Google Scholar 

  46. O’Donnell CA: Attitudes and knowledge of primary care professionals towards evidence-based practice: a postal survey. J Eval Clin Pract. 2004, 10: 197-205. 10.1111/j.1365-2753.2003.00458.x.

    Article  PubMed  Google Scholar 

  47. Ely JW, Osheroff JA, Chambliss ML, Ebell MH, Rosenbaum ME: Answering physicians’ clinical questions: obstacles and potential solutions. J Am Med Inform Assoc. 2005, 12: 217-224.

    Article  PubMed  PubMed Central  Google Scholar 

  48. Ely JW, Osheroff JA, Ebell MH, Chambliss ML, Vinson DC, Stevermer JJ: Obstacles to answering doctors’ questions about patient care with evidence: qualitative study. BMJ. 2002, 324: 710. 10.1136/bmj.324.7339.710.

    Article  PubMed  PubMed Central  Google Scholar 

  49. Griffiths JM, Bryar RM, Closs SJ, Cooke J, Hostick T, Kelly S: Barriers to research implementation. Br J Community Nurs. 2001, 6: 501-510.

    Article  CAS  PubMed  Google Scholar 

  50. Verbeek JH, van Dijk FJ, Malmivaara A, Hulshof CT, Rasanen K, Kankaanpaa EE: Evidence-based medicine for occupational health. Scand J Work Environ Health. 2002, 28: 197-204. 10.5271/sjweh.665.

    Article  PubMed  Google Scholar 

  51. Krahn J, Sauerland S, Rixen D, Gregor S, Bouillon B, Neugebauer EA: Applying evidence-based surgery in daily clinical routine: a feasibility study. Arch Orthop Trauma Surg. 2006, 126: 88-92. 10.1007/s00402-005-0095-0.

    Article  PubMed  Google Scholar 

  52. Dirschl DR, Tornetta P, Bhandari M: Designing, conducting, and evaluating journal clubs in orthopaedic surgery. Clin Orthop Relat Res. 2003, 413: 146-157.

    Article  PubMed  Google Scholar 

  53. Fliegel JE, Frohna JG, Mangrulkar RS: A computer-based OSCE station to measure competence in evidence-based medicine skills in medical students. Acad Med. 2002, 77: 1157-1158. 10.1097/00001888-200211000-00022.

    Article  PubMed  Google Scholar 

  54. Maher CG, Sherrington C, Elkins M, Herbert RD, Moseley AM: Challenges for evidence-based physical therapy: accessing and interpreting high-quality evidence on therapy. Phys Ther. 2004, 84: 644-654.

    PubMed  Google Scholar 

  55. McCluskey A: Occupational therapists report on low level of knowledge, skill and involvement in evidence-based practice. Aust Occ Ther J. 2003, 50: 3-12. 10.1046/j.1440-1630.2003.00303.x.

    Article  Google Scholar 

  56. Ely JW, Osheroff JA, Ebell MH, Chambliss ML, Vinson DC, Stevermer JJ: Obstacles to answering doctors’ questions about patient care with evidence: qualitative study. BMJ (Clinical research ed ). 2002, 324: 710. 10.1136/bmj.324.7339.710.

    Article  Google Scholar 

  57. Coomarasamy A, Taylor R, Khan KS: A systematic review of postgraduate teaching in evidence-based medicine and critical appraisal. Med Teach. 2003, 25: 77-81. 10.1080/0142159021000061468.

    Article  PubMed  Google Scholar 

  58. Taylor RS, Reeves BC, Ewings PE, Taylor RJ: Critical appraisal skills training for health care professionals: a randomized controlled trial [ISRCTN46272378]. BMC Med Educ. 2004, 4: 30. 10.1186/1472-6920-4-30.

    Article  PubMed  PubMed Central  Google Scholar 

  59. Newman M, Papadopoulos I, Sigsworth J: Barriers to evidence-based practice. Intensive Crit Care Nurs. 1998, 14: 231-238. 10.1016/S0964-3397(98)80634-4.

    Article  CAS  PubMed  Google Scholar 

  60. Stevenson K, Lewis M, Hay E: Do physiotherapists’ attitudes towards evidence-based practice change as a result of an evidence-based educational programme?. J Eval Clin Pract. 2004, 10: 207-217. 10.1111/j.1365-2753.2003.00479.x.

    Article  PubMed  Google Scholar 

  61. Upton D, Upton P: Knowledge and use of evidence-based practice by allied health and health science professionals in the United Kingdom. J Allied Health. 2006, 35: 127-133.

    PubMed  Google Scholar 

  62. Upton D, Upton P: Knowledge and use of evidence-based practice of GPs and hospital doctors. J Eval Clin Pract. 2006, 12: 376-384. 10.1111/j.1365-2753.2006.00602.x.

    Article  PubMed  Google Scholar 

  63. Dahm P, Poolman RW, Bhandari M, Fesperman SF, Baum J, Kosiak B: Perceptions and competence in evidence-based medicine: a survey of the American urological association membership. J Urol. 2009, 181: 767-777.

    Article  PubMed  Google Scholar 

  64. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E: Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006, 144: 742-752.

    Article  PubMed  Google Scholar 

  65. Hunt DL, Haynes RB, Hanna SE, Smith K: Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA. 1998, 280: 1339-1346. 10.1001/jama.280.15.1339.

    Article  CAS  PubMed  Google Scholar 

  66. Haynes RB, Holland J, Cotoi C, McKinlay RJ, Wilczynski NL, Walters LA: McMaster PLUS: a cluster randomized clinical trial of an intervention to accelerate clinical use of evidence-based information from digital libraries. J Am Med Inform Assoc. 2006, 13: 593-600. 10.1197/jamia.M2158.

    Article  PubMed  PubMed Central  Google Scholar 

  67. Holland J, Haynes RB: McMaster Premium Literature Service (PLUS): an evidence-based medicine information service delivered on the Web. AMIA Annu Symp Proc. 2005, 2005: 340-344.

    PubMed Central  Google Scholar 

  68. Ho K, Chockalingam A, Best A, Walsh G: Technology-enabled knowledge translation: building a framework for collaboration. CMAJ. 2003, 168: 710-711.

    PubMed  PubMed Central  Google Scholar 

  69. Ho K, Bloch R, Gondocz T, Laprise R, Perrier L, Ryan D: Technology-enabled knowledge translation: frameworks to promote research and practice. J Contin Educ Health Prof. 2004, 24: 90-99. 10.1002/chp.1340240206.

    Article  PubMed  Google Scholar 

  70. Ho K, Lauscher HN, Best A, Walsh G, Jarvis-Selinger S, Fedeles M: Dissecting technology-enabled knowledge translation: essential challenges, unprecedented opportunities. Clin Invest Med. 2004, 27: 70-78.

    PubMed  Google Scholar 

  71. Johnston JM, Leung GM, Fielding R, Tin KY, Ho LM: The development and validation of a knowledge, attitude and behaviour questionnaire to assess undergraduate evidence-based practice teaching and learning. Med Educ. 2003, 37: 992-1000. 10.1046/j.1365-2923.2003.01678.x.

    Article  PubMed  Google Scholar 

  72. Lokker C, Haynes RB, Wilczynski NL, McKibbon KA, Walter SD: Retrieval of diagnostic and treaWilcz; Wtment studies for clinical use through PubMed and PubMed’s Clinical Queries filters. Journal of the American Medical Informatics Association. 2012, 18: 652-659.

    Article  Google Scholar 

  73. MacDermid JC, Solomon P, Law M, Russell D, Stratford P: Defining the effect and mediators of two knowledge translation strategies designed to alter knowledge, intent and clinical utilization of rehabilitation outcome measures: a study protocol [NCT00298727]. Implement Sci. 2006, 1: 14. 10.1186/1748-5908-1-14.

    Article  PubMed  PubMed Central  Google Scholar 

  74. Norman GR, Davis DA, Lamb S, Hanna E, Caulford P, Kaigas T: Competency assessment of primary care physicians as part of a peer review program. JAMA. 1993, 270: 1046-1051. 10.1001/jama.1993.03510090030007.

    Article  CAS  PubMed  Google Scholar 

  75. Salvatori P, Baptiste S, Ward M: Development of a tool to measure clinical competence in occupational therapy: a pilot study?. Can J Occup Ther. 2000, 67: 51-60.

    Article  CAS  PubMed  Google Scholar 

  76. Upton D, Upton P: Development of an evidence-based practice questionnaire for nurses. J Adv Nurs. 2006, 53: 454-458. 10.1111/j.1365-2648.2006.03739.x.

    Article  PubMed  Google Scholar 

  77. Cunnington JP, Hanna E, Turnhbull J, Kaigas TB, Norman GR: Defensible assessment of the competency of the practicing physician. Acad Med. 1997, 72: 9-12.

    Article  CAS  PubMed  Google Scholar 

  78. Goulet F, Jacques A, Gagnon R, Bourbeau D, Laberge D, Melanson J: Performance assessment. Family physicians in Montreal meet the mark!. Can Fam Physician. 2002, 48: 1337-1344.

    PubMed  PubMed Central  Google Scholar 

  79. Soper D: The Free Statistics Calculators Website-Online Software. 2010,,

    Google Scholar 

  80. Gomez E: Tap the Internet to strengthen evidence-based oncology nursing practice. ONS News. 2000, 15: 7-

    Google Scholar 

  81. Leach MJ, Gillham D: Evaluation of the evidence-based practice attitude and utilization SurvEy for complementary and alternative medicine practitioners. J Eval Clin Pract. 2008, 14: 792-798. 10.1111/j.1365-2753.2008.01046.x.

    Article  PubMed  Google Scholar 

  82. Verbeke G, Molengerghs GS: Linear Mixed Models for Longitudinal Data. 2001, Springer-Verlag, Berlin

    Google Scholar 

  83. McKinlay RJ, Cotoi C, Wilczynski NL, Haynes RB: Systematic reviews and original articles differ in relevance, novelty, and use in an evidence-based service for physicians: PLUS project. J Clin Epidemiol. 2008, 61: 449-454. 10.1016/j.jclinepi.2007.10.016.

    Article  PubMed  Google Scholar 

Download references

Acknowledgements
This study is funded by the Canadian Institutes of Health Research (CIHR). The authors thank Margaret Lomotan for assistance with the manuscript preparation and study coordination.

Author information


Corresponding author

Correspondence to Joy C. MacDermid.

Additional information

Competing interests

JCM, ML, and RBH were involved in the development of the Pain PLUS service. NB has no competing interests to declare.

Authors’ contributions

JCM drafted the protocol. All authors participated in the study design and provided feedback on the protocol. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

MacDermid, J.C., Law, M., Buckley, N. et al. “Push” versus “Pull” for mobilizing pain evidence into practice across different health professions: A protocol for a randomized trial. Implementation Sci 7, 115 (2012).
