
Clinicians' evaluations of, endorsements of, and intentions to use practice guidelines change over time: a retrospective analysis from an organized guideline program



Background

Clinical practice guidelines (CPGs) can improve clinical care, but uptake and application are inconsistent. The objectives of this study were to examine temporal trends in clinicians' evaluations of, endorsements of, and intentions to use cancer CPGs developed by an established CPG program, and to evaluate how predictor variables (clinician characteristics, beliefs, and attitudes) are associated with these trends.

Design and methods

Between 1999 and 2005, 756 clinicians evaluated 84 Cancer Care Ontario CPGs, yielding 4,091 surveys that targeted four CPG quality domains (rigour, applicability, acceptability, and comparative value), clinicians' endorsement levels, and clinicians' intentions to use CPGs in practice.


Results

Time: There were small but statistically significant annual net gains in ratings for the rigour, acceptability, comparative value, and CPG endorsement measures, in contrast to the applicability and intention to use in practice scores (p < 0.05 for all rating categories). Predictors: In 17 comparisons, ratings were significantly highest among clinicians with the most favourable beliefs and most positive attitudes, and lowest among those with the least favourable beliefs and most negative attitudes (p < 0.05). Time × predictor interactions: Over time, differences in outcomes among clinicians decreased because of positive net gains in scores among clinicians whose beliefs and attitudes were least favourable.


Conclusions

Individual differences among clinicians explain much of the variance in the outcomes measured. Continued engagement of clinicians least receptive to CPGs may be worthwhile, as they showed the most significant gains in CPG quality ratings, endorsement ratings, and intention to use in practice ratings.



Background

Evidence-based clinical practice guidelines (CPGs) are knowledge products defined as systematically developed statements aimed to assist clinicians and patients in making decisions about appropriate healthcare for specific clinical circumstances [1]. Health service researchers have debated the extent to which CPGs have been effective in influencing practice or clinical outcomes [2-4]. Systematic reviews by Grimshaw and colleagues suggest that CPGs, or similar statements, do on average influence both the processes and outcomes of care, although the effect sizes tend to be modest [5-7].

Intentions to use CPG recommendations and their ultimate adoption are complex processes that may depend on many factors in addition to the validity of the recommendations. For example, while faithfulness to evidence-based principles is important, other non-methodological factors believed to influence the uptake of CPGs include adopters' perceptions of the CPG characteristics, messages, and development process, actual and perceived facilitators and barriers to implementation, and factors related to norms and the practice context [2, 8-15]. Consistent with a social influence perspective, for instance, evidence has shown greater compliance with CPGs perceived to be compatible with existing norms and not to demand changes in existing practices [14].

In addition, however, Brouwers et al. found that variability in oncologists' endorsement of and intentions to use cancer CPGs could be attributed more to differences among clinicians and variations in their perceptions of the CPG product than to differences in the CPGs themselves [9]. Indeed, attitudes and beliefs can be extremely powerful. Whereas attitudes are evaluations of an object (e.g., like versus dislike), beliefs are the perceived associations between an attitude object and various attributes, which may or may not have evaluative implications [16, 17]. Together, an individual's attitudes and beliefs can have a significant impact on how information is gathered, encoded, and attributed. Decades of research in the social psychological fields of social cognition, attitudes, intentions, and behavior demonstrate that deciding what information is relevant, and how one interprets that information, are processes guided by preexisting expectations [16-18]. Further, beliefs often provide the cognitive support for attitudes, which can directly influence intentions to act and can influence actions themselves [16-18].

Research has often considered issues of guideline quality and users' beliefs and attitudes independently and at a single point in time. This work has been extremely important in identifying factors that affect how CPGs are perceived by intended users and in predicting their uptake. However, research examining factors related to CPG uptake by clinicians has traditionally explored CPGs in contexts separate from the formal healthcare systems in which they operate. In contrast, our interest was to design a research paradigm that explored issues of guideline quality, beliefs, and attitudes in an established CPG enterprise integrated into a formal healthcare system, and to assess the extent to which various factors are influenced by time. Understanding this will provide greater direction for efforts to promote the utilization of CPGs in practice and in healthcare system decisions. This is pertinent given that many CPGs are available, and that CPG recommendations can change quickly in response to the pace at which new evidence and care options emerge.

The specific study objectives were to examine temporal trends in clinicians' evaluations of, endorsements of, and intentions to use cancer CPGs developed by an established cancer CPG program, and to evaluate how clinician characteristics, beliefs, and attitudes are associated with these trends.



Methods

The Cancer Care Ontario Program in Evidence-based Care (PEBC) in Ontario, Canada, a provincial CPG cancer system initiative, served as the context for this study. The PEBC CPGs are used to facilitate practice, guide provincial and institutional policy, and enable access to treatments in the publicly funded provincial healthcare system [19-21]. The PEBC is one component of a larger formalized cancer system defined by data and monitoring of system performance, evidence-based knowledge and best practices, transfer and exchange of this knowledge, and strategies to leverage implementation of knowledge. The work of the PEBC targets primarily the knowledge and transfer components of this system.

The PEBC methods include the systematic review of clinical oncology research evidence by teams, i.e., disease site groups (DSGs) comprised of clinicians (medical oncologists, radiation oncologists, surgeons, and other medical specialists) and methodological experts; interpretation of, and consensus on, the evidence by the team; development of recommendations; and formal standardized external review of all draft CPGs [19, 20, 22]. The external review process involves disseminating draft CPGs and a validated survey, Clinicians' Assessments of Practice Guidelines in Oncology (CAPGO), to a sample of clinicians for whom the CPG is relevant. To create an appropriate sample, defining features of the CPG (e.g., topic, modality of care, disease site) are matched with professional characteristics of clinicians held in a comprehensive database of clinicians involved in cancer care in the province. The number of clinicians invited to review varies considerably; samples for guidelines targeting less common cancers tend to be small (<25 clinicians for sarcoma topics) compared with those for guidelines targeting more common cancers (>100 clinicians for lung cancer topics). Reminders are sent to non-responders at two weeks (postcard) and four weeks (full package), with closure of the review process typically between weeks seven and eight. During this time period, the average return rate was 51%. The external review methodology has been discussed at length elsewhere [9, 22-24].

In this study, a retrospective analysis was conducted on data gathered in the formal external CPG review process using CAPGO between 1999 and 2005, and data gathered in a separate PEBC survey during this time [25]. All respondents were clinicians involved in the care and treatment of patients with cancer.

Outcome variables

Study outcomes were clinicians' perceptions of CPG quality, their endorsement of the CPGs, and their intentions to use the CPGs; these were measured using the validated survey from the PEBC external review process, the CAPGO instrument (see Table 1) [9]. Four domains of quality were assessed: rigour, acceptability, applicability, and comparative value. The rigour domain focused on clinicians' perceptions of the CPG rationale, the quality of the scientific methodology used to develop the CPG, and the clarity of the recommendations. The acceptability domain targeted clinicians' perceptions of the acceptability and suitability of the recommendations, belief that they would yield more benefits than harms, and anticipated acceptance of the recommendations by patients and colleagues. The applicability domain targeted clinicians' perceptions of the ease of implementing the recommendations, considering the capacity to apply them, technical requirements, organizational requirements, and costs. The comparative value domain asked clinicians for their perceptions of the recommendations relative to current standards of care. Clinicians' endorsement of the CPG (i.e., whether it should be approved) and their intentions to use the CPG in practice were assessed with single items. Quality, endorsement, and intention scores ranged from one to five, with higher scores representing more favourable perceptions, higher endorsement, and greater intentions to use.

Table 1 The Clinicians' Assessments of Practice Guidelines in Oncology (CAPGO) survey

Predictor variables

This study analyzed two sets of predictor variables: clinician characteristics, and clinician beliefs and attitudes. Clinician characteristics data, which included clinical discipline, gender, and average number of hours per week spent on research (as primary investigator or co-investigator in any cancer-related research study), were obtained from the PEBC database. Data on clinicians' beliefs about and attitudes towards CPGs were gathered in the Ontario physician survey [25]. This survey considered three belief domains: beliefs that CPGs are linked to change in practice, negative misconceptions regarding CPGs, and beliefs regarding CPGs as tools to advance quality. We also measured clinicians' overall attitudes towards CPGs (negative-positive). See Table 2.

Table 2 Six-year mean, year one mean, and annual change in quality, endorsement and intention scores


Analysis

Most clinicians in the study rated more than one CPG, although the unit of analysis was the individual CPG. Consequently, the data set has a multilevel structure, with CPGs nested within clinicians. Multilevel modeling was used to evaluate how CPG characteristics, clinician characteristics, clinician beliefs, and clinician attitudes predicted users' perceptions of CPGs over time, while appropriately accounting for the nested data structure [26]. Multilevel modeling quantifies the similarity of ratings within clinicians and adjusts the statistical tests of the predictors accordingly. Specifically, a regression model for the effects of year and any additional predictors is estimated to describe the trends for the average clinician; these are known as the fixed effects. To accommodate variation among clinicians in their overall rating tendencies, each clinician is assumed to have his or her own intercept, reflected as a random deviation from the average intercept. The variance of these 'random effects' is estimated and, as a proportion of the total variance, reflects the percentage of variance accounted for by differences among clinicians after adjusting for the predictors. To facilitate interpretation of the intercept, analyses involving year were completed with year centered on the first year of data (1999). Each predictor additional to year was tested in a separate analysis that included year, the predictor, and the year × predictor interaction; the interaction assesses whether the predictor affects change in ratings over time. Variations in the number of ratings per CPG are easily handled within the multilevel modeling framework.
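The random-intercept idea above can be sketched with synthetic data. This is an illustration only: the data, parameter values, and the simple one-way variance decomposition are our assumptions for demonstration, not the study's actual model or estimates.

```python
import random
from statistics import mean, variance

random.seed(42)

# Synthetic ratings mimicking the nested structure described above:
# CPG ratings nested within clinicians, each clinician with a random
# intercept. All parameter values here are illustrative assumptions.
SIGMA_CLINICIAN = 0.4   # sd of clinician-level random intercepts
SIGMA_RESIDUAL = 0.5    # sd of rating-level residual noise
GRAND_MEAN = 4.0        # overall mean rating on the 1-5 scale
N_CLINICIANS, K_RATINGS = 200, 6

data = []
for _ in range(N_CLINICIANS):
    u = random.gauss(0, SIGMA_CLINICIAN)   # this clinician's intercept shift
    data.append([GRAND_MEAN + u + random.gauss(0, SIGMA_RESIDUAL)
                 for _ in range(K_RATINGS)])

# One-way random-effects variance decomposition (balanced case):
within = mean(variance(r) for r in data)               # residual variance
between = variance(mean(r) for r in data) - within / K_RATINGS
icc = between / (between + within)   # share of variance due to clinicians
print(f"estimated clinician share of variance: {icc:.2f}")
```

Under this decomposition, the clinician share of variance (the intraclass correlation) plays the role of the proportion of variance associated with differences among practitioners that the study reports alongside each outcome.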



Results

Between 1999 and 2005, 756 physicians participated in the evaluation of 84 specific cancer care CPGs developed in Ontario, yielding 4,091 CAPGO survey responses; more than 70% of clinicians rated more than one CPG. With respect to CPG characteristics, systemic therapy, radiation therapy, and surgery accounted for 58.3%, 15.5%, and 3.6% of the guideline topics, respectively. The DSGs representing the 'big four' cancer sites (breast, gastrointestinal, genitourinary, and lung) authored 54.8% of the CPGs.

With respect to clinician characteristics, medical oncologists, radiation oncologists, and surgeons accounted for 30.4%, 11.6%, and 38.6% of the participant sample, respectively, with other specialists accounting for the remaining 19.5%. Women made up only 20.7% of the sample.

Quality, endorsement, and intention to use in practice scores

Table 2 presents the mean ratings for each of the outcomes. The means for each of the measures were consistently high, and across the quality domains the six-year mean scores ranged from 68.0% to 87.3% of the total possible scores.

Table 2 also reports the estimated scores for each outcome variable for the first year (1999) and the annual change with each subsequent year. With the exception of the applicability and intention to use scores, there were small but statistically significant net gains in ratings, with the magnitude of change ranging from 0.02 (endorsement) to 0.19 (acceptability) per year. In contrast, small but statistically significant net losses were found for the applicability ratings (-0.14 per year) and the intention to use ratings (-0.03 per year). The proportions of variance in outcomes associated with differences among practitioners are also reported in Table 2.
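To make the reported magnitudes concrete, a constant annual change accumulates linearly over the six annual steps from 1999 to 2005. The slopes below are the annual changes reported above (+0.19 for acceptability, -0.14 for applicability); the 1999 starting score of 3.5 is a hypothetical value chosen only for illustration.

```python
# Sketch of how a constant per-year net change accumulates from 1999
# to 2005. The 3.5 starting score is hypothetical; the slopes are the
# reported annual changes for acceptability and applicability.
def predicted_score(intercept_1999, annual_change, year):
    """Linear trend with year centred on 1999, as in the analysis."""
    return intercept_1999 + annual_change * (year - 1999)

for label, slope in [("acceptability", 0.19), ("applicability", -0.14)]:
    s2005 = predicted_score(3.5, slope, 2005)
    print(f"{label}: 3.50 in 1999 -> {s2005:.2f} in 2005")
```

Even the largest slope shifts a score by only about one scale point over the whole study window, consistent with the small absolute changes described above.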

Impact of predictors

Additional File 1 reports the main effects of each predictor variable and the interaction between time and predictors for each of the outcome variables.

Clinician characteristics

Clinician discipline

A significant main effect of clinician discipline was found for the rigour (p = 0.01) and applicability (p = 0.038) scores. Rigour scores given by medical oncologists were highest, those given by radiation oncologists and surgeons were in the middle, and those given by 'other' specialists were lowest. Applicability scores were higher for medical oncologists and radiation oncologists than for surgeons and 'other' specialists.

A significant time by clinician discipline interaction emerged for the applicability score (p = 0.002). In 1999, medical oncologists and 'other' clinicians had higher applicability scores than radiation oncologists and surgeons. However, this pattern reversed over time, with medical oncologists and 'other' clinicians showing the largest declines in scores, whereas virtually no change was seen for radiation oncologists and surgeons (see Figure 1).

Figure 1

Time by clinician discipline interaction on clinicians' ratings of CPG applicability.

Research involvement

A significant time by research involvement interaction was found for the applicability (p = 0.006) and comparative value (p = 0.027) scores. For the comparative value rating, clinicians' initial scores in 1999 were virtually identical but, over time, scores diverged as a function of the amount of time devoted to research. Specifically, while little change was seen over time among those who devoted little or a moderate amount of time to research, a sharp decline in comparative value scores was seen among those who devoted a large amount of time.

In contrast, with the applicability score, in 1999 these ratings were higher for those who devoted a large amount of time to research compared to those who devoted less, with the inverse emerging by 2005.


Gender

There was a significant main effect of gender (favouring females) (p = 0.034) and a significant time by gender interaction (p = 0.045) for intention to use CPGs. Females reported greater intention to use CPGs than males in 1999. However, this pattern reversed by 2005.

Impact of clinician perceptions and attitudes

Belief CPGs linked to change

Comparative value scores diverged over time as a function of clinicians' belief that CPGs are linked to change. Specifically, comparative value scores in 1999 were lower for clinicians who believed CPGs are linked to change in practice than for those who believed practice could remain unchanged. The reverse pattern was found by 2005, with a larger difference between the groups (p = 0.036).

Misconception beliefs about CPGs

Significant main effects of CPG misconception beliefs and significant time by CPG misconception belief interactions emerged for the rigour (p < 0.01 and p = 0.014, respectively), acceptability (p < 0.01 and p = 0.006, respectively), comparative value (p < 0.01 and p = 0.006, respectively), CPG endorsement (p < 0.01 and p = 0.002, respectively), and intention to use CPGs (p < 0.01 and p = 0.003, respectively) scores. Highly similar patterns of main effects and interactions were found across these outcomes. Specifically, scores were highest among clinicians with the most favourable beliefs (i.e., fewest misconceptions), followed by those with moderate beliefs, and lowest among those with the least favourable beliefs (i.e., most misconceptions). However, in contrast to clinicians with more favourable or moderate beliefs (for whom either no difference or only small changes in scores were observed over time), scores increased over time among clinicians who held less favourable beliefs about CPGs. Thus, differences in scores between groups became smaller over time because of increases in quality, endorsement, and intention scores among those holding the most unfavourable beliefs. Figure 2 illustrates this pattern, using the interaction findings for clinicians' CPG rigour ratings as the exemplar.

Figure 2

Time by misconception beliefs about CPGs interaction on clinicians' ratings of CPG rigour.

Beliefs CPGs advance quality

Significant main effects of clinicians' belief that CPGs advance quality were found for the rigour (p < 0.01), applicability (p < 0.01), acceptability (p < 0.01), and intention to use (p < 0.01) scores. In all cases, scores were highest among clinicians most likely to believe CPGs are good scientific tools to advance quality, followed by those with moderate beliefs, and lowest among those least likely to hold this belief.

Main effects were subsumed by significant time by belief interactions for the rigour (p = 0.036) and intention to use (p = 0.024) scores. The pattern of interaction was similar in both cases. Scores increased over time for clinicians who were least likely to perceive CPGs as good scientific tools to advance quality. In contrast, for clinicians with more favourable or neutral beliefs, rigour and intention to use scores remained stable or changed only slightly. Thus, over time, the differences between groups became smaller, again because of increases in scores among those holding the most unfavourable beliefs. Figure 3 illustrates this pattern, using the interaction findings for clinicians' CPG rigour ratings as the exemplar.

Figure 3

Time by beliefs that CPGs advance quality interaction on clinicians' ratings of CPG rigour.

Clinician attitudes about CPGs

Significant main effects were found with CPG attitude scores for rigour (p < 0.01), acceptability (p < 0.01), comparative value (p < 0.01), endorsement (p < 0.01), and intention to use CPGs (p < 0.01) scores. In all cases, scores were higher among clinicians who held more positive attitudes, followed by those who held neutral attitudes, and lowest for those who held more negative attitudes.

Main effects were subsumed by significant time by clinician attitude interactions for the acceptability (p = 0.027), comparative value (p = 0.042), and endorsement (p = 0.005) ratings. Again, the patterns were extremely similar across the outcome measures. Among clinicians with very positive or moderately positive attitudes towards CPGs, there was little change in scores over time (scores remained very high). In contrast, increases in scores were observed over time among clinicians whose general attitudes were less positive. Thus, as with the belief measures, the differences among groups lessened over time. Figure 4 illustrates this pattern, using the interaction findings for clinicians' CPG acceptability ratings.

Figure 4

Time by clinician CPG attitudes interaction on clinicians' ratings of CPG acceptability.


Discussion

This study examined the influence of clinician characteristics, beliefs, and attitudes on clinicians' ratings of CPGs over time in a formal integrated healthcare system. PEBC cancer CPGs were evaluated as being of high quality, were strongly endorsed, and clinicians reported high intentions to use them in practice. Scores increased over time for the rigour, acceptability, comparative value, and endorsement measures, whereas significant annual declines were found for the applicability and intention to use scores. However, the absolute annual changes were small, possibly reflecting a ceiling effect due to the high ratings overall.

The variance accounted for by differences among practitioners ranged from 23.8% to 38.3% for the quality domains, and was 25.5% for the endorsement item and 18.7% for the intention to use in practice item. These values are similar to those found in previous studies [9], and suggest that understanding the characteristics of clinician stakeholders is important to better understanding and predicting ratings of, and intentions to use, recommendations.

The effects of the predictors were similar across outcome measures. Ratings of specific CPGs were higher among clinicians who held more favourable beliefs, held more positive attitudes, and had fewer misconceptions about CPGs. That is, general beliefs and attitudes appear to reflect a general orientation that strongly influences reactions to specific documents. However, we also found that ratings of specific CPGs tended to improve over time among clinicians with the least favourable general beliefs and most negative attitudes. These data provide important lessons regarding the application of evidence into practice.

Specifically, the data identify factors that may be useful for interventions or system redesign aimed at promoting evidence-informed decisions. For example, our study suggests that continued engagement of clinicians who are least receptive to cancer CPGs may be worthwhile. Perhaps with increased exposure to cancer CPGs, through external review processes, the use and application of cancer CPGs in their clinical setting, CPGs as an educational intervention, and/or exposure to clinical policy, clinicians wary of cancer CPGs become increasingly convinced of the role of these tools. It may also be that the influence of clinicians' negative preconceptions about CPGs is waning as evidence-based CPGs become increasingly established in the organizational and clinical culture of cancer care. Purposefully creating repeated opportunities for engagement among stakeholders in the cancer CPG enterprise, including the least supportive stakeholder group, may prove to be an effective component of an overall implementation strategy to facilitate the uptake of evidence. However, our unexpected finding of differences between the intentions of women and men to use CPGs over time suggests that further study is required to adequately tailor interventions so that all stakeholders feel engaged.

These data also highlight the value of the methodology we used to examine, from a longitudinal perspective, the interface between knowledge products (i.e., the guidelines) and the users of that knowledge (i.e., the clinicians). We found that ratings of CPG applicability and comparative value declined over time among clinicians who were more involved in research. Low scores on the applicability domain were not particularly surprising, as this has been found elsewhere. For example, in a review of 32 oncology guidelines, Burgers et al. found applicability scores to be extremely low, averaging 25.8% [27]. However, the decline over time was unexpected, and we can only speculate as to why this might be so. More recent cancer CPGs tend to have an increased focus on novel therapeutic agents and technologies, for which there is often an incomplete evidentiary basis or uncertainty regarding issues of implementation and public policy. This may call into question the value and role of these treatment options.

The dramatic shift in DSG portfolios towards CPGs for novel therapies may also explain the finding that ratings of CPG applicability were more likely to decline over time among medical oncologists than among other specialties. Medical oncologists are primarily responsible for the evaluation of novel chemotherapy agents. From a clinical practice perspective, physicians want to advocate for their patients, and CPGs can provide an avenue for the evidence to support this goal. However, tension arises in the Ontario cancer care system, a publicly funded system, because the CPGs are also formally used by government in decisions about which drugs should be paid for and made accessible to patients. Here, failure to gain access to promising but unproven care options, whether due to budget constraints or failure to meet evidentiary thresholds, can render the CPG irrelevant. These findings highlight the importance of understanding CPGs in a larger healthcare context, changes to that context, and the conflicts that sometimes result.

There are limitations to this work. The findings of this study are constrained to individuals who participate, in some fashion, in the CPG enterprise. We have little data on those who have chosen never to exercise that opportunity. It is not possible, therefore, to predict the beliefs, intentions, and characteristics of the non-responders. It may be useful to explore failure to participate, to better understand whether it is driven by a lack of support for an evidence-based framework for decision making or by unrelated factors (e.g., limited time). A separate project, in progress, is exploring these issues, in particular the links between intensity of participation and patterns of CPG quality ratings and intentions to use CPGs.

A second limitation is that the analysis stopped at clinicians' intentions to use CPGs rather than evaluating actual use (e.g., prescription patterns for chemotherapy, or radiotherapy regimens as noted in patient files). Previous research has demonstrated moderate correlations between intention measures and behavioral measures in the healthcare literature, albeit with some significant methodological caveats [28]. Nonetheless, this work gives us some reassurance about the applicability of our findings to the larger evidence utilization and application research literature. Regardless, clinical decisions and clinical outcomes remain the gold standard for evaluation; we aim to address this in the next steps of this program of research by examining how these evaluations relate to treatment decisions informed by CPGs.


Conclusion

We examined temporal trends in clinicians' evaluations of CPGs, as well as the clinician characteristics that might influence these changes. This study highlights the importance of construing quality in terms of clinicians' perceptions, rather than only the objective properties of guidelines. The results support the view that the quality and effectiveness of CPGs are best understood in terms of the contexts in which they are used and the characteristics, beliefs, and attitudes of their users.


References

  1. Committee to Advise the Public Health Service on Clinical Practice Guidelines, Institute of Medicine: Clinical Practice Guidelines: Directions for a New Program. 1990, Washington: National Academy Press


  2. Grol R: Successes and failures in the implementation of evidence-based guidelines for clinical practice. Med Care. 2001, 39 (8 Suppl 2): II46-II54.


  3. Woolf SH, Grol R, Hutchinson A, Eccles M, Grimshaw J: Clinical guidelines: potential benefits, limitations, and harms of clinical guidelines. BMJ. 1999, 318: 527-530.


  4. Grol R, Wensing M, Eccles M: Improving Patient Care: The Implementation of Change in Clinical Practice. 2004, Oxford: Elsevier


  5. Grimshaw J, Eccles M, Thomas R, MacLennan G, Ramsay C, Fraser C, Vale L: Toward evidence-based quality improvement. Evidence (and its limitations) of the effectiveness of guideline dissemination and implementation strategies 1966–1998. J Gen Intern Med. 2006, 21 (Suppl 2): S14-S20.


  6. Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, Whitty P, Eccles MP, Matowe L, Shirran L, Wensing M, Dijkstra R, Donaldson C: Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004, 8: iii-iv. 1–72


  7. Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L, Grilli R, Harvey E, Oxman A, O'Brien MA: Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001, 39 (8 Suppl 2): II2-II45.


  8. Health Services Utilization and Research Commission (HSURC): Evaluation of the impact of the HSURC Clinical Practice Guidelines Program. Summary Report. Saskatoon, SK. 2002


  9. Brouwers MC, Graham ID, Hanna SE, Cameron DA, Browman GP: Clinicians' assessments of practice guidelines in oncology: The CAPGO survey. Int J Technol Assess Health Care. 2004, 20: 421-426. 10.1017/S0266462304001308.


  10. Grol R, Dalhuijsen J, Thomas S, in't Veld C, Rutten G, Mokkink H: Attributes of clinical guidelines that influence use of guidelines in general practice: observational study. BMJ. 1998, 317: 858-861.


  11. Grol R, Buchan H: Clinical guidelines: what can we do to increase their use?. Med J Aust. 2006, 185: 301-302.


  12. Graham ID, Logan J, Harrison MB, Straus S, Tetroe J, Caswell W, Robinson N: Lost in knowledge translation: time for a map?. J Contin Educ Health Prof. 2006, 26: 13-24. 10.1002/chp.47.


  13. Cabana M, Rand CS, Powe NR, Wu AW, Wilson MH, Abboud P-AC, Rubin HR: Why don't physicians follow clinical practice guidelines?: A framework for improvement. JAMA. 1999, 282: 1458-1465. 10.1001/jama.282.15.1458.


  14. Rogers EM: Lessons for guidelines from the diffusion of innovations. Jt Comm J Qual Improv. 1995, 21: 324-328.


  15. Mittman BS, Tonesk X, Jacobson PD: Implementing clinical practice guidelines: social influence strategies and practitioner behavior change. QRB Qual Rev Bull. 1992, 18: 413-422.


  16. Fiske ST, Taylor SE: Social cognition. 1999, New York: McGraw Hill, 2


  17. Eagly AH, Chaiken S: The psychology of attitudes. 1993, Fort Worth: Harcourt Brace Jovanovic


  18. Ajzen I: From intentions to actions: A theory of planned behavior. Action-control: From cognition to behavior. Edited by: Kuhl J, Beckman J. 1985, Heidelberg: Springer, 11-39.


  19. Browman GP, Levine MN, Mohide EA, Hayward RS, Pritchard KI, Gafni A, Laupacis A: The practice guidelines development cycle: a conceptual tool for practice guidelines development and implementation. J Clin Oncol. 1995, 13: 502-512.

  20. Brouwers MC, Browman GP: The promise of clinical practice guidelines. Strengthening the Quality of Cancer Services in Ontario. Edited by: Sullivan T, Evans W, Angus H, Hudson A. 2003, Ottawa: CHA Press, 183-203.

  21. Evans WK, Brouwers MC, Bell CM: Commentary: Should cost of care be considered in a Clinical Practice Guideline?. JNCCN. 2008, 6: 224-226.

  22. Browman GP, Newman TE, Mohide EA, Graham ID, Levine MN, Pritchard KI, Evans WK, Maroun JA, Hodson DI, Carey MS, Cowan DH: Progress of clinical oncology guidelines development using the practice guidelines development cycle: The role of practitioner feedback. J Clin Oncol. 1998, 16: 1226-1231.

  23. Browman GP, Makarski J, Robinson P, Brouwers M: Practitioners as experts: the influence of practicing oncologists 'in-the-field' on evidence-based guideline development. J Clin Oncol. 2005, 23: 113-119. 10.1200/JCO.2005.06.179.

  24. Browman G, Brouwers M, De Vito C: Participation patterns of oncologists in the development of clinical practice guidelines. Curr Oncol. 2000, 7: 252-257.

  25. Graham ID, Brouwers M, Davies C, Tetroe J: Ontario doctors' attitudes toward and use of clinical practice guidelines in oncology. J Eval Clin Pract. 2007, 13: 607-615. 10.1111/j.1365-2753.2006.00670.x.

  26. Snijders T, Bosker R: Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling. 1999, London: Sage

  27. Burgers JS, Fervers B, Cluzeau F, Brouwers M, Philip T, Browman G: Predictors of health quality clinical practice guidelines: examples in oncology. Int J Qual Health Care. 2005, 17: 123-132. 10.1093/intqhc/mzi011.

  28. Eccles MP, Hrisos S, Francis J, Kaner EF, Dickinson HO, Beyer F, Johnston M: Do self-reported intentions predict clinicians' behaviour: a systematic review. Implement Sci. 2006, 1: 28-10.1186/1748-5908-1-28.



Acknowledgements

This project was supported by Grant 64203 from the Canadian Institutes of Health Research (CIHR). CIHR had no role in the design, analysis, manuscript development, or decision to submit the manuscript for publication. The authors would like to thank Carol De Vito for her contributions in preparing the databases for analysis.

Author information

Authors and Affiliations


Corresponding author

Correspondence to Melissa Brouwers.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

MB and SH conceived and designed the project, oversaw the analysis and interpretation of the data, drafted and revised the manuscript, and have given final approval of the submitted manuscript. MA-M and JY contributed to the design of the project, analyzed the data, and contributed to the writing and revision of the manuscript, and have given final approval of the submitted manuscript. MB acquired the data. This project contributed to the Master's degree educational requirements of Mona Abdel-Motagally and Jennifer Yee.

Electronic supplementary material


Additional file 1: Significant predictor main effects (top) and significant predictor by time interactions (bottom) for outcome measures. This table provides the results of the statistical analyses testing the main effects of each predictor variable and the interactions between the predictor variable by time for each of the outcome measures. (DOC 36 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Brouwers, M., Hanna, S., Abdel-Motagally, M. et al. Clinicians' evaluations of, endorsements of, and intentions to use practice guidelines change over time: a retrospective analysis from an organized guideline program. Implementation Sci 4, 34 (2009).


Keywords

  • Medical Oncologist
  • Endorsement Rating
  • Moderate Belief
  • Cancer Care Ontario
  • Belief Interaction