Open Access
Open Peer Review


Can patient decision aids help people make good decisions about participating in clinical trials? A study protocol

  • Jamie C Brehaut (corresponding author),
  • Alison Lott,
  • Dean A Fergusson,
  • Kaveh G Shojania,
  • Jonathan Kimmelman and
  • Raphael Saginur

Implementation Science 2008, 3:38

DOI: 10.1186/1748-5908-3-38

Received: 01 May 2008

Accepted: 23 July 2008

Published: 23 July 2008

Abstract

Background

Evidence shows that the standard process for obtaining informed consent in clinical trials can be inadequate, with study participants frequently not understanding even basic information fundamental to giving informed consent. Patient decision aids are effective decision support tools originally designed to help patients make difficult treatment or screening decisions. We propose that incorporating decision aids into the informed consent process will improve the extent to which participants make decisions that are informed and consistent with their preferences. A mixed methods study will test this proposal.

Methods

Phase one of this project will involve assessment of a stratified random sample of 50 consent documents from recently completed investigator-initiated clinical trials, according to existing standards for supporting good decision making. Phase two will involve interviews of a purposive sample of 50 trial participants (10 participants from each of five different clinical areas) about their experience of the informed consent process, and how it could be improved. In phase three, we will convert consent forms for two completed clinical trials into decision aids and pilot test these new tools using a user-centered design approach, an iterative development process commonly employed in the computer usability literature. In phase four, we will conduct a pilot observational study comparing the new tools to standard consent forms, with potential recruits to two hypothetical clinical trials. Outcomes will include knowledge of key aspects of the decision, knowledge of the probabilities of different outcomes, decisional conflict, the hypothetical participation decision, and qualitative impressions of the experience.

Discussion

This work will provide initial evidence about whether a patient decision aid can improve the informed consent process. The larger goal of this work is to examine whether study recruitment can be improved from (barely) informed consent based on disclosure-oriented documents, towards a process of high-quality participant decision-making.

Background

Informed consent is a cornerstone of ethical healthcare research, and is a required component of virtually all clinical studies conducted in modern institutions. The basic principles of informed consent were first documented as the Nuremberg Code [1] in response to Nazi war crimes. Later, these principles were refined and expanded as part of the World Medical Association's Declaration of Helsinki [2] in 1964, and its subsequent updates [3]. These fundamental documents, as well as substantial philosophical, clinical, legal, and regulatory debate [4, 5] have led to a general consensus regarding key criteria for informed consent, which include: the decision to participate in clinical research must be made voluntarily and free from coercion; the decision-maker must be competent to make the decision; full disclosure of relevant information must be given; and the relevant information must be understood by the decision-maker [4].

Recent work has identified a tension between the latter two core criteria [6-8]. On the one hand, researchers, institutions, and industry sponsors seek to disclose all potentially relevant information and to ensure that legal disclosure requirements are clearly met. Such disclosure is being implemented with increasingly long and complicated patient materials [7]. On the other hand, there is increasing evidence that the existing consent process often leads to poor participant understanding. Examples abound of participants not understanding even the most basic components of the studies in which they are involved [9-22], including participants not understanding that they had been randomly assigned to treatment [23], and participants believing that their treatment was already proven effective [24]. To an extent, pressures to disclose and to aid understanding can be opposed; to date, it appears that pressures towards disclosure have been stronger than those towards ensuring participant understanding [25, 26].

While the practice of informed consent has emphasized disclosure and increasing complexity, there is considerable literature on how to improve knowledge and understanding when making tough health care decisions. It is well known, for example, that simply providing clear information does not ensure that good decisions will be made. Many factors not directly related to the actual information presented can affect decision-making. Irrational and/or emotional factors can be important determinants of patient decisions [27, 28]. Misunderstanding or misinterpretation of even clearly presented information can contribute to poor decisions [29, 30]. Presenting the same information in different ways can result in different decisions, suggesting that how the information is presented, as well as what is presented, is important [31-33]. Furthermore, psychological states such as feeling unsure or unprepared correlate with decision quality [34, 35]. To facilitate high quality decision-making, information must be presented in a way that reduces the likelihood of misinterpretation, reduces uncertainty, and increases a feeling of being prepared for the decision [35, 36].

Decision quality can be a difficult concept in situations where there is no objectively correct answer. The treatment decision literature [37, 38] distinguishes between two kinds of decisions. Effective care decisions are those where clinical evidence suggests a course of action that has a benefit/harm ratio superior to all other available options. In such situations, a 'good' decision typically involves choosing the most effective option. In contrast, 'preference-sensitive' decisions have no clinically correct course of action, either because evidence on treatment effectiveness is unavailable, or because the benefits and harms of different treatments need to be evaluated in the context of patient values. It is for these preference-sensitive decisions that defining decision quality can be challenging. However, more than 20 years of work on the issue points to three critical components: knowledge of the key aspects of the decision, accurate perceptions of the probabilities of outcomes under the different options, and a match between what outcomes patients value and the treatment options they choose [37, 39].

The decision to participate in a clinical trial is an excellent example of a preference-sensitive decision. The pros and cons associated with participation (including, but not limited to, the benefits and harms of offered treatments) are frequently not well known; this is the reason for conducting the trial. As a result, decisions about whether to participate depend entirely on how individuals value the potential benefits (e.g., incentives, potential health benefits, altruism) and harms (e.g., side effects, clinic visits, travel) of participation. It is precisely for preference-sensitive decisions like these that patient decision aids (DAs) have been developed.

DAs are tools designed to help people make specific and deliberative choices among options by providing, at a minimum, information on the options and outcomes relevant to the person's health status. They can also include exercises to help people explicate choice predisposition, preference for role in decision-making, and how they value the different options [40]. DAs are intended to be used prior to, and in conjunction with, decision-making counselling sessions, and are thus consistent with the notion that consent should involve a process, not just a document.

The effectiveness of DAs has been tested extensively, with over sixty trials completed or in progress [40]. DAs have been shown to improve the quality of preference-sensitive patient decisions, in comparison to both standard care information documents and standard counselling strategies [35, 36, 39, 41, 42]. Specifically, they reduce uncertainty surrounding decisions (often termed decisional conflict [40, 43]), enhance knowledge of key aspects of the decision and outcome probabilities [40, 44, 45], improve satisfaction with choices made, and improve the likelihood that selected treatments will be consistent with valued outcomes [44, 46]. We propose that similar benefits might be attained when deciding whether to participate in a clinical trial. Furthermore, the related findings that DAs improve understanding, that improved understanding can increase trial participation rates [47-50], and that DAs can increase selection of underused treatment options [51, 52] lead to the intriguing possibility that DAs may increase trial participation in situations where benefits compare favourably to harms.

Patient DAs have a strong theoretical foundation in the Ottawa Decision Support Framework [53, 54], an evidence-based framework informed by cognitive, social, and organizational psychological theory, components of which have been validated in at least twelve studies [54]. This framework guided the development of the International Patient Decision Aid Standards (IPDAS) [36]. These standards were developed using an extensive evidence-based consensus process that included input from patients, practitioners, policy-makers, and decision support experts from fourteen countries worldwide. The IPDAS standards describe detailed recommendations about the content and delivery of information to facilitate high quality decisions. These standards are often consistent with, but sometimes more specific than, consent form guidelines. For example, while consent form guidelines require general information on benefits and harms of trial participation, IPDAS standards require consistent denominators, time periods, and multiple (positive and negative) frames for outcome probabilities [36, 55-57]. The IPDAS criteria also describe other exercises, such as requiring decision-makers to clarify which outcomes (positive and negative) they value most (e.g., How important to you is an X% chance of improvement? How important is a Y% chance of a side effect?). Such exercises are commonly used in the patient DA literature, but rarely in the context of informed consent documents.

The decision support literature is increasingly focused on the development of computer-based (i.e., 'online') decision aids. For information producers, the benefits of presenting DAs online include easy updating compared to print media, and easy dissemination via the internet [55]. For patients, advantages include accessibility and the potential for improved learning if multimedia tools are employed correctly [58]. Multimedia approaches as a class have met with limited success [36, 48], but our preliminary research suggests that multimedia DAs can be effective when informed by a theoretical framework [59]. Therefore, the DAs developed for this study will be designed for presentation online.

To summarize, we propose that many failures of the existing informed consent process stem from an inappropriate focus on disclosure of information, rather than on facilitating high quality decision-making among potential research participants. In order for the informed consent process to allow both disclosure and understanding, innovative ways of presenting increasingly complex information to decision-makers are required. Patient DAs, which have been shown to improve decision-making in other contexts, may improve the quality of trial participation decisions. The current study will investigate this issue.

Objectives

This study has four main objectives:

First, to examine whether consent forms of recently completed randomized controlled trials (RCTs) conform to standards for promoting high quality decision-making. Specifically, we hypothesize that there will be considerable variation in adherence to existing standards, even among a relatively homogeneous sample of consent forms drawn from investigator-initiated health research RCTs, and that many consent forms will lack key components necessary to facilitate high quality decision-making, as indicated by existing standards.

Second, to learn about the experience of trial recruitment from participants. Specifically, we will interview trial participants about: how they were recruited to participate in the trial; what factors they considered when deciding whether to participate; their impressions and reported use of any decision support materials provided; suggestions about how the recruitment process might have been improved; and overall impressions of trial participation.

Third, to employ a treatment DA template and user testing via the user-centered design (UCD) approach to develop a DA for people deciding whether to participate in a clinical trial. Specifically, we hypothesize that a template designed to inform development of patient DAs can be effectively used to develop a DA about whether to participate in a clinical trial, and that DA development via UCD can result in a DA that meets previously determined usability goals.

Fourth, to test whether trial participation decisions based on a user-tested patient DA (as opposed to a standard consent form) will result in measurable differences in decision quality among hypothetical candidates for clinical trials. Specifically, we hypothesize that people using a DA will be less uncertain about the decision [60-63]; better remember the key aspects of the decision [45, 64-71]; better understand probabilities of key outcomes [44, 45, 63, 72-74]; show a higher correlation between outcomes valued and choice made [40, 44, 46]; and, be more likely to participate in the clinical trial [47, 51, 52, 75].

Methods

Objective One: comparing consent forms to standards

Before developing a tool to help people decide whether to participate in a clinical trial, it will be important to investigate the effectiveness of the current process. Objective one will examine how well existing consent forms conform to empirically developed standards for promoting high quality decisions.

The primary tool for this assessment will be a checklist recently developed as part of the IPDAS [36]. Designed by an international collaboration of experts on patient decision-making, this checklist includes 74 criteria from 12 quality domains; each criterion is considered important for helping patients make difficult decisions about treatment or screening. The IPDAS criteria overlap with guidelines for informed consent documents (e.g., use of plain language, reading level requirements, disclosure of conflicts of interest, presenting both positive and negative outcomes associated with the different options). As such, evaluating consent forms using this checklist will also assess requirements laid out in consent form guidelines. For completeness, consent form recommendations from other resources (e.g., U.S. National Cancer Institute, National Cancer Institute of Canada, Tri-Council Policy Statement [76, 77]) will be examined and any identified missing items will be appended to the checklist.

We will then assess a random sample of consent forms from approved investigator-initiated trials completed within the last six to 24 months at two institutions. The random sample of clinical trials will be drawn from local research ethics boards (REB) databases. Although these databases contain information on all institution-specific research projects, only non-industry studies labelled as clinical trials involving adults will be eligible for inclusion. Principal investigators of included studies will be contacted directly for consent forms and assured that identifying information (e.g., investigator and proprietary drug names) will be removed before assessment. They will be informed that results will be reported in aggregate, meaning that individual studies will not be identified. Principal investigators will also be asked for information regarding overall enrolment rates; this should be known since the sample of consent forms will be limited to studies completed within the last six to 24 months. If consent forms for any of the target trials cannot be obtained, a replacement study will be randomly selected from the same review board database.

Study investigators who are approached to provide consent material for this study may feel pressured to comply because some of the authors are members of the local REBs to which they may later submit protocols. An analogous situation is common in clinical research, where physician-investigators recruit their patients into their own studies. In that situation, recruitment materials commonly include information designed to reassure patients that their care will not be affected by their decision to accept or decline trial participation. Similarly, we will reassure investigators that subsequent REB reviews will be unaffected by their decision to participate in our study. Furthermore, no investigator will review consent materials until all identifying information has been redacted from the documents. One investigator's name (RS) will be left off all Ottawa recruitment letters, because it was felt his name might carry particular weight given that he chairs the Ottawa REB. We will also ensure that RS does not review any Ottawa consent materials, even after redaction.

A research coordinator and graduate student will be designated as coders and asked to rate all target consent forms with respect to the IPDAS checklist, using a Yes (2), Partly (1) and No (0) response scale for each criterion. For each consent form, the coders will also extract several descriptive factors that will later become the focus of post hoc exploratory analyses. For example, each study will be coded according to medical discipline (e.g., oncology), and trial phase (e.g., phase one, phase two). Exploratory analyses will then be used to look for correlations between consent form quality and these descriptive factors, as well as the relation between quality and true recruitment rate.

Sample size and analyses

Consent forms will be randomly selected (25 from each institution) for application of the standards checklist. Taking compliance with 60% of the IPDAS items as a reasonable benchmark, a sample of fifty consent forms allows an overall compliance rate of 60% (30 of 50) to be estimated with a 95% confidence interval of approximately ± 15% [78]. This sample size will allow us to quantify the certainty of our estimates of overall compliance with the IPDAS criteria in the larger population of consent forms in the two databases.
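The confidence-interval arithmetic above can be sketched as follows. This is an illustrative normal-approximation calculation, not the authors' analysis code; the figures (30 of 50 forms, 95% confidence) are taken from the paragraph above.

```python
import math

def proportion_ci_half_width(successes, n, z=1.96):
    """Half-width of a normal-approximation 95% CI for a proportion."""
    p = successes / n
    return z * math.sqrt(p * (1 - p) / n)

# 60% compliance (30 of 50 forms) gives a half-width of about 0.14,
# i.e. roughly the +/- 15% quoted in the protocol.
half_width = proportion_ci_half_width(30, 50)
```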

Although the IPDAS checklist was developed according to a rigorous Delphi methodology, this document has not yet been validated as an assessment tool [36]. As a result, the investigator team will first 'pilot' the rating of several consent forms, thereby evaluating the checklist for overlapping, unclear, or missing items. These piloted consent forms will come from a database of publicly accessible consent documents already in the possession of the authors [79]. Once the items in the checklist have been agreed upon, the investigator team will train the two coders using these same pilot consent forms. This training will proceed until the consistency of coder agreement exceeds 80% on various components of the checklist. While coding the target consent forms, the two coders will resolve disagreements by consensus or confer with the investigator team when there is uncertainty. Inter-rater agreement for each item will be assessed using Kappa scores [80, 81]. Because the checklist has not been validated overall as a scale of consent form quality, we will not compute overall assessment scores, but instead only examine descriptively the presence or absence of specific criteria.
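Inter-rater agreement via Cohen's kappa can be computed as in the sketch below; the per-item codes shown are hypothetical, using the Yes (2) / Partly (1) / No (0) scale described above.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # observed proportion of items on which the coders agree
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # agreement expected by chance from each coder's marginal frequencies
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical item-level codes from two coders on the 2/1/0 scale
coder1 = [2, 2, 1, 0, 2, 1, 1, 0, 2, 2]
coder2 = [2, 2, 1, 0, 1, 1, 0, 0, 2, 2]
kappa = cohens_kappa(coder1, coder2)
```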

Descriptive analyses will be used to evaluate the number and variation of checklist items present across the different consent forms (hypothesis one). Also, descriptive analyses will be used to identify which specific IPDAS components are more or less likely to be included in consent forms (hypothesis two). Further post hoc exploratory analyses will examine whether consent forms from oncology trials (an area where significant work on consent form ethics has been conducted) include more components conducive to good decision-making than trials from other areas, and determine the relationship between consent form quality, as indicated by items on the IPDAS checklist, and true enrolment rates.

Objective Two: interviews of trial participants

Objective one seeks to assess the current practice of trial recruitment by evaluating existing written materials. However, studying trial recruitment should not be limited to the written materials; other factors, such as consultation with study personnel, often play an important role in this process. Despite attempts to improve the informed consent process [48], relatively few studies have described the experiences of those individuals who must understand the complex information presented in consent documents, and those that have tend to focus on specific clinical areas [7]. Objective two will elicit the experiences of participants from a variety of studies, to identify themes that may be broadly applicable to improving the quality of participation decisions.

We will interview recent recruits from a convenience sample of ongoing clinical trials at local institutions. Our aim is not to document an exhaustive list of recruitment issues for each study, but rather to elicit themes that are common across trial recruitment situations. The authors will target eight to ten adult participants from multiple studies in five disciplines (oncology, thrombosis, emergency medicine, transfusion research, cardiology). Study investigators will contact the lead investigators of the selected RCTs and ask them to distribute recruitment letters to participating patients, if ethical circumstances allow. Our purposive sample will include both low- and high-risk studies (as determined by the local REB records) from each discipline, to elicit opinions about a range of studies.

Our phenomenological approach will involve semi-structured interviews approximately 45 minutes in length consisting of questions focused on trial recruitment, provided materials, decision-making, and how the overall process could have been improved. Three pilot interviews will be conducted to test the appropriateness and flow of the interview guide; the interview questions will be modified accordingly before proceeding with the remaining interviews. Interviewers will prompt participants to clarify and elaborate on their responses, and all interviews will be recorded and transcribed. Participants will be offered $20 as a token of appreciation and to cover any attendant costs. Qualitative analysis will use NUD*IST software, applying the constant comparison method described by Strauss and Corbin [82] to elicit clusters of meanings from the narrative data that describe the experience of participants and inform the design of subsequent DAs.

Objective Three: iterative development of a decision aid

Considerable work has examined how best to present complex information via computers [83-86]. Problems with online information can be characterized in terms of two dimensions: usability and usefulness [86]. Usability refers to the ease with which specified users can locate and interpret the information, while usefulness describes the degree to which the right information is presented at the right time. UCD is a qualitative, multi-stage procedure, and is one of the most well studied, efficient, and cost-effective methods for improving both the usability and the usefulness of complex, online materials [87]. It is an iterative process of design, evaluation, analysis, and re-design intended to create a final product that meets predetermined usability goals (e.g., 90% of the time, patients should be able to read and complete the DA in less than 30 minutes, and score 80% or better on a knowledge test of the key aspects of the decision). This process has been shown in a variety of contexts to improve user satisfaction [88], reduce errors in navigation and the resulting confusion [87, 89], and to increase the efficiency with which the information can be found [86]. However, this technique has not yet been applied to decision support materials for people making health care decisions.

We propose to employ UCD as a qualitative methodology designed to optimize the IPDAS DA template for decisions involving participation in clinical trials. This template was developed for screening and treatment decisions, where the benefits of using DAs have been clearly demonstrated. However, neither the template nor the generalized DA technology has been tested in the context of clinical trial participation. As a result, some detailed, qualitative pilot testing is required to examine how a DA based on the IPDAS template mediates decision making in this context.

We will develop DAs for two target studies from the set of trials assessed in objective one. Although UCD testing is labour intensive, developing two DAs instead of one will help identify which issues can be generalized and which issues are idiosyncratic to specific studies. The choice of which two trials to focus on will be determined by two main criteria. First, studies whose inclusion criteria are extremely strict, or where the relevant population does not exist locally, will be avoided to allow enough participants to be recruited for objective four. Second, if there is significant variation in the extent to which consent forms adhere to standards (objective one, hypothesis one), the investigator group will choose one trial that meets relatively few criteria and one that meets more. This selection process will allow us in objective four to study whether a DA affords a benefit only over poorly designed consent forms. Note that analysis for this objective will be exploratory in nature, since any differences in the number of criteria met will be confounded with clinical condition.

Risk information will be consistent for both consent forms and DAs (i.e., the DA will not introduce any new risk information). While both the DA literature and the IPDAS criteria recommend providing specific numbers associated with the risks of different outcomes, such specific outcome probabilities are often not available for clinical trials. Therefore, for the purposes of this project, specific outcome probabilities will not be included in the DA if they are not provided in the associated consent form. Instead, the DA will contain standardized descriptors, such as those recommended by the National Cancer Institute of Canada (e.g., common = more than 200 per 1000, very rare = fewer than 1 per 1000) [90].
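A mapping like the following could implement the standardized descriptors. Only the two bounds quoted above are taken from the protocol; the middle category is a placeholder for illustration, not the full NCIC scale.

```python
def risk_descriptor(events_per_1000):
    """Map an outcome rate (events per 1000) to a verbal descriptor.

    Only the two bounds quoted in the protocol are encoded here; the
    intermediate categories of the full NCIC scale are collapsed into
    a single placeholder for illustration.
    """
    if events_per_1000 > 200:
        return "common"
    if events_per_1000 < 1:
        return "very rare"
    return "intermediate (see the full NCIC scale)"
```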

Data collection for this objective will consist of two phases of qualitative UCD testing: expert testing and user testing. Phase one will involve experts (three DA experts and three content experts, drawn from the investigator team and colleagues) working through the DA to ensure that all information relevant to the decision is present. They will examine the DA to ensure formatting conforms to basic principles or 'heuristics' of good design (heuristic evaluation [91]). These experts will also identify potential stumbling blocks in the material by working through the entire tool; this technique is referred to as a 'cognitive walkthrough' [92].

Once the expert evaluations are complete, phase two will subject the updated version of the DA to a series of 'user tests' involving adult participants 'talking aloud' [93] as they work through the tool. The user tests will be videotaped and evaluated for user misunderstandings, expressions of frustration or confusion, and the specific areas of the DA where these occurred. These 'usability problems', as well as items that multiple users identify as challenging, will become target areas for improvements on subsequent iterations. The DA will be revised after each iteration of five or six participants [86]. This iterative approach provides (in the first iteration) baseline measures of user satisfaction and performance (time required to read, comprehension, misunderstandings), as well as (in later iterations) the degree to which the current version of a DA meets pre-specified usability goals. Each session will take approximately 45 to 60 minutes, for which participants will be offered $20 as a token of appreciation and to cover any attendant costs.

Sample size and analyses

Participants in this phase of the study will be naïve volunteers age-matched to typical patients with the condition discussed in the DA. Based on previous experience and the usability testing literature [93], four to five iterations of five to six participants each will be sufficient to meet the usability goals described above (i.e., twenty to thirty participants will be required).

Objective Four: prospective observational study

Objective four will compare the experiences of people using consent forms and DAs to assist hypothetical decisions about trial participation. This objective will consist of a prospective observational study designed to collect both qualitative and quantitative data relevant to whether this approach warrants further evaluation with a pilot RCT.

Participants will be naïve individuals who meet the inclusion criteria of the target study, and thus could have been approached to participate in the original study. However, those who actually were approached to participate in the target study, regardless of their decision to participate, will be excluded from our study. The study will focus on the two target trials whose consent forms were subjected to user testing in objective three, crossed with the type of decision tool (consent form, DA). Participants will be eligible for our study if they speak English, are over 18 years of age, and meet the inclusion criteria of one of the target studies.

Potential participants that meet one of the two sets of inclusion criteria will be approached to enrol in our study (i.e., non-random allocation to target study), and standard consent will be obtained. Participants will work through one of the two decision support tools. Data will first be collected on consent forms, then later for DAs. We have chosen this approach for two reasons. First, by collecting data for consent forms initially, we will not need to wait until the end of DA user testing to begin data collection for objective four. Participants may need to fit strict inclusion criteria for the relevant studies chosen for objective four; this approach adds flexibility to our timeline. Second, we will incorporate information gleaned from the consent form participants (particularly their qualitative responses) to further improve the DA. This approach sacrifices the experimental rigour of an RCT, but adds a richness of qualitative and quantitative data that is most likely to result in both a tool that maximally improves the informed consent process, and in a better understanding of what outcomes are affected by the newer decision support tool.

After working through the decision support tool, participants will complete a paper-based questionnaire. This questionnaire will include validated measures of constructs related to decision quality, as well as qualitative questions about their impressions of the recruitment process and materials. Quantitative outcomes measured will include decisional conflict [94], memory for key aspects of the decision, knowledge of the probabilities of different outcomes, values associated with different outcomes for comparison with the participation choice, and participation choice. We will also measure satisfaction with the decision support materials, satisfaction with the informed consent process [49], and anticipated regret of key negative outcomes. In addition, participants will be asked to make a hypothetical decision about whether or not they would participate in the target trial (yes, no, unsure). Of note, participants will not have access to the decision support materials when completing the questionnaire. At the end of the session, participants will be provided with a debriefing form, explaining the purpose of the study and how their data will contribute towards improving the consent process. The entire session will take 45 to 60 minutes, for which participants will be offered $20 as a token of appreciation and to cover any attendant costs.

Sample size and analyses

The sample size calculation was based on detecting differences on the continuous decisional conflict scale [43]. We selected this scale as the primary outcome for this analysis because it is considered a key correlate of good decision quality and has been well validated in the context of many treatment decisions. Sample size calculations were carried out by simulation using the aov function of the R statistical software [95]. We conducted a simulation with 50,000 iterations, detecting a 10% difference on a continuous outcome. The simulation showed that a sample size of 30 individuals per group, or 120 in total, yields a type I error rate of 0.048 (i.e., an alpha level of approximately 0.05) and a type II error rate of less than 0.01 (i.e., power greater than 0.99).
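The logic of such a simulation can be sketched as follows. This is a simplified two-group Python analogue of the R-based calculation described above, not the authors' actual code; the scale mean, standard deviation, and iteration count are illustrative assumptions only.

```python
import numpy as np

# Illustrative Monte Carlo power simulation for a two-group comparison on a
# continuous outcome. The scale mean (2.0), SD (0.15), and iteration count
# are assumed values, not those used in the protocol's own calculation.
rng = np.random.default_rng(42)

def one_way_f(a, b):
    """F statistic for a one-way ANOVA comparing two groups."""
    grand = np.mean(np.concatenate([a, b]))
    ss_between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
    ss_within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    # df_between = 1, df_within = n_total - 2
    return ss_between / (ss_within / (len(a) + len(b) - 2))

def rejection_rate(delta, n=30, iters=2000, mean=2.0, sd=0.15):
    """Proportion of simulated datasets in which H0 is rejected at alpha = 0.05."""
    crit = 4.007  # critical value of F(1, 58) at the 0.05 level
    hits = 0
    for _ in range(iters):
        a = rng.normal(mean, sd, n)
        b = rng.normal(mean + delta, sd, n)
        if one_way_f(a, b) > crit:
            hits += 1
    return hits / iters

alpha = rejection_rate(delta=0.0)  # type I error rate; should be near 0.05
power = rejection_rate(delta=0.2)  # power to detect a 10% (0.2-point) shift
```

Simulating under the null (delta = 0) recovers the empirical alpha level, while simulating under the hypothesized 10% difference yields the empirical power, mirroring the two quantities reported above.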

Analyses for this study will consist of linear (for continuous outcomes) and logistic (for binary outcomes) regressions, with type of decision support (consent form/DA), target study, and their interaction predicting the different outcomes. For example, when predicting decisional conflict, a significant effect of type of decision support will indicate whether those making decisions on the basis of the consent form and the DA differ in how unsure they remain about the decision. Similarly, a significant effect of target study will show whether decisional conflict was higher for one target study regardless of the type of decision support, and a significant interaction will show that the two effects are not independent. The collected demographic characteristics of respondents (e.g., age, sex) will also be included as covariates.
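The structure of the planned linear model can be sketched as follows. Since no study data exist yet, the data below are simulated placeholders with assumed effect sizes, fit by ordinary least squares in Python rather than the authors' own software.

```python
import numpy as np

# Sketch of the planned linear regression on simulated placeholder data.
# The effect sizes and the age covariate are illustrative assumptions.
rng = np.random.default_rng(0)
n = 120
support = np.repeat([0, 1], n // 2)  # 0 = consent form, 1 = decision aid
study = np.tile([0, 1], n // 2)      # the two target studies
age = rng.uniform(30, 70, n)         # example demographic covariate

# Simulated outcome: here the DA lowers decisional conflict by 0.5 points
conflict = 2.5 - 0.5 * support + 0.1 * study + rng.normal(0, 0.3, n)

# Design matrix: intercept, two main effects, interaction, covariate
X = np.column_stack([np.ones(n), support, study, support * study, age])
beta, *_ = np.linalg.lstsq(X, conflict, rcond=None)
# beta[1] estimates the effect of decision-support type (negative here,
# i.e., lower decisional conflict with the DA)
```

The coefficient on the support term corresponds to the "effect of type of decision support" described above, and the coefficient on the product term to the interaction.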

Hypotheses for this objective were principally derived from the literature on the effects of DAs on treatment decisions, i.e., that they improve the quality of decision-making. We hypothesize that using a DA will result in: reduced indecision and decisional conflict [39, 40, 45, 61, 62]; improved memory for key aspects of the decision [64, 96]; improved knowledge of key outcome probabilities [40]; and a higher correlation between self-identified important outcomes and the selected treatment choice [40, 97]. The literature has shown that DAs can affect behavioural outcomes, such as increased use of underused treatments [51, 52], and that confusion arising from consent forms may contribute to non-participation [48]. We therefore further hypothesize that DAs may increase participation in trials where risk/benefit ratios are favourable.

Finally, we will collect and analyze the number and content of questions that potential participants ask after working through the decision support materials. These post-consent-form discussions will not only make the consent process more representative of recommended real world practice [48], they will also serve as a valuable data collection opportunity. The DA will explicitly ask people to record any unanswered questions about the associated trial, while encouraging systematic thinking about the various possible outcomes. As a result, we expect that more questions, and more detailed questions, will come from those working through the DA than from those working through the consent form. The enrolment rates observed among our consent form participants will be compared to the reported trial enrolment rates, to estimate how closely hypothetical recruitment mimics real life situations.

Limitations

A number of limitations of this study warrant consideration. First, the development of a DA to better inform trial participation decisions does not address all the ethical concerns related to informed consent. It has been argued that the existing informed consent process lends itself to problems by focusing on specific, isolated decisions, rather than larger concepts such as the overall autonomy of the individual (e.g., see Kukla [98]). While the current approach does not directly address these larger issues, we believe that the development of improved decision tools will serve such larger goals. For example, improved decision tools will encourage thinking about informed consent as a process rather than a discrete event. Furthermore, DAs may yield benefits beyond the immediate aims of this study by explicitly addressing issues such as the balance between benefits and harms, and by prompting potential participants to think about what further information they require and their preferred role in decision-making [7]. Since memory for information presented during the consent process can fade over the course of participation [49], the DAs developed for this study will include a take-home one-page summary that can be used to periodically review key trial information. Future work, perhaps involving a larger study examining the entire time course of trial participation, will be required to address these larger ethical concerns.

Second, the IPDAS checklist used in objective one has not been validated as an assessment tool. This checklist was developed according to a modified Delphi method [99], and constitutes the consensus of an international consortium of experts on which items comprise high quality decision support. Because the checklist has not been formally validated, we decided to incorporate a pilot testing phase designed to identify overlapping or problematic items and describe item-specific results; an overall 'quality' score will not be computed. Future work should involve formal psychometric analysis of the IPDAS checklist as a measure of DA quality in treatment decisions, and separately as an indicator of the ability to improve informed consent.

Third, objectives three and four will make use of hypothetical decision-makers rather than actual patients making real world decisions. This characteristic is common in the literature, but has been argued to adversely affect study generalizability [48]. However, increasing evidence shows that decision-making based on hypothetical, written scenarios is highly correlated with real world decisions [100, 101]. This study is designed to determine whether incorporating DAs into real informed consent decisions is worthwhile; as such, we felt that it would be inappropriate to use actual patients until it is known whether DAs are at least as effective as standard practice in assisting decision-making. However, it may be that in this context, hypothetical decisions are not predictive of actual decisions. This issue will be addressed by examining the calibration of hypothetical enrolment rates from objective four with true enrolment rates collected in objective one. Determining the usefulness of DAs for true participation decisions will be the subject of another study.

Fourth, objective four will compare online DAs to existing, paper-based consent forms, which are still the norm for most clinical trials. This comparison leaves open the possibility that any observed differences could stem from the difference in media (paper-based versus online) rather than from the decision support tool itself (consent form versus DA). However, a recent systematic review of interventions designed to improve informed consent documents showed that presenting the information online is not enough to ensure better understanding, and a meta-analysis of all multimedia manipulations showed a null effect on consent form knowledge [48]. As a result, we expect that any response differences between the two types of decision material will be primarily due to differences in the support framework, rather than any implicit advantage of the display medium.

Fifth, blinding, allocation concealment and randomization for objective four are impossible or impractical, and thus strong claims about the relative benefits of DAs versus consent forms cannot be made. The next step in this research program will address this issue by comparing the performance of DAs and consent forms in the context of a real world RCT.

Sixth, this work will focus only on investigator-initiated trials, and not industry-sponsored trials. Practical and legal aspects of studying industry trials led us to limit our samples for this project; however, this subgroup of trials is clearly of interest and will be the subject of a separate investigation.

Finally, examination of long term effects of the informed consent process, such as dropout rates, satisfaction or regret with participation in the trial, willingness to participate in a similar trial, etc., will not be possible in this short term project, and will be the subject of future work. The current study will examine only immediate outcomes that in the treatment decision literature are known to be correlated with the longer term outcomes [102].

Declarations

Acknowledgements

This protocol has been peer-reviewed and funded by the Canadian Institutes of Health Research (CIHR). Ethics approval is being sought from the Ottawa Hospital Research Ethics Board (RS, who is chair of the board, recused himself from the decision-making process). JCB is a Career Scientist with the Ontario Ministry of Health and Long-Term Care.

Authors’ Affiliations

(1)
Ottawa Health Research Institute, Ottawa Hospital
(2)
Department of Epidemiology and Community Medicine, Faculty of Medicine, University of Ottawa
(3)
Research Ethics Board, The Ottawa Hospital
(4)
Biomedical Ethics Unit, McGill University
(5)
Canadian Institutes of Health Research (CIHR)
(6)
Sunnybrook Health Sciences Centre

References

  1. Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law. 1949, Washington, D.C., U.S. Government Printing Office, 10 (2): 181-182.Google Scholar
  2. World Medical Organization: Declaration of Helsinki. British Medical Journal. 1996, 313: 1448-9.Google Scholar
  3. World Medical Association: Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. 2006, [http://www.wma.net/e/policy/b3htm]Google Scholar
  4. Beauchamp T, Childress J: Principles of biomedical ethics. 1994, Oxford: Oxford University PressGoogle Scholar
  5. Faden R, Beauchamp T: A history and theory of informed consent. 1986, Oxford: Oxford University PressGoogle Scholar
  6. Cox K, Avis M: Psychosocial aspects of participation in early anticancer drug trials: Report of a pilot study. Cancer Nursing. 1996, 19: 177-86. 10.1097/00002820-199606000-00004.View ArticlePubMedGoogle Scholar
  7. Cox K: Informed consent and decision-making: patients' experiences of the process of recruitment to phases I and II anti-cancer drug trials. Patient Educ Couns. 2002, 46: 31-8. 10.1016/S0738-3991(01)00147-1.View ArticlePubMedGoogle Scholar
  8. Ellis P: Attitudes towards and participation in randomised clinical trials in oncology: A review of the literature. Ann Oncol. 2000, 11: 939-45. 10.1023/A:1008342222205.View ArticlePubMedGoogle Scholar
  9. Bergler J, Pennington A, Metcalfe M, Freis E: Informed consent: how much does the patient understand?. Clin Pharmacol Ther. 1980, 27: 435-40.View ArticlePubMedGoogle Scholar
  10. Daugherty C, Kiolbasa T, Sieger M: Informed Consent (IC) in clinical research: A study of cancer patient (pt) understanding of consent forms and alternatives of care in phase I clinical trials. Am Soc Clin Oncol. 1997Google Scholar
  11. Daugherty C, Banik D, Janish L, Ratain M: Quantitative analysis of ethical issues in phase 1 trials: a survey interview of 144 advanced cancer patients. IRB. 2000, 22: 6-14. 10.2307/3564113.View ArticlePubMedGoogle Scholar
  12. Davis TC, Holcombe RF, Berkel HJ, Pramanik S, Divers SG: Informed consent for clinical trials: A comparative study of standard versus simplified forms. J Natl Cancer Inst. 1998, 90: 668-78. 10.1093/jnci/90.9.668.View ArticlePubMedGoogle Scholar
  13. Grossman SA, Piantadosi S, Covahey C: Are informed consent forms that describe clinical oncology research protocols readable by most patients and their families?. J Clin Oncol. 1994, 12: 2211-5.PubMedGoogle Scholar
  14. Hopper KD, TenHave TR, Hartzel J: Informed consent forms for clinical and research imaging procedures: how much do patients understand?. AJR. 1995, 164: 493-6.View ArticlePubMedGoogle Scholar
  15. Mader TJ, Playe SJ: Emergency medicine research consent form readability assessment. Ann Emerg Med. 1997, 29: 539-10.1016/S0196-0644(97)70229-4.View ArticleGoogle Scholar
  16. Miller C, Searight HR: Comprehension and recall of the informational content of the informed consent document: an evaluation of 168 patients in a controlled clinical trial. J Clin Res Drug Dev. 1994, 8: 237-48.Google Scholar
  17. Morrow GR: How readable are subject consent forms?. JAMA. 1980, 244: 56-8. 10.1001/jama.244.1.56.View ArticlePubMedGoogle Scholar
  18. Powers RD: Emergency department patient literacy and the readability of patient-directed materials. Ann Emerg Med. 1988, 17: 124-6. 10.1016/S0196-0644(88)80295-6.View ArticlePubMedGoogle Scholar
  19. Preziosi MP, Yam A, Ndiaye M, Simaga A, Simondon F, Wassilak SG: Practical experiences in obtaining informed consent for a vaccine trial in rural Africa. N Engl J Med. 1997, 336: 370-3. 10.1056/NEJM199701303360511.View ArticlePubMedGoogle Scholar
  20. Riecken HW, Ravich R: Informed consent to biomedical research in Veterans Administration hospitals. JAMA. 1982, 248: 344-8. 10.1001/jama.248.3.344.View ArticlePubMedGoogle Scholar
  21. van Stuijvenberg M, Suur MH, de Vos S, Tjiang GC, Steyerberg EW, Derksen-Lubsen G, Moll HA: Informed consent, parental awareness, and reasons for participating in a randomised controlled study. Arch Dis Child. 1998, 79: 120-5.View ArticlePubMedPubMed CentralGoogle Scholar
  22. Williams MV, Parker RM, Baker DW, Parikh NS, Pitkin K, Coates WC, Nurss JR: Inadequate functional health literacy among patients at two public hospitals. JAMA. 1995, 274: 1677-82. 10.1001/jama.274.21.1677.View ArticlePubMedGoogle Scholar
  23. Howard JM, DeMets D: How informed is informed consent: the BHAT experience. Control Clin Trials. 1981, 2: 287-303. 10.1016/0197-2456(81)90019-2.View ArticlePubMedGoogle Scholar
  24. Joffe S, Cook EF, Cleary PD, Clark JW, Weeks JC: Quality of informed consent in cancer clinical trials: a cross-sectional survey. Lancet. 2001, 358: 1772-7. 10.1016/S0140-6736(01)06805-2.View ArticlePubMedGoogle Scholar
  25. Breese P, Burman W, Rietmeijer C, Lezotte D: The Health Insurance Portability and Accountability Act and the Informed Consent Process. Ann Intern Med. 2004, 141: 897-8.View ArticlePubMedGoogle Scholar
  26. Federman DD, Hanna KE, Rodriguez LL: Responsible Research: A Systems Approach to Protecting Research Participants. 2002, Washington, DC: National Academic PressGoogle Scholar
  27. Anderson CJ: The psychology of doing nothing: Forms of decision avoidance result from reason and emotion. Psychol Bull. 2003, 129: 139-67. 10.1037/0033-2909.129.1.139.View ArticlePubMedGoogle Scholar
  28. Redelmeier DA, Rozin P, Kahneman D: Understanding patients' decisions. Cognitive and emotional perspectives. JAMA. 1993, 270: 72-6. 10.1001/jama.270.1.72.View ArticlePubMedGoogle Scholar
  29. Mellers B, Schwartz A, Ho K, Ritov I: Decision affect theory: Emotional reactions to the outcomes of risky options. Psychol Sci. 1997, 8: 423-9. 10.1111/j.1467-9280.1997.tb00455.x.View ArticleGoogle Scholar
  30. Ubel PA: Is information always a good thing? Helping patients make "good" decisions. Med Care. 2002, 40: V-39-V-40. 10.1097/00005650-200209001-00006.View ArticleGoogle Scholar
  31. Forrow L, Taylor WC, Arnold RM: Absolutely relative: How research results are summarized can affect treatment decisions. Am J Med. 1992, 92: 121-4. 10.1016/0002-9343(92)90100-P.View ArticlePubMedGoogle Scholar
  32. McNeil BJ, Pauker SG, Sox HC, Tversky A: On the elicitation of preferences for alternative therapies. N Engl J Med. 1982, 306: 1259-62.View ArticlePubMedGoogle Scholar
  33. Naylor CD, Chen E, Strauss B: Measured enthusiasm: does the method of reporting trial results alter perceptions of therapeutic effectiveness?. Ann Intern Med. 1992, 117: 912-21.View ArticleGoogle Scholar
  34. O'Connor A, Graham I: Validating a scale measuring satisfaction with preparation for decision making. Manuscript in preparation. 2003Google Scholar
  35. O'Connor AM, Legare F, Stacey D: Risk Communication in Practice: The contribution of Decision Aids. BMJ. 2003, 327: 736-40. 10.1136/bmj.327.7417.736.View ArticlePubMedPubMed CentralGoogle Scholar
  36. Elwyn G, O'Connor A, Stacey D, Volk R, Edwards A, Coulter A, Thomson R, Barratt A, Barry M, Bernstein S, Butow P, Clarke A, Entwistle V, Feldman-Stewart D, Holmes-Rovner M, Llewellyn-Thomas H, Moumjid N, Mulley A, Rul C: International Patient Decision Aids Standards (IPDAS) Collaboration. Developing a quality criteria framework for patient decision aids: online international Delphi consensus process. BMJ. 2006, 333: 417-10.1136/bmj.38926.629329.AE.View ArticlePubMedPubMed CentralGoogle Scholar
  37. O'Connor AM, Mulley AG, Wennberg JE: Standard consultations are not enough to ensure decision quality regarding preference-sensitive options. J Natl Cancer I. 2003, 95: 570-1.View ArticleGoogle Scholar
  38. Wennberg JE: Unwarranted variations in healthcare delivery: implications for academic medical centres. BMJ. 2002, 325: 961-4. 10.1136/bmj.325.7370.961.View ArticlePubMedPubMed CentralGoogle Scholar
  39. O'Connor AM, Rostom A, Fiset V, Tetroe J, Entwistle V, Llewellyn-Thomas H, Holmes-Rovner M, Barry M, Jones J: Decision aids for patients facing health treatment or screening decisions: systematic review. BMJ. 1999, 319: 731-4.View ArticlePubMedPubMed CentralGoogle Scholar
  40. O'Connor AM, Stacey D, Entwistle V, Llewellyn-Thomas H, Rovner D, Holmes-Rovner M, Tait V, Tetroe J, Fiset V, Barry M, Jones J: Decision aids for people facing health treatment or screening decisions. The Cochrane Library. 2003, 1:Google Scholar
  41. O'Connor AM, Stacey D: Should patient decision aids (PtDAs) be introduced in the health care system?. 2005, Health Evidence Network, World Health Organization Regional Office for EuropeGoogle Scholar
  42. Trevena L, Davey HM, Barratt A, Butow PM, Caldwell P: A systematic review on communicating with patients about evidence. J Eval Clin Pract. 2006, 12: 13-23. 10.1111/j.1365-2753.2005.00596.x.View ArticlePubMedGoogle Scholar
  43. O'Connor AM: Validation of a decisional conflict scale. Med Decis Making. 1995, 15: 25-30. 10.1177/0272989X9501500105.View ArticlePubMedGoogle Scholar
  44. Dodin S, Legare F, Daudelin G, Tetroe J, O'Connor A: Making a decision about hormone replacement therapy. A randomized controlled trial. Can Fam Physician. 2007, 47: 1586-93.Google Scholar
  45. Man-Son-Hing M, Laupacis A, O'Connor A, Biggs J, Drake E, Yetisir E, Hart RG: A patient decision aid regarding anti-thrombotic therapy for stroke prevention in atrial fibrillation: A randomized controlled trial. JAMA. 1999, 282: 737-43. 10.1001/jama.282.8.737.View ArticlePubMedGoogle Scholar
  46. Holmes-Rovner M, Kroll J, Schmitt N, Rothert M, Padonu G, Talarczyk G: Patient decision support intervention: increased consistency with decision analytic models. Med Care. 1999, 37: 270-84. 10.1097/00005650-199903000-00007.View ArticlePubMedGoogle Scholar
  47. Epstein LC, Lasagna L: Obtaining informed consent: form or substance. Arch Intern Med. 1969, 123: 682-8. 10.1001/archinte.123.6.682.View ArticlePubMedGoogle Scholar
  48. Flory J, Emanuel E: Interventions to Improve Research Participants' Understanding in Informed Consent for Research: A Systematic Review. JAMA. 2004, 292: 1593-601. 10.1001/jama.292.13.1593.View ArticlePubMedGoogle Scholar
  49. Raich P, Plomer K, Coyne C: Literacy, Comprehension, and Informed Consent in Clinical Research. Cancer Invest. 2001, 19: 437-45. 10.1081/CNV-100103137.View ArticlePubMedGoogle Scholar
  50. Weston J, Hannah M, Downes J: Evaluating the benefits of a patient information video during the informed consent process. Patient Educ Couns. 1997, 30: 239-45. 10.1016/S0738-3991(96)00968-8.View ArticlePubMedGoogle Scholar
  51. Montori VM, Weymiller AJ, Jones LA, Bryanet SC, Christianson TH, Smith S: A decision aid enhanced adherence with statins in patients with type 2 diabetes – report from a randomized trial of a decision aid in a specialty setting. Society for Medical Decision Making 28th Annual Meeting. 2006Google Scholar
  52. Oakley S, Walley T: A pilot study assessing the effectiveness of a decision aid on patient adherence with oral bisphosphonate medication. Pharmaceutical Journal. 2006, 276: 536-8.Google Scholar
  53. O'Connor AM, Tugwell P, Wells GA, Elmslie T, Jolly E, Hollingworth G, McPherson R, Bunn H, Graham I, Drake E: A decision aid for women considering hormone therapy after menopause: decision support framework and evaluation. Patient Educ Couns. 1998, 33: 267-79. 10.1016/S0738-3991(98)00026-3.View ArticlePubMedGoogle Scholar
  54. O'Connor AM, Jacobsen MJ, Stacey D: An evidence-based approach to managing women's decisional conflict. J Obstet Gynecol Neonatal Nurs. 2002, 31: 570-81. 10.1111/j.1552-6909.2002.tb00083.x.View ArticlePubMedGoogle Scholar
  55. Edwards A, Elwyn G, Covey J, Matthews E, Pill R: Presenting Risk Information A Review of the Effects of Framing and other Manipulations on Patient Outcomes. J Health Commun. 2001, 6: 61-82. 10.1080/10810730150501413.View ArticlePubMedGoogle Scholar
  56. Schwartz LM, Woloshin S, Black WC, Welch HG: The role of numeracy in understanding the benefit of screening mammography. Ann Intern Med. 1997, 127: 966-72.View ArticlePubMedGoogle Scholar
  57. Woloshin S, Schwartz LM, Byram S, Fischhoff B, Welch HG: A new scale for assessing perceptions of chance: A validation study. Med Decis Making. 2000, 20: 307-10.1177/0272989X0002000306.View ArticleGoogle Scholar
  58. Mayer R: Multi-Media Learning. 2001, New York: Cambridge University PressView ArticleGoogle Scholar
  59. Brehaut JC, O'Connor AM, Tugwell P, Lindgaard G, Santesso N, Lott A, Saarimaki A, Cranney A, Graham I: Measuring the usability and usefulness of online patient decision aids: A demonstration of methods. Ottawa. International Shared Decision Making Conference. 2005Google Scholar
  60. Davison BJ, Kirk P, Degner LF, Hassard TH: Information and patient participation in screening for prostate cancer. Patient Educ Couns. 1999, 37: 255-63. 10.1016/S0738-3991(98)00123-2.View ArticlePubMedGoogle Scholar
  61. Murray E, Davis H, Tai SS, Coulter A, Gray A, Haines A: Randomized controlled trial of an interactive multimedia decision aid on benign prostatic hypertrophy in primary care. BMJ. 2001, 323: 493-6. 10.1136/bmj.323.7311.493.View ArticlePubMedPubMed CentralGoogle Scholar
  62. Murray E, Davis H, Tai SS, Coulter A, Gray A, Haines A: Randomized controlled trial of an interactive multimedia decision aid on hormone replacement therapy in primary care. BMJ. 2001, 323: 490-3. 10.1136/bmj.323.7311.490.View ArticlePubMedPubMed CentralGoogle Scholar
  63. O'Connor AM, Tugwell P, Wells GA, Elmslie T, Jolly E, Hollingworth G, McPherson R, Drake E, Hopman W, MacKenzie T: Randomized Trial of a Portable, Self-administered Decision Aid for Postmenopausal Women Considering Long-term Preventive Hormone Therapy. Med Decis Making. 1998, 18: 295-303. 10.1177/0272989X9801800307.View ArticlePubMedGoogle Scholar
  64. Barry M, Cherkin J, Chang Y, Fowler F, Skates S: A randomized trial of a multimedia shared decision making programme for men facing a treatment decision for benign prostatic hyperplasia. Dis Manag Clin Out. 1997, 1: 5-14. 10.1016/S1088-3371(96)00004-6.Google Scholar
  65. Bernstein SJ, Skarupski KA, Grayson CE, Starlin MR, Bates ER, Eagle KA: A randomized controlled trial of information-giving to patients referred to coronary angiography: effects on outcomes of care. Health Expect. 1998, 1: 50-61. 10.1046/j.1369-6513.1998.00007.x.View ArticlePubMedGoogle Scholar
  66. Dunn RA, Shenouda PE, Martin DR, Schultz AJ: Videotape increases parent knowledge about poliovirus vaccines and choices of polio vaccination schedules. Pediatrics. 1998, 102 (2): 1-6. 10.1542/peds.102.2.e26.View ArticleGoogle Scholar
  67. Green MJ, Biesecker BB, McInerney AM, Mauger D, Fost N: An interactive computer program can effectively educate patients about genetic testing for breast cancer susceptibility. Am J Med Genet. 2001, 103: 16-23. 10.1002/ajmg.1500.View ArticlePubMedGoogle Scholar
  68. Lerman C, Biesecker B, Benkendorf JL, Kerner J, Gomez-Caminero A, Hughes C, Reed MM: Controlled trial of pretest education approaches to enhance informed decision-making for BRCA1 gene testing. J Natl Cancer I. 1997, 89: 148-57. 10.1093/jnci/89.2.148.View ArticleGoogle Scholar
  69. Morgan MW: A randomized trial of the ischemic heart disease shared decision making program: an evaluation of a decision aid. 1997, University of TorontoGoogle Scholar
  70. Schwartz MD, Benkendorf J, Lerman C, Isaacs C, Ryan-Robertson A, Johnson L: Impact of educational print materials on knowledge, attitudes, and interest in BRCA1/BRCA2: testing among Ashkenazi Jewish women. Cancer. 2001, 92: 932-40. 10.1002/1097-0142(20010815)92:4<932::AID-CNCR1403>3.0.CO;2-Q.View ArticlePubMedGoogle Scholar
  71. Volk RJ, Cass AR, Spann SJ: A randomized controlled trial of shared decision making for prostate cancer screening. Arch Fam Med. 1999, 8: 333-40. 10.1001/archfami.8.4.333.View ArticlePubMedGoogle Scholar
  72. McBride CM, Bastian LA, Halabi S, Fish L, Lipkus IM, Bosworth HB, Rimer BK, Siegler IC: A tailored intervention to aid decision making about hormone replacement therapy. Am J Public Health. 2002, 92: 1112-4.View ArticlePubMedPubMed CentralGoogle Scholar
  73. Schapira MM, van Ruiswyk J: The effect of an illustrated pamphlet decision-aid on the use of prostate cancer screening tests. J Fam Pract. 2000, 49: 418-24.PubMedGoogle Scholar
  74. Wolf AMD, Schorling JB: Does informed consent alter elderly patients preferences for colorectal cancer screening. J Gen Intern Med. 2000, 15: 24-30. 10.1046/j.1525-1497.2000.01079.x.View ArticlePubMedPubMed CentralGoogle Scholar
  75. O'Connor AM, Drake ER, Fiset V, Graham ID, Laupacis A, Tugwell P: The Ottawa patient decision aids. Eff Clin Pract. 1999, 2: 163-70.PubMedGoogle Scholar
  76. Canadian Institutes of Health Research: CIHR Best Practices for Protecting Privacy in Health Research. 2005, [http://www.cihr-irsc.gc.ca/e/29072.html]Google Scholar
  77. Interagency Secretariat on Research Ethics, Public Works and Government Services Canada: Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. 2005, [http://www.pre.ethics.gc.ca/english/pdf/TCPS%20October%202005_E.pdf]Google Scholar
  78. Newcombe R: Two-Sided Confidence Intervals for the Single Proportion: Comparison of Seven Methods. Stat Med. 1998, 17: 857-72. 10.1002/(SICI)1097-0258(19980430)17:8<857::AID-SIM777>3.0.CO;2-E.View ArticlePubMedGoogle Scholar
  79. Kimmelman J, Palmour N: Therapeutic optimism in the consent forms of phase I gene transfer trials: an empirical analysis. J Med Ethics. 2005, 31: 209-14. 10.1136/jme.2003.006247.View ArticlePubMedPubMed CentralGoogle Scholar
  80. Cohen J: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull. 1968, 70: 213-20. 10.1037/h0026256.View ArticlePubMedGoogle Scholar
  81. Cook RJ: The Encyclopedia of Biostatistics. Edited by: Armitage TP. 1998, New York: Wiley, 2160-6.Google Scholar
  82. Strauss A, Corbin J: Basics of qualitative research: grounded theory procedure and techniques. 1998, Sage Publications IncGoogle Scholar
  83. Forsyth C, Grosw E, Ratner J: Human Factors and Web Development. 1998, London: Lawrence Erlbaum AssociatesGoogle Scholar
  84. Hammerich I, Harrison C: Developing Online Content: The Principles of Writing and Editing for the Web. 2002, New York: WileyGoogle Scholar
  85. Lazar J: The World Wide Web. The Human-Computer Interaction Handbook. Edited by: Jacko J, Sears A. 2003, London: Lawrence Erlbaum Associates, 715-30.Google Scholar
  86. Lindgaard G: Usability testing and system evaluation: A guide for designing useful computer systems. 1994, London: Chapman & HallGoogle Scholar
  87. Vredenburg K, Isensee S, Righi C: User-Centered Design: An Integrated Approach. 2002, Upper Saddle River, NJ: Prentice HallGoogle Scholar
  88. Lindgaard G, Dudek C: What is this evasive beast we call user satisfaction?. Interact Comput. 2003, 15: 429-54. 10.1016/S0953-5438(02)00063-2.View ArticleGoogle Scholar
  89. Nielsen J: Designing web usability: The practice of simplicity. 2000, Indianapolis: New Riders PublishingGoogle Scholar
  90. National Cancer Institute of Canada: 2006, [http://www.ncic.cancer.ca/ncic/internet/home/0%2C%2C84658243___langId-en%2C00.html]
  91. Nielsen J, Molich R: Heuristic evaluation of user interfaces. Proc ACM CHI'90 Conf. 1990, 249-256.Google Scholar
  92. Sears A, Hess DJ: Cognitive walkthroughs: Understanding the effect of task-description detail on evaluator performance. Int J Hum-Comput Int. 1999, 11: 185-200. 10.1207/S15327590IJHC1103_1.View ArticleGoogle Scholar
  93. Nielsen J: Guerilla HCI: Using discount usability engineering to penetrate the intimidation barrier. Cost-Justifying Usability. Edited by: Bias RG, Mayhew DJ. 1994Google Scholar
  94. O'Connor A: Decisional Conflict. Nursing Diagnosis and Intervention: Planning for Patient Care. Edited by: McFarland GK, McFarlane EA. 1997, Toronto: The C.V. Mosby Company, 486-96.Google Scholar
  95. The R Project for Statistical Computing: 2008, [http://www.r-project.org/]
  96. Man-Son-Hing M, Laupacis A, O'Connor A, Biggs J, Drake E: A randomized trial of a decision aid for patients with atrial fibrillation. JAMA. 1999, 282: 737-43. 10.1001/jama.282.8.737.View ArticlePubMedGoogle Scholar
  97. O'Connor AM, Wells GA, Tugwell P, Laupacis A, Elmslie T, Drake E: The effects of an 'explicit' values clarification exercise in a women's decision aid regarding postmenopausal hormone therapy. Health Expect. 1999, 2: 21-32. 10.1046/j.1369-6513.1999.00027.x.View ArticlePubMedGoogle Scholar
  98. Kukla R: Conscientious autonomy: displacing decisions in health care. Hastings Cent Rep. 2005, 35: 34-44. 10.1353/hcr.2005.0025.PubMedGoogle Scholar
  99. Linstone HA, Turoff M: The Delphi Method: Techniques and Applications. 2002, [http://is.njit.edu/pubs/delphibook/]Google Scholar
  100. Peabody JW, Luck J, Glassman P, Dresselhaus TR, Lee M: Comparison of vignettes, standardized patients and chart abstraction. A prospective validation study of 3 methods for measuring quality. JAMA. 2000, 283: 1715-22. 10.1001/jama.283.13.1715.View ArticlePubMedGoogle Scholar
  101. Peabody JW, Luck J, Glassman P, Jain S, Hansen J, Spell M, Lee M: Measuring the Quality of Physician Practice by Using Clinical Vignettes: A Prospective Validation Study. Ann Intern Med. 2004, 141: 771-80.View ArticlePubMedGoogle Scholar
  102. Brehaut JC, O'Connor A, Wood T, Gordon E, Hack T, Siminoff LA, Feldman-Stewart D: Validation of a decision regret scale. Med Decis Making. 2003, 23: 281-92. 10.1177/0272989X03256005.View ArticlePubMedGoogle Scholar

Copyright

© Brehaut et al; licensee BioMed Central Ltd. 2008

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.