There are few randomized, controlled trials of training interventions for best practices in mental health, and still fewer outcome studies of web-based training methods. While web-based interventions have been shown to be effective in changing multiple health behaviors in diverse patient populations, few studies to date have examined their use in the context of training for trauma intervention among practitioners. We present here the design, rationale, and methods for the first randomized, controlled trial of web-based training for mental health providers delivering clinical care to veterans with PTSD. The central question to be addressed is whether web-based training, with or without individual supervision, can provide an effective means of training larger numbers of mental health providers in relevant, evidence-based clinical techniques. The lack of controlled outcome studies of evidence-based training in mental health practice is of special concern, given the urgent need for increased clinical services in this area. Web-based training interventions, in particular, offer substantial potential for cost-effective delivery to a broad, multi-disciplinary audience of mental health providers.
The training component of the study is innovative in being entirely web- and telephone-based, and therefore easily accessed and completed by participants. Importantly, it delivers instruction in core techniques (e.g., motivational interviewing) that have traditionally been acquired via one-to-one or small-group in-person training, methods that limit the ability to train mental health providers in numbers adequate to address current clinical needs. Moreover, this study addresses the comparative effectiveness initiative put forth by the U.S. Federal Government, which promotes the rigorous evaluation of different interventions to ensure that patients receive the most effective and cost-efficient medical care. Recognizing that effective diffusion of robust evidence-based interventions into everyday practice is as essential as development of successful interventions themselves, the approach was designed to allow maximum transportability and accessibility for research or training purposes. To ensure evaluation of a supervision model that is maximally transportable, we chose not to include audio or video recording of participant clinical sessions for review by supervisors. Although such review is likely to improve the quality of supervision, it requires time and effort on the part of providers and supervisors that raises the burden of the training process and may be difficult to implement widely on a routine basis. The trial itself combines elements of effectiveness research (e.g., participants were providers operating in routine client care settings, and supervision procedures were designed to be easy to implement and limited in their time and effort burden) and efficacy research (e.g., careful measurement of skills improvement, random assignment to training conditions). We plan to report both intent-to-train and completer analyses, with the latter providing more information about the efficacy of the training interventions.
Training addressed three core techniques for achieving behavior change: motivational interviewing, goal-setting, and behavioral task assignment. These three techniques were selected because they apply broadly across multiple problem areas. We developed an online self-report assessment of knowledge and perceived self-efficacy, along with a telephone-administered, standardized patient simulated treatment interview methodology for technique assessment. The study design allowed an independent evaluation of the effects of web-based training alone and of web-based training plus supervision delivered by an experienced cognitive-behavioral supervisor. Our study is also innovative in its development and use of standardized patients to measure acquisition of treatment techniques. Standardized patient methodology is widely used in assessing clinical diagnostic techniques in other areas of medicine [35, 36], but it has rarely been used as an outcome measure in studies of training in mental health treatment methods. Our multi-dimensional assessment design facilitated systematic data collection across assessment domains.
The study included relatively few exclusion criteria, as we intentionally sought to optimize external validity and generalizability to the broadest population of mental health providers. Participants consisted of mental health providers from a variety of disciplines, including psychologists, social workers, nurse clinicians, psychiatrists, and others, with varying levels of familiarity with cognitive-behavioral therapy (CBT) methods.
It is important to recognize that while this training was developed specifically for VHA providers, this approach to internet-based training and telephone supervision appears feasible to offer to other groups. The use of volunteer research participants is a potential limitation of the study; however, all trials that consent and randomize individuals run the risk of volunteer bias.
Other design aspects were also intended to increase generalizability, as recommended by Tunis et al. and Glasgow et al. for conducting ‘practical clinical trials’ (PCT) in areas of high public health need. In addition to recruiting participants broadly representative of the target training population, the representativeness of the organizational settings in which a study is conducted is a component of PCT design. The range of organizational settings in the present study was intentionally broad and included mental health clinics and non-medical center programs such as community-based ‘vet centers.’ A limitation, however, is that not all VHA clinical settings could be included, and it should be noted that many veterans continue to receive health care outside of VHA facilities.
A final PCT principle concerns the use of broad and clinically relevant outcome measures [37, 38]. We attempted to assess as broad a range of outcomes (knowledge, perceived self-efficacy, self-reported implementation, independent ratings of technique competence) as was possible for a study of this type; this was a major focus of our study design. On the other hand, we were required to make choices in the selection of measures that will inevitably limit our conclusions. Our focus on clinical technique assessment via standardized patient interviews, in particular, was a critical choice for outcomes assessment, the results of which will be evident when the study is unblinded. There are some significant limitations to our assessment procedures. Most of the assessment measures used were study-specific and have not been validated. We were unable to locate, for example, established measures of provider knowledge that mapped onto the techniques we targeted, so we constructed simple self-efficacy ratings linked to elements of the techniques. Because we focused on limited aspects of motivational interviewing, we did not use existing motivational interviewing measurement instruments. We also did not include measures of patient outcomes related to the training content (e.g., rates of engagement in the PTSD treatment process as an indicator of the effectiveness of training in motivational interviewing); this was judged too difficult to achieve given the resources of the study, but such measures should be included whenever possible in studies of this type. Overall, in this study, we attempted to balance traditional knowledge-based measures with potentially more valid ‘cutting edge’ measures involving standardized patient interviews and behavioral ratings of clinical techniques [39–41].