Human subjects protection issues in QUERI implementation research: QUERI Series
© Chaney et al; licensee BioMed Central Ltd. 2008
Received: 14 August 2006
Accepted: 15 February 2008
Published: 15 February 2008
Approaches to human subjects protection, specifically those relating to research review board oversight, vary throughout the world. While all are designed to protect research participants, the structure and specifics of these institutional review boards (IRBs) can and do differ. This variation affects all types of research, particularly implementation research.
In 2001, we began a series of inter-related studies on implementing evidence-based collaborative care for depression in Veterans Health Administration primary care. We have submitted more than 100 IRB applications, amendments, and renewals, and in doing so, we have interacted with 13 VA and University IRBs across the United States (U.S.). We present four overarching IRB-related themes encountered throughout the implementation of our projects, and within each theme, identify key challenges and suggest approaches that have proved useful. Where applicable, we showcase process aids developed to assist in resolving a particular IRB challenge.
There are issues unique to implementation research, as this type of research may not fit within the traditional Human Subjects paradigm used to assess clinical trials. Risks in implementation research are generally related to breaches of confidentiality, rather than health risks associated with traditional clinical trials. The implementation-specific challenges discussed are: external validity considerations, Plan-Do-Study-Act cycles, risk-benefit issues, the multiple roles of researchers and subjects, and system-level unit of analysis.
Specific aspects of implementation research interact with variations in knowledge, procedures, and regulatory interpretations across IRBs to affect the implementation and study of best methods to increase evidence-based practice. In the absence of unambiguous guidelines, and given local liability concerns, IRBs are often at risk of applying variable, inappropriate, or unnecessary standards to implementation research that are not consistent with the spirit of the Belmont Report (a summary of basic ethical principles identified by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research), and that impede the conduct of evidence-based quality improvement research. While there are promising developments in the IRB community, it is incumbent upon implementation researchers to interact with IRBs in a manner that assists appropriate risk-benefit determinations and helps prevent the process from having a negative impact on efforts to reduce the lag in implementing best practices.
Approaches to human subjects protection, specifically those relating to research review board oversight, vary throughout the world. While all are designed to protect research participants, the structure and specifics of these institutional review boards (IRBs) can and do differ. Central review boards, dual systems of central and local review, review boards affiliated with universities or hospitals, and unaffiliated independent review boards are all used to review research protocols. Within the U.S., the IRB system is struggling to effectively manage an increase in workload, specialization of scientific protocols, increased regulatory burden, and increased responsibilities beyond risk-benefit assessments, including conflict of interest, environmental impact review, HIPAA (Health Insurance Portability and Accountability Act), and others. Nowhere is the result of these pressures more evident than in IRB review of implementation research.
Implementation research in health care is characterized by a focus on evaluating evidence-based quality improvement and often involves partnerships between researchers and clinicians across multiple sites. As a result, implementation research faces many of the same human subjects challenges as other types of multi-site studies and research involving a quality improvement (QI) component.
The defining issue of human subjects protection within implementation research is the decision of whether or not implementation research should undergo IRB review. This is not always clear, particularly in studies with a QI element. There are several key determinations regarding implementation studies, beginning with whether the activity is actually research and, therefore, reviewable. Subsequent risk-benefit determinations include: whether human subjects are involved, whether the criteria for minimal risk definitions are met, what type of consent mechanisms are appropriate, and how to weigh individual risk of breach of confidentiality (often the primary risk) against potential societal benefit. These determinations derive from ethical, regulatory, and liability-related principles and demands, as well as from each particular IRB's interpretation of the risk-benefit issues given local concerns. Currently, there is no official consensus on whether and how to review QI research, or other research types with QI elements, such as implementation research [5–10]. Whether appropriate or not, once a QI research protocol begins IRB review, it is thrust into a process that may not be optimal for balancing risk-benefit issues.
The number of complex, multi-site studies has increased over the past three decades, imposing a greater burden on IRBs and potentially slowing review of all types of studies. Researchers in the United States (U.S.) and elsewhere report great variability in the way IRBs evaluate multi-site protocols, in the time required for approval, and in documentation of consent [11–16]. Greene and Geiger's literature review of IRB approvals for multicenter trials and observational studies within the U.S. outlines the variability in IRB review processes and its potential effects on research.
McWilliams reported that 31 IRBs generated study-risk determinations ranging from minimal to high for the same genetic epidemiology protocol, resulting in 7 expedited reviews and 24 full reviews across the sites. They reported a mean of 32 days (range 9–72) for sites undergoing expedited review of their multi-site study, and a mean of 82 days (range 13–252) for the same protocol undergoing full review at other sites. In their study of IRB variation, Stair et al. experienced a median of 38 days for review and approval (range 26–62). This variability can result in disparities in application, be detrimental to the conduct of the study, increase the time needed to conduct the study, and increase costs [7, 11, 12]. The use of centralized IRBs may remediate the issue of variability, but researchers should remain cautiously optimistic, as the literature from countries with a centralized review mechanism, such as the United Kingdom (U.K.), suggests that not all challenges faced by multi-site studies are resolved. Tod et al. report that while multi-site research in the U.K. is now overseen by Multi-centre Research Ethics Committees, this addition of central review has not eliminated the need to also seek Local Research Ethics Committee approval. Critics of the system feel that this has created "an inefficient two-tier system of ethical review". Long delays have been reported from the time of initial submission until approval. Cassell and Young raise similar concerns in their discussion of U.K. human subjects review.
Research protocols within the U.S. Department of Veterans Affairs (VA) are not immune from these challenges. The VA healthcare system shares some similarities with the Canadian and British health care systems, in that it provides care to all veterans, but differs in that it has a tiered payment system. Some veterans qualify for total care, or total care for specific service-related conditions or disabilities, at no expense to the individual, while other veterans receive care through the VA's integrated healthcare network on a fee-for-service basis.
The VA Quality Enhancement Research Initiative (QUERI)
The U.S. Department of Veterans Affairs' (VA) Quality Enhancement Research Initiative (QUERI) was launched in 1998. QUERI was designed to harness VA's health services research expertise and resources in an ongoing system-wide effort to improve the performance of the VA healthcare system and, thus, quality of care for veterans.
QUERI researchers collaborate with VA policy and practice leaders, clinicians, and operations staff to implement appropriate evidence-based practices into routine clinical care. They work within distinct disease- or condition-specific QUERI Centers and utilize a standard six-step process:
1) Identify high-risk/high-volume diseases or problems
2) Identify best practices
3) Define existing practice patterns and outcomes across the VA and current variation from best practices
4) Identify and implement interventions to promote best practices
5) Document that best practices improve outcomes
6) Document that outcomes are associated with improved health-related quality of life.
Within Step 4, QUERI implementation efforts generally follow a sequence of four phases to enable the refinement and spread of effective and sustainable implementation programs across multiple VA medical centers and clinics. The phases include:
1) Single site pilot,
2) Small scale, multi-site implementation trial,
3) Large scale, multi-region implementation trial, and
4) System-wide roll-out.
Researchers employ additional QUERI frameworks and tools, as highlighted in this Series, to enhance achievement of each project's quality improvement and implementation science goals.
Description of our inter-related QUERI supported projects
Action research partnership with formative evaluation
◆ Primary care, mental health, nursing and administrative leaders from three regional veterans health networks;
◆ Experienced depression care researchers; and
◆ Depressed veterans.
Study feasibility and safety of evidence-based quality improvement for depression care. Steps included: 1) Adapt depression collaborative care models to VA settings through Evidence-Based Quality Improvement for Depression (EBQID); 2) Support and evaluate depression collaborative care implementation; and 3) Prepare for dissemination of depression improvement methods and materials throughout the VHA.
Site-randomized controlled trial
◆ TIDES sites and matched control sites, and
◆ Veterans enrolled in primary care clinics screened for depression.
Implement & test depression collaborative care using TIDES evidence-based model.
Mixed method; administrative data analysis and qualitative interviews
◆ Stakeholders (patients, clinicians, administrators), and
◆ Administrators of databases of veterans' records.
Understand cost and value tradeoffs of TIDES implementation using VA cost information and semi-structured interviews with TIDES stakeholders.
Regional demonstration project with program evaluation
All levels of VHA administrative structure including:
◆ Central Office,
◆ 4 regional health networks and administrative control networks,
◆ VA medical centers,
◆ Administrators.
Support progress of TIDES collaborative care model toward national VHA implementation.
Sustain and spread TIDES evidence-based depression care model to additional VHA sites
Implementation research themes, challenges and process aids
External validity considerations
◆ Clearly explain & support issues.
◆ Use layperson language.
◆ Implementation Evaluation Theory: Stetler Model of Research Utilization & RE-AIM
PDSA intervention adaptation cycles
◆ Discuss with IRB and select component approach or modification approach.
◆ Request exemption or expedited review.
◆ Consider opinion letters.
◆ Clearly explain & support issues.
◆ Use layperson language.
Multiple roles of researchers/facilitators & subject/participants
◆ Clearly explain roles and risk appropriate to each.
System-level unit of analysis
◆ Clearly delineate various levels, risk appropriate to each, and justification for each.
Multiple health care systems/sites
Transparency of system/researcher ethical responsibilities
◆ Specify division of responsibilities between researchers and site clinical staff.
◆ Project Implementation Charter & Site Memorandum of Agreement
Relationships among IRBs and participating organizations
◆ Research through websites or personal contacts.
◆ Inquire about nature of oversight relationships.
◆ IRB Contact Questionnaire
Maintaining accountability across systems/sites
◆ Centralize records retention at a single administrative site.
◆ IRB Relational Database
Multiple local site PIs
◆ Communicate responsibilities, requirements, and level of commitment.
◆ Project Implementation Charter & Site Memorandum of Agreement
Varying levels of experience & limited time
◆ Orient to role responsibilities.
◆ Provide high level of administrative support.
◆ Site PI Orientation Checklist
Varying ethics certification requirements
◆ Investigate required trainings.
◆ Document approved trainings.
◆ IRB Relational Database
Varying levels of involvement
◆ Invite participation in project-related activities.
Lack of standardized procedures and forms
◆ Create template text to distribute to research team.
Communication with IRB/R&D Administrators
◆ Consider administrators integral members of the research team.
◆ Communicate multi-site nature of project.
◆ IRB Contact Questionnaire
Multiple, competing renewal deadlines
◆ Budget time for form changes.
◆ Create deadline reports from database.
◆ IRB Relational Database
Variable AE definitions and reporting requirements
◆ Collect definitions, reporting requirements, etc.
◆ IRB Contact Questionnaire
Coordination & consistency in IRB-related tasks
◆ Consider budgeting for an IRB Specialist position.
◆ Budget for sufficient resources.
◆ Use regular conference calls.
◆ IRB Specialist Position Description
◆ IRB Call Minute Template
Results: The implementation research IRB experience
1. Implementation-specific issues
In this section, we present IRB challenges specific to implementation research. Implementation research uses a variety of methodologies at different stages and often requires collaboration among disciplines. Some implementation research issues may not fit within the traditional human subjects paradigm used to assess clinical trials. Risks in implementation research are generally related to loss of confidentiality, rather than risks associated with drug treatments, as in traditional clinical trials. The implementation-specific challenges introduced below are: external validity considerations, Plan-Do-Study-Act cycles, risk-benefit issues, the multiple roles of researchers and subjects, and system-level unit of analysis.
a. External validity considerations
Most IRBs are much more familiar with the rationales for biomedical research procedures than with those for implementation research. Two conceptual models can help provide clear explanations for specific aspects of implementation research that promote external validity and also bear on human subjects protection issues: the Stetler Model of Research Utilization [25, 26] and the RE-AIM framework [27–29]. Because these models describe external validity considerations, researchers can incorporate them into IRB applications, as appropriate, to inform their interactions with IRBs. While these frameworks came to our attention after we had put much thought into our approach to IRB issues, they may be helpful to other researchers. The two models are presented briefly in Additional file 1: Implementation Evaluation Theory: Stetler Model of Research Utilization and RE-AIM.
b. Plan – Do – Study – Act (PDSA) intervention adaptation cycles
PDSA cycles are often a cornerstone of implementation research. The nature of PDSA cycles means that implementation researchers may not be able to specify all aspects of the project ahead of time for the IRB to judge risk-benefit issues in the initial application. To change a care system in a sustainable fashion, an "intervention" must be both individualized to the resources and goals of each site, and dynamic enough to take into account the inevitability of changing conditions. PDSA cycles within a particular study may incorporate multiple levels of formative evaluation. When formative evaluation results are applied to the project, the equivalent of Plan-Do-Study-Act cycles result in changes that may be unforeseen at the beginning of the project.
We utilized two different approaches to address IRB concerns resulting from PDSA cycles, depending upon the preferences of each IRB. At the majority of our sites, we submitted an initial protocol that was as complete as possible, given our site knowledge and expectations, and then submitted modifications as needed. In situations where the protocol was considered exempt, we consulted with our IRB representative to ensure that the proposed modification would not alter the IRB's original decision that the protocol was exempt. However, after extensive discussion and collaboration, other IRBs requested that the initial protocol be divided into key component submissions, each individually well specified.
Under this approach, we submitted the first study component along with an overview of the entire project. We then submitted additional project activities for individual review in the context of the project overview. This approach allowed IRB reviewers to judge project activities in context without having to judge all components at once. Reviewers had complete information for each component at the time of submission. It allowed the level of IRB review (i.e., exempt versus full) to be appropriate to a specific component. At one study site, four consecutive components were submitted individually and received different IRB approvals.
Component 1 involved carrying out an expert panel to decide how to best administer a depression collaborative care project and was exempted. Component 2, staff qualitative interviews, was also exempted. Component 3, a survey of site clinician opinions, was not exempted but received a standard review and Waiver of Documentation of Consent. The final component, a clinical decision support software tool developed by study participants (not researchers), was acknowledged by the IRB as having been received, but not reviewable, i.e., not research.
Over time, this component solution proved to be administratively burdensome to both the IRB and the research team due to the need to track each component as a separate application, complicating modifications and renewals. We anticipate that future IRB developments will identify the best methods for introducing the unique features of individual components to the IRB and, thereby, will decrease the burden on both researchers and the IRB.
c. Risk-benefit issues
Implementation research often involves both evidence-based interventions carried out by clinical partners and evaluation activities carried out by researchers. In QUERI implementation research, the best practices to be implemented are not experimental, as earlier QUERI research phases have already documented the need for clinical improvement and provided an evidence base (Table 1). We have found that, unless carefully explained and supported, it may not be obvious to IRBs that the "research" question is not the efficacy of the best practice, but the ability of a quality improvement strategy to introduce or improve a system's ability to apply the evidence-based practice. IRBs may not realize the extent to which quality improvement principles and requirements guide clinical activities carried out in implementation projects and, therefore, that risk-benefit issues might be most appropriately considered in a QI framework. Currently, the determination as to whether an activity is reviewable research or non-reviewable QI is made by the IRB. Unfortunately, there is not yet a standard method for assisting an IRB in making this decision.
In preparing our applications, to assist the IRB, we sought to use clear layperson terminology to explain our project. For example:
There are two aspects to this project that could be described as "intervention:" 1) the system-level evidence-based quality improvement process that has the goal of changing healthcare provider behavior, and 2) the depression care that the providers deliver. This project studies the former. The latter, described as collaborative care for depression, has been extensively studied and is accepted as best practice care. No experimental treatments are employed in this project.
[Table: TIDES/RETIDES IRB initial application information, including whether the IRB required changes (averages reported: 35.55 and 14.25).]
Risks in implementation research are often elusive, and the regulatory definition of "minimal risk" is imprecise. There often are no physiological risks in implementation research and, in the case of our studies, minimal psychosocial risk, including psychological stress from completing staff interviews or surveys and evaluation discomfort. Thus, implementation research might often be categorized as minimal risk. However, breach of confidentiality due to a security breach or human error remains a risk even in minimal risk studies. Researchers should accurately assess the associated risks of the research, including the evidence-based practice.
Some systems provide expedited review for research judged to be minimal risk. For example, our protocols include evaluation through surveys, medical record reviews, and stakeholder interviews. However, these multiple activities may result in complex IRB decision-making and risk to the implementation activity timeline if, after initial review, the IRB decides that the study is inappropriate for expedited review and must be transferred to the full-review mechanism. For example, one site (Table 4, IRB C) stated, "If the survey was the only thing that came to the IRB board, it could have been exempt or if the chart data had come to the IRB separately, it too, could have been expedited. But because it came all together, it would be hard to expedite it as a whole with all the different components presented."
Finally, if an IRB judged the initial proposal to be exempt, it became difficult to review a later modification of the proposal. For example, this occurred when a staff survey planned as an anonymous QI activity could not be feasibly carried out without participant identifiability. It then required extra effort to return to IRBs that had judged the survey as exempt, in order to notify them of the new circumstances.
d. Multiple roles of researchers/facilitators & multiple subject/participant classes
Sometimes it is not the implementation research practice itself that is the risk, but ill-defined relationships among researchers and the system with which they are working that may add risk. Implementation researchers may best be understood as facilitators or consultants within their research projects, with no direct decision-making authority in the intervention. Within our projects, researchers were available as consultants to local site champions who provided leadership for customizing and adapting the intervention to the sites. Researchers also functioned as consultants at the national level to encourage national rollout of improved depression care. These different roles (and risks) need to be clearly communicated with administrators and IRBs. For example, because our projects involved depression treatment, it was important to establish suicidality risk-response protocols specifying activities of researchers and clinicians of which all parties were aware.
Subjects, more appropriately called "participants," in implementation research have different relationships with researchers depending on their relationship to the health care system in which the implementation takes place. Participants may include providers, regional and/or medical center leadership, and/or patients. Patients, for example, may not encounter researchers, but have specific aspects of their medical records examined for evidence that is relevant to the implementation. Providers are crucial to the success of clinical improvement efforts; however, their role as a "subject" must be carefully considered. If they see themselves as subjects from whom consent is required for their participation, their buy-in and interest in sustaining the improvement after the investigative phase is over is likely to be vitiated. All classes of participants, or their representatives, may be involved as decision-makers in the specifics of the implementation. For example, our project start-up meetings often included nurse depression care managers, primary care physicians, mental health practitioners, leadership/management personnel, and patient representatives. Clearly defining these different roles and the risk appropriate to each will enable the IRB to make an informed decision.
e. System-level unit of analysis
In these projects, as in similar implementation efforts, patients, providers, and/or sites can be units of analysis to measure and support the external validity of the implementation effort, as well as to monitor its clinical and organizational outcomes. The impact of implementation research is often assessed at the patient level through patient administrative records; at the provider level through survey results and aggregated, de-identified data; and at the site level through cost analyses, site-level performance measurements, interview results, and analyses of project records (i.e., conference call minutes, emails, etc.). IRBs that are familiar with evaluating clinical trials may not understand the necessity of collecting information at each of these levels. Each of these levels has its own set of concerns and ethical issues. For example, whereas breach of confidentiality may be a major risk for patients, concern that results of a survey may be accessible to supervisors and used for evaluative purposes may be an issue for staff. It is important to clearly delineate these various levels, the risk appropriate to each, and the justification for each when communicating with the IRB. Using clear layperson language and/or tables to present relationships may be useful.
2. Multiple health care systems/sites
This section focuses on concerns surrounding working with multiple sites or health care systems, namely the ethical responsibilities of researchers, preserving accountability across sites, and relationships among IRBs and other organizational entities.
a. Transparency of system/researcher ethical responsibilities
As consultants, researchers often function at the behest of the organizational entity. Organizational participants may hold considerably more power than the researchers by virtue of their decision-making authority. Clinicians may encounter implementation researchers as consultants, purveyors of Continuing Medical Education and/or as gatherers of information regarding professional opinions and knowledge that may be relevant to the implementation project. IRBs are appropriately concerned when researcher/clinician roles are complex, in that this can raise issues of possible coercion in recruitment of patients and staff, and/or confidentiality problems in keeping clinical, job performance, and research data separate.
We developed an Implementation Charter (see Additional file 2: Project Implementation Charter and Site Memorandum of Agreement) with participating VA regional networks that specified the division of responsibilities between researchers and site clinical staff. For example, the Charter indicated that we were responsible for training depression care managers within the clinics, but that the care managers were hired and supervised by appropriate clinical personnel. It also specified our adherence to clinical privacy rules for aspects of our project that were quality improvement, and the need for site participants who wished to utilize any QI data from the project for research purposes to seek IRB approval. This process aid was successful in clarifying our roles and expectations with participating sites.
b. Relationships among IRBs and participating organizations
Implementation researchers often need to satisfy multiple sets of organizational reviews and requirements before beginning a project. For example, within the VA system, a research study conducted by VA personnel and/or involving VA patients cannot begin without both IRB approval and approval by the appropriate local VA Research & Development (R&D) Committee. While the IRB reviews the protocol to ensure adequate protection of human subjects, the local R&D committee provides the primary executive review for their site, assessing the overall scientific merit of the protocol. R&D application procedures and protocols are not uniform across sites.
We researched a new study site's local IRB and R&D through web sites or personal contacts. To confirm that we had the correct IRB forms for a particular submission, we informally "interviewed" the IRB administrative contact using our IRB Contact Questionnaire (see Additional file 3: IRB Contact Questionnaire). Through this process we obtained necessary information about the particular IRB process, including their relationship with the local R&D Committee. This process enabled us to effectively submit our applications with minimal problems and demonstrated our willingness to cooperate with local administrators and procedures.
c. Maintaining accountability across systems/sites
As a site was brought onto a particular project, responsibility for that site was assigned to specific research personnel within the project. This created continuity within the submission process and in interactions between the IRB and the project as a whole. Record retention has been centralized for the projects. Study personnel forward electronic and/or hard copies of all IRB paperwork and communications to the administrative site in addition to keeping copies at their respective sites. As the number of sites increased across projects, it became clear that a relational database was necessary to efficiently track all the paperwork and deadlines.
Information gathered from the IRB Contact Questionnaire is entered into a Microsoft Access database (see Additional file 4: IRB Relational Database). As applications are submitted and approvals received, additional key information is entered. The structure allows monitoring of expiration deadlines, duration between submission and IRB approval, and time elapsed since submission. Information for all sites, or specific to a project, site, or region, can be extracted and presented as reports. The database is especially useful for tracking information at sites where the component submission approach is utilized.
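As a rough illustration of the kind of relational structure such tracking requires (not the project's actual Access schema; all table, column, and site names here are hypothetical), the core tables and a renewal-deadline report might be sketched with Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory demo; a file path would persist it
cur = conn.cursor()

# One row per participating site; one row per IRB submission at that site.
cur.executescript("""
CREATE TABLE site (
    site_id INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    region  TEXT
);
CREATE TABLE submission (
    submission_id INTEGER PRIMARY KEY,
    site_id       INTEGER NOT NULL REFERENCES site(site_id),
    component     TEXT,           -- e.g. 'staff survey', 'chart review'
    submitted_on  TEXT NOT NULL,  -- ISO dates as text
    approved_on   TEXT,           -- NULL while under review
    expires_on    TEXT            -- approval expiration (renewal deadline)
);
""")

cur.execute("INSERT INTO site VALUES (1, 'Site A', 'Region 1')")
cur.executemany(
    "INSERT INTO submission VALUES (?, ?, ?, ?, ?, ?)",
    [
        (1, 1, "staff survey", "2006-01-10", "2006-02-20", "2007-02-20"),
        (2, 1, "chart review", "2006-03-01", None, None),
    ],
)

def expiring_within(cur, as_of, days=90):
    """Deadline report: approvals expiring within `days` of a reference date."""
    return cur.execute(
        """SELECT s.name, sub.component, sub.expires_on
           FROM submission sub JOIN site s USING (site_id)
           WHERE sub.expires_on IS NOT NULL
             AND julianday(sub.expires_on) - julianday(?) BETWEEN 0 AND ?
           ORDER BY sub.expires_on""",
        (as_of, days),
    ).fetchall()

report = expiring_within(cur, "2006-12-01")
print(report)  # the staff survey approval, expiring 2007-02-20, falls in the window
```

Keyed on site and submission, the same structure extends naturally to the component-submission approach (one `submission` row per component) and to tracking per-person training expirations in an additional table.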
3. Multiple local site PIs
Enthusiastic, informed, and active Site PIs are crucial to the success of a multi-site study. The study staff's ability to finesse individual Site PI situations and accommodate each Site PI's desired level of involvement, often with limited local support staff available, is crucial for the project to be successful. As would be expected, we experienced several challenges collaborating with multiple Site PIs, such as recruitment, varying levels of experience and limited time, inconsistent ethics certification requirements, and varying levels of commitment to the study.
a. Recruitment
We have used a variety of strategies when identifying and recruiting local Site PIs. Through our contacts as researchers and established Site PI contacts, we were able to obtain information on a potential Site PI's availability and overall interest. The initial conversation between the project PI and potential Site PI covered background information on the project, noted other clinicians who might participate in the project, and gave a brief overview of responsibilities. An assigned member of the research staff followed up with the Site PI to further address questions. Communicating Site PI responsibilities, requirements, and level of commitment clearly and accurately is necessary for success. We used the Implementation Charter mentioned earlier to help clarify the role of the Site PI in relation to the researchers (see Additional file 2: Project Implementation Charter and Site Memorandum of Agreement). It is critical that the involvement of the Site PI is supported by their organizational supervisors and relevant to their own career goals.
b. Varying levels of experience & limited time
After recruitment, the research team must assess each Site PI's level of experience with the local IRB, the IRB process and its responsibilities, and our project(s). One consistent factor among our Site PIs was a busy schedule and limited availability. We therefore developed an efficient, concise protocol to gather this information: our Site PI Orientation Checklist (see Additional file 5: Site PI Orientation Checklist). Upon completion of the Checklist, the roles and responsibilities of the research staff and the Site PI were clearly delineated, including who would handle the majority of IRB paperwork. At some sites, research staff effectively became the research coordinator for the local Site PI. A Site PI's willingness to assist in problem-solving proved critical to how quickly each site progressed through the IRB process. For example, some IRBs do not include remotely based research staff on correspondence relating to the project. Site PIs are encouraged to forward those documents proactively to their research staff contact person to ensure the project responds optimally to IRB requirements and to minimize delays.
c. Varying ethics certification requirements
All researchers are responsible for fulfilling the necessary ethics certification and training requirements before any research begins. However, different sites may require renewal of these trainings at different intervals and may require additional certifications. It is in the project's interest to monitor these requirements to avoid delays due to a lapse in training certification. The relational database mentioned above can be used for this purpose (see Additional file 4: IRB Relational Database).
d. Varying levels of involvement
Site PIs demonstrate varying levels of involvement in their role within the project. Some take an active role in communicating new ideas and suggestions regarding implementation, or independently promote and present the project at conferences or meetings with clinicians. To provide an incentive, we try to involve interested Site PIs in project-related activities, such as workgroup meetings. Several of these working groups have developed and published manuscripts.
4. Multiple IRBs
For researchers who must work with multiple IRBs, the logistics are discussed below. These include: lack of standardized procedures and forms, communication with IRB administrators, multiple competing renewal deadlines, variable adverse event reporting requirements and definitions, and issues with coordination and consistency of IRB-related tasks.
a. Lack of standardized procedures & forms
In our experience, IRB procedures, forms, deadlines, and meeting schedules vary from one IRB to the next. Areas of variability include whether the IRB will communicate directly with staff other than the Site PI, the structure and organization of application forms, deadlines and meeting schedules, and reporting requirements for adverse events. This diversity increases the burden on research staff, so we developed a consistent approach to submissions: one senior member of our research staff composed "template" text that was distributed to other research staff members, who tailored it to fit a particular IRB.
b. Communications with IRB and R&D administrators
The multi-site nature of our projects means that, in most cases, the project PI, Site PI, and research staff are in different geographical locations. Communicating this to the respective administrators enabled them to understand the constraints of the situation and, in some cases, resulted in their exceptional assistance with the submission process. For instance, one R&D Administrator offered to collect the necessary Site PI signatures for submissions, hand-deliver the paperwork to the IRB, and inform the research team via e-mail.
When the projects began in 2001, many IRBs did not have websites, or, if they did, not all necessary forms and instructions were available there. Obtaining all forms and any additional relevant information on the submission process therefore required the assistance of each site's IRB staff. Today, even with many forms and instructions available online, it is crucial to communicate with IRB staff to ensure the correct forms are used for a particular activity (see Additional file 3: IRB Contact Checklist). Treating IRB and R&D staff as integral members of a QI effort helps minimize communication difficulties and facilitates the sometimes cumbersome IRB process.
c. Multiple competing renewal deadlines
Carrying out projects involving multiple IRBs creates a revolving door of renewal deadlines, requiring the research team to monitor the approval status of each site. Renewal forms also change over time as IRBs revise old questions and add new ones. It is important to take this into consideration when budgeting the time needed for IRB activities.
All of our IRB submissions are tracked through the relational database, and bi-weekly submission deadline reports are created and shared with the other coordinating administrative sites via group e-mails and IRB-specific conference calls (see Additional file 4: IRB Relational Database).
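The bi-weekly deadline report described above amounts to filtering approvals whose expiration falls within an upcoming window. The sketch below shows this logic; the field layout and 90-day warning window are assumptions for illustration, not the authors' actual report format.

```python
from datetime import date, timedelta

# Hypothetical approval records: (site, protocol, expiration date).
approvals = [
    ("Site A", "Project X", date(2008, 3, 1)),
    ("Site B", "Project X", date(2008, 7, 15)),
    ("Site C", "Project Y", date(2008, 2, 20)),
]

def renewal_report(approvals, today, window_days=90):
    """List approvals expiring within `window_days` of `today`, soonest first."""
    cutoff = today + timedelta(days=window_days)
    due = [(site, proto, exp) for site, proto, exp in approvals
           if today <= exp <= cutoff]
    return sorted(due, key=lambda record: record[2])

report = renewal_report(approvals, today=date(2008, 1, 15))
for site, proto, exp in report:
    print(f"{site} / {proto}: renewal due {exp.isoformat()}")
```

Run on a fixed bi-weekly schedule, a report like this surfaces each site's renewal deadline well before the approval lapses, which is the point of the tracking effort described above.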
d. Variable adverse event definitions & reporting requirements
Implementation projects involve the possibility of adverse events that, although not directly connected with study activities, may need to be reported to the involved IRBs. Because our projects work with a depressed population, we prepared for the eventuality of a serious suicide attempt or a completed suicide. We discussed this potential risk in IRB applications and collected each IRB's definitions, reporting requirements, and related items for Adverse Events (AEs) and Serious Adverse Events (SAEs) (see Additional file 3: IRB Contact Questionnaire).
The significant variation in AE/SAE definitions and reporting requirements is illustrated by the following hypothetical situation: a completed suicide by a study patient. One IRB would consider this a reportable SAE because a subject died. A second IRB would consider it a reportable SAE because suicide is an expected risk in this study population. A third IRB would not consider it a reportable event because it was believed not to be related to the study procedures, since all established medical center and research policies and procedures were appropriately followed.
e. Coordination & consistency in IRB-related tasks
None of the process aids and approaches mentioned above can be fully realized without coordination, consistency, advance planning, and consultation. For example, while developing the IRB portion of our grant applications, we consulted several IRB experts. IRB work for implementation projects is a significant resource-allocation and budget issue; it may be cost-effective to budget for an IRB specialist to assist research staff. This specialist should have extensive knowledge of Human Subjects protection issues and the IRB system in which the study takes place (see Additional file 6: IRB Specialist Position Description). In our case, project staff devoted approximately 1.5 FTEE (full-time employee equivalent) to IRB-related tasks over all study years (six years, at present), with significant involvement by both investigators and research coordinators.
Regular conference calls were established to anticipate and discuss IRB issues within the projects. These calls ensure that research staff can discuss and resolve IRB concerns (see Additional file 7: IRB Call Minutes). While this effort is substantial, similar projects should anticipate relevant IRB challenges and budget appropriately.
The IRB system in the United States is overburdened, and this affects all types of research, particularly implementation research. Specific aspects of implementation research interact with variations in knowledge, procedures, and regulatory interpretations across IRBs to affect the implementation and study of best methods for increasing evidence-based practice. Given the lack of unambiguous guidelines and local liability concerns, IRBs are at risk of applying standards that are both variable and inappropriate or unnecessary, and that may impede the conduct of evidence-based QI implementation research because they are not consistent with the spirit of the Belmont Report, a summary of the three basic ethical principles (respect for persons, beneficence, and justice) identified by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. However, there are promising developments in the IRB community. For instance, in November 2005, the Office for Human Research Protections (OHRP) sponsored a workshop on alternative models for IRB review, including centralized IRBs, with a follow-up conference in 2006 [33, 34]. Efforts to accredit IRBs may increase uniformity and the IRB's ability to be a full partner in improving care. In the meantime, it is incumbent upon implementation researchers to interact with IRBs in a manner that assists appropriate risk/benefit determinations and helps prevent the process from having a negative impact on efforts to reduce the lag in implementing best practices.
IRBs are essential participants in moving implementation research forward. Our experience shows that ethical dilemmas posed by implementation research can be solved if the level of commitment and communication among stakeholders is sufficient. Our experience also highlights the high costs of working with IRBs in multi-site projects, and the necessity of developing organized, efficient ways of dealing with these issues if researchers are to participate in improving clinical quality.
The research reported here and the production of this manuscript were supported by the U.S. Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service, MNT 03–215. All authors are (or were) VA employees at the time this paper was written. In order to receive VA funding, all VA studies must undergo scientific review internal to the VA (which includes a general review of the study design, plan for data collection, etc.). Data analysis, the decision to publish, and related matters are left to the discretion of the study's investigators. However, it should be noted that the findings and conclusions in this document are those of the authors, who are responsible for its contents, and do not necessarily represent the views of the U.S. Department of Veterans Affairs.
- Fitzgerald MH, Phillips PA: Centralized and non-centralized ethics review: a five nation study. Account Res. 2006, 13: 47-74.
- Greene SM: Alternative Models of IRB Review for Multi-center Studies: The Evidence Base & On-the-Ground Implications. Group Health Center for Health Studies Colloquium. 2007.
- Rubenstein LV, Mittman BS, Yano EM, Mulrow CD: From understanding health care provider behavior to improving health care: the QUERI framework for quality improvement. Quality Enhancement Research Initiative. Med Care. 2000, 38 (6 Suppl 1): I129-I141.
- Stetler CB, Legro MW, Rycroft-Malone J, Bowman C, Curran G, Guihan M, Hagedorn H, Pineros S, Wallace CM: Role of "external facilitation" in implementation of research findings: a qualitative evaluation of facilitation experiences in the Veterans Health Administration. Implement Sci. 2006, 1: 23. doi:10.1186/1748-5908-1-23.
- Bellin E, Dubler NN: The quality improvement-research divide and the need for external oversight. Am J Public Health. 2001, 91: 1512-1517.
- Daly BJ, Rosenfeld K: Maximizing benefits and minimizing risks in health services research near the end of life. J Pain Symptom Manage. 2003, 25: S33-42. doi:10.1016/S0885-3924(03)00059-9.
- Greene SM, Geiger AM, Harris EL, Altschuler A, Nekhlyudov L, Barton MB, Rolnick SJ, Elmore JG, Fletcher S: Impact of IRB requirements on a multicenter survey of prophylactic mastectomy outcomes. Ann Epidemiol. 2006, 16: 275-278. doi:10.1016/j.annepidem.2005.02.016.
- Christian MC, Goldberg JL, Killen J, Abrams JS, McCabe MS, Mauer JK, Wittes RE: A central institutional review board for multi-institutional trials. N Engl J Med. 2002, 346: 1405-1408. doi:10.1056/NEJM200205023461814.
- Baily M, Bottrell M, Lynn J, Jennings B: The Ethics of Using QI Methods to Improve Health Care Quality and Safety. 2006, Garrison, NY, S1-S39.
- Lo B, Groman M: Oversight of quality improvement: focusing on benefits and risks. Arch Intern Med. 2003, 163: 1481-1486. doi:10.1001/archinte.163.12.1481.
- McWilliams R, Hoover-Fong J, Hamosh A, Beck S, Beaty T, Cutting G: Problematic variation in local institutional review of a multicenter genetic epidemiology study. JAMA. 2003, 290: 360-366. doi:10.1001/jama.290.3.360.
- Burman WJ, Reves RR, Cohn DL, Schooley RT: Breaking the camel's back: multicenter clinical trials and local institutional review boards. Ann Intern Med. 2001, 134: 152-157.
- Silverman H, Hull SC, Sugarman J: Variability among institutional review boards' decisions within the context of a multicenter trial. Crit Care Med. 2001, 29: 235-241. doi:10.1097/00003246-200102000-00002.
- Department of Health and Human Services: Institutional Review Boards: A Time for Reform. 1998, 1-90.
- Shen JJ, Samson LF, Washington EL, Johnson P, Edwards C, Malone A: Barriers of HIPAA regulation to implementation of health services research. J Med Syst. 2006, 30: 65-69. doi:10.1007/s10916-006-7406-z.
- Stair TO, Reed CR, Radeos MS, Koski G, Camargo CA: Variation in institutional review board responses to a standard protocol for a multicenter clinical trial. Acad Emerg Med. 2001, 8: 636-641. doi:10.1111/j.1553-2712.2001.tb00177.x.
- Greene SM, Geiger AM: A review finds that multicenter studies face substantial challenges but strategies exist to achieve Institutional Review Board approval. J Clin Epidemiol. 2006, 59: 784-790. doi:10.1016/j.jclinepi.2005.11.018.
- Tod AM, Nicolson P, Allmark P: Ethical review of health service research in the UK: implications for nursing. J Adv Nurs. 2002, 40: 379-386. doi:10.1046/j.1365-2648.2002.02385.x.
- Cassell J, Young A: Why we should not seek individual informed consent for participation in health services research. J Med Ethics. 2002, 28: 313-317. doi:10.1136/jme.28.5.313.
- Green LA, Lowery JC, Kowalski CP, Wyszewianski L: Impact of institutional review board practice variation on observational health services research. Health Serv Res. 2006, 41: 214-230. doi:10.1111/j.1475-6773.2005.00458.x.
- McQueen L, Mittman BS, Demakis JG: Overview of the Veterans Health Administration (VHA) Quality Enhancement Research Initiative (QUERI). J Am Med Inform Assoc. 2004, 11: 339-343. doi:10.1197/jamia.M1499.
- Demakis JG, McQueen L, Kizer KW, Feussner JR: Quality Enhancement Research Initiative (QUERI): A collaboration between research and clinical practice. Med Care. 2000, 38: I17-25. doi:10.1097/00005650-200006001-00003.
- Stetler CB, Mittman BS, Francis J: Overview of the VA Quality Enhancement Research Initiative (QUERI) and QUERI theme articles: QUERI Series. Implement Sci. 2008, 3: 8. doi:10.1186/1748-5908-3-8.
- Grol R, Jones R: Twenty years of implementation research. Fam Pract. 2000, 17 (Suppl 1): S32-35. doi:10.1093/fampra/17.suppl_1.S32.
- Stetler CB: Updating the Stetler Model of research utilization to facilitate evidence-based practice. Nurs Outlook. 2001, 49: 272-279. doi:10.1067/mno.2001.120517.
- Stetler CB: Refinement of the Stetler/Marram model for application of research findings to practice. Nurs Outlook. 1994, 42: 15-25. doi:10.1016/0029-6554(94)90067-1.
- Glasgow RE, McKay HG, Piette JD, Reynolds KD: The RE-AIM framework for evaluating interventions: what can it tell us about approaches to chronic illness management? Patient Educ Couns. 2001, 44: 119-127. doi:10.1016/S0738-3991(00)00186-5.
- Green LW, Glasgow RE: Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Eval Health Prof. 2006, 29: 126-153. doi:10.1177/0163278705284445.
- Glasgow RE, Vogt TM, Boles SM: Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999, 89: 1322-1327.
- Institute for Healthcare Improvement: Testing Changes. [http://www.ihi.org/IHI/Topics/Improvement/ImprovementMethods/HowToImprove/testingchanges.htm]
- Lomas J: Using research to inform healthcare managers' and policy makers' questions: from summative to interpretive synthesis. Healthcare Policy. 2005, 1: 55-71.
- Institute for Healthcare Improvement: IHI collaborative model for achieving breakthrough improvement. 2003, Boston, MA, 1-20.
- Alternative Models of IRB Review – Workshop Summary Report. [http://www.hhs.gov/ohrp/sachrp/documents/AltModIRB.pdf]
- Alternative IRB Models: Optimizing Human Subject Protection – Conference Summary Report. [http://www.aamc.org/research/irbreview/irbconf06rpt.pdf]
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.