[Christine Grady, "Do IRBs protect human research participants?," JAMA 304 (2010): 1122-1123; James Feldman, "Institutional Review Boards and Protecting Human Research Participants," and Christine Grady, "Institutional Review Boards and Protecting Human Research Participants—Reply," JAMA 304 (2010): 2591-2592.]
In the September 8 issue, Christine Grady of the Department of Bioethics, National Institutes of Health Clinical Center, quotes David Hyman's charge that "Despite their prevalence, there is no empirical evidence IRB oversight has any benefit whatsoever—let alone benefit that exceeds the cost." Grady is less blunt, but her message is the same:
Without evaluative data, it is unclear to what extent IRBs achieve their goal of enhancing participant protection and whether they unnecessarily impede or create barriers to valuable and ethically appropriate clinical research. This lack of data is complicated by the reality of no agreed-on metrics or outcome measures for evaluating IRB effectiveness. Although available data suggest a need for more efficiency and less variation in IRB review, neither efficiency nor consistency directly gauges effectiveness in protecting research participants. Protection from unnecessary or excessive risk of harm is an important measure of IRB effectiveness, yet no systematic collection of data on research risks, no system for aggregating risks across studies, and no reliable denominator of annual research participants exist. Even if aggregate risk data were easily available, it may be difficult to quantify the specific contribution of IRB review to reducing risk because protection of research participants is not limited to the IRB.
Serious efforts are needed to address these concerns and provide evidence of IRB effectiveness.
The December 15 issue features a reply by James Feldman of the Boston University School of Medicine. Feldman makes two points.
First, he doubts that IRBs cause that much trouble:
The critique of IRBs by Bledsoe et al, which was cited as evidence that they stifle research without protecting participants, is based on a single-site report of the results of an e-mail survey mailed to 3 social science departments with a total of 27 respondents. The evidence that IRBs have "disrupted student careers [and] set back tenure clocks" should also meet a reasonable standard of evidence.
OK, but what is that standard of evidence? In the absence of federal funding to systematically study a problem created by federal regulations, how much can frustrated researchers be expected to do to demonstrate the problem? In other words, how many horror stories would Feldman need to change his views?
Having insisted that evidence is necessary to show the costs of IRB review, Feldman then asserts that no such evidence is needed to show its benefits:
I believe that the effectiveness of IRBs in protecting human participants from research risks is analogous to preventive medicine. It is difficult to derive evidence that can quantify the effectiveness of a specific preventive intervention (new cases of HIV prevented? new injuries prevented?). However, evidence of preventable injury or illness makes a case for the need for effective prevention. Similarly, the tragic and prevalent cases of research abuse and injury make a compelling case for more rather than less review by IRBs that are independent, experienced, and knowledgeable.
As Grady points out in her reply to the letter, even if we accept the analogy, the IRB system does not meet the standards we impose on preventive medicine. She writes, "clinicians and public health officials do rely on evidence of the risks, benefits, and effectiveness of an intervention in preventing HIV or injuries or other conditions to justify adopting one particular preventive intervention rather than another and to defend the necessary investment of resources."
Exactly. As it stands, IRBs are the Avandia of ethics.