Inexpert, Subjective Judgments
The NRC report notes the problem of subjective assessment of protocols:
To avoid subjectivity and enhance continuity within and across institutions, IRBs could draw on established scientific and professional knowledge in their determination of the probability and magnitude of research harms in daily life and in routine medical, psychological, or educational examinations, tests, or procedures of the general population. However, care is needed to avoid confusing evidence-based probability estimates with the subjective possibility that harms and discomforts of high magnitude are likely to be produced by the research. For example, IRBs could consider adopting procedures that appropriately balance the probability and magnitude of research harms, in order to avoid subjectively judging research as having a greater than minimal risk in cases where there is a very small probability that the research may produce harm of high magnitude or where there is a high probability that research may produce harms or discomfort of small magnitude.
Research Needed: To build a stronger evidence base, research is needed for identifying the probability and magnitude of harms and discomfort in daily life and the nature of age-indexed, routine medical, psychological, or educational examinations, tests, or procedures of the general population. In addition research is needed to examine appropriate algorithms for determining whether the calculus of probability and magnitude of harms and discomfort meets minimal-risk criteria . . .
There may be little awareness by IRBs and investigators of the growing body of published empirical evidence describing participant perspectives on research risks and benefits of social and behavioral research, as well as biomedical research. (49)
The report goes on to note “an abundance of investigator reports of survey studies for research on sexuality, drug use, and other health-relevant behaviors in which IRBs have created barriers to research implementation based on the empirically unsupported claim that surveys or interviews on such topics may harm participants by encouraging them to engage in the behaviors being studied.” (50)
And, “research is needed to properly address nonphysical risks of research and the methods that create them, rather than having IRBs rely on anecdote or moving to make drastic changes based on efficiency.” (62)
(Is the concern about reliance on anecdotes a reference to Laura Stark’s “local precedents”?)
While accurately diagnosing the problems of subjectivity and inconsistency, the report offers little on how IRBs might be induced to base their decisions on evidence. The Belmont Report, now 35 years old, already calls for the “systematic, nonarbitrary analysis of risks and benefits,” but—as NRC committee member Celia Fisher and others have documented—that hasn’t stopped IRBs from acting idiosyncratically and arbitrarily. How does the NRC panel hope to change this?
Possible Incremental Reforms
The report hints at a few possibilities. Some of these could certainly help more conscientious boards, but it’s not clear how they could address the worst IRB abuses.
1. More research about the effects of participation in research.
The report recommends that “Research is needed to study the effects of social and behavioral science research on research participants so that evidence-based assessments of ‘known and foreseeable’ risk are more feasible. In particular, research is needed to properly address nonphysical risks of research and the methods that create them, rather than having IRBs rely on anecdote or moving to make drastic changes based on efficiency. Research is also needed on the effectiveness of confidentiality strategies in reducing risks of physical, social, economic, and legal harm.” (62) Obviously, this would all be for the good. But how to get IRBs to read the research?
2. More guidance from OHRP.
Many passages in the report (e.g., 48, 61) suggest the need for more guidance from OHRP on various issues. That said, the recommendations mostly call for guidance on interpreting the regulations, not on protecting research participants. And the report obliquely notes OHRP’s record of not responding promptly to previous calls for guidance (28).
3. An appeals process.
The report recommends that the “IRB process should allow appeals for review by an authoritative committee. This committee may exist either within the institution or within an outside agency.” But it does not explain why this is needed or, more importantly, what effect the committee expects an appeals process would have. Do the members think that an IRB that was regularly overruled might begin to change its ways?
Bolder Alternatives

The most intriguing ideas come not in a discussion of human subjects research in general but rather in the discussions of more specific investigations.
First, the report suggests the development of “an institutional or organizational entity such as a national center to define and certify the levels of information risk of different types of studies and corresponding data protection plans to ensure risks are minimized.” (92)
Second, it suggests alternative boards to review “quality assurance/improvement (QA/QI) in the field of healthcare and investigations into the nature, causes, and effectiveness of responses to natural disasters.”
“Why,” the report asks, “is IRB review not suitable in these fields? Studies in the field of QA/QI are characterized by frequent changes in the interventions utilized in the healthcare setting. IRBs, in general, lack the expertise to assess the methods employed to evaluate these interventions. Moreover, if each of these changes in the interventions must be reviewed at a convened meeting of the IRB, it would take much too much time to go through the technical IRB process of approval of amendments.” (109)
The report then suggests a different form of committee for these other sorts of investigation:
While procedural alternatives to IRB oversight are discussed elsewhere in this report, two suggestions related to the examples above are considered here. First, a committee could be established that was made up of experts in QA/QI as well as experts in the cognate medical specialties, ethicists, patient advocates, and persons who have no connection with the institution apart from membership on the committee . . . Second, studies of the nature, causes, and effectiveness of responses to natural disasters could be overseen by similarly constructed committees. (109)
Why should a national center only investigate data protection, as opposed to research risks in general? And when the committee states that IRBs lack the expertise to evaluate QA/QI, is it suggesting that IRBs possess the expertise to review all the many methods used in human subjects research? If so, where’s the evidence for that? Or that the IRB process is nimble enough to keep up with the research process?
And given that existing federal requirements for IRB expertise have been ineffective, what new rules could ensure a higher level of expertise for QA/QI and disaster research? If we can create new, independent boards of experts, might not human subjects research benefit from the same arrangement?
In short, if we can construct expert review of data research and QI by establishing regional or national bodies, why not extend that model to all research, and get rid of local IRB review?
To its credit, the NRC report does not assume that IRBs do more good than harm. (18) But I don’t see a full discussion of “procedural alternatives to IRB oversight” for all the forms of research now covered by the Common Rule.