A former chair of the University of Wisconsin-Madison Education and Social/Behavioral Science IRB guesses that IRB review “most likely” protects subjects from harm, but concedes that “nobody really knows.” He also notes that it consumes tens of thousands of hours of work, mostly by researchers, at his university each year.
[Kenneth R. Mayer, “Working through the Unworkable? The View from Inside an Institutional Review Board,” PS: Political Science & Politics 49, no. 2 (April 2016): 289–93, doi:10.1017/S1049096516000226.]
Kenneth Mayer raises significant methodological and ethical challenges facing social scientists. For instance,
A PI proposed a study involving interviews with people who have been charged with a crime or are the target of a criminal investigation to discuss the activities that led to the charges or investigations. Should the IRB be concerned about the risk that participants could be subject to prosecution or hurt their ability to defend themselves if a district attorney demands the researcher’s notes? Is informed consent sufficient to address these risks?
How should we evaluate risks in a project that studies attitudes toward government in a country with an authoritarian regime? Can we rely on the proposition that people in those regimes understand the risks of criticizing their government and that those risks are therefore (in the Common Rule definition) “not greater in and of themselves than those ordinarily encountered in daily life” (45 CFR 46.102(j))?
Unfortunately, Mayer’s IRB was not able to give clear answers to such questions. Rather, it offered its best guess about what to do, while acknowledging that “another IRB, quite reasonably, may have come to different conclusions.”
This practice marks Mayer and his colleagues as among those “well-meaning amateurs” identified by John Lantos, whose idiosyncratic decisions endanger research participants and research alike. As Mayer concedes,
The fact that the IRB process confronts ambiguities and competing standards is little comfort to researchers who believe their work is being delayed unnecessarily. If anything, this is a reason to clarify and address these ambiguities and tensions rather than to simply shrug our shoulders and say “it’s complicated.” A meaningful reform proposal would take far more space than available in this article, but if I were to offer one pragmatic suggestion for improving the IRB process, it would be to plead for better data. Even decades after the regulations have been in place, in social-science research we lack the most basic information about what we are doing and whether it actually does any good …
Are we really protecting subjects from harm? Do all of the compliance and reporting requirements make a material contribution to the welfare of research subjects? Does the IRB process impede research that would improve society? My answers are most likely yes, no, and possibly, respectively. However, the real answer is “nobody really knows”—and there is little prospect of significant improvement until we do.
Mayer does know that the process does harm, consuming vast amounts of time that could be better put to use:
A “back-of-the-envelope” calculation provides a rough idea of the time costs involved in the process. It is a reasonable estimate that preparing and submitting a medium-complexity protocol for IRB review takes about 10 hours; more detailed projects take longer. Factor in another 2 to 5 hours responding to IRB questions even for an expedited protocol, and another approximate 10 hours for protocols that are deferred or require modifications. A conservative estimate is an average of 15 to 20 person-hours for each new protocol; however, complex protocols with multiple PIs or sites easily could be four or five times higher, which suggests about 12,000 to 16,000 hours annually of researcher time for initial submissions. Submitting materials for the protocols undergoing continuing review or involving changes each will take at least 1 to 2 hours—an additional 1,800 hours. IRB committee meetings and proposal reviews collectively add perhaps another 750 hours annually for the 14 non-staff IRB members; factor in another 10,000 hours annually for five full-time IRB staff. Allowing for additional consultation time, it is plausible that the UW spends annually, conservatively, at least 25,000 to 30,000 person-hours on the basics, with most of that burden falling on researchers. Unfortunately, we do not know the actual number because there is no mechanism for tracking how much time is spent on the process.
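Mayer’s running total can be checked directly. A minimal sketch, assuming the roughly 800 new protocols per year that his 12,000-to-16,000-hour figure implies (he never states the protocol count itself), and 2,000 working hours per year for each full-time staff member:

```python
# Reproduce Mayer's back-of-the-envelope estimate of annual person-hours
# spent on the UW-Madison IRB process. The protocol count is an inferred
# assumption: 12,000-16,000 hours / 15-20 hours per protocol ~ 800.

new_protocols = 800                            # assumed, implied by Mayer's totals
initial_low = 15 * new_protocols               # 12,000 hours for initial submissions
initial_high = 20 * new_protocols              # 16,000 hours
continuing = 1_800                             # continuing review and changes, 1-2 hrs each
committee = 750                                # meetings and reviews, 14 non-staff members
staff = 5 * 2_000                              # five full-time staff, ~2,000 hrs/yr each

total_low = initial_low + continuing + committee + staff
total_high = initial_high + continuing + committee + staff
print(total_low, total_high)                   # 24550 28550
```

The sums land at roughly 24,500 to 28,500 hours, consistent with Mayer’s “at least 25,000 to 30,000 person-hours” once his allowance for additional consultation time is added.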
If we know that IRBs are doing harm, and can only guess if they are doing good, why continue? Mayer hints at the answer: outside pressure from the federal government and from accrediting institutions.
The 1998–2000 OPRR reign of terror is fresh in IRB memory. Mayer writes,
What I told my frustrated colleagues is that they cannot use a “reasonable person” standard to judge the IRB process. They had to apply an “unreasonable bureaucrat” standard: that someone authorized to punish the university might question the IRB’s work years later, perhaps after a serious adverse event that questions our entire process; who demands justification for each decision that the IRB has made; and who is not likely to give us the benefit of any doubts. When something goes wrong, the individual researcher is not the one who alone bears the cost. It is the institution and other researchers that face the most serious harm if the federal government imposes the nuclear option of shutting down all research.
In fact, federal enforcement has dropped significantly in recent years, so a nuclear option seems unlikely. But with hundreds of millions of federal dollars at stake ($660 million in 2013–2014), I can understand why the University of Wisconsin keeps its doomsday clock.
Mayer also alludes to “external review,” which, he laments, imposes unnecessary burdens:
During one external review, we faced pressure to require all foreign-language consent forms to be back-translated into English by independent professional translators. Such a rule, we were told, was necessary to ensure that consent forms contained what PIs claimed. The IRB Chair objected in the strongest terms, arguing that it would do nothing to protect subjects, would only serve to impose a monetary cost as a condition of submitting a protocol (it was understood that PIs would have to pay for it themselves), and that the UW trusted researchers to act ethically. The UW policy remains unchanged. Another review faulted us for returning too many protocols for minor changes that could be reviewed by a subcommittee of two IRB members rather than deferrals that had to go back to the full IRB, as well as for having too many unanimous votes. Still another review objected to the fact that some consent forms were signed in pencil rather than pen and wanted a requirement that all consent forms be signed in ink. Such a rule would do nothing but cause PIs to roll their eyes in exasperation.
Who were these nitpicking reviewers? Mayer doesn’t say, but he does mention that Wisconsin-Madison is accredited by the Association for the Accreditation of Human Research Protection Programs.