Just in time for the NPRM comment period, Society has published my review of Robert Klitzman’s book, The Ethics Police?: The Struggle to Make Human Research Safe (New York: Oxford University Press, 2015). I note that “By offering the subjective worldview of IRB members, Klitzman shows how good intentions combine with ethical ineptitude to produce arbitrary decisions.”
Per my agreement with Springer, what follows is the accepted manuscript of the review. The final publication is available at Springer via http://dx.doi.org/10.1007/s12115-015-9935-x.
Robert Klitzman. The Ethics Police?: The Struggle to Make Human Research Safe. New York: Oxford University Press, 2015.
In theory, institutional review boards, or IRBs, constitute a thin red line between the rights and welfare of participants in research studies and their exploitation by researchers who, intoxicated by scientific curiosity and their own ambition, can easily forget that their actions can harm the people they study. Since the 1960s, federal rules have required an increasing number of scholarly studies to be reviewed by IRBs, which operate independently in universities and research hospitals across the United States. The hope is that they will dispassionately referee between the scientist and the subject.
In practice, IRBs have gained a reputation for impeding science without protecting anyone. “Almost every researcher I know,” reports psychiatrist, bioethicist, and scholar Robert Klitzman, “has complained at some point, often vociferously, about an IRB having unnecessarily blocked or delayed his or her research.” (6) Klitzman himself has been kicked around by IRBs; early in his career, he lost eight months of a twelve-month fellowship trying to get approval from an IRB that, in the end, found nothing of concern in his protocol. (8)
After decades of such encounters, most of them less disruptive, Klitzman asked why a system so disliked by researchers made sense to those who run it. After choosing institutions from a list of NIH grant recipients, he created a sample of 46 IRB chairs, directors, administrators, and members, identified by pseudonyms in the text. Klitzman interviewed each informant for an hour or two, then supplemented these interviews with surveys of additional IRB members and staff and with some focus groups. He did not interview researchers unaffiliated with an IRB, federal officials, or research participants. Nor did he observe IRBs in action or study their minutes, communications with researchers, or other written records. (364)
Though Klitzman claims inspiration from anthropologist Clifford Geertz’s call for “thick description,” his interviews produce a fairly thin record. Indeed, Klitzman bravely includes criticism of his methodology by Douglas Diekema, a physician, professor of bioethics, and IRB chair, voiced in comments on a 2011 article in which Klitzman laid out some of his early findings. In that commentary, Diekema complained that
To truly understand why different IRBs make disparate decisions will likely require an anthropologic methodology where trained observers embed themselves within IRBs in multiple institutions and evaluate the deliberations and decisions of those IRBs … We will need to go beyond surveys and interviews to a systematic evaluation of the actual work that IRBs do. (361)
Klitzman does not disagree, replying only that “many IRBs have consistently thwarted such efforts.”
That may be true of “many IRBs,” but other researchers—including Maureen Fitzgerald, Jan Jaeger, Charles Lidz, Laura Stark, and Will van den Hoonaard—have succeeded in getting permission to observe IRBs in action and have achieved useful findings as a result. Klitzman is wrong to blame IRBs for his own disinclination to observe them at work, and for not drawing more from the work of those who have. Still, Klitzman’s interviews and surveys have produced more useful information than Diekema suspects. By offering the subjective worldview of IRB members, Klitzman shows how good intentions combine with ethical ineptitude to produce arbitrary decisions.
Klitzman’s main finding is that IRB members, however accomplished in their primary professional fields, have scant expertise in ethical reasoning and regulatory compliance. “Given the importance of the work they do,” he writes, “and the potentially grave consequences of IRB lapses and oversights, the lack of preparedness for the work is especially striking. Both general members and chairs have been found to have little if any formal training in ethics.” (36) They are equally lost when it comes to regulatory interpretation. “Committees,” Klitzman notes, “may thus grope, ‘smelling their way,’ rather than using explicit rational formulas.” (100) In that sense, they are not so much an “ethics police” as an ethics posse: a group of virtuous but untrained citizens, recruited off the street, given lethal weapons, and told to stop the bad guys.
Among these “well-meaning amateurs” (a phrase Klitzman approvingly borrows from bioethicist John Lantos), the most amateurish are the “community members”; regulations require each board to include at least one non-scientist and at least one person not affiliated with the institution conducting the research. But even the professionals who make up the bulk of IRB membership are unlikely to be expert in the particulars of the varied proposals that come before them. When a truly problematic case comes before an IRB member with real expert knowledge, it’s by luck rather than design. One IRB reviewed a protocol that looked fine to everyone except a hematologist, who noted that the drug involved can cause bleeding and insisted on a monitoring plan. (151) Count that as a win, but, as Klitzman notes, only “happenstance” brought that protocol before a board with a hematologist. Had the same protocol come before a board with, say, an endocrinologist but no hematologist, then too bad for the bleeders.
More typical are the cases in which no one on the IRB knows what’s at stake, and everyone must fall back on guesswork and panic. Olivia, an IRB chair, describes a sleep study whose participants would stay awake for 60 hours:
The IRB thinks it must be terrible to stay up for 60 hours. The rectal thermometer probe is everybody’s greatest concern, but participants say it’s the thing they get used to most quickly. Our community member was beside herself because of the description of the actograph. It looks like a wristwatch, and measures movement. But the protocol made it sound like it was a significant risk. (154)
As Klitzman notes about this case, IRB fears may not be “realistic or reality-based.” (154—his italics.)
Klitzman’s informants report few attempts to learn more about the reality-based risks of research. A few have attended conferences hosted by IRBs’ professional organization, Public Responsibility in Medicine and Research (PRIM&R), but these are expensive and not terribly informative. (62) Klitzman seems not to have asked about another potential source of enlightenment: the library. Consider, for instance, the question of compensating research participants, which, Klitzman’s informants report, is a constant puzzle. If researchers pay participants too much, might they risk giving poor people the feeling they have no real choice but to join the study? But if researchers pay only a token amount, are they not asking those same participants to give up time and take trouble—and perhaps bear risks—without fair compensation? As it turns out, both ethicists and empirical researchers have studied this question, and they have answers. (91) But Klitzman, who has published some of his work in the Journal of Empirical Research on Human Research Ethics, seems not to have asked IRB members whether they ever consult that or other scholarly journals in ethics when making their decisions.
IRBs do occasionally try to get help interpreting the federal regulations that govern their conduct. Canadian research ethics boards (REBs) with such questions can get written, public interpretations from the Secretariat on Responsible Conduct of Research, plus periodic updates to their nation’s Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. Americans have no such resource. When IRBs query the federal Office for Human Research Protections (OHRP), officials “essentially just read back the regulations” or offer “vague generalities.” (185) Jeff, an IRB chair, complains, “I’ve found OHRP communications to social scientists to be dishonest.” (184)
Cut off from any national discussion, IRBs turn inward. In her book Behind Closed Doors, sociologist Laura Stark claimed that IRBs are guided less by national scholarly debate than by “local precedents,” so they are at least internally consistent. Klitzman finds some evidence for this view (149), but it is outweighed by the evidence of local inconsistency. As one IRB chair tells him, “Investigators may get quite different and inconsistent advice from the committee depending on what it feels like that day.” (93) Louis, an IRB chair, notes that a board may approve the same consent form for seven years, then turn around and reject it as too technical. (134)
The only defense for such inconsistency is that IRBs are somehow attuned to the surrounding communities, and thus may make appropriately distinct decisions. But in his most consistent challenge to his informants’ assertions, Klitzman notes that IRBs make little effort to ascertain what those communities think, and that inconsistency is better explained by internal dynamics. “IRBs even within the same community and institution often differ widely,” depending in part on who attends a given meeting. (143, 157) One IRB member confesses, “I can look at something one day, and then the next day: what the heck was I thinking?” (146) IRBs differ not because they are expert in local conditions but because their members “use ‘gut feelings’ and the ‘sniff test,’ not careful ‘ethical analysis.’” (166) Feelings and smells are fickle.
Though IRB members seem frustrated by what they don’t know, and incapable of finding help, they don’t let their ignorance stop them from deriding others’ decisions or imposing significant changes on researchers. To the contrary, one of Klitzman’s informants boasts of her IRB’s rejection of a study design that other IRBs had approved. Cynthia, an IRB administrator, explains:
The researcher wanted to have a national database where any parent would have to have their child’s name entered before they would be eligible to go into a clinical trial. If you disagreed with putting your child’s name into this national database, you were not allowed to enroll in any trials. I said, “You cannot take away people’s right to enroll in a research study if they say, ‘I don’t want to place my child’s name in a national registry.’ You’ve suddenly made registration an eligibility requirement.” We were the only IRB that found issue with that—which amazes me. They thought it was not an IRB issue because it was a registry—not a research study. But isn’t this an invasion of a parent’s right to choice and confidentiality? They then rescinded it. A lot of institutions would have OK’d it. (152)
Cynthia thinks of herself as playing the Henry Fonda role in Twelve Angry Men, bravely standing up for the weak when no one else will. Less clear is why she thinks that way. What is so exploitative about asking a participant to join a registry? Why does Cynthia think subjects have a “right” to participate in one part of a study but not another? Does she understand the confidentiality procedures of disease registries? How did the investigators feel about the restriction? How did study participants? What scientific knowledge was lost? Would greater participation in the registry speed research about a serious childhood disease?
And even if we assume that Cynthia’s IRB was right when everyone else was wrong, did it not have a duty to persuade the national research community that participants should not be required to join registries? Yet the IRB system includes no mechanism for such alleged ethical wisdom to be shared. For anyone concerned with the rights and welfare of research participants, this should be terrifying. But rather than pressing Cynthia with follow-up questions, or providing extensive analysis, Klitzman takes at face value Cynthia’s claim to have “discover[ed] points not found by other reviewers.” Thicker description would have helped.
Klitzman claims that IRBs “grapple with decisions that involve profound philosophical, moral, and political dilemmas,” and he offers some examples, mostly involving research abroad. (316, 351) But most of the episodes his informants report concern superficial minutiae. How many expected non-English speakers does it take to require a consent form to be translated? (127) Can “four weeks” be stretched to “four weeks and three days”? (331) What punishment to impose on a researcher who misplaced a videotape, found it, and confessed his crime? (284) In some cases, procedural concerns are significant; one IRB blocked further research funds until an investigator had cleaned up a filing system that threatened subject confidentiality. (278) But other IRB requirements seem to lack any ethical content. One administrator relates,
Urban myths are out there—things that the committee has interpreted in the past that are not regulations but are just comfort levels. Our IRB says, “You should say ‘fill out,’ not ‘complete,’ a survey. ‘Complete’ is coercive.” But that’s not in the regulations. Our IRB also likes consent forms to be on institutional letterhead. (150)
Klitzman’s own IRB required him to tell each potential interviewee for this book, “the alternative to participating is not to participate.” (138) Another Tuskegee averted!
That a renowned Columbia University medical professor would accept such indignities is an indication of IRBs’ power, and one of Klitzman’s most surprising findings is how little IRB members understand how much power they wield. As he notes, “Local IRBs serve as their own police, judge, jury and Supreme Court.” (72) Told by regulation that their job is to balance risks and benefits, they instead seek to eliminate risk, including the risk of “bad press.” (75) And they can kill careers, banning researchers from doing research. “Researchers certainly recognize the power of these boards,” Klitzman finds, “and for IRB members to deny it is philosophically and sociologically naïve.” (356)
The weakest section of the book consists of the final two chapters, in which Klitzman ventures into the ongoing policy debates about IRBs. Here his own interviews with IRB members and staffers are of limited help, for, as he concedes, his informants “receive support—even if it is only part of their salary—for their work in the status quo, which they may therefore be invested in continuing.” (339) Don’t ask your barber if you need a haircut. And Klitzman mischaracterizes some proposals for reform. He claims, for instance, that the Obama administration’s 2011 proposals would allow researchers to declare any project to be “minimal risk” and proceed without prior review. (19) In fact, the 2011 proposals suggested allowing this streamlined track only for “specified types of benign interventions,” not just anything a researcher claims to be low-risk.
Klitzman himself is painfully ambivalent about the worth of the present system. “The authors of the regulations 40 years ago embarked on a grand experiment,” he crows. “Now, almost a half century later, it is clear that they have succeeded in many ways.” (323) Yet he also concedes that “boards frequently appear to try to justify their power, arguing that it helps researchers and human subjects. But no clear data exist to support that claim. At times, an IRB’s power may actually delay or impede research, causing harm that the committee may insufficiently recognize or acknowledge.” (268) He offers no reconciliation of these apparently contradictory statements.
Klitzman concludes his work with a quotation from Winston Churchill and the suggestion that the IRB system, like democracy, is “the worst system … except for all the others.” (368) But whereas Churchill’s listeners knew full well what other systems had been tried from time to time, Klitzman does not compare today’s system to the less heavy-handed IRB system in place in the 1980s; or to ethics oversight regimes in continental Europe, which review much narrower ranges of research; or to Canada’s system, which offers its boards considerably more advice, and gives researchers the right to appeal.
And while Churchill spoke despairingly of other forms of government that “will be tried in this world of sin and woe,” Klitzman believes that better alternatives to the present IRB system remain to be found. He calls for additional work “to examine to what degree, and how exactly, different models might operate.” (335) He insists that “boards must somehow be encouraged and even incentivized to reduce … idiosyncratic differences,” perhaps through the publication of their decisions. (167, 347) He repeatedly calls for OHRP or the Institute of Medicine to provide detailed guidance, something they have in the past failed to do. He hopes for some system of “checks and balances” to restrain IRB nitpicking. (140) He even endorses a “small scale” test of self-regulation by researchers conducting minimal risk studies. (346) Though he does so with more ambivalence than in his earlier writings, Klitzman is willing, at times, to imagine radically different structures.
“This is not,” Klitzman tells his readers, “an ‘anti-IRB’ book.” (30) Yet any critic of the current system will find in it plenty of evidence that drastic reform is needed. Indeed, in The Censor’s Hand, the most anti-IRB book ever written, Carl Schneider repeatedly cites Klitzman’s articles to illustrate some of IRBs’ worst pathologies, and their members’ foggiest thinking. It is harder to imagine any of Klitzman’s readers taking comfort in learning that researchers, research participants, and science itself must rely on the good intentions of these groping, sniffing amateurs.
For further reading
Fitzgerald, Maureen H. “Punctuated Equilibrium, Moral Panics and the Ethics Review Process.” Journal of Academic Ethics 2, no. 4 (2005): 315–38.
National Research Council. Committee on Revisions to the Common Rule for the Protection of Human Subjects in Research in the Behavioral and Social Sciences. Proposed Revisions to the Common Rule for the Protection of Human Subjects in the Behavioral and Social Sciences. Washington, DC: The National Academies Press, 2014.
Stark, Laura. Behind Closed Doors: IRBs and the Making of Ethical Research. Chicago: University of Chicago Press, 2011.
Schneider, Carl E. The Censor’s Hand: The Misregulation of Human-Subject Research. Cambridge, Massachusetts: The MIT Press, 2015.
Van den Hoonaard, Will C. The Seduction of Ethics: Transforming the Social Sciences. Toronto: University of Toronto Press, 2011.