Monday, December 24, 2007

Law & Society Review, continued

As I noted earlier, the December 2007 issue of Law & Society Review features five items concerning IRBs and the social sciences.

Malcolm M. Feeley, "Legality, Social Research, and the Challenge of Institutional Review Boards"

The section on IRBs begins with Malcolm M. Feeley's 2006 presidential address to the Law & Society Association. Feeley presents an impassioned critique of IRBs, complaining, "in the name of minimizing risks, IRBs subject researchers to petty tyranny. Graduate students and junior scholars are particularly likely to be caught in their web—and for them IRB tyranny is often more than petty. Senior scholars are generally more adept at avoidance, evasion, and adaptation, but they too are hardly exempt from this tyranny. A number of prominent social scientists, including some members of this Association, know all too well the harms of running afoul of campus IRBs. . . Entire research areas and methodologies are in jeopardy, insofar as the difficulties of obtaining IRB approval affect research priorities for funding agencies and universities' willingness to support researchers."

Feeley then raises a number of specific problems, such as the ill fit between the beneficence encoded in regulation and the kind of social research that aspires to produce "tarnished reputations and forced resignations" of evil-doers.

To remedy this situation, Feeley proposes three modes of action:

1. "Join [IRBs]; subvert them—or at least curtail them. Serve on them and do all you possibly can to facilitate the research of your colleagues rather than act as a censor."

2. Follow Richard Shweder's call to get your university to apply federal regulations only to federally funded research.

3. "Ask about estimates of how much actual harm to subjects in social science research has been prevented by IRB actions. And ask for documentation."

I am a bit skeptical about the first suggestion, for two reasons. First, few universities have IRBs strictly for the social sciences. This means that a sociologist, anthropologist, political scientist, or historian would spend most of her time on an IRB reviewing (or abstaining from reviewing) psychological experiments. That's an unfair price to pay to have some power over one's own research. Second, it assumes that IRBs are run by IRB members. As Caroline H. Bledsoe et al. report in "Regulating Creativity: Research and Survival in the IRB Iron Cage," the size of human protections staffs has ballooned in recent years. If the staff have the real power, IRB members will have little chance to facilitate research.

Laura Stark, "Victims in Our Own Minds? IRBs in Myth and Practice"

The first comment is Laura Stark's. It draws in part on Stark's 2006 Princeton dissertation, "Morality in Science: How Research Is Evaluated in the Age of Human Subjects Regulation." I am glad to learn of this work, and I hope to comment on it in a later post.

Stark suggests trying to improve, rather than restrict, IRBs, because “ethics review in some form is here to stay because of institutional inertia, and [because of her] belief as a potential research subject that ethics review is not an entirely bad idea, even for social scientists.” She advocates "changing local practices to suit the local research community, rather than refining federal regulations."

One intriguing example is the establishment of "IRB subcommittees, which can review lower-risk studies [and] have moved ethics review into academic departments. In so doing, these subcommittees of faculty members (who presumably understand the methods in question) have taken over the task of evaluating low-risk studies from board administrators." This sounds a lot like the departmental review that the AAUP suggested as an alternative to IRB control, and like the Macquarie model I described in August. I hope that Stark will publicize the name of the university that uses such subcommittees, so that it can better serve as an example to others. Stark does not explain why this model is appropriate only for low-risk studies. It seems to me the higher the risk, the more reason to have research reviewed by people who understand its methods.

Significantly, neither in her article nor in her dissertation does Stark take up Feeley's challenge to document cases in which IRBs have prevented actual harm to participants in social science research. Her research offers important insights about how IRBs reach decisions, but no evidence that those decisions do more good than harm, or that they are consistent with norms of academic freedom.

Finally, Stark claims, "the social science victim narrative—by which I mean the story that human subjects regulations were not meant to apply to us—is pervasive among academics, and it is particularly central to qualitative researchers as a justification for their criticisms of IRBs. Yet this victim narrative does not stand up to historical scrutiny, as I have shown." Yes and no. Stark's use of the passive voice ("were not meant to apply") is telling; the question is who meant the regulations to apply to social scientists, and who did not. I am working on a full-scale history of the imposition of human subjects regulations on the social sciences, and I can tell Stark that more scrutiny will complicate her story.

Robert Dingwall, "Turn off the oxygen …"

The second comment is Robert Dingwall's "Turn off the oxygen …," the oxygen here referring to the legitimacy granted to IRBs by university faculty.

Dingwall is skeptical of legal challenges, given the cost, the possibility of failure, and the fact that the First Amendment applies only in the United States (Dingwall works in the UK). He argues instead that “if we can show that ethical regulation does not actually contribute to a better society, but to a waste of public funds, serious information deficits for citizens, and long-term economic and, hence, political decline, then we may have identified a set of arguments that might lead to a more skeptical approach to the self-serving claims of the philosopher kings who sustain that system.” For example, we must continue to document ethical wrongs like the insistence by a British medical journal that two historians falsify the names of their oral history narrators, despite the wishes of most of the narrators to be named. [Graham Smith and Malcolm Nicolson, "Re-expressing the Division of British Medicine under the NHS: The Importance of Locality in General Practitioners' Oral Histories," Social Science & Medicine 64 (2007): 938–48.] I hope Professor Dingwall has a chance to read Scott Atran's essay, "Research Police – How a University IRB Thwarts Understanding of Terrorism," posted on this blog in May. It is an excellent example of the way that IRB interference can disrupt vitally important work.

Jack Katz, "Toward a Natural History of Ethical Censorship"

The third comment, by Jack Katz, is the most shocking, for it is the most thoroughly documented. (It even cites this blog; thanks.) Katz lists several cases, all recent, in which IRBs have derailed potentially important social research. Unlike the 2006 AAUP report, he gives names, universities, dates, and citations for most of his horror stories. Among them:

* "In Utah, Brigham Young University's IRB blocked an inquiry into the attitudes of homosexual Mormons on their church. When the same anonymous questionnaire study design was transferred to another researcher, the IRB at Idaho State University found the study unproblematic."

* "A proposed study of university admissions practices [was] blocked by an IRB at a Cal State campus. The study had the potential to reveal illegal behavior, namely affirmative action, which was prohibited when Proposition 209 became California law."

* "At UCLA, a labor institute developed a white paper lamenting the health benefits that Indian casinos offered their (largely Mexican and Filipino) workers. Despite the university's support for the labor institute when anti-union legislators at the state capitol have sought to eliminate its funding, publication was banned by the IRB after a complaint by an advocate for Indian tribes that the study had not gone through IRB review."

Stark would have us believe that "the local character of board review does not mean that IRB decisions are wrong so much as that they are idiosyncratic." But Katz shows that IRBs' idiosyncrasies can be hard to distinguish from viewpoint-based censorship.

In contrast to these identifiable harms, Katz finds "no historical evidence that the social science and humanistic research now pre-reviewed by IRBs ever harmed subjects significantly, much less in ways that could not be redressed through post hoc remedies." I don't think I would go quite this far, given Carole Gaar Johnson's description of the harms caused to the residents of "Plainville" by the inept anonymization of their town ("Risks in the Publication of Fieldwork," in Joan E. Sieber, ed., The Ethics of Social Research: Fieldwork, Regulation, and Publication [New York: Springer, 1982]). But the rarity of such cases means we should weigh IRB review against other methods of prevention, such as departmental review of projects or better certification of researchers.

Katz reiterates his call, previously set forth in the American Ethnologist, for a "culture of legality," in which IRBs would be forced to explain their decisions and "publicly disseminate proposed rules before they take the force of law." He believes that "were IRBs to recognize formally that they cannot properly demand the impossible, were they to invite public discussion of policy alternatives, and were they to open their files to public oversight, they would fundamentally alter the trajectory of institutional development by forcing confrontation with the central value choices currently ignored in the evolution of ethical research culture."

But what do we do when we confront those value choices? We get statements like Stuart Plattner's: “no one should ever be hurt just because they were involved in a research project, if at all possible,” a position clearly at odds with Katz's applause for "the American tradition of critical social research" (Plattner, “Human Subjects Protection and Cultural Anthropology,” Anthropological Quarterly, 2003). The problem with IRBs' value choices is not that they are hidden, but that they are often wrong. The Belmont Report is the most public and widely cited rule used by IRBs, and it is a terrible guide for the kind of critical research Feeley and Katz want done.

Feeley, "Response to Comments"

The most interesting part of Feeley's response comes at the very end. Noting that, with the AAUP's encouragement, some universities have ceased promising to review all human subjects research in favor of the regulatory minimum: reviewing only federally funded research, he points out that we will soon know whether the lack of IRB review of social science at those universities yields a flood of unethical research. "If there are few reports of negative consequences . . . they might encourage national officials to rethink the need for such an expansive regulatory system . . . On the other hand, if opt-out results in increased problems, the findings might help convince Katz, Dingwall, me, and still others of the value of IRBs." This strikes me as a very fair bet, and the experiment can't begin soon enough.

1 comment:

Anonymous said...

In the UK, the introduction of IRB-type systems has been uneven and led by less research-intensive universities, so there are fewer documented cases of censorship akin to those described by Jack Katz. However, my attention has recently been drawn to a 2007 paper (R. Roberts et al., "UK students and sex work: current knowledge and issues," Journal of Community and Applied Social Psychology 17: 141-146) which resembles several of Katz's cases and suggests that even a 'voluntary' system of institutional review easily slides into censorship. In this case, the authors report on their efforts to investigate the claim that changes in government funding for English undergraduate students have led a significant number of young women to take up employment in the sex industry. Their own university approved the study only on the basis that they did not generate a sample from that institution's students, and promised support from the National Union of Students did not materialize, apparently because of the close ties between the union and the political party responsible for the funding changes. This has similarities to both the Mormon case and the Indian casino workers case reported by Katz.