Thursday, December 30, 2010

NIH Bioethicist Grady Questions IRB Effectiveness

JAMA has published an interesting exchange concerning the lack of data about IRB effectiveness.

[Christine Grady, "Do IRBs Protect Human Research Participants?," JAMA 304 (2010): 1122-1123; James Feldman, "Institutional Review Boards and Protecting Human Research Participants," and Christine Grady, "Institutional Review Boards and Protecting Human Research Participants—Reply," JAMA 304 (2010): 2591-2592.]

In the September 8 issue, Christine Grady of the Department of Bioethics, National Institutes of Health Clinical Center, quotes David Hyman's charge that "Despite their prevalence, there is no empirical evidence IRB oversight has any benefit whatsoever—let alone benefit that exceeds the cost." Grady is less blunt, but her message is the same:


Without evaluative data, it is unclear to what extent IRBs achieve their goal of enhancing participant protection and whether they unnecessarily impede or create barriers to valuable and ethically appropriate clinical research. This lack of data is complicated by the reality of no agreed-on metrics or outcome measures for evaluating IRB effectiveness. Although available data suggest a need for more efficiency and less variation in IRB review, neither efficiency nor consistency directly gauges effectiveness in protecting research participants. Protection from unnecessary or excessive risk of harm is an important measure of IRB effectiveness, yet no systematic collection of data on research risks, no system for aggregating risks across studies, and no reliable denominator of annual research participants exist. Even if aggregate risk data were easily available, it may be difficult to quantify the specific contribution of IRB review to reducing risk because protection of research participants is not limited to the IRB.

Serious efforts are needed to address these concerns and provide evidence of IRB effectiveness.


The December 15 issue features a reply by James Feldman of the Boston University School of Medicine. Feldman makes two points.

First, he doubts that IRBs cause that much trouble:


The critique of IRBs by Bledsoe et al, which was cited as evidence that they stifle research without protecting participants, is based on a single-site report of the results of an e-mail survey mailed to 3 social science departments with a total of 27 respondents. The evidence that IRBs have "disrupted student careers [and] set back tenure clocks" should also meet a reasonable standard of evidence.


OK, but what is that standard of evidence? In the absence of federal funding to study systematically a problem created by federal regulations, how much are frustrated researchers expected to do to demonstrate the problem? In other words, how many horror stories would Feldman need to change his views?

Having insisted that evidence is necessary to show the costs of IRB review, Feldman then asserts that no evidence is needed to show its benefit:


I believe that the effectiveness of IRBs in protecting human participants from research risks is analogous to preventive medicine. It is difficult to derive evidence that can quantify the effectiveness of a specific preventive intervention (new cases of HIV prevented? new injuries prevented?). However, evidence of preventable injury or illness makes a case for the need for effective prevention. Similarly, the tragic and prevalent cases of research abuse and injury make a compelling case for more rather than less review by IRBs that are independent, experienced, and knowledgeable.


As Grady points out in her reply to the letter, even if we accept the analogy, the IRB system does not meet the standards we impose on preventive medicine. She writes, "clinicians and public health officials do rely on evidence of the risks, benefits, and effectiveness of an intervention in preventing HIV or injuries or other conditions to justify adopting one particular preventive intervention rather than another and to defend the necessary investment of resources."

Exactly. As it stands, IRBs are the Avandia of ethics.

Sunday, December 26, 2010

First, Do Some Harm, Part III: Loosies in San Francisco

The third recent document illustrating the problem of applying the Hippocratic maxim to non-medical research is Leslie E. Wolf, "The Research Ethics Committee Is Not the Enemy: Oversight of Community-Based Participatory Research," Journal of Empirical Research on Human Research Ethics 5, no. 4 (December 2010): 77–86. It offers a clear example of the kind of valuable research that is impeded by simplistic medical ethics.

Thursday, December 23, 2010

First, Do Some Harm, Part II: The AAA Ethics Task Force

In mid-October, the Ethics Task Force of the American Anthropological Association solicited comments on the following text, a section of a draft Code of Ethics now being written:


Do No Harm

Anthropologists share a primary ethical obligation to avoid doing harm to the lives, communities or environments they study or that may be impacted by their work. This includes not only the avoidance of direct and immediate harm but implies an obligation to weigh carefully the future consequences and impacts of an anthropologist’s work on others. This primary obligation can supersede the goal of seeking new knowledge and can lead to decisions not to undertake or to discontinue a project. Avoidance of harm is a primary ethical obligation, but determining harms and their avoidance in any given situation may be complex.

While anthropologists welcome work benefiting others or increasing the well-being of individuals or communities, determinations regarding what is in the best interests of others or what kinds of efforts are appropriate to increase well-being are complex and value-laden and should reflect sustained discussion with those concerned. Such work should reflect deliberate and thoughtful consideration of both potential unintended consequences and long-term impacts on individuals, communities, identities, tangible and intangible heritage and environments.


As of December 13, 33 people (presumably all anthropologists, but I'm not sure) had posted comments. The comments are often nuanced, making it hard to say whether they endorse the language or not. But they broke down roughly as follows:

Do No Harm



Significantly, the most wholehearted supporters of the "do no harm" proposal are those who uncritically embrace the Belmont Report and the Common Rule. "'Do no harm' is an IRB principle, and so it should be in our code," writes Bethe Hagens. Four other responses, from Chip Colwell-Chanthaphonh, mkline, Robert T Trotter II, and Simon Craddock Lee, all seem to suggest that the AAA code should conform to those documents, without asking much about their origins or their fit to the practices and beliefs of anthropologists.

Four other responses--from Barbara Rose Johnston, Seamus Decker, socect, and Vicki Ina F. Gloer--endorse Hagens's idea that anthropologists should "intend no harm." Despite the Belmont Report's description of "the Hippocratic maxim 'do no harm' [as] a fundamental principle of medical ethics," this form is more faithful to the report's overall section on beneficence.

Do Some Harm



Eight responses--almost as many--appear to reject the "do no harm" idea on the grounds that neutrality is impossible, and anthropologists should not hesitate to harm those who deserve it. "A blanket edict to 'Do No Harm' could easily lead to a professional paralysis when one considers that a few steps away from the person giving you this interview is someone who will not like, will want or need to fight, or will suffer consequences for what is said much further down the line," writes Benjamin Wintersteen. Murray Leaf concurs. "Do no harm is fine as principle of medical practice," he writes, "where you are working with a single individual. It is nearly meaningless when you (we) work with human communities, in which what is good and what is harm is usually in contention. As some of these posts suggests, what we do is often a matter of helping some while undermining the position of others. No harm at all, in such a context, would almost always be also no help at all–and no effect at all."

Bryan Bruns offers an example. "[If] I work, in conjunction with communities and a government agency, to design and support a process in which communities are likely to, in a reasonably democratic way, act to restrain the behavior and thereby (harm) reduce the benefits of a few people (upstream irrigators, large landowners) who currently take advantage of others, it’s not clear how a principle of 'do no harm' would allow any practical engagement."

I would say that the responses by Dimitra Doukas, Joan P Mencher, Moish, Noelle Sullivan, and Ray Scupin all fall in this general category of respecting critical inquiry. Margaret Trawick's comment is harder to categorize. "I have been teaching 'Do no harm' to my students as the first ethical principle for anthropological fieldwork, for many years," she writes. "It is a difficult principle to follow, precisely because you never know what might cause harm, and therefore you have to THINK about what you are doing in the field more carefully than you might in everyday life. Good intentions are not enough. Additionally, 'harm to whom' is a good question . . . Sometimes to protect and advocate for one party (e.g., Untouchables in India) is to, at the least, offend some other party – e.g. high caste Hindus." Given her understanding of this problem, I'm not sure why she teaches "do no harm" rather than something like "think about whom you are harming."

It's the Wrong Question



An even greater number of responses suggest that, in the words of Carl Kendall, "This principle is way too vague and self-directed to be practically useful." Kendall hints, perhaps cynically, that anthropologists need one set of ethical principles to "pass IRB muster" and a second set "to protect communities and fieldworkers." Carolyn Fluehr-Lobban argues that "'Harm' should be problematized—are there agreed upon universal standards of harm, and where is there discussion of reasonable disagreement."

James Dow rejects the medical language of IRBs: "'Do no harm' is a good ethical principle to be applied to individual social relationships, which we hope that we understand; however, there is a problem when applying it to larger societies and cultures." Likewise, David Samuels writes that "The place where you need to get informed consent is at the point at which you have turned people into characters in your story. The medicalized pre-framing of the IRB process doesn’t cover that at all."

Taken as a whole, the responses suggest that only a minority of those commenting embrace the Belmont Report and the IRB process as enthusiastically as the AAA did in its 2004 statement that presents the active involvement of IRBs as a positive good. I hope the Task Force recognizes this, and takes the opportunity to reconsider the AAA's overall position in regard to IRB review.

[Hat tip to Alice Dreger. For a historical perspective on another discipline's efforts to craft a research ethics code, see Laura Stark, "The Science of Ethics: Deception, the Resilient Self, and the APA Code of Ethics, 1966–1973," Journal of the History of the Behavioral Sciences 46 (Fall 2010): 337–370.]

Wednesday, December 22, 2010

First, Do Some Harm, Part I: Denzin's Qualitative Manifesto

Three recent documents demonstrate the confusion that arises when people try to apply medical ethics to non-medical fields. I will describe them in individual entries.

In June 2010, Norman Denzin, Research Professor of Communications at the University of Illinois at Urbana-Champaign, published The Qualitative Manifesto: A Call to Arms (Left Coast Press). Chapter five seeks


to outline a code of ethics, a set of ethical principles for the global community of qualitative researchers. I want a large tent, one that extends across disciplines and professions, from anthropologists to archeologists, sociologists to social workers, health care to education, communications to history, performance studies to queer and disability studies.


Part of the impetus for this effort is Denzin's recognition that IRB guidelines may not match "guidelines grounded in human rights, social justice considerations" or disciplinary codes. He is familiar with the debate concerning IRBs, having read the Illinois White Paper, the AAUP reports, and "even a humanities and IRB blog where complaints are aired."

Denzin is also familiar with oral historians' concerns that IRBs impose inappropriate requirements, as well as statements of ethics from other qualitative researchers. He seeks to synthesize what he has learned in a footnoted dialogue, part of a "one-act play" entitled "Ethical Practices":


SCENE FOUR: Oral Historians

. . .

Speaker Two: We do not want IRBs constraining critical inquiry, or our ethical conduct. Our commitment to professional integrity requires awareness of one's own biases and a readiness to follow a story, wherever it may lead. We are committed to telling the truth, even when it may harm people (Shopes, 2007a, p. 4).

Speaker One: When publishing about other people, my ethics require that I subject my writing to a fine-mesh filter: do no harm (Richardson, 2007, p. 170).

Speaker Two: So there we have it. A set of methodological guidelines. (83)


No. What we have is a debate between Linda Shopes, a historian, and Laurel Richardson, a sociologist, about the ethical responsibility of an interviewer to a narrator. Their perspectives reflect important differences between their professions. They also reflect the particulars of the book in which Richardson's statement appears, an account of the last months of a dying friend--hardly the typical oral history or sociological study.

Denzin turns a blind eye to this debate, instead seeming to endorse both sides. In the play, Speaker Two states that "Beneficence, do no harm, is challenged in the oral history interview, for interviews may discuss painful topics, and they [sic] have the right to walk away at any time." That seems to endorse Shopes's position. But the book closes with a proposed ethical code that leans toward Richardson, calling on all qualitative researchers to "strive to never do harm." (122)

How can Denzin read and reprint historians' arguments, then reject them without even realizing he is doing so? Is the historians' position so hard to understand? Or is the lure of innocuity so powerful?

Sunday, December 12, 2010

Van den Hoonaard Reviews Ethical Imperialism

Will van den Hoonaard reviews Ethical Imperialism for the Canadian Journal of Sociology. The CJS blog asks that readers not quote or cite the advance version now online, but I suppose linking is OK.

Happy Fourth Birthday, Institutional Review Blog!

Friday, December 10, 2010

George Mason University Posts Consultant Report

The George Mason University Office of Research & Economic Development has posted the following:


The Huron Consulting Group's final report on the Office of Research Subject Protections and the Human Subjects Review Board is now available. The results are available in two formats: a comprehensive Final Report (PDF) and a Faculty Report Presentation (PPT).


As a Mason faculty member, I am involved in the continuing discussions of human research protections at the university. I will therefore refrain from comment except to applaud the university administration for posting these documents where they can inform decisions here and at other universities.

For comparable reports, see Institutional Review Boards at UC [University of California]: IRB Operations and the Researcher’s Experience, Report of the University Committee on Research Policy (UCORP), endorsed by the Academic Council, April 25, 2007, and University of Cincinnati, Final Report: Task Force on IRB and Compliance Roles and Responsibilities, May 2009.

Tuesday, December 7, 2010

Menikoff Passes the Buck

Joseph Millum, a bioethicist at the National Institutes of Health, and Jerry Menikoff, director of the Office for Human Research Protections, acknowledge the widespread dissatisfaction with present human subjects regulations and contend that "ethics review could be streamlined under the current regulations if institutions, IRBs, and researchers adhered strictly to the definition of human subjects research and used the available options for exemptions, expedited review, and centralized review—options that remain underused in biomedical research." But they put too much blame for this overregulation on IRBs and research institutions rather than on their own agencies.

[Joseph Millum and Jerry Menikoff, "Streamlining Ethical Review," Annals of Internal Medicine 153, no. 10 (November 15, 2010): 655-657.]