Thursday, December 23, 2010

First, Do Some Harm, Part II: The AAA Ethics Task Force

In mid-October, the Ethics Task Force of the American Anthropological Association solicited comments on the following text, a section of a draft Code of Ethics now being written:


Do No Harm

Anthropologists share a primary ethical obligation to avoid doing harm to the lives, communities or environments they study or that may be impacted by their work. This includes not only the avoidance of direct and immediate harm but implies an obligation to weigh carefully the future consequences and impacts of an anthropologist’s work on others. This primary obligation can supersede the goal of seeking new knowledge and can lead to decisions not to undertake or to discontinue a project. Avoidance of harm is a primary ethical obligation, but determining harms and their avoidance in any given situation may be complex.

While anthropologists welcome work benefiting others or increasing the well-being of individuals or communities, determinations regarding what is in the best interests of others or what kinds of efforts are appropriate to increase well-being are complex and value-laden and should reflect sustained discussion with those concerned. Such work should reflect deliberate and thoughtful consideration of both potential unintended consequences and long-term impacts on individuals, communities, identities, tangible and intangible heritage and environments.


As of December 13, 33 people (presumably all anthropologists, but I'm not sure) had posted comments. The comments are often nuanced, making it hard to say whether they endorse the language or not. But they broke down roughly as follows:

Do No Harm



Significantly, the most wholehearted supporters of the "do no harm" proposal are those who uncritically embrace the Belmont Report and the Common Rule. "'Do no harm' is an IRB principle, and so it should be in our code," writes Bethe Hagens. Four other responses, from Chip Colwell-Chanthaphonh, mkline, Robert T Trotter II, and Simon Craddock Lee, all seem to suggest that the AAA code should conform to those documents, without asking much about their origins or their fit to the practices and beliefs of anthropologists.

Four other responses--from Barbara Rose Johnston, Seamus Decker, socect, and Vicki Ina F. Gloer--endorse Hagens's idea that anthropologists should "intend no harm." Despite the Belmont Report's description of "the Hippocratic maxim 'do no harm' [as] a fundamental principle of medical ethics," this form is more faithful to the Belmont Report's overall section on beneficence.

Do Some Harm



Eight responses--almost as many--appear to reject the "do no harm" idea on the grounds that neutrality is impossible, and anthropologists should not hesitate to harm those who deserve it. "A blanket edict to 'Do No Harm' could easily lead to a professional paralysis when one considers that a few steps away from the person giving you this interview is someone who will not like, will want or need to fight, or will suffer consequences for what is said much further down the line," writes Benjamin Wintersteen. Murray Leaf concurs. "Do no harm is fine as principle of medical practice," he writes, "where you are working with a single individual. It is nearly meaningless when you (we) work with human communities, in which what is good and what is harm is usually in contention. As some of these posts suggests, what we do is often a matter of helping some while undermining the position of others. No harm at all, in such a context, would almost always be also no help at all–and no effect at all."

Bryan Bruns offers an example. "I work, in conjunction with communities and a government agency, to design and support a process in which communities are likely to, in a reasonably democratic way, act to restrain the behavior and thereby (harm) reduce the benefits of a few people (upstream irrigators, large landowners) who currently take advantage of others, it’s not clear how a principle of 'do no harm' would allow any practical engagement."

I would say that the responses by Dimitra Doukas, Joan P Mencher, Moish, Noelle Sullivan, and Ray Scupin all fall in this general category of respecting critical inquiry. Margaret Trawick's comment is harder to categorize. "I have been teaching 'Do no harm' to my students as the first ethical principle for anthropological fieldwork, for many years," she writes. "It is a difficult principle to follow, precisely because you never know what might cause harm, and therefore you have to THINK about what you are doing in the field more carefully than you might in everyday life. Good intentions are not enough. Additionally, 'harm to whom' is a good question . . . Sometimes to protect and advocate for one party (e.g. Untouchables in India) is to, at the least, offend some other party – e.g. high caste Hindus." Given her understanding of this problem, I'm not sure why she teaches "do no harm" rather than something like "think about whom you are harming."

It's the Wrong Question



An even greater number of responses suggest that, in the words of Carl Kendall, "This principle is way too vague and self-directed to be practically useful." Kendall hints, perhaps cynically, that anthropologists need one set of ethical principles to "pass IRB muster" and a second set "to protect communities and fieldworkers." Carolyn Fluehr-Lobban argues that "'Harm' should be problematized—are there agreed upon universal standards of harm, and where is there discussion of reasonable disagreement."

James Dow rejects the medical language of IRBs: "'Do no harm' is a good ethical principle to be applied to individual social relationships, which we hope that we understand; however, there is a problem when applying it to larger societies and cultures." Likewise, David Samuels writes that "The place where you need to get informed consent is at the point at which you have turned people into characters in your story. The medicalized pre-framing of the IRB process doesn’t cover that at all."

Taken as a whole, the responses suggest that only a minority of those commenting embrace the Belmont Report and the IRB process as enthusiastically as the AAA did in its 2004 statement, which presented the active involvement of IRBs as a positive good. I hope the Task Force recognizes this and takes the opportunity to reconsider the AAA's overall position on IRB review.

[Hat tip to Alice Dreger. For a historical perspective on another discipline's efforts to craft a research ethics code, see Laura Stark, "The Science of Ethics: Deception, the Resilient Self, and the APA Code of Ethics, 1966–1973," Journal of the History of the Behavioral Sciences 46 (Fall 2010): 337–370.]

2 comments:

Simon Craddock Lee said...

Hi Zachary (really enjoying your book btw.) I agree I sidestepped the origin and relevance of Belmont/IRB to the beliefs of many anthropologists about their practice. In retrospect, my comments weren't on topic (re: harm) but I see this in terms of pragmatics. I don't believe anthropologists (in my case federally funded) get to side-step IRBs by saying our work doesn't apply and I wrote in that vein. There are differences in methodologies but the spirit of an IRB as I see it is to provide external peer review of plans to engage with human subjects. Just like grant review, other people evaluate my proposal because my own subjective opinion that it's worth doing needs to be balanced by experts who don't have a direct stake in my project. In my mind, the reciprocal obligation of the IRB is that they must have experts who are qualified in the relevant approach who can make the case to non-anthropologists--just as an oncologist might need to explain the intricacies of a drug trial to an exercise physiologist. And that is admittedly where many IRBs fall down.

Zachary M. Schrag said...

Thank you for your comments and for your kind words about my book.

I'm afraid I am having some trouble following your argument. On the AAA website, you wrote that "No work involving human subjects is a priori categorically exempt." But the Common Rule states that "research activities in which the only involvement of human subjects will be in one or more of the following categories are exempt from this policy." If work is exempt because it fits into one of the six categories, then it would indeed appear to be "categorically exempt."

Nor do the regulations state that (in your words) while "some research could be exempt (technical determination by the IRB), you still have to demonstrate this is the case." Rather, as OHRP made clear in 2009, "the regulations do not require that someone other than the investigator be involved in making a determination that a research study is exempt."

Beyond these regulatory specifics, keep in mind the bigger picture. The 1981 regulations came with the promise that the exemptions would "exclude most social science research projects from the jurisdiction of the regulations." And just last November, OHRP director Jerry Menikoff reaffirmed that "The categories of research that are exempt or eligible for expedited review are unlikely to include highly unethical studies . . . Using [the exemptions] therefore frees up resources for reviewing riskier research."

Thus, it is not only anthropologists who are saying that most of their work should not be subject to IRB review. It is the text accompanying the regulations, it is statements by members of the Secretary's Advisory Committee on Human Research Protections and the director of OHRP himself, and, I might add, the report of the University of Texas IRB Task Force.

If IRBs had a great track record reviewing anthropological research, they would not need to claim regulatory authority to get anthropologists to submit projects for approval. But the fact that "many IRBs fall down" when reviewing social science research is further reason for anthropologists--individually and collectively--to seek to limit their reach.

As for the comparison to peer review, please see my 2007 post, "Why IRBs Are Not Peer Review."