Friday, April 27, 2007
This is probably not much of a loss, since the various alternatives studied at the conference appear irrelevant to social and behavioral research. But the comment does continue a long tradition of promises that the methods, ethics, and needs of social scientists will get the attention they deserve . . . someday.
Tuesday, April 24, 2007
As unimpeachable as the OHA's own Professional Guidelines may be, I think it is arrogant to assume that oral historians have nothing to learn from other disciplines with regard to the ethical treatment of human subjects. If nothing else, they can become more sensitized to the possibilities for psychological or social harm that may result from oral history interviewing. Whenever our IRB reviews a protocol from the psychology department that involves questions about childhood abuse or some other trauma, we make sure that the investigator is either qualified to directly provide appropriate counseling or intervention, or provides a list of appropriate support services. How many oral historians have the expertise or qualifications to handle a situation in which an informant with PTSD experiences distress during an interview? How many would have a list of counseling services at hand in case it was necessary? How many even imagine such a scenario when they venture out with their tape recorders?
I would like to suggest that historians don't imagine such a scenario because it doesn't happen.
When I asked Atkins what made him think interviews could traumatize narrators, he replied,
when I was at the 2004 OHA meeting, I attended a panel on the Veterans' Oral History Project, at which the presenters very casually remarked that several veterans, being interviewed by small groups of fourth-graders, broke down into tears when talking about their battlefield experiences. My first thought was, "so how did a bunch of fourth-graders respond to that?" Breaking down crying is not always indicative of PTSD, but you surely understand that the possibility is there.
As Atkins concedes, crying is not trauma requiring "counseling or intervention" by a licensed therapist. Basic decencies—a pause in the recording and some words of sympathy—are enough. And while the possibility of real trauma exists, so does the possibility that a narrator will fall down the stairs trying to answer the interviewer's knock at the door. The question is whether the risk is great enough to justify the hassle of IRB review, and Atkins presents no evidence that it is. Historians have recorded oral history interviews for half a century, and he cannot point to one that has traumatized the narrator.
Having imagined a harm, Atkins also imagines a remedy: "a list of appropriate support services" to be tucked into the interviewer's bag, next to spare batteries for the recorder. Unsurprisingly, he has no evidence that such a list has ever helped anyone.
For researchers in parts of the world where such support services are common, carrying a list isn't much of a burden. But the paperwork and training required before an IRB will approve one's project are a real burden. And the requirement of a list could disrupt research in parts of the world where those services don't exist, or even for a researcher who travels around the United States to collect stories and would have to carry a separate list for each area she visits.
Atkins is not alone in making such claims. Comparable fears appear in Lynn Amowitz et al., "Prevalence of War-Related Sexual Violence and Other Human Rights Abuses among Internally Displaced Persons in Sierra Leone," JAMA 287 (2002), 513–521, and Pam Bell, "The Ethics of Conducting Psychiatric Research in War-Torn Contexts," in Marie Smyth and Gillian Robinson, eds., Researching Violently Divided Societies (Tokyo: United Nations University Press, 2001). But neither Amowitz nor Bell cites any evidence to suggest that interview research traumatizes narrators. (If anything, Bell's piece indicates that narrators know how to protect themselves, for example, by choosing to be interviewed as a group rather than one-on-one.)
In contrast, the existing empirical evidence suggests that, if anything, conversation is therapeutic. In her essay, "Negotiating Institutional Review Boards," Linda Shopes cites three articles to make this point:
- Kari Dyregrov, Atle Dyregrov, and Magne Raundalen, "Refugee Families' Experience of Research Participation," Journal of Traumatic Stress 12:3 (2000), 413–26.
- Elana Newman, Edward A. Walker, and Anne Gelfand, "Assessing the Ethical Costs and Benefits of Trauma-Focused Research," General Hospital Psychiatry 21 (1999), 187–196.
- Edward A. Walker, Elana Newman, Mary Koss, and David Bernstein, "Does the Study of Victimization Revictimize the Victims?" General Hospital Psychiatry 19 (1997), 403–10.
To these I would add Elisabeth Jean Wood, "The Ethical Challenges of Field Research in Conflict Zones," Qualitative Sociology 29 (2006): 373-386. Wood writes:
While the discussion of this consent protocol initially caused some interviewees some confusion, once the idea had been conveyed that they could exercise control over the content of the interview and my use of it, participants demonstrated a clear understanding of its terms. In particular, many residents of my case study areas took skillful advantage of the different levels of confidentiality offered in the oral consent procedure. This probably reflected the fact that during the war residents of contested areas of the Salvadoran countryside daily weighed the potential consequences of everyday activities (whether or not to go to the field, to gather firewood, to attempt to go to the nearest market) and what to tell to whom. Moreover, I had an abiding impression that many of them deeply appreciated what they interpreted as a practice that recognized and respected their experience and expertise. Although for many telling their histories involved telling of violence suffered and grief endured, I did not observe significant re-traumatization as a result, as have researchers in some conflict settings (Bell, 2001). I believe the terms of the consent protocol may have helped prevent re-traumatization as it passed a degree of control and responsibility over interview content to the interviewee.
(It's worth repeating that Bell's article presents no observations of re-traumatization.)
Though I have not interviewed trauma survivors myself—at least, not about their trauma—I have no doubt that it is a tricky business. If anyone can show me that interviews can aggravate real trauma, I welcome correction. I would also welcome more scholarship on how interviewers can maximize the catharsis described by Wood.
Unfortunately, the arbitrary power enjoyed by IRBs relieves them of the responsibility or incentive to seek out such real solutions to real problems. Atkins and his colleagues can dream up phantom menaces and require burdensome, useless conditions based only on guesswork. Only the removal of their power is likely to force them to support their arguments with evidence.
Note: I thank Amelia Hoover for pointing me to the Wood and Amowitz articles.
Thursday, April 19, 2007
At Yale, Human Subjects Committee chair Susan Bouregy believes everyone is happy: "Humanities research at Yale is reviewed by an IRB which reviews exclusively social science, behavioral, educational and humanities research so there is a degree of familiarity with the techniques and practices of these disciplines as well as where the regulations allow flexibility to meet the needs of these types of projects."
But the reporter also talked to three Yale historians, all of whom thought IRB review was inappropriate. For a dissenting voice, she had to go to Taylor Atkins, perhaps the only historian to go on record in favor of IRB review.
Tuesday, April 17, 2007
Whether the National Commission adequately considered non-biomedical research in its deliberations is a matter of historical interest, but not directly relevant to understanding the regulations. The regulations were not written by the National Commission, but by individuals within the then Department of Health and Human Services. The regulations, as they are written, do not relate "generalizable knowledge" to disease. When those regulations were written, in the late 1970s, they were always intended to cover non-biomedical research. I was at the PRIM&R meeting in the fall of 1979 when officials from the Office for the Protection from Research Risks (OPRR, the predecessor to OHRP) discussed how the "new" regulations would apply to the social and behavioral sciences. At that meeting they discussed how they were building into the regulations adequate flexibility for IRBs to effectively review social and behavioral research. The subsequent regulations had that flexibility built in and it works well. The interpretation of "generalizable knowledge" that I described in my comment works to help us differentiate between research that needs IRB review and that which does not.
Let me respond step by step:
1. Cohen writes: “Whether the National Commission adequately considered non-biomedical research in its deliberations is a matter of historical interest, but not directly relevant to understanding the regulations. The regulations were not written by the National Commission, but by individuals within the then Department of Health and Human Services.”
Schrag responds: When the regulations were revised in 1979, the Federal Register reported: “The Department of Health, Education, and Welfare (HEW or Department) is proposing regulations amending HEW policy for the protection of human research subjects and responding to the recommendations of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (Commission) concerning institutional review boards (IRBs or Boards). These proposed rules adopt, for the most part, the recommendations of the Commission . . .” Since the stated goal of the revision was to follow the National Commission, I would expect serious interpretation to include attention to that Commission. I also note that the term we are debating, “generalizable,” was introduced into the regulations as a result of its appearance in the Commission’s Belmont Report.
2. Cohen: “The regulations, as they are written, do not relate 'generalizable knowledge' to disease.”
Schrag: I repeat, Section 46.406 twice refers to "generalizable knowledge about the subjects' disorder or condition."
3. Cohen: “When those regulations were written, in the late 1970s, they were always intended to cover non-biomedical research. I was at the PRIM&R meeting in the fall of 1979 when officials from the Office for the Protection from Research Risks (OPRR, the predecessor to OHRP) discussed how the ‘new’ regulations would apply to the social and behavioral sciences. At that meeting they discussed how they were building into the regulations adequate flexibility for IRBs to effectively review social and behavioral research.”
Schrag: At the same PRIM&R meeting, psychologist E. L. Pattullo lamented, “what began fifteen years ago as an afterthought about legitimate concern for the protection of biomedical subjects has become, at present, a classic example of counterproductive over-regulation.” Political scientist Ithiel de Sola Pool called the regulations “grossly improper and unconstitutional.” (PRIM&R Through the Years, pp. 37 and 42) In other words, for nearly three decades, federal officials have been trying to impose medical ethics on the social sciences, and social scientists have resisted. That some (not all) of the regulations’ authors intended this imposition does not make it proper.
4. Cohen: “The subsequent regulations had that flexibility built in and it works well. The interpretation of ‘generalizable knowledge’ that I described in my comment works to help us differentiate between research that needs IRB review and that which does not.”
Schrag: These are empirical claims that demand evidence. One of the purposes of this blog is to document cases where the current regulations do not work well, and I invite readers to read past posts and follow the links and references to choose their own examples. February's New York Times story and November's Chronicle of Higher Education story make a fine introduction.
On his blog, Dr. Cohen offered his own interpretation of the term “generalizable,” and here he claims that it “works to help us differentiate between research that needs IRB review and that which does not.” Who is "us"? What institutions have taken Cohen’s advice, and do the historians there feel they have been treated fairly?
Friday, April 13, 2007
“even though PRIM&R's name refers to medicine, that is a carryover from its formation over 30 years ago. Since then PRIM&R has grown into an organization that includes all aspects of human subjects research, including the social sciences.”
PRIM&R indeed seeks to control many kinds of research, but it fails to include all types of researchers. Its board of directors includes 22 active members. Nineteen of them (86 percent) are by training or affiliation clearly in the biomedical camp. Of the remaining three, Charles McCarthy is a former senior official at the National Institutes of Health. That leaves two university IRB officials—Keane and Selwitz—as the sole directors whose affiliation is not primarily biomedical. No social or economic researchers sit on the board. That's what I call domination.
Even when PRIM&R ponders what it calls “social, behavioral, and economic research,” it ignores social and economic researchers. The faculty list for the upcoming “SBER” conference lists twenty people. Some of them are psychologists, known in IRB terms as behavioral scientists. But how many are researchers in economics, the social sciences, or the humanities? Well, if we count law, there’s one.
So let me amend my comment: PRIM&R is a body dominated by professionals involved in biomedical research who like to impose medical ethics on other fields.
Thursday, April 5, 2007
Marshall brings a somewhat critical perspective, having complained about her own treatment by an IRB. In the early 1990s, she wanted to interview patients in a waiting room, and—in a classic example of IRB formalism—her IRB insisted that because she was doing research in a medical setting, she had to warn her interview subjects that “emergency medical treatment for physical injuries resulting from participation would be provided.” (Patricia A. Marshall, “Research Ethics in Applied Anthropology,” IRB: Ethics and Human Research 14 [Nov.–Dec. 1992]: 1–5).
Perhaps as a result of this experience, she has maintained some skepticism about IRB review of anthropology, as expressed in her essay, “Human Subjects Protections, Institutional Review Boards, and Cultural Anthropological Research,” Anthropological Quarterly 76 (Spring 2003): 269-285. That essay shows Marshall’s familiarity with much of the critical literature on IRBs, and she repeats some of that criticism herself:
- “IRBs may be overly zealous in their interpretation and application of federal guidelines, exacerbating the challenges faced by anthropologists and other professionals in seeking approval for studies.” (270)
- “Although committees must include representatives from diverse scientific fields and the community, IRBs have a strong orientation to biomedical and experimental research. In fact, a significant flaw in the development of the federal guidelines for ethical research is that social scientists were not included in the process. The result is a conflation of two related problems for anthropologists: first, the Common Rule emphasizes concerns for biomedical researchers; and second, most IRBs do not have members with expertise in anthropological methods.” (272)
- “Misapplications of the Common Rule and inappropriate requests for revisions from IRBs can have a paralyzing effect on anthropological research. Moreover, it reinforces a cynical view of institutional requirements for protection of human subjects, and it uses scarce resources that would be better spent on studies involving greater risks for participants.” (273)
Given her understanding of these problems, one might expect her to advocate, or at least consider, the exclusion of anthropological research from IRB review. Instead, she concludes, “regulatory oversight by IRBs is a fact of life for scientific researchers. Anthropologists are not and should not be exempt.” (280)
This conclusion is so contrary to the rest of the essay that I can only guess at how it got in there. Perhaps it represents a resigned surrender after years of failed efforts to exclude some research from review. Perhaps it is a failure of imagination. Perhaps Marshall believes that only by embracing IRB review will anthropologists be taken seriously by the biomedical researchers she works with.
Or perhaps the key issue is that Marshall fits the pattern I mentioned earlier of some anthropologists’ embrace of the Belmont Report principles. In “Research Ethics in Applied Anthropology,” Marshall cites not the Code of Ethics of the American Anthropological Association, but the comparable Ethical Guidelines of the National Association for the Practice of Anthropology, which state that “Our primary responsibility is to respect and consider the welfare and human rights of all categories of people affected by decisions, programs or research in which we take part.”
I have no complaint with applying those guidelines to their intended subject: “a professionally trained anthropologist who is employed or retained to apply his or her specialized knowledge to problem solving related to human welfare and human activities.” But they are inappropriate restrictions for scholars whose primary role is academic inquiry, not problem solving.
Thus, like Stuart Plattner, Marshall uncritically assumes that one field’s ethics can be imposed on another. She writes, “ethical principles governing applied anthropological research are not unique to this discipline. Respect for persons, beneficence, and justice are fundamental concerns for any scientist.” (“Research Ethics in Applied Anthropology,” 4) While that sounds lovely, the latter two terms, as defined by the Belmont Report, are foreign to the ethical codes of most academic research. Until she recognizes the distinction between problem-solvers whose primary goal is to do no harm and researchers whose primary goal is to seek the truth, she will be a poor advocate for most scholars in the social sciences and humanities.
Yet in previous work, Marshall herself has argued against the idea that humans share a single set of ethics, recognizing instead that “ethics and values cannot be separated from social, cultural, and historical determinants that regulate both the definition and resolution of moral quandaries.” (“Anthropology and Bioethics,” Medical Anthropology Quarterly, New Series, 6 [Mar., 1992]: 62) If she brings that insight to the committee, perhaps she will recognize the basic wrongness of forcing Belmont’s biomedical ethics on non-biomedical fields.