Friday, April 27, 2007
Alternative IRB Models Conference Ignores Behavioral and Social Research
This is probably not much of a loss, since the various alternatives studied at the conference appear irrelevant to social and behavioral research. But the comment does continue a long tradition of promises that the methods, ethics, and needs of social scientists will get the attention they deserve . . . someday.
Tuesday, April 24, 2007
The Canard of Interview Trauma
As unimpeachable as the OHA's own Professional Guidelines may be, I think it is arrogant to assume that oral historians have nothing to learn from other disciplines with regard to the ethical treatment of human subjects. If nothing else, they can become more sensitized to the possibilities for psychological or social harm that may result from oral history interviewing. Whenever our IRB reviews a protocol from the psychology department that involves questions about childhood abuse or some other trauma, we make sure that the investigator is either qualified to directly provide appropriate counseling or intervention, or provides a list of appropriate support services. How many oral historians have the expertise or qualifications to handle a situation in which an informant with PTSD experiences distress during an interview? How many would have a list of counseling services at hand in case it was necessary? How many even imagine such a scenario when they venture out with their tape recorders?
I would like to suggest that historians don't imagine such a scenario because it doesn't happen.
When I asked Atkins what made him think interviews could traumatize narrators, he replied,
when I was at the 2004 OHA meeting, I attended a panel on the Veterans' Oral History Project, at which the presenters very casually remarked that several veterans, being interviewed by small groups of fourth-graders, broke down into tears when talking about their battlefield experiences. My first thought was, "so how did a bunch of fourth-graders respond to that?" Breaking down crying is not always indicative of PTSD, but you surely understand that the possibility is there.
As Atkins concedes, crying is not trauma requiring "counseling or intervention" by a licensed therapist. Basic decencies—a pause in the recording and some words of sympathy—are enough. And while the possibility of real trauma exists, so does the possibility that a narrator will fall down the stairs trying to answer the interviewer's knock at the door. The question is whether the risk is great enough to justify the hassle of IRB review, and Atkins presents no evidence that it is. Historians have recorded oral history interviews for half a century, and he cannot point to one that has traumatized the narrator.
Having imagined a harm, Atkins also imagines a remedy: "a list of appropriate support services" to be tucked into the interviewer's bag, next to spare batteries for the recorder. Unsurprisingly, he has no evidence that such a list has ever helped anyone.
For researchers in parts of the world where such support services are common, carrying such a list isn't much of a burden. But the paperwork and training it takes to get to the point where the IRB will approve one's project is a real burden. And the requirement of a list could disrupt research in parts of the world where those services don't exist, or even for a researcher who travels around the United States to collect stories, and would have to carry lists for each area she visits.
Atkins is not alone in making such claims. Comparable fears appear in Lynn Amowitz et al., "Prevalence of War-Related Sexual Violence and Other Human Rights Abuses among Internally Displaced Persons in Sierra Leone," JAMA 287 (2002): 513–21, and Pam Bell, "The Ethics of Conducting Psychiatric Research in War-Torn Contexts," in Marie Smyth and Gillian Robinson, eds., Researching Violently Divided Societies (Tokyo: United Nations University Press, 2001). But neither Amowitz nor Bell cites any evidence to suggest that interview research traumatizes narrators. (If anything, Bell's piece indicates that narrators know how to protect themselves, for example, by choosing to be interviewed as a group rather than one-on-one.)
In contrast, the existing empirical evidence suggests that, if anything, conversation is therapeutic. In her essay, "Negotiating Institutional Review Boards," Linda Shopes cites three articles to make this point:
- Kari Dyregrov, Atle Dyregrov, and Magne Raundalen, "Refugee Families' Experience of Research Participation," Journal of Traumatic Stress 13:3 (2000): 413–26.
- Elana Newman, Edward A. Walker, and Anne Gefland, "Assessing the Ethical Costs and Benefits of Trauma-Focused Research," General Hospital Psychiatry 21 (1999): 187–96.
- Edward A. Walker, Elana Newman, Mary Koss, and David Bernstein, "Does the Study of Victimization Revictimize the Victims?" General Hospital Psychiatry 19 (1997): 403–10.
To these I would add Elisabeth Jean Wood, "The Ethical Challenges of Field Research in Conflict Zones," Qualitative Sociology 29 (2006): 373–86. Wood writes:
While the discussion of this consent protocol initially caused some interviewees some confusion, once the idea had been conveyed that they could exercise control over the content of the interview and my use of it, participants demonstrated a clear understanding of its terms. In particular, many residents of my case study areas took skillful advantage of the different levels of confidentiality offered in the oral consent procedure. This probably reflected the fact that during the war residents of contested areas of the Salvadoran countryside daily weighed the potential consequences of everyday activities (whether or not to go to the field, to gather firewood, to attempt to go to the nearest market) and what to tell to whom. Moreover, I had an abiding impression that many of them deeply appreciated what they interpreted as a practice that recognized and respected their experience and expertise. Although for many telling their histories involved telling of violence suffered and grief endured, I did not observe significant re-traumatization as a result, as have researchers in some conflict settings (Bell, 2001). I believe the terms of the consent protocol may have helped prevent re-traumatization as it passed a degree of control and responsibility over interview content to the interviewee.
(It's worth repeating that Bell's article presents no observations of re-traumatization.)
Though I have not interviewed trauma survivors myself—at least, not about their trauma—I have no doubt that it is a tricky business. If anyone can show me that interviews can aggravate real trauma, I welcome correction. I would also welcome more scholarship on how interviewers can maximize the catharsis described by Wood.
Unfortunately, the arbitrary power enjoyed by IRBs relieves them of the responsibility or incentive to seek out such real solutions to real problems. Atkins and his colleagues can dream up phantom menaces and require burdensome, useless conditions based only on guesswork. Only the removal of their power is likely to force them to support their arguments with evidence.
Note: I thank Amelia Hoover for pointing me to the Wood and Amowitz articles.
Thursday, April 19, 2007
Yale historians vs. Yale IRBs
At Yale, Human Subjects Committee chair Susan Bouregy believes everyone is happy: "Humanities research at Yale is reviewed by an IRB which reviews exclusively social science, behavioral, educational and humanities research so there is a degree of familiarity with the techniques and practices of these disciplines as well as where the regulations allow flexibility to meet the needs of these types of projects."
But the reporter also talked to three Yale historians, all of whom thought IRB review was inappropriate. For a dissenting voice, she had to go to Taylor Atkins, perhaps the only historian to go on record in favor of IRB review.
Tuesday, April 17, 2007
Jeffrey Cohen on Generalizable Knowledge
Cohen writes:
Whether the National Commission adequately considered non-biomedical research in its deliberations is a matter of historical interest, but not directly relevant to understanding the regulations. The regulations were not written by the National Commission, but by individuals within the then Department of Health and Human Services. The regulations, as they are written, do not relate "generalizable knowledge" to disease. When those regulations were written, in the late 1970s, they were always intended to cover non-biomedical research. I was at the PRIM&R meeting in the fall of 1979 when officials from the Office for the Protection from Research Risks (OPRR, the predecessor to OHRP) discussed how the "new" regulations would apply to the social and behavioral sciences. At that meeting they discussed how they were building into the regulations adequate flexibility for IRBs to effectively review social and behavioral research. The subsequent regulations had that flexibility built in and it works well. The interpretation of "generalizable knowledge" that I described in my comment works to help us differentiate between research that needs IRB review and that which does not.
Let me respond step by step:
1. Cohen writes: “Whether the National Commission adequately considered non-biomedical research in its deliberations is a matter of historical interest, but not directly relevant to understanding the regulations. The regulations were not written by the National Commission, but by individuals within the then Department of Health and Human Services.”
Schrag responds: When the regulations were revised in 1979, the Federal Register reported: “The Department of Health, Education, and Welfare (HEW or Department) is proposing regulations amending HEW policy for the protection of human research subjects and responding to the recommendations of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (Commission) concerning institutional review boards (IRBs or Boards). These proposed rules adopt, for the most part, the recommendations of the Commission . . .” Since the stated goal of the revision was to follow the National Commission, I would expect serious interpretation to include attention to that Commission. I also note that the term we are debating, “generalizable,” was introduced into the regulations as a result of its appearance in the Commission’s Belmont Report.
2. Cohen: “The regulations, as they are written, do not relate 'generalizable knowledge' to disease.”
Schrag: I repeat, Section 46.406 twice refers to "generalizable knowledge about the subjects' disorder or condition."
3. Cohen: “When those regulations were written, in the late 1970s, they were always intended to cover non-biomedical research. I was at the PRIM&R meeting in the fall of 1979 when officials from the Office for the Protection from Research Risks (OPRR, the predecessor to OHRP) discussed how the ‘new’ regulations would apply to the social and behavioral sciences. At that meeting they discussed how they were building into the regulations adequate flexibility for IRBs to effectively review social and behavioral research.”
Schrag: At the same PRIM&R meeting, psychologist E. L. Pattullo lamented, “what began fifteen years ago as an afterthought about legitimate concern for the protection of biomedical subjects has become, at present, a classic example of counterproductive over-regulation.” Political scientist Ithiel de Sola Pool called the regulations “grossly improper and unconstitutional.” (PRIM&R Through the Years, pp. 37 and 42) In other words, for nearly three decades, federal officials have been trying to impose medical ethics on the social sciences, and social scientists have resisted. That some (not all) of the regulations’ authors intended this imposition does not make it proper.
4. Cohen: “The subsequent regulations had that flexibility built in and it works well. The interpretation of ‘generalizable knowledge’ that I described in my comment works to help us differentiate between research that needs IRB review and that which does not.”
Schrag: These are empirical claims that demand evidence. One of the purposes of this blog is to document cases where the current regulations do not work well, and I invite readers to read past posts and follow the links and references to choose their own examples. February's New York Times story and November's Chronicle of Higher Education story make a fine introduction.
On his blog, Dr. Cohen offered his own interpretation of the term “generalizable,” and here he claims that it “works to help us differentiate between research that needs IRB review and that which does not.” Who is "us"? What institutions have taken Cohen’s advice, and do the historians there feel they have been treated fairly?
Friday, April 13, 2007
What is PRIM&R?
Cohen writes,
“even though PRIM&R's name refers to medicine, that is a carryover from its formation over 30 years ago. Since then PRIM&R has grown into an organization that includes all aspects of human subjects research, including the social sciences.”
PRIM&R indeed seeks to control many kinds of research, but it fails to include all types of researchers. Its board of directors includes 22 active members. Nineteen of them (86 percent) are by training or affiliation clearly in the biomedical camp. Of the remaining three, Charles McCarthy is a former senior official at the National Institutes of Health. That leaves two university IRB officials—Keane and Selwitz—as the sole directors whose affiliation is not primarily biomedical. No social or economic researchers sit on the board. That's what I call domination.
Even when PRIM&R ponders what it calls “social, behavioral, and economic research,” it ignores social and economic researchers. The faculty list for the upcoming “SBER” conference lists twenty people. Some of them are psychologists, known in IRB terms as behavioral scientists. But how many are researchers in economics, the social sciences, or the humanities? Well, if we count law, there’s one.
So let me amend my comment: PRIM&R is a body dominated by professionals involved in biomedical research who like to impose medical ethics on other fields.
Thursday, April 5, 2007
Anthropologist Patricia Marshall Appointed to SACHRP
Marshall brings a somewhat critical perspective, having complained about her own treatment by an IRB. In the early 1990s, she wanted to interview patients in a waiting room, and—in a classic example of IRB formalism—her IRB insisted that because she was doing research in a medical setting, she had to warn her interview subjects that “emergency medical treatment for physical injuries resulting from participation would be provided.” (Patricia A. Marshall, “Research Ethics in Applied Anthropology,” IRB: Ethics and Human Research 14 [Nov.–Dec. 1992]: 1–5).
Perhaps as a result of this experience, she has maintained some skepticism about IRB review of anthropology, as expressed in her essay, “Human Subjects Protections, Institutional Review Boards, and Cultural Anthropological Research,” Anthropological Quarterly 76 (Spring 2003): 269-285. That essay shows Marshall’s familiarity with much of the critical literature on IRBs, and she repeats some of that criticism herself:
- “IRBs may be overly zealous in their interpretation and application of federal guidelines, exacerbating the challenges faced by anthropologists and other professionals in seeking approval for studies.” (270)
- “Although committees must include representatives from diverse scientific fields and the community, IRBs have a strong orientation to biomedical and experimental research. In fact, a significant flaw in the development of the federal guidelines for ethical research is that social scientists were not included in the process. The result is a conflation of two related problems for anthropologists: first, the Common Rule emphasizes concerns for biomedical researchers; and second, most IRBs do not have members with expertise in anthropological methods.” (272)
- “Misapplications of the Common Rule and inappropriate requests for revisions from IRBs can have a paralyzing effect on anthropological research. Moreover, it reinforces a cynical view of institutional requirements for protection of human subjects, and it uses scarce resources that would be better spent on studies involving greater risks for participants.” (273)
Given her understanding of these problems, one might expect her to advocate, or at least consider, the exclusion of anthropological research from IRB review. Instead, she concludes, “regulatory oversight by IRBs is a fact of life for scientific researchers. Anthropologists are not and should not be exempt.” (280)
Huh?
This conclusion is so contrary to the rest of the essay that I can only guess at how it got in there. Perhaps it represents a resigned surrender after years of failed efforts to exempt anthropological research from review. Perhaps it is a failure of imagination. Perhaps Marshall believes that only by embracing IRB review will anthropologists be taken seriously by the biomedical researchers she works with.
Or perhaps the key issue is that Marshall fits the pattern I mentioned earlier of some anthropologists’ embrace of the Belmont Report principles. In “Research Ethics in Applied Anthropology,” Marshall cites not the Code of Ethics of the American Anthropological Association, but the comparable Ethical Guidelines of the National Association for the Practice of Anthropology, which state that “Our primary responsibility is to respect and consider the welfare and human rights of all categories of people affected by decisions, programs or research in which we take part.”
I have no complaint with applying those guidelines to their intended subject: “a professionally trained anthropologist who is employed or retained to apply his or her specialized knowledge to problem solving related to human welfare and human activities.” But they are inappropriate restrictions for scholars whose primary role is academic inquiry, not problem solving.
Thus, like Stuart Plattner, Marshall uncritically assumes that one field’s ethics can be imposed on another. She writes, “ethical principles governing applied anthropological research are not unique to this discipline. Respect for persons, beneficence, and justice are fundamental concerns for any scientist.” (“Research Ethics in Applied Anthropology,” 4) While that sounds lovely, the latter two terms, as defined by the Belmont Report, are foreign to the ethical codes of most academic research. Until she recognizes the distinction between problem-solvers whose primary goal is to do no harm and researchers whose primary goal is to seek the truth, she will be a poor advocate for most scholars in the social sciences and humanities.
Yet in previous work, Marshall herself has argued against the idea that humans share a single set of ethics, recognizing instead that “ethics and values cannot be separated from social, cultural, and historical determinants that regulate both the definition and resolution of moral quandaries.” (“Anthropology and Bioethics,” Medical Anthropology Quarterly, New Series, 6 [Mar., 1992]: 62) If she brings that insight to the committee, perhaps she will recognize the basic wrongness of forcing Belmont’s biomedical ethics on non-biomedical fields.
Monday, March 19, 2007
PRIM&R Plans SBER Conference
Public Responsibility in Medicine and Research (PRIM&R), the professional organization for human subjects enforcers, has scheduled the "2007 Social, Behavioral, Educational Research (SBER) Conference: Sharing Tools and Joining Forces: Ethical and Regulatory Balance in SBER." The conference will be held in Broomfield, Colorado, on May 9 and 10.
The conference is notable because its planning committee includes two scholars who have written quite critically of IRB review of non-biomedical research: C. Kristina Gunsalus and Joan E. Sieber. (An earlier announcement also listed Felice J. Levine, but her name does not appear on the website.)
The conference program includes two sessions that promise to wrestle with the murky questions of definitions and exemptions:
A4. Developing Guidance on the Definition of Human Subjects Research (IRB Tool Kit I Track) [Please note that this is a double session and will end at 1:15 PM. This session has been designed with the dual purpose of discussing strategies and contributing to a written document that will provide guidance, definitions, and examples. This document will be electronically distributed to all Conference attendees following the meeting.]
A5. Developing Guidance on Applying the Exemptions
(IRB Tool Kit II Track) [Please note that this is a double session and will end at 1:15 PM. This session has been designed with the dual purpose of discussing strategies and contributing to a written document that will provide guidance, definitions, and examples. This document will be electronically distributed to all Conference attendees following the meeting.]
Potentially these documents could provide IRBs the guidance and cover they need to exempt survey, interview, and observation research with autonomous adults. While I am sorry I will not be able to attend the conference, I am very interested to see what it produces.
Sunday, March 18, 2007
My Problem with Anthropologists
(In honor of the discussion going on at Savage Minds, I present some thoughts on the role of anthropologists in the IRB debates.)
In terms of methods, anthropology and oral history seem to have a lot in common. Researchers in both disciplines enjoy learning about other people’s lives by talking to those people, often with a recording device. But the two fields have different ethical approaches, and I sometimes fear this makes anthropologists unreliable allies in the struggle for the freedom of research.
The problem starts with the code of ethics of the American Anthropological Association, which states that
anthropological researchers have primary ethical obligations to the people, species, and materials they study and to the people with whom they work. These obligations can supersede the goal of seeking new knowledge, and can lead to decisions not to undertake or to discontinue a research project when the primary obligation conflicts with other responsibilities, such as those owed to sponsors or clients. These ethical obligations include to avoid harm or wrong, understanding that the development of knowledge can lead to change which may be positive or negative for the people or animals worked with or studied.
Maybe I am missing something, but I don't see anything in the American Sociological Association’s code of ethics, the Principles and Standards of the Oral History Association or the American Historical Association’s Statement on Standards of Professional Conduct that obliges members of those organizations to avoid harm or to abandon the pursuit of knowledge lest someone be hurt. In seeking a balance between truth and inoffensiveness, the anthropologists have gone much further toward physicians’ Hippocratic standard of doing no harm than have their fellow social scientists. This decision may explain some troubling behavior:
1. Knee-Jerk Anonymity
My favorite anthropologist is Kathryn Marie Dudley, author of Debt and Dispossession and The End of the Line. Both books show how Americans struggle to reconcile their faith in the free market with their often conflicting belief that hard work should be rewarded regardless of market demand. That tension is central to many of today’s cultural and political debates, and Dudley did a magnificent job getting Midwestern farmers, teachers, and automobile workers to talk about their beliefs.
My complaint is that having done so, she fabricated names for her narrators, and barely felt the need to explain that decision. (Each book has a two-sentence note declaring that she wished to protect the privacy or confidentiality of the narrators, but not explaining why she thought this necessary.)
This has terrible consequences. First, it prevents other researchers from learning more about the lives of the people she studies, the way Dudley’s advisor, Katherine Newman, did by following up on the lives of her informants from a previous book. (Or at least it is supposed to. When I assigned Debt and Dispossession, one of my undergraduates did a quick search of newspaper databases and identified Dudley’s pseudonymous “Star Prairie” and some of its inhabitants.) And second, it suggests that the people who are the subjects of her books are not real, important people the way that other figures in the books—Lee Iacocca and Jesse Jackson—are. Unlike Iacocca and Jackson, their backgrounds need not be explored, and their words need not have consequences.
Most significantly for this discussion, the assumption that anonymity should be the norm contributes to the idea that interviewing is a dirty, dangerous activity. Of course some narrators wish their names to be disguised, and under some circumstances that is appropriate. But to see what happens when anonymity is the exception, not the rule, compare Dudley’s books to historian Leon Fink’s Maya of Morganton, another wonderful study of work in contemporary America. Fink offered anonymity to all his subjects, but the only ones who chose it were powerful executives and lawyers, not ordinary workers. In his book, interviews are opportunities to be heard, not sources of shame.
2. Disciplinary Imperialism
A voice of moderation in the IRB debates is that of Kristine L. Fitch, Professor of Communication Studies at the University of Iowa. (Since she defines herself as an ethnographer, I am including her in this rant about anthropologists. Perhaps that is unfair, but keep reading before you decide.) Fitch has been through IRB fights on both sides. In the 1990s, she writes, “I saw firsthand the aspects of human subjects review that so frustrate social science researchers, particularly those in the qualitative/ethnographic domain: applications full of questions aimed at biomedical research, requirements to obtain written consent despite cultural barriers to doing so, board members who said, in so many words, that their goal was to put a stop to as much research as they could.” (“Ethical and Regulatory Issues,” noted below.)
Fitch joined the University of Iowa’s social and behavioral IRB as chair and developed training materials that focused on the challenges faced by social scientists, in contrast to the medically-oriented materials mandated by most IRBs. At Iowa, researchers have their choice between “a two-hour workshop [focused on social and behavioral research], or completion of the National Institutes of Health (NIH) web-based certification course.” (See Fitch, “Difficult Interactions between IRBs and Investigators: Applications and Solutions,” Journal of Applied Communication Research 33 [August 2005]: 269–276. Apparently social-science ethics require live teachers, while medical ethics can be done in multiple-choice.) To help researchers and IRBs beyond Iowa, Fitch helped develop a CD and online training course called “Ethical and Regulatory Issues in Ethnographic Human Subjects Research.”
It is encouraging to see some training materials built from the ground up for social scientists. Rather than clubbing researchers over the head with more stories of Tuskegee, the CD focuses specifically on challenges in social-science research, such as how to protect privacy when studying sensitive topics, like eating disorders and illegal drug use.
But the CD goes wrong when it lumps in other disciplines with anthropology:
An issue that frequently creates tension between ethnographic researchers and IRBs has to do with translation of the ethical principles outlined in the Belmont Report into interpretations of federal regulations governing human subjects research. Some disciplines, such as the American Anthropological Association, the Oral History Association, and others have established systems of ethical principles specific to the kinds of research most characteristic of their areas. Those ethical principles arise from particular disciplinary histories and have been crafted by respected members of those professions. As such, they are often defended as more relevant, appropriate, and in fact more stringent than the necessarily more distant philosophy of the Belmont Report. Part of the disputable territory between researchers and IRBs that becomes contentious, then, is the distinction between abstract ethical principles and the regulations that spell out particular definitions, distinctions and prohibitions. Although researchers and IRBs would probably agree on ethical principles, interaction between them is usually limited to the application of regulations to particular procedures, wording of consent documents, and so forth.
Did you catch the sleight of hand? Because anthropologists “and IRBs would probably agree on ethical principles,” Fitch assumes that “researchers and IRBs would probably agree on ethical principles.” I, for one, do not, and I see nothing in the guidelines of the Oral History Association that conforms to the Belmont Report’s demands for beneficence or what it calls justice. (See “Ethical Training for Oral Historians.”)
Beyond this misperception, I think Fitch is simply naïve about the operations of IRBs. “Ethical and Regulatory Issues,” for example, states that “IRB chairs and board members often have seen firsthand the negative consequences of . . . unanticipated problems.” Talk to researchers, talk to IRB members, read the postings on IRB Forum, and I think you’ll see that IRBs generally make decisions based on guesswork and what other IRBs are doing (in Fitch’s terms “long and thoughtful discussion among several reasonable people”), not firsthand or scholarly knowledge of the consequences of poor protocol design. If IRBs were required to support each decision with real-life examples of comparable projects gone bad, we would have many fewer restrictions on research, and essentially none on oral history.
And then there’s her claim in “Difficult Interactions” that “university administrators have a stake in human subjects oversight being carried out effectively and should be open to addressing problems within their IRB system. If they are not, the Office of Human Research Protection (OHRP) can be notified of hypervigilant regulation on the part of a local IRB. They can sanction IRBs for over-interpretation or misapplication of regulations when there is evidence that such is the case.” If OHRP has ever sanctioned an IRB for hypervigilance, I would love to hear about it.
3. Submission to the IRB Regime
What really concerns me are anthropologists in government, and here I am thinking of Stuart Plattner. Plattner served for thirteen years as the human subjects specialist for the National Science Foundation, and he worked to moderate some of the claims of IRBs. For example, the NSF’s website, "Frequently Asked Questions and Vignettes: Interpreting the Common Rule for the Protection of Human Subjects for Behavioral and Social Science Research," created under his watch, includes the clearest statement by a federal agency that the Common Rule does not apply to classroom projects.
But Plattner is too ready to apply anthropology’s delicate ethics to other fields. In his 2003 Anthropological Quarterly article, “Human Subjects Protection and Cultural Anthropology,” he complains of “biomedical hegemony,” that is, the imposition of biomedical ethics on other disciplines. Yet in the same article he promotes a sort of anthropological hegemony when he writes, “no one should ever be hurt just because they were involved in a research project, if at all possible.” That’s consistent with anthropologists’ ethical statements, but other disciplines are happy to bring malefactors to account.
Where Plattner really gets scary is in his more recent “Comment on IRB Regulation of Ethnographic Research” (American Ethnologist 33 [2006]: 525–528). There he writes,
The journalist has a mandate from society to document contemporary reality. It is expected that this may involve an exposure of wrongdoing. A reporter’s reason for getting information from a person is to establish what happened. Those who speak to reporters accept the potential for harm that publicity may bring. Social scientists have no such mandate; we document reality to explain it. Our audience is professional, and society gives us no protection in the First Amendment. Our reason for getting information from individuals is to help us explain general processes. A normal condition for an ethnographic encounter or interview is that the information will never be used to harm the respondent.
The line that “our audience is professional” is simply defeatist; my undergraduates enjoyed Dudley’s books, and I hope plenty of readers outside the academy have found them as well. The bit about seeking to “explain general processes” is at odds with historians’ study of contingency, and I hope that other social scientists would reject that notion as well. But for the issue at hand, the key statement is that “society gives us no protection in the First Amendment.”
As a matter of jurisprudence, this is simply false. First Amendment liberties are common to all Americans, and if anything scholars enjoy heightened protection. In the 1967 case Keyishian v. Board of Regents of the University of the State of New York (385 U.S. 589), the Supreme Court held that “our Nation is deeply committed to safeguarding academic freedom, which is of transcendent value to all of us, and not merely to the teachers concerned. That freedom is therefore a special concern of the First Amendment, which does not tolerate laws that cast a pall of orthodoxy over the classroom.” It is shocking that Plattner, who served so long as perhaps the most senior social scientist involved in shaping federal human subjects policy, has so little understanding of the law and so little concern for academic freedom.
Of course many anthropologists have written critically of IRB interference with their research. In the same journal that features Plattner’s dismissal of academic freedom, Richard Schweder eloquently argues that “a great university will do things that are upsetting,” citing Socrates, rather than Hippocrates, as the best Greek model for social science. (“Protecting Human Subjects and Preserving Academic Freedom: Prospects at the University of Chicago,” American Ethnologist 33 [2006]: 507–518). Yet who best represents the discipline: Schweder or Plattner? How far a leap is it from the American Anthropological Association’s subordination of the search for knowledge to Plattner’s suggestion that the First Amendment does not apply to scholarly research? Can anthropologists fight for academic freedom while holding that research shouldn’t hurt?
Tuesday, March 13, 2007
Why IRBs Are Not Peer Review: A Reply to E. Taylor Atkins
The most obvious difference between peer review and IRB review is that peer review means review by peers—fellow scholars in one’s field. This was also the system envisioned in 1966 when the Public Health Service first mandated “prior review by [a researcher’s] institutional associates” before that researcher could receive PHS funds, and to some degree it is the way IRB review works in biomedical research, where teams of physicians and other biomedical researchers review protocols by others in their field. But fundamental to the complaints of oral historians and others in the humanities and social sciences is the fact that IRBs are never (to my knowledge) composed mostly of scholars who conduct qualitative interviews. Indeed, most IRBs probably lack a single such researcher.
Even when IRBs do include such scholars, they operate in a web of laws, regulations, and guidelines written by biomedical researchers and bioethicists without input from historians. Consider the involvement of historians in shaping the present system:
- Number of historians invited to testify before Congress as it was shaping the National Research Act: zero
- Number of historians on the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, creator of the Belmont Report: zero
- Number of historians among the advisors and consultants to the Institutional Review Board Guidebook: zero
- Number of historians on the board of Public Responsibility in Medicine and Research (PRIM&R): zero
- Number of historians among the authors of the CITI Program: zero
- Number of historians on the Secretary's Advisory Committee on Human Research Protections: zero
Although historians have had no chance to shape the framework within which IRBs operate, by law and by their responsibility to their institutions, IRBs cannot ignore this framework. So it is not peer review at all that they impose, but values foreign to the historical profession.
One result of this peerless system is that IRBs are rarely able to offer the kind of expert advice we seek in good peer review. I have reviewed many manuscripts myself, but only when editors sought me out from a national or international pool of scholars. As a result, every manuscript I have read concerned some aspect of urban planning, infrastructure, or transportation—areas to which I have devoted years of research—or was designed as a textbook for use in the kinds of courses I regularly teach. Yet, as I note in my posting, “In Search of Expertise,” IRBs rely not on such a broad pool, but on whoever happens to be on campus and willing to serve on the local IRB.
Atkins also ignores the power researchers have to shape the peer review of their work. Certainly a university researcher who never submitted anything for peer review would be unlikely to win promotion, but Atkins forgets the choice researchers enjoy in deciding which peer-reviewed journal or press will receive their manuscripts, and which will serve as alternatives should the first choice reject the proposal. Comparable choosiness in ethical review is derided as “IRB shopping,” and university IRBs retain their monopolies (see Jeffrey Brainard, “Federal Agency Decides Not to Regulate ‘IRB Shopping,’” Chronicle of Higher Education, 18 January 2006). Moreover, scholars and editors are free to ignore the advice of peer reviewers in preparing a final publication; how fondly I recall my first editor’s telling me not to worry about the difficult theory one reviewer wanted me to incorporate! Most importantly, scholars are always free to publish in non-peer-reviewed forums, such as this blog. In contrast, IRBs insist on the power to decide when a project requires their review.
Atkins recently attended a PRIM&R conference and “was astounded by the persistent ignorance among IRB administrators and board members in attendance about the special needs of SBER and oral history.” He responded by blaming the victim: “If we refuse to teach IRBs, how can they learn what we need them to know?” What he refuses to acknowledge is that IRBs’ power frees them of any incentive to learn. A publisher whose peer reviewers offered consistently bad advice would soon lack for material. An IRB can mistreat researchers yet suffer no loss of power; its decisions are final.
While peer review is a persuasive effort by volunteers, IRB review is a coercive practice by agents of the state. Leviathans or not, IRBs do little but exercise authority from on high, empowered by federal regulations stating that “an IRB shall review and have authority to approve, require modifications in (to secure approval), or disapprove all research activities covered by [federal] policy,” and that no university official can overturn such decisions. Institutions that disobey this mandate risk losing millions of dollars in funds; individual researchers risk denial of degrees, promotion, or even the right to continue any research.
Take away this coercive power, and perhaps historians will learn to respect IRBs. Until then, expect them to resist.
