Blogger's note: In March the Harvard Crimson mentioned an unpublished essay by ethnographer Scott Atran of the University of Michigan, detailing his complaints about the IRB process. With Dr. Atran's kind permission, I present the complete essay here. ZMS
Monday, May 28, 2007
Scott Atran, "Research Police – How a University IRB Thwarts Understanding of Terrorism"


Thursday, May 24, 2007
Even the Best IRB Can Antagonize Researchers
Yet another researcher finds "the process is so cryptic and idiosyncratic" that "his students often can’t anticipate the reasons why the institutional review board will reject a proposal." And Stern herself, who got valuable help from the IRB, complains that "Before I came to Harvard, I had pretty remarkable interviews with terrorists . . . There are a lot of reasons that those kind of interviews would be hard today. One of them is the post-September 11 environment, but the other is the IRB strictures.” One project, to interview radical black Muslims, died entirely because of the delay in approval. (Note: this is just what Robert Kerr warned us about.)
How can we have the best of both worlds—helpful advice without arbitrary rejections and delays? Voluntary review.
Wednesday, May 23, 2007
Jeffrey Cohen: More Argument Without Evidence
One of the major propositions that the critics of IRB review of social research put forth is that minimal risk social research with competent adults should be completely exempt from IRB review. On the surface, this makes some sense. We're talking about research that is unlikely to harm anyone and where adults can decide for themselves whether to participate. Why do we need to review such research? Based on my experience personally reviewing thousands of research protocols in the social sciences, there is one basic problem with this - researchers are human beings. Human beings are not perfect - they overlook things, make mistakes, and can't be totally objective about their own work. If researchers were perfect, if they always took all of the ethical issues into account when planning and conducting their research, then we wouldn't need IRB review. But they are not perfect - none of us are perfect. So, every research activity needs an independent, objective review.
Characteristically, Cohen offers not a single example of a social research project whose ethical content was improved by his or any IRB review, much less one that could only be improved thanks to the broad definitions and coercive rules now used by IRBs. (And no, forcing interviewers to carry lists of mental-health centers doesn't count.) If he has such examples, he should offer them. If not, a vague reference to "thousands of research protocols" is unlikely to persuade a community of scholars trained to think critically and to weigh evidence.
Monday, May 21, 2007
Zywicki, "Institutional Review Boards as Academic Bureaucracies"
As stated in the abstract, the article "argues that the problem is that IRBs are fundamentally bureaucracies, and that this bureaucratic structure explains much of their frequent suboptimal decision-making. The poor performance of IRBs is thus not a consequence of those individuals who comprise it, but rather a reflection of their bureaucratic nature. The bureaucratic nature of IRBs appears to do nothing to improve the decisions that they make, while being the source of many of their problems."
Zywicki is right to note that people who gain money, power, and prestige from controlling other people's work have an incentive to define their duties broadly, even as they fret about researchers' conflicts of interest. Ironically, he seems only dimly aware of the extent to which administrators have seized power from researchers. He writes, "IRBs are fundamentally bureaucracies," not understanding that IRBs themselves are committees of researchers, and that the true bureaucracies are the university compliance offices. He notes, "there is reported to be a growing IRB conference circuit," suggesting his unfamiliarity with PRIM&R and its administrator-dominated conferences. And he makes no mention of the creation, in 1999, of the "Certified IRB Professional" as a new kind of administrator, someone who has staked a great deal on the maintenance, if not expansion, of IRBs' reach.
But buried in the essay is another explanation of the IRBs' expansion:
With respect to university bureaucracies such as IRBs, at least some of the growth in their internal administrative burden has been spurred by governmental regulations. In addition, the preoccupation of IRBs with paperwork and forms has been promoted by a regime of “fear” of governmental oversight, “[f]ear by the institution that it will be ‘out of compliance’ with one or more aspects of the paperwork, and so subject to penalty upon audit (be that by the NIH, the Office for Human Research Protection, the US Department of Agriculture, or whatever other organization is involved).”
How much of the growth of IRB staffs is due to the internal dynamic Zywicki stresses, compared to the growing external threat of federal penalty? One way to find out would be to compare the growth of compliance regimes to major OHRP enforcement actions. If, as Zywicki notes, the Northwestern University Office for the Protection of Research Subjects "grew from two full-time professionals in the late 1990s to 25 professionals and an administrative staff of 20 last year," I'd like to know how much of that growth took place as a response to the suspension of sponsored research at Hopkins, Virginia Commonwealth, and other universities.
Without far more research on the actual workings of IRBs and compliance offices around the country, we can't test hypotheses such as Zywicki's.
Saturday, May 19, 2007
A Trimmer PRIM&R
PRIM&R's adopting this approach would mean recognizing that current regulations were written with biomedical research in mind and have never suited social science. It should therefore use its prestige to advocate broader exemptions from federal regulation for non-biomedical research. Such exemptions might exclude from IRB review:
* Research involving survey or interview procedures, where the subjects are legally competent, and where the investigator identifies himself/herself, and states that he/she is conducting a research survey or interview
* Research involving the observation (including observation by participants) of public behavior in places where there is no recognized expectation of privacy.
and
* Research involving the observation (including observation by participants) of public behavior in places where there is a recognized expectation of privacy, except where all of the following conditions exist:
(i) Observations are recorded in such a manner that the human subjects can be identified, directly or through identifiers linked to the subjects,
(ii) the observations recorded about the individual, if they became known outside the research, could reasonably place the subject at risk of criminal or civil liability or be damaging to the subject's financial standing or employability, and
(iii) the research deals with sensitive aspects of the subject's own behavior such as illegal conduct, drug use, sexual behavior, or use of alcohol.
I fear that Dr. Cohen will reject such proposed exemptions, taking them as evidence that I have "no interest in making IRB review work better for researchers, but only in eliminating IRB review for [my] research." Before he does so, let me point out that these exemptions are not my creations, but those of a joint subcommittee of the American Association of University Professors' Committee A on Academic Freedom and Tenure and Committee R on Government Relations. They were published as a report on Regulations Governing Research on Human Subjects, Academe, December 1981, 358-370.
A member of the subcommittee, and a signatory to the recommendations, was Sanford Chodosh, MD, the founder of PRIM&R.
Wednesday, May 16, 2007
PRIM&R's Public Responsibilities
We first need to understand PRIM&R's special status as a chosen instrument of the federal government. Unlike scholarly associations, whose letters to OHRP get brushed off, PRIM&R has long functioned as an arm of OHRP and its predecessor, OPRR. For example,
- OHRP promotes and distributes PRIM&R's "Investigator 101" CD-ROM. Similarly, PRIM&R prepared the first edition of the Department of Health and Human Services' IRB Guidebook.
- Former OPRR head Charles McCarthy serves on the PRIM&R board; Cohen, a former OHRP official, co-organized the last conference; and current OHRP officials participate as PRIM&R conference faculty.
- OHRP exhibits at PRIM&R conferences, and OHRP officials, particularly Michael Carome, use PRIM&R conferences to offer guidance that then gets broadcast by IRB consultants, such as Dr. Cohen.
Given these connections, an IRB member or staffer could reasonably assume that following PRIM&R's guidance is a good way to avoid sanctions by OHRP. (Indeed, she would be foolish to assume otherwise.) Since it is these staffers who then return to their institutions and impose conditions on research there, PRIM&R is a key link in the chain between federal power and the daily work of researchers.
PRIM&R itself recognizes this function. For example, the recent conference provided attendees with documents offering guidance on the regulatory definition of human subjects research and the proper application of the exemptions. Since providing just such guidance is one of the responsibilities of OHRP, the conference organizers must feel pretty confident that they have OHRP's blessing to take over this important role. (Note that unlike real OHRP guidance, PRIM&R's documents are not made public.)
If this is so, what must PRIM&R do to use its power wisely and justly?
1. Identify its constituents
PRIM&R's website states that "since 1974, PRIM&R has served the full array of individuals and organizations involved in biomedical and social science/behavioral/educational research." But what constitutes that full array? Is "social science/behavioral/educational research" one category or three? If the latter, what disciplines come under which category? And are there varieties of research (such as folklore, nonfiction writing, and law) that fit none of those categories, yet that face IRB review?
I would like to see PRIM&R list all the disciplines that come under review by IRBs whose members it trains. Then it could try to make some distinctions, for example, between disciplines that offer therapy or advocacy and those that do not; disciplines that study the body and those that do not; disciplines that use formal scientific protocols and those that do not; and the like. This would lead to a second task:
2. Include a range of disciplinary perspectives
As I mentioned earlier, PRIM&R's board of directors is dominated by researchers and administrators from hospitals and medical schools. They cannot be expected to be expert in the ethics and methodologies of the full range of disciplines in the behavioral sciences, social sciences, humanities, and professions, nor could any group of scholars drawn from any one field. If PRIM&R wants to offer sound guidance to researchers in all fields subject to IRB review, it should include them on every level, from conference panels to editorial boards to the board of directors itself. The goal should be that each field has the chance to shape any guidance that affects that field, so PRIM&R does not again, for example, offer a conference panel on oral history with no historians present.
Since the vast bulk of IRB review does concern biomedical research, I would not expect equal representation for researchers in other fields. But I do think that folklorists should have as much power to shape PRIM&R's advice on folklore as physicians have to shape PRIM&R's advice on medical research. If this requires a complex committee structure, so be it.
3. Include a range of viewpoints
Prior to the recent conference, Dr. Cohen wrote me, "if you would like to participate in the conference, we'd be happy to work you in to the program (provided you have something constructive to say)." Now he writes, "I can only conclude that you have no interest in making IRB review work better for researchers, but only in eliminating IRB review for your research."
Does that mean that someone who believes in wider exemptions from review has nothing constructive to say, and therefore no place in PRIM&R? This would contradict Cohen's earlier pride in having invited Linda Shopes, who has advocated an oral-history exemption far longer and more effectively than I. And it would contradict the above-mentioned panels at the last conference, which seem to assume that the proper scope of exemptions has yet to be determined.
I suggest that PRIM&R invite participants based on their knowledge, experience, and ability to represent their discipline, not their adherence to a party line, and that it make public its criteria for choosing participants in all of its endeavors. Perhaps I am not the right person to represent my field at PRIM&R conferences, but Taylor Atkins sure as heck isn't either.
4. Ease participation in conferences
Now I get to Dr. Cohen's specific invitation for suggestions for future conferences. I have three:
a. Don't schedule the conference during exam week. Dr. Cohen informs me that many researchers declined his invitation to participate in the conference. My guess is that many declined because the second week of May is exam week around the nation--not a problem for federal officials, administrators, and consultants, but a big one for active teaching faculty. As it is, I am surprised he got three researchers from the University of Minnesota; it was exam week there too. October and November are also busy with conferences for many scholars. I suggest February or March might be more fruitful.
b. Announce the conference. I'm not sure how I found out about the 2007 SBER conference, but I know it was only a few weeks before the conference, and that I did not hear about it through any of my usual reading as a historian, such as H-ORALHIST, the list for oral historians. An open call for participants, months in advance, sent to scholarly newsletters and e-mail lists might boost participation.
c. Pay the costs of researchers. Maybe PRIM&R already pays the travel costs of conference faculty, but if not, it should. Most university professors are lucky to get funding to travel to one or two conferences a year, and they want to use this to attend conferences within their own disciplines. With PRIM&R charging hundreds of IRB staffers up to $950 to attend, it should be able to subsidize a few dozen researchers.
These last three suggestions might bring a few more researchers to the next PRIM&R conference, but I think the real problem is far deeper than participation at conferences. If, as Cohen suggests, PRIM&R truly seeks "to promote communication and collaboration between IRBs and investigators to facilitate research," it will have to work much harder to include the many investigators it has so long neglected.
Thursday, May 10, 2007
Boise State University's IRB Makes a Poor First Impression
The site presents the following information about the requirement of IRB review:
Federal, state and university regulations require all research (including surveys and questionnaires) involving human subjects or data collected, directly or through records (i.e. medical records, specimens, educational test results, or legal documents) to be reviewed by an Institutional Review Board (IRB) . . .
If you are a faculty or staff member, or student at Boise State University, and your research involves the use of human subjects (either directly or through records or other data such as specimens or autopsy materials), your research requires human subjects review.
"Research" is "a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalized knowledge." 45 CFR 46.102(d). Research includes surveys and interviews, behavioral investigations, retrospective reviews of medical information, experiments with blood and tissue, and demonstration and service programs and clinical trials. In addition, FDA includes under the definition of reviewable research, any use of a FDA regulated product except for use of a marketed product in the practice of medicine.
Note: Any administrative, departmental or course assignments involving surveys, questionnaires and interviews designed for internal use and operations of the University do not constitute "research" within the meaning of this policy if the information or conclusion of this data is not intended for scholarly publication or for dissemination to persons outside the administrative organization of the University.
That's it. No explanation of what federal and state regulations apply. No hint of the exceptions specified in 45 CFR 46.101, nor an easy way to learn more about the requirements.
The site does offer a link to the university's "Human Research Protection Policy - BSU 6325 B," but that link, as well as others on the site, is broken. (Yesterday I wrote to the e-mail address on the site, pointing out these broken links, but I have not received a reply.) There's also a link to an "IRB Guideline Summary," which in turn offers a link to a Word document called "TYPES OF IRB REVIEW AND APPROVAL," which finally lists the exceptions. Since neither of these link titles mentions exceptions, a visitor who knows enough to find this document probably knows about the exceptions already.
All told, my correspondent can be forgiven for fearing (I hope mistakenly) that Boise State "seems to require submission to IRB for analysis of any record of human behavior."
Boise State's Office of Research Administration shows how to antagonize researchers before even meeting them. I would like to remind all such offices that researchers are trained to read critically. Offer them complete and accurate information, and cite your sources.
Update: The links were fixed on May 15.
Monday, May 7, 2007
PRIM&R Finds Another Social Researcher
The conference announcement asks, "Why is it often so difficult for IRBs and investigators to work together effectively when reviewing sophisticated, socially sensitive social/behavioral/educational research?" One answer might be PRIM&R's apparent belief that IRB administrators can learn to regulate research without hearing from researchers.
Friday, April 27, 2007
Alternative IRB Models Conference Ignores Behavioral and Social Research
This is probably not much of a loss, since the various alternatives studied at the conference appear irrelevant to social and behavioral research. But the comment does continue a long tradition of promises that the methods, ethics, and needs of social scientists will get the attention they deserve . . . someday.
Tuesday, April 24, 2007
The Canard of Interview Trauma
As unimpeachable as the OHA's own Professional Guidelines may be, I think it is arrogant to assume that oral historians have nothing to learn from other disciplines with regard to the ethical treatment of human subjects. If nothing else, they can become more sensitized to the possibilities for psychological or social harm that may result from oral history interviewing. Whenever our IRB reviews a protocol from the psychology department that involves questions about childhood abuse or some other trauma, we make sure that the investigator is either qualified to directly provide appropriate counseling or intervention, or provides a list of appropriate support services. How many oral historians have the expertise or qualifications to handle a situation in which an informant with PTSD experiences distress during an interview? How many would have a list of counseling services at hand in case it was necessary? How many even imagine such a scenario when they venture out with their tape recorders?
I would like to suggest that historians don't imagine such a scenario because it doesn't happen.
When I asked Atkins what made him think interviews could traumatize narrators, he replied,
when I was at the 2004 OHA meeting, I attended a panel on the Veterans' Oral History Project, at which the presenters very casually remarked that several veterans, being interviewed by small groups of fourth-graders, broke down into tears when talking about their battlefield experiences. My first thought was, "so how did a bunch of fourth-graders respond to that?" Breaking down crying is not always indicative of PTSD, but you surely understand that the possibility is there.
As Atkins concedes, crying is not trauma requiring "counseling or intervention" by a licensed therapist. Basic decencies—a pause in the recording and some words of sympathy—are enough. And while the possibility of real trauma exists, so does the possibility that a narrator will fall down the stairs trying to answer the interviewer's knock at the door. The question is whether the risk is great enough to justify the hassle of IRB review, and Atkins presents no evidence that it is. Historians have recorded oral history interviews for half a century, and he cannot point to one that has traumatized the narrator.
Having imagined a harm, Atkins also imagines a remedy: "a list of appropriate support services" to be tucked into the interviewer's bag, next to spare batteries for the recorder. Unsurprisingly, he has no evidence that such a list has ever helped anyone.
For researchers in parts of the world where such support services are common, carrying such a list isn't much of a burden. But the paperwork and training it takes to get to the point where the IRB will approve one's project is a real burden. And the requirement of a list could disrupt research in parts of the world where those services don't exist, or even for a researcher who travels around the United States to collect stories, and would have to carry lists for each area she visits.
Atkins is not alone in making such claims. Comparable fears appear in Lynn Amowitz, et al., "Prevalence of War-Related Sexual Violence and Other Human Rights Abuses among Internally Displaced Persons in Sierra Leone," JAMA 287 (2002), 513-521, and Pam Bell, "The Ethics of Conducting Psychiatric Research in War-Torn Contexts," in Marie Smyth and Gillian Robinson, Researching Violently Divided Societies (Tokyo: United Nations University Press, 2001). But neither Amowitz nor Bell cites any evidence to suggest that interview research traumatizes narrators. (If anything, Bell's piece indicates that narrators know how to protect themselves, for example, by choosing to be interviewed as a group rather than one-on-one.)
In contrast, the existing empirical evidence suggests that, if anything, conversation is therapeutic. In her essay, "Negotiating Institutional Review Boards," Linda Shopes cites three articles to make this point:
- Kari Dyregrov, Atle Dyregrov, and Magne Raundalen, "Refugee Families' Experience of Research Participation," Journal of Traumatic Stress 13:3 (2000), 413–26.
- Elana Newman, Edward A. Walker, and Anne Gefland, "Assessing the Ethical Costs and Benefits of Trauma-Focused Research," General Hospital Psychiatry 21 (1999), 187–196.
- Edward A. Walker, Elana Newman, Mary Koss, and David Bernstein, "Does the Study of Victimization Revictimize the Victims?" General Hospital Psychiatry 19 (1997), pp. 403–10.
To these I would add Elisabeth Jean Wood, "The Ethical Challenges of Field Research in Conflict Zones," Qualitative Sociology 29 (2006): 373-386. Wood writes:
While the discussion of this consent protocol initially caused some interviewees some confusion, once the idea had been conveyed that they could exercise control over the content of the interview and my use of it, participants demonstrated a clear understanding of its terms. In particular, many residents of my case study areas took skillful advantage of the different levels of confidentiality offered in the oral consent procedure. This probably reflected the fact that during the war residents of contested areas of the Salvadoran countryside daily weighed the potential consequences of everyday activities (whether or not to go to the field, to gather firewood, to attempt to go to the nearest market) and what to tell to whom. Moreover, I had an abiding impression that many of them deeply appreciated what they interpreted as a practice that recognized and respected their experience and expertise. Although for many telling their histories involved telling of violence suffered and grief endured, I did not observe significant re-traumatization as a result, as have researchers in some conflict settings (Bell, 2001). I believe the terms of the consent protocol may have helped prevent re-traumatization as it passed a degree of control and responsibility over interview content to the interviewee.
(It's worth repeating that Bell's article presents no observations of re-traumatization.)
Though I have not interviewed trauma survivors myself--at least, not about their trauma--I have no doubt that it is a tricky business. If anyone can show me that interviews can aggravate real trauma, I welcome correction. I would also welcome more scholarship on how interviewers can maximize the catharsis described by Wood.
Unfortunately, the arbitrary power enjoyed by IRBs relieves them of the responsibility or incentive to seek out such real solutions to real problems. Atkins and his colleagues can dream up phantom menaces and require burdensome, useless conditions based only on guesswork. Only the removal of their power is likely to force them to support their arguments with evidence.
Note: I thank Amelia Hoover for pointing me to the Wood and Amowitz articles.