Friday, December 28, 2007

Columbia University Grants Oral History Exclusion

Mary Marshall Clark, director of Columbia University's Oral History Research Office, has announced on H-Oralhist that the university yesterday approved a new policy on IRB review of oral history research. The policy notes that

Oral history interviews, that only document specific historical events or the experiences of individuals or communities over different time periods would not constitute "human subjects research" as they would not support or lead to the development of a hypothesis in a manner that would have predictive value. The collection of such information, like journalism, is generally considered to be a biography, documentary, or a historical record of the individual's life or experience; or of historical events. Oral history interviews of individuals is not usually intended to be scientific or to produce generalizable information and hence is not usually considered 'research' in accordance with the federal regulations or CU policy. Therefore, such oral history activities should not be submitted to the CU IRB for review.

Still covered by IRB jurisdiction are psychological studies that borrow some oral history techniques to test hypotheses. An example might be Kim T. Buehlman, John M. Gottman, and Lynn Fainsilber Katz, "How a Couple Views Their Past Predicts Their Future: Predicting Divorce from an Oral History Interview," Journal of Family Psychology 5 (March/June 1992): 295-318.

I hope Columbia will prove a model for other universities; in lumine tuo videbimus lumen.

American Historical Association Asks for Oral History Exclusion

In its response to OHRP's solicitation of comments on its 1998 guidance, the American Historical Association asked that "'oral history' . . . be removed from category 7 and explicitly removed from IRB review." The AHA blog, AHA Today, reports the request and posts the full text of the association's letter to OHRP.

Monday, December 24, 2007

Law & Society Review, continued

As I noted earlier, the December 2007 issue of Law & Society Review features five items concerning IRBs and the social sciences.

Malcolm M. Feeley, "Legality, Social Research, and the Challenge of Institutional Review Boards"

The section on IRBs begins with Malcolm M. Feeley's 2006 presidential address to the Law & Society Association. Feeley presents an impassioned critique of IRBs, complaining, "in the name of minimizing risks, IRBs subject researchers to petty tyranny. Graduate students and junior scholars are particularly likely to be caught in their web—and for them IRB tyranny is often more than petty. Senior scholars are generally more adept at avoidance, evasion, and adaptation, but they too are hardly exempt from this tyranny. A number of prominent social scientists, including some members of this Association, know all too well the harms of running afoul of campus IRBs. . . Entire research areas and methodologies are in jeopardy, insofar as the difficulties of obtaining IRB approval affect research priorities for funding agencies and universities' willingness to support researchers.”

Feeley then raises a number of specific problems, such as the ill fit between the beneficence encoded in regulation and the kind of social research that aspires to produce "tarnished reputations and forced resignations" of evil-doers.

To remedy this situation, Feeley proposes three modes of action:

1. "Join [IRBs]; subvert them—or at least curtail them. Serve on them and do all you possibly can to facilitate the research of your colleagues rather than act as a censor."

2. Follow Richard Shweder's call to get your university to apply federal regulations only to federally funded research.

3. "Ask about estimates of how much actual harm to subjects in social science research has been prevented by IRB actions. And ask for documentation."

I am a bit skeptical about the first suggestion, for two reasons. First, few universities have IRBs strictly for the social sciences. This means that a sociologist, anthropologist, political scientist, or historian would spend most of her time on an IRB reviewing (or abstaining from reviewing) psychological experiments. That's an unfair price to pay to have some power over one's own research. Second, it assumes that IRBs are run by IRB members. As Caroline H. Bledsoe et al. report in "Regulating Creativity: Research and Survival in the IRB Iron Cage," the size of human protections staffs has ballooned in recent years. If the staff have the real power, IRB members will have little chance to facilitate research.

Laura Stark, "Victims in Our Own Minds? IRBs in Myth and Practice"

The first comment is Laura Stark's. It draws in part on Stark's 2006 Princeton dissertation, "Morality in Science: How Research Is Evaluated in the Age of Human Subjects Regulation." I am glad to learn of this work, and I hope to comment on it in a later post.

Stark suggests trying to improve, rather than restrict, IRBs, because “ethics review in some form is here to stay because of institutional inertia, and [because of her] belief as a potential research subject that ethics review is not an entirely bad idea, even for social scientists.” She advocates "changing local practices to suit the local research community, rather than refining federal regulations."

One intriguing example is the establishment of "IRB subcommittees, which can review lower-risk studies [and] have moved ethics review into academic departments. In so doing, these subcommittees of faculty members (who presumably understand the methods in question) have taken over the task of evaluating low-risk studies from board administrators." This sounds a lot like the departmental review that the AAUP suggested as an alternative to IRB control, and like the Macquarie model I described in August. I hope that Stark will publicize the name of the university that uses such subcommittees, so that it can better serve as an example to others. Stark does not explain why this model is appropriate only for low-risk studies. It seems to me the higher the risk, the more reason to have research reviewed by people who understand its methods.

Significantly, neither in her article nor in her dissertation does Stark take up Feeley's challenge to document cases in which IRBs have prevented actual harm to participants in social science research. Her research offers important insights about how IRBs reach decisions, but no evidence that those decisions do more good than harm, or that they are consistent with norms of academic freedom.

Finally, Stark claims, "the social science victim narrative—by which I mean the story that human subjects regulations were not meant to apply to us—is pervasive among academics, and it is particularly central to qualitative researchers as a justification for their criticisms of IRBs. Yet this victim narrative does not stand up to historical scrutiny, as I have shown." Yes and no. Stark's use of the passive voice (were not meant to apply) is telling; the question is who meant the regulations to apply to social scientists, and who did not. I am working on a full-scale history of the imposition of human subjects regulations on social scientists, and I can tell Stark that more scrutiny will complicate her story.

Robert Dingwall, "Turn off the oxygen …"

The second comment is Robert Dingwall's "Turn off the oxygen …," the oxygen here referring to the legitimacy granted to IRBs by university faculty.

Dingwall is skeptical of legal challenges, given the cost, the possibility of failure, and the fact that the First Amendment applies only to the United States (Dingwall works in the UK). He argues instead that “if we can show that ethical regulation does not actually contribute to a better society, but to a waste of public funds, serious information deficits for citizens, and long-term economic and, hence, political decline, then we may have identified a set of arguments that might lead to a more skeptical approach to the self-serving claims of the philosopher kings who sustain that system.” For example, we must continue to document ethical wrongs like the insistence by a British medical journal that two historians falsify the names of their oral history narrators, despite the wishes of most of the narrators to be named. [Graham Smith and Malcolm Nicolson, "Re-expressing the Division of British Medicine under the NHS: The Importance of Locality in General Practitioners' Oral Histories," Social Science & Medicine 64 (2007): 938–48.] I hope Professor Dingwall has a chance to read Scott Atran's essay, "Research Police – How a University IRB Thwarts Understanding of Terrorism," posted on this blog in May. It is an excellent example of the way that IRB interference can disrupt vitally important work.

Jack Katz, "Toward a Natural History of Ethical Censorship"

The third comment, by Jack Katz, is the most shocking, for it is the most thoroughly documented. (It even cites this blog, thanks.) Katz lists several cases, all recent, in which IRBs have derailed potentially important social research. Unlike the 2006 AAUP report, he gives names, universities, dates and citations for most of his horror stories. Among them:

* "In Utah, Brigham Young University's IRB blocked an inquiry into the attitudes of homosexual Mormons on their church. When the same anonymous questionnaire study design was transferred to another researcher, the IRB at Idaho State University found the study unproblematic."

* "A proposed study of university admissions practices [was] blocked by an IRB at a Cal State campus. The study had the potential to reveal illegal behavior, namely affirmative action, which was prohibited when Proposition 209 became California law."

* "At UCLA, a labor institute developed a white paper lamenting the health benefits that Indian casinos offered their (largely Mexican and Filipino) workers. Despite the university's support for the labor institute when anti-union legislators at the state capitol have sought to eliminate its funding, publication was banned by the IRB after a complaint by an advocate for Indian tribes that the study had not gone through IRB review."

Stark would have us believe that "the local character of board review does not mean that IRB decisions are wrong so much as that they are idiosyncratic." But Katz shows that IRBs' idiosyncrasies can be hard to distinguish from viewpoint-based censorship.

In contrast to these identifiable harms, Katz finds "no historical evidence that the social science and humanistic research now pre-reviewed by IRBs ever harmed subjects significantly, much less in ways that could not be redressed through post hoc remedies." I don't think I would go quite this far, given Carole Gaar Johnson's description of the harms caused to the residents of "Plainville" by the inept anonymization of their town ("Risks in the Publication of Fieldwork," in Joan E. Sieber, ed., The Ethics of Social Research: Fieldwork, Regulation, and Publication [New York: Springer, 1982]). But the rarity of such cases means we should weigh IRB review against other methods of prevention, such as departmental review of projects or better certification of researchers.

Katz reiterates his call, previously set forth in the American Ethnologist, for a "culture of legality," in which IRBs would be forced to explain their decisions and "publicly disseminate proposed rules before they take the force of law." He believes that "were IRBs to recognize formally that they cannot properly demand the impossible, were they to invite public discussion of policy alternatives, and were they to open their files to public oversight, they would fundamentally alter the trajectory of institutional development by forcing confrontation with the central value choices currently ignored in the evolution of ethical research culture."

But what do we do when we confront those value choices? We get statements like Stuart Plattner's: “no one should ever be hurt just because they were involved in a research project, if at all possible," a position clearly at odds with Katz's applause for "the American tradition of critical social research." (Plattner, “Human Subjects Protection and Cultural Anthropology,” Anthropological Quarterly, 2003) The problem with IRBs' value choices is not that they are hidden, but that they are often wrong. The Belmont Report is the most public and widely cited rule used by IRBs, and it is a terrible guide for the kind of critical research Feeley and Katz want done.

Feeley, "Response to Comments"

The most interesting part of Feeley's response comes at the very end. Noting that, with the AAUP's encouragement, some universities have ceased promising to review all human subjects research in favor of the regulatory minimum of federally funded research, he points out that we will soon know if the lack of IRB review of social science at those universities yields a flood of unethical research. "If there are few reports of negative consequences . . . they might encourage national officials to rethink the need for such an expansive regulatory system . . . On the other hand, if opt-out results in increased problems, the findings might help convince Katz, Dingwall, me, and still others of the value of IRBs." This strikes me as a very fair bet, and the experiment can't begin soon enough.

Friday, December 21, 2007

My Comments to OHRP

As I noted in November, OHRP is soliciting comments on proposed changes to its 1998 guidance on expedited review. Below are the comments I submitted today. I thank Rob Townsend of the American Historical Association for his help in revision.

Saturday, December 1, 2007

Law & Society Review

John Mueller has kindly alerted me to the December 2007 issue of Law & Society Review, which includes five items concerning IRBs. I will read and comment on them as time permits.

Readers interested in legal analysis of IRBs should also consult Philip Hamburger's "'Ingenious Argument' or a Serious Constitutional Problem? A Comment on Professor Epstein's Paper," a follow-up to the Northwestern Law Review special issue.

Friday, November 9, 2007

Neuenschwander on IRBs and Oral History Legal Releases

The Fall 2007 issue of the Oral History Association newsletter features Professor John A. Neuenschwander's essay, "What's In Your Legal Release Agreement?" Neuenschwander collected "72 agreements from a wide variety of programs including major universities, libraries, government agencies, local historical societies, and independent oral historians," and offers various observations about what is and is not on them. He does not offer a model release, though we can hope that his research will inform the next edition of his indispensable work, Oral History and the Law.

For purposes of this blog, the most interesting section of the essay is entitled "Institutional Review Board Modified Releases," in which Neuenschwander examines the nine of the 72 forms that had clearly been modified by IRBs to conform with the Common Rule. He presents a paragraph from a typically modified form:

The interview will be conducted in the form of a guided conversation and will last approximately ________. I will be free to decline any question that makes me uncomfortable. Moreover, I have the right to stop the tape recording at any time with no negative consequences. There are no foreseeable risks in doing this interview. The benefit of the interview is to the general public in the form of increased historical knowledge. I recognize that because the interview will be donated to the University of ________ there is no assumption of confidentiality, unless I request it.

Neuenschwander approves of this language, stating "the gulf between the medical or scientific culture of the IRB and the social or humanistic one of the oral historian has been bridged successfully." But I fear he glosses over some potential problems in the IRB-imposed language:

1. "The interview . . . will last approximately ________." What is that statement doing there? While it is certainly courteous to ask a narrator to set aside a certain amount of time, interviews are quite unpredictable, and my sessions have ranged from 30 minutes to more than seven hours (with a break for lunch). Including an estimate of time in the release form elevates guesswork to an ethical duty, perhaps even a contractual obligation. This strikes me as a bad idea.

2. "There are no foreseeable risks in doing this interview." This statement is contradicted by Neuenschwander's finding that other forms ask narrators to indemnify the interviewer "from any and all claims or demands or lawsuits arising out of or in connection with the use of the interview, including but not limited to any claims for defamation, copyright violations, invasion of privacy or right of publicity." So someone is foreseeing risks from oral history interviews, and even that list doesn't consider the harms to reputation added to the Common Rule in 1991.

3. "There is no assumption of confidentiality, unless I request it." This is certainly an improvement over other IRB boilerplate that assumes confidentiality as the norm. But the statement does follow the OHA guidelines in allowing confidentiality in some circumstances. This raises questions of how far the interviewer must go to defend confidentiality against subpoenas, physical theft, ineptness by the depository, and so on. In another section, Neuenschwander notes that of the roughly 25 forms that promised confidentiality, only three qualified that promise with mention of subpoena and the Freedom of Information Act. Thus, the IRB language is incomplete.

Oral historians need to work out language that can alert narrators to the real risks of speaking on the record without spooking them unnecessarily. If IRBs can help in this task, all power to them, but nothing in Neuenschwander's essay suggests that they can.

Saturday, November 3, 2007

OHRP Seeks Comment on Expedited Review

As stated in the Federal Register, "The Office for Human Research Protections (OHRP) is requesting written comments on a proposed amendment to item 5 of the categories of research that may be reviewed by the institutional review board (IRB) through an expedited review procedure, last published in the Federal Register on November 9, 1998 (63 FR 60364)."
Comments are being taken until December 26, 2007. The full announcement can be found at

This announcement should be of particular interest to oral historians, because the 1998 guidance was, I believe, the first official document to suggest that oral history should even be subject to IRB review. Thus, this is a good opportunity for OHRP to reconsider its whole position on oral history.

Thursday, October 25, 2007

Pentagon Says IRB Review Not Needed for War Zone Anthropology

As reported by Inside Higher Ed, anthropologist Thomas Strong has found that the U.S. Army's Human Terrain System (HTS) program, which employs anthropologists to work with combat units in Iraq and Afghanistan, has decided that it need not submit projects for IRB review.

Here is the relevant passage from Strong's post:

[Col. Steve] Fondacaro . . . argues that HTS research is not subject to IRB oversight because of provision 32 CFR 219 sec. 101(b)(2), which states that research conducted through “the use of educational tests (cognitive, diagnostic, aptitude, achievement), survey procedures, interview procedures or observation of public behavior” is exempt.

Thus, the HTS is understood by the program manager to be exempt from IRB regulation within the parameters specified by the common rule. However, as I wrote in a follow up to Fondacaro, “The exemption you cite, 32 CFR 219, sec. 101(b)(2), apparently applies to HTS-type research (that is, ethnographic research) that has already been reviewed. It apparently mandates that a review board at least look at the research in order to determine whether or not continuing oversight is necessary; I don’t think this is typically understood as a determination the researcher him/herself or the research team itself makes. Further, as you know, the exemption stipulates stringent protocols regarding data recording in order to insure the anonymity of research subjects <32 CFR 219, sec. 101(b)(2)(i)>. I am therefore inclined to think the onus is on DoD to insure that those stipulations are being met through a review of the research protocol.” I am waiting to hear from Fondacaro regarding this. It is important to add that Fondacaro says that HTS program managers are waiting for “final guidance” on the matter from US Army lawyers.

Strong is, I believe, confusing the regulation itself with the interpretations of the regulations set forward by OHRP and various IRBs.

First, he assumes that the regulation "apparently mandates that a review board at least look at the research in order to determine whether or not continuing oversight is necessary." But as I noted in an earlier post, that mandate only appeared in OPRR (OHRP's predecessor) guidance of 1995, sixteen years after the exemptions were drafted by the HEW Office of the General Counsel. At the time, that office explained,

The regulations, as proposed, do not require any independent approval of a researcher's conclusion that his or her research is within one of the exceptions. In taking this approach for purposes of discussion, we have weighted the possibility of abuse against the administrative burdens that would be involved in introducing another element of review which might not be very different from the review that the exceptions are intended to eliminate.

["Applicability to Social and Educational Research," attached to Peter Hamilton, Deputy General Counsel, to Mary Berry et al., 27 March 1979, Res 3-1-B Proposed Policy Protections Human Subjects 1978-79, Record Group 443, National Archives]

As late as 1988, DHHS policy allowed institutions to choose who would decide whether an exception applied: the investigator, the IRB chair, or another official. The institutions could also decide, as a matter of institutional policy, that some or all of the exemptions were not valid.

[Robert E. Windom, Assistant Secretary for Health, to Charlotte Kitler, New Jersey Department of Health, 13 September 1988, RES-6-01 Human Subjects, OD Central Files, Office of the Director, NIH]

Thus, it is OPRR/OHRP, and not the Department of Defense, that has departed from the intention of the regulations.

Second, Strong claims that "the exemption stipulates stringent protocols regarding data recording in order to insure the anonymity of research subjects." No, the regulation simply states that IRB review is required if

(i) Information obtained is recorded in such a manner that human subjects can be identified, directly or through identifiers linked to the subjects; and

(ii) Any disclosure of the human subjects' responses outside the research could reasonably place the subjects at risk of criminal or civil liability or be damaging to the subjects' financial standing, employability, or reputation.

What this means is a matter of interpretation; there is nothing about "stringent protocols."

In any event, I must doubt that Strong would be any happier had he learned that the program had been approved by a three-person quorum of a five-person Department of Defense IRB, perhaps with no anthropologists as members. The ethical validity of the program should be debated not by a local IRB but by the anthropological profession itself. And I am glad to see that is just what anthropologists are doing.

Monday, October 22, 2007

Brown U. IRB Chair: System is Broken

Ross Frazier of the Brown Daily Herald reports that Brown's "faculty are yet again pushing for change" in the human subjects review system there. ("As IRB Debate Grows, Profs Push for Reform," 16 October 2007).

Frazier reports the views of Ron Seifer, professor of psychiatry:

Seifer, the IRB chair, acknowledged the system is "fundamentally flawed and broken," adding that, "They are being asked to do something they were never intended to do."

Seifer told the class that the IRBs around the country have expanded their reach because of federal bureaucrats and a community of university research administrators who have created a culture of avoiding risk.

Those factors "all exist within very frightened university environments. They're afraid of lawsuits, and they are afraid of donors going away," Seifer said.

As reported by Frazier, senior Brown administrators seem unwilling either to defend the current system or to reform it, preferring instead to keep discussion of the issue off official agendas.

Sunday, October 7, 2007

The Dormant Right to Expertise

According to federal regulations (45 CFR 46.107), scholars should be able to expect that their research will be reviewed by someone who understands it:

each IRB shall have at least five members, with varying backgrounds to promote complete and adequate review of research activities commonly conducted by the institution. The IRB shall be sufficiently qualified through the experience and expertise of its members, and the diversity of the members, including consideration of race, gender, and cultural backgrounds and sensitivity to such issues as community attitudes, to promote respect for its advice and counsel in safeguarding the rights and welfare of human subjects.

As Jeffrey Cohen has recently noted, this means that "researchers have the right under the regulations to have their research reviewed with the appropriate expertise and it is the Institutional Official's responsibility to ensure that the review is appropriate to the research."

Unfortunately, as I have documented in this blog, researchers in the social sciences and humanities are routinely denied that right. A nice illustration of this problem appears in the recently released RAND Corporation working paper, "Ethical Principles in Social-Behavioral Research on Terrorism: Probing the Parameters," September 2007. James R. Sayer of the University of Michigan's Behavioral Sciences Institutional Review Board describes the problem of reviewing research on terrorism. Presumably this includes the research described on this blog by Scott Atran.

Here is how that case, or perhaps a similar case, appeared to Sayer, whose own expertise concerns "the effects of hydrophobic and hydrophilic glass coatings, window tinting, and defrosters/defoggers on visual performance and driving behavior."

We’ve struggled with expertise, or rather the lack of expertise. Two years ago we sought assistance from a number of academic and private institutions to get reviewers to assist us in the evaluation of one particular protocol. And we found nobody. Maybe that’s because people really don’t want to assume that risk themselves. Maybe it’s because, from an academic standard perspective, I’m sure those of you who review IRB applications don’t get many really big feathers in your cap for doing it. And you’re certainly not going to get them for reviewing somebody outside of your institution. It’s gotten to the point that at the University of Michigan we are seriously considering putting together another board. That board would deal exclusively with international research. There’s enough of it going on that we could easily keep that board busy on a monthly basis. But we will still struggle with trying to find somebody that understands what the culture is like in rural provinces of China, in the Sudan. And as hard as we try, we can’t always adhere to what is the intent of the federal regulations in having the necessary expertise.

This is a nicely humble statement, recognizing the limitations of not only Michigan's IRBs but also the entire IRB process. At the end, Sayer admits that his IRB fails to meet the very requirement Cohen identifies. Yet that humility did not stop the Michigan IRB from delaying and interfering with Atran's research.

The problem is that the right to expert review comes with no enforcement mechanism. Cohen suggests appeals should be directed to the institutional official, but in many cases that is the very person who failed to appoint appropriate experts to the IRB. One would be appealing the violation to the violator. Beyond that official lies only OHRP. And on the day OHRP reprimands an institution for constituting an IRB with insufficient expertise in the social sciences and humanities, I will buy Dr. Cohen an ice cream.

Whether Atran should have sued the university for violating the federal regulations is a question I'll leave to the lawyers.

See also, "In Search of Expertise."

Saturday, September 29, 2007

Roberta S. Gold, “None of Anybody’s Goddamned Business”?

Blogger's note: On September 3, Christopher Leo posted a query to the H-Urban list, asking about the effect of ethics review on urban research. Roberta Gold's response hinted that she had thought hard about the issue, so I asked her to share her thoughts on this blog. She has graciously agreed.

Friday, September 21, 2007

Bledsoe et al., Regulating Creativity

I am still working my way through the Northwestern University Law Review symposium on IRBs. Today's comments focus on Caroline H. Bledsoe, Bruce Sherin, Adam G. Galinsky, Nathalia M. Headley, Carol A. Heimer, Erik Kjeldgaard, James T. Lindgren, Jon D. Miller, Michael E. Roloff & David H. Uttal, "Regulating Creativity: Research and Survival in the IRB Iron Cage."

The article, based largely on events at Northwestern itself, is particularly effective at challenging three myths of IRBs and the social sciences:

Myth #1: Reports of IRB interference with research are overblown, since few projects are rejected and few researchers disciplined.

An example of this myth is Jerry Menikoff's contribution to the same symposium, in which he claims, "social and behavioral scientists who maintain appropriate communication with their institution's IRBs need not be shaking in their boots, fearing some career-ending enforcement action is about to come down from Washington."

Unlike Menikoff, Bledsoe et al. talked to some researchers, asking their colleagues about experiences with Northwestern's IRB. They report,

As a number of our colleagues have emphasized . . . both in person and in their responses to our email query, they alter their course not because of any real risk they perceive to their subjects but simply to pass IRB muster. Trying to reduce their own professional risk, they divert their work, choosing topics or populations selectively, or adapting methods that will entail less demanding IRB review and lessen the probability that they will have to make substantial changes before proceeding. IRB procedures, that is, can snuff out ambition even before the project begins.

The disturbing point is that it is the mere anticipation of onerous IRB review that can result in some alteration of the proposed protocol. Because of the potential for delays and the IRB tendency to intrude into each step of the research process, many social science faculty report that they think twice about taking on research topics, methods, and populations that IRB frames in the mode of risk. One respondent described the impact thus:

"The IRB has become a nightmare over the years that I have been a researcher. I'm sure most of this pressure is coming from the federal government, but the rigidity of the model (based on the medical sciences) and the number of hurdles/ forms, and the scrutiny (to the point of turning back projects for mispagination or other pithy errors, as has happened for some of my students) is just terrible. It is very discouraging, and I find myself thinking of EVERY new research project as it relates to the possibility of IRB approval."

Two respondents indicated that faculty had moved toward non-field projects in large part because of IRB. One faculty member even pointed specifically to concerns about IRB in a decision to make a career shift away from field-project themes and methods that might jeopardize the researcher's career:

"Since last year, my research became more theoretical in large part because of IRB requirements. I simply try not to do any research which would involve Panel E [the social science review panel at Northwestern]. . . . I no longer interview people during my trips abroad and try to limit the data gathering to passive observation or newspaper clippings."

An IRB that approves all social science projects submitted to it (and many, no doubt, do) may still crush research by making it so burdensome that researchers give up submitting proposals.

Myth #2: Medical IRBs are the problem, so an IRB devoted only to non-medical research is the solution.

This suggestion gets thrown out from time to time; for example, it appears as one of Dale Carpenter's admittedly "modest proposals for reform" in his own Northwestern Law Review piece. But Bledsoe et al. report that Northwestern already has a separate non-medical panel, and it doesn't sound pretty:

even a separate social science IRB enterprise suffers from internal tensions between the need for standardization, whether imposed by OHRP rules or by our own desires to ensure equity, and the need to allow the very stuff of novelty that studies are supposed to produce. We have observed that social scientists who confront their review assignments can be no less critical of their fellows' studies than a biomedical panel might be. Indeed, IRB staff have sometimes had to step in diplomatically to rescue a project from a zealous social science faculty panelist threatening to dismember it altogether. In this regard, we have observed a typical life cycle for social science panel members. The typical panel member begins his or her tenure by making it known that a great deal of harmless social science research is delayed without any reasonable cause, and that henceforth the reckless invasiveness of the IRB must be tempered. Yet this same panel member, when given projects to review, is often the most critical.

This pattern reflects a broader impulse among social scientists. We think of ourselves first and foremost as academics. Our business is to read research proposals, journal articles, student papers, and to find fault. Turning to IRB protocols, we become fastidious reviewers. When we read consent forms, it is hard for us to refrain from editing them. When we read with an eye toward possible risk, whether large or small, our expertise itself will unmask it. As social science panel members, we will inevitably find problems with social science IRB submissions; we cannot help ourselves. Importing our own disciplines' ethical dilemmas, the concerns that we raise often go far beyond those imagined by the federal legislators. They also hand the IRB, seeing our plight, both our fears and our language of expressing them to incorporate into its already overburdened repertoire. Over time, such impulses are tempered, and we learn to see the big picture again. In the meantime, however, the damage to the research enterprise is done.

In retrospect, giving the social sciences a separate review channel and letting them into the review process was helpful in that the social sciences gained mediators who could explain studies to their panel colleagues and attempt to buffer the power of the medical model. At the same time, our social science panel's own efforts to help both added to the layers of regulatory stratigraphy and intensified the regulatory flux. All this has undoubtedly provided further grounds for investigators to conclude that the IRB was capricious and inconsistent.

The authors are wrong, however, to suggest that Northwestern has a "social science" panel. According to "Schools, Departments and Programs Served by Panel E of the Institutional Review Board," Panel E has jurisdiction over "research projects involving human subjects that use social and behavioral science methodologies." The same document claims,

Federal guidance defines social and behavioral science methodologies as those that include research on individual or group characteristics or behavior (including, but not limited to, research on perception, cognition, motivation, identity, language, communication, cultural beliefs or practices, and social behavior) or research employing survey, interview, oral history, focus group, program evaluation, human factors evaluation, or quality assurance methodologies.

The range of methods included in this list means that far from letting ethnographers review ethnographers and experimental psychologists review experimental psychologists, Northwestern has locked all its non-medical researchers in a room and told them to fight it out. Such an arrangement makes no allowance for the wide variation of methods and ethics within non-medical research. (See "My Problem with Anthropologists.")

Moreover, the claim that "federal guidance defines social and behavioral science methodologies" is incorrect. The list of methodologies is taken from OPRR's 1998 "Protection of Human Subjects: Categories of Research That May Be Reviewed by the Institutional Review Board (IRB) Through an Expedited Review Procedure." That document does just what its title suggests: it lists categories of research eligible for expedited review. It does not define social and behavioral science methodologies, nor, to my knowledge, has the federal human subjects apparatus ever defined social or behavioral science.

In reality, therefore, Northwestern's Panel E exists solely to provide full IRB review for projects that even the federal government admits do not require full IRB review. No wonder it doesn't work well.

Myth #3: If social scientists were to join IRBs and learn about their workings, they wouldn't complain so much.

Take this statement by J. Michael Oakes, "Risks and Wrongs in Social Science Research: An Evaluator's Guide to the IRB," Evaluation Review 26 (October 2002): 443-479:

"Investigators well versed in the Belmont Report and more technical IRB procedures rarely need to dispute decisions, and when they do it concerns how known rules are interpreted or what is best for the subjects. It follows that a great deal of frustration may be eliminated by careful study of basic IRB regulations and issues. Education seems to modify frustration in the researcher-IRB-subject chain."

Nonsense. Bledsoe herself chaired a subcommittee of the Northwestern University IRB Advisory Committee, and several of her coauthors served on, chaired, or staffed IRBs at Northwestern or elsewhere, as well as having dealt with IRBs as applicants. They are about as educated and experienced in these issues as one could hope for, and they are as frustrated as anyone by the current system.

Beyond busting myths, the article seeks to document the changes in IRB work since the 1990s. Based on their personal experience, Bledsoe and her co-authors describe the expansion of both OHRP and IRB jurisdiction:

The university's Office for the Protection of Research Subjects spiraled from two professionals to what is now a staff of 26, of whom 21 support the IRB operation. Review panels went from one to six—four were created simultaneously in September 2000, with one for the social sciences created a year later, and another medical panel added subsequently— and appointing their membership became the duty of the university's vice president for research. The length of the basic protocol template for new projects went from two pages to its present length of twelve for the social sciences, and fifteen for biomedical research. In addition, the number of supplementary forms and documents required for each submission went from one or two to far more than that, depending on the nature of the study. Many protocols are now better measured in inches of thickness than in number of pages. The level of bureaucratic redundancy, inconvenience and aggravation increased dramatically: Unreturned phone calls, dropped correspondence, and administrative errors on forms became routine.

They also report some good news:

For several years after the IRB ramp-up began, our IRB panel expected detailed interview protocols from everyone. Now, an ethnographer who intends to employ participant observation does not need to provide a detailed specification of what is to be said to participants, and is not asked for it. Without such collusion, ethnographic studies would not survive under the IRB system. As much as social scientists complain about the ill fit their projects pose in IRB review, their own protocols are now spared this level of scrutiny.

As I reported earlier, Northwestern has exempted oral history from review, though Bledsoe et al. do not explain when or why that happened.

The authors conclude that "one could scarcely imagine a better example of a bureaucracy of the kind that so fascinated and infuriated Weber than the contemporary IRB system." It is indeed crucial to look at the systematic pressures on members and administrators, for that can explain why the same IRB abuses show up in such diverse institutions spread around the country.

But while Weber can explain some long-term trends, analyzing bureaucracies, rather than people, obscures the role of individual decisions. In this lengthy account of events at Northwestern, the authors decline to blame, credit, or even name a single individual administrator, researcher, IRB member, consultant, or federal official. Typical is this passage:

When the ratcheting up of the IRB bureaucracy at Northwestern was occurring, administrators were working in an environment in which suspension of federal funding to other institutions had produced considerable anxiety. It was no secret that the Northwestern IRB director was under pressure to bring the university into full compliance as quickly as possible.

Who was ratcheting? Who felt considerable anxiety and why? Who communicated with the federal government? Who was the Northwestern IRB director? Who pressured him or her? Who knew the secret? And, on the other end, who ruled that interviewers did not have to submit detailed protocols?

Because the authors decline to ask such questions, they can hold no one to account for sudden and important decisions. They instead conclude, "the odd history of IRB and its effects have been no one's fault; no one's intention. No convenient villains or victims emerge anywhere we look." But there is nothing to indicate that they looked terribly hard.

Friday, September 14, 2007

Study Finds IRBs Exaggerate Risks of Survey Questions

Michael Fendrich, Adam M. Lippert, and Timothy P. Johnson, "Respondent Reactions to Sensitive Questions," Journal of Empirical Research on Human Research Ethics 2 (September 2007): 31-37

Perhaps because they are punished for being too lax but never for being too strict, IRBs tend to err on the side of what they consider caution, exaggerating the risks of proposed research. It's easy to do so when, as these authors put it, "board members often rely on their 'gut' feeling in determining the potential for survey questions to effect adverse reactions."

To replace that gut feeling with some evidence, Fendrich, Lippert, and Johnson asked survey respondents who had been asked about illegal drug use whether they had felt threatened or embarrassed by the questions. Not much: the average score was less than 2 on a 7-point scale. But when asked if other people would feel threatened by those questions, the numbers shot above 5. Thus, survey respondents are as bad as IRBs at guessing how other people will feel about being questioned.

The authors conclude:

Consent documents often summarize potential adverse subject reactions to questions. For example, in the current study, the University of Illinois at Chicago’s REC [research ethics committee] approved consent document contained the following two sentences under the heading: “What are the potential risks and discomforts?”

There is a risk that you may feel anxious, uncomfortable or embarrassed as a result of being asked about drug use and drug testing experience. However, you are free not to answer any question, and you are free to withdraw from the study at any time.

If our findings can be generalized to other studies asking questions about drug use, the first sentence may inappropriately convey an exaggerated sense of a drug survey’s risk. Even though voluntary participation is a non-contingent right, the second sentence seems to link the right of refusal and the voluntary nature of participation to this exaggerated risk.

The first author’s experience as a member and Chair of a behavioral science REC leads him to conclude that paragraphs like those cited above are common in survey consent documents. Researchers may pair statements about rights with statements about risk in order to appease REC concerns about study interventions to address risk. In the absence of empirical data, RECs should be cautious about recommending and approving consent documents that include clauses suggesting that questions about drug use cause emotional discomfort. Furthermore, RECs should recommend that consent documents decouple important reminders about subject rights from statements about potential risk (whether or not those risks are valid). While it may be important to reinforce rights in a consent document, we believe it is contrary to best practice to even imply that voluntary participation (and the right to withdraw or refuse to answer questions) should be contingent on adverse reactions. The type of text described above, however, would be obviated if RECs adopted a more realistic view of subject perceptions regarding drug use surveys.

Laud Humphreys Remembered

Scott McLemee's essay, "Wide-Stance Sociology" (Inside Higher Ed, 12 September 2007) uses Senator Larry Craig's arrest as a news hook for a discussion of the life and career of sociologist Laud Humphreys. Humphreys's 1960s research on men who found male lovers in public restrooms is a touchstone for advocates of IRB review of observational research. But as McLemee and some of the comments make clear, the case was far more nuanced than the medical-research scandals that inspired the federal requirement for ethical review, and Humphreys's use of deliberate deception was in no way typical of the work of the social scientists who now find themselves constrained by IRBs.

Friday, August 24, 2007

Macquarie's Respect for Expertise

A few weeks ago I critiqued "An inside-outsider’s view of Human Research Ethics Review," a blog post by Greg Downey of the Department of Anthropology at Macquarie University, Australia. Downey complained in a follow-up comment that I had "singled [him] out on [my] blog for derision," an unfair charge, given the effort I have put into deriding a wide assortment of scholars and public officials. But he took the bait, and in two follow-up posts, Dr. Zachary Schrag on ethics, IRB & ethnography and Some practical notes on ethics applications, he explains some of the ethics review process at Macquarie. Together with the "Human Ethics" page of the Macquarie Research Office, these postings provide a glimpse of a system that cares more about research ethics than regulatory compliance.

Particularly striking is Downey's description of who reviews applications:
A kind of departmental review does take place within the university-wide committee at Macquarie, as members of the committee are clustered so that color-coded sub-groups do the preliminary and most serious review of applications for which they have special expertise. If an application has to go to the whole committee (for example, research with children, medical procedures, Aboriginal Australian groups, or ethically challenging research tends to), we usually turn to the members of our committee who are best versed in the area of study. If we have a particularly difficult one, we’ll consult with a faculty member outside the committee who has special experience.

The key words here are "special expertise," "best versed in the area of study," and "special experience." Someone at Macquarie has decided that having a nutritionist review oral history, or an oral historian review nutrition experiments, is not in the best interest of researcher or subject, but that having knowledgeable people review applications might make everyone happy. I am particularly impressed that the experts do the preliminary review, which under the American regulatory framework (I'm not sure about Australia) would mean giving them the power to provide exemptions or expedited approval.

As I have noted in comments on his postings, Downey has yet to persuade me that even this level of expert review is necessary for projects by trained researchers that only involve survey, interview, and observation research, and I remain attracted to the University of Pennsylvania system, under which researchers are certified and then, largely, left alone. But I thank Downey for introducing me to a university that is thinking hard and creatively about ethical review.

Tuesday, August 21, 2007

Northwestern IRB: Unsystematic Interviews Are Not Subject to Review

Today's New York Times features a story, "Criticism of a Gender Theory, and a Scientist Under Siege," about the case of J. Michael Bailey, Professor of Psychology at Northwestern University, whose controversial book about identity, The Man Who Would Be Queen, provoked several complaints, including the charge by "four of the transgender women who spoke to Dr. Bailey during his reporting for the book . . . that they had been used as research subjects without having given, or been asked to sign, written consent."

As reported by the Times, the case was investigated by Alice Domurat Dreger, Associate Professor of Clinical Medical Humanities & Bioethics at Northwestern, who has posted a draft article on the subject, "The Controversy Surrounding The Man Who Would Be Queen: A Case History of the Politics of Science, Identity, and Sex in the Internet Age" [PDF].

Dreger finds that Bailey did not commit serious ethical violations, nor did he violate the requirements for IRB review:

the kind of research that is subject to IRB oversight is significantly more limited than the regulatory definition of “human subject” implies. What is critical to understand here is that, in the federal regulations regarding human subjects research, research is defined very specifically as “a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge” (United States Department of Health and Human Services, 2005, sect. 46.102, def. “b”). In other words, only research that is truly scientific in nature—that which is systematic and generalizable—is meant to be overseen by IRBs. Thus, a person might fit the U.S. federal definition of “human subject” in being a person from whom a researcher gains knowledge through interpersonal interaction, but if the knowledge she or he intends to gain is unlikely to be generalizable in the scientific sense, the research does not fall under the purview of the researcher’s IRB.

It is worth noting here, for purposes of illustration of what does and doesn’t count as IRB-qualified work, that I consulted with the Northwestern IRB to confirm that the interviews I have conducted for this particular project do not fall under the purview of Northwestern’s IRB. Although I have intentionally obtained data through interpersonal interaction, the interview work I have conducted for this historical project has been neither scientifically systematic nor generalizable. That is, I have not asked each subject a list of standardized questions—indeed, I typically enjoyed highly interactive conversations during interviews; I have not interviewed all of my subjects in the same way; I have negotiated with some of them to what extent I would protect their identities. This is a scholarly study, but not a systematic one in the scientific sense. Nor will the knowledge produced from this scholarly history be generalizable in the scientific sense. No one will be able to use this work to reasonably make any broad claims about transsexual women, sex researchers, or any other group.

When I put my methodology to the Northwestern IRB, the IRB agreed with me that my work on this project is not IRB-qualified, i.e., that, although I have obtained data from living persons via interactions with them, what I am doing here is neither systematic nor generalizable in the scientific sense.

Clearly Bailey's work hurt the feelings of some people he wrote about, but, as Dreger notes, "scholarship (like journalism) would come to a screeching halt if scholars were only ever able to write about people exactly according to how they wish to be portrayed." Indeed, that's what social scientists have been arguing for three decades.

Monday, August 20, 2007

Guidance Creep

I am often frustrated by the argument that federal regulations provide, in Jeffrey Cohen’s words, “sufficient flexibility for the efficient and appropriate review of minimal risk research.” While individual IRBs have much to answer for, federal regulators have, over the years, stripped them of a great deal of flexibility.

I recently came across a striking example of this. In 1983, Richard Louttit of the National Science Foundation, who had helped craft the list of exemptions encoded in 45 CFR 46.101, explained them as follows:

Much research of minimal risk was exempted from IRB review in order to reduce the IRB workload so that research involving ethical questions could get more than cursory review. But some institutions have decided that the IRB, or its chairperson, must review proposals to decide if they are exempt from review. If this seems contradictory, it is. And this was not envisioned by the staff group which worked out the exemptions.

[Richard T. Louttit, "Government Regulations: Do They Facilitate or Hinder Social and Behavioral Research?," in Joan E. Sieber, ed., NIH Readings on the Protection of Human Subjects in Behavioral and Social Science Research: Conference Proceedings and Background Papers (Frederick, Md: University Publications of America, 1984), 179.]

Yet in 1995, the Office for Protection from Research Risks adopted just that contradictory position:

Institutions should have a clear policy in place on who shall determine what research is exempt under .46.101(b). Those persons who have authority to make a determination of what research is exempt are expected to be well-acquainted with interpretation of the regulations and the exemptions. In addition, the institution should be prepared to reinforce and review, as necessary, the method of determining what is exempt. OPRR advises that investigators should not have the authority to make an independent determination that research involving human subjects is exempt and should be cautioned to check with the IRB or other designated authorities concerning the status of proposed research or changes in ongoing research.
(OPRR Reports, 95-02) [21 July 2014 updated link.]

The regime in place today is far more intrusive than the one worked out in 1981. Changes in the regulations themselves are part of the problem, but so are radical reinterpretations like the one above.

Thursday, August 16, 2007

Study Finds IRBs Impose Inappropriate Forms and Guidelines

I thank Brad Gray for alerting me to:

Sarah Flicker et al., "Ethical Dilemmas in Community-Based Participatory Research: Recommendations for Institutional Review Boards," Journal of Urban Health 84 (July 2007): 478-493.

I do not have access to the full article, but here's the abstract:

National and international codes of research conduct have been established in most industrialized nations to ensure greater adherence to ethical research practices. Despite these safeguards, however, traditional research approaches often continue to stigmatize marginalized and vulnerable communities. Community-based participatory research (CBPR) has evolved as an effective new research paradigm that attempts to make research a more inclusive and democratic process by fostering the development of partnerships between communities and academics to address community-relevant research priorities. As such, it attempts to redress ethical concerns that have emerged out of more traditional paradigms. Nevertheless, new and emerging ethical dilemmas are commonly associated with CBPR and are rarely addressed in traditional ethical reviews. We conducted a content analysis of forms and guidelines commonly used by institutional review boards (IRBs) in the USA and research ethics boards (REBs) in Canada. Our intent was to see if the forms used by boards reflected common CBPR experience. We drew our sample from affiliated members of the US-based Association of Schools of Public Health and from Canadian universities that offered graduate public health training. This convenience sample (n = 30) was garnered from programs where application forms were available online for download between July and August, 2004. Results show that ethical review forms and guidelines overwhelmingly operate within a biomedical framework that rarely takes into account common CBPR experience. They are primarily focused on the principle of assessing risk to individuals and not to communities and continue to perpetuate the notion that the domain of “knowledge production” is the sole right of academic researchers. Consequently, IRBs and REBs may be unintentionally placing communities at risk by continuing to use procedures inappropriate or unsuitable for CBPR. IRB/REB procedures require a new framework more suitable for CBPR, and we propose alternative questions and procedures that may be utilized when assessing the ethical appropriateness of CBPR.

Tuesday, August 14, 2007

Incestuous Gay Monkey Sex

Scott Jaschik reports on the American Sociological Association annual meeting, "Who’s Afraid of Incestuous Gay Monkey Sex?," Inside Higher Ed, 14 August 2007:

Mary L. Gray, an anthropologist at Indiana University at Bloomington, described her work in graduate school, which raised all kinds of red flags with her IRB at the time: She wanted to study the way gay, lesbian, bisexual and transgender youth develop their identities in the rural Southeast, and she wanted to base her research on interviews with such youth, under the age of 18, without their parents’ knowledge. Her project, she said, “had every imaginable red flag.”

With some regrets, she won IRB support by appealing to prejudice many have of the rural South. Although she had no evidence to make this claim, she argued that the situation in the rural South is “so awful” for the young people she was studying that she couldn’t possibly approach their parents for consent. (Actually Gray believes that the situation for gay youth is more subtle and less uniform than she suggested, but she guessed it would work with the IRB, and it did.)

Because the IRB was — like most IRB’s — oriented around medical research, not social science, the focus was on potential harm that Gray could cause her research subjects in person. Gray reported that she received relatively little questioning or guidance from her IRB on one of her major areas of research: what the young people she studied wrote about themselves online. Gray developed her own ethics rules (she wrote to the subjects to ask permission), but she was struck by what was and wasn’t considered important by the IRB.

To the IRB, “distance read as objectivity” and so was by definition “good,” she said. Never mind that what her subjects shared about themselves online was as important as the thoughts they shared in person. This points to Gray’s broader critique of the IRB process. Social scientists frequently complain about IRB’s failing to understand their studies, but Gray suggested it was time to move beyond the idea of just adding more social scientists to the panel. Rather, she said it was time to question certain underlying assumptions of IRB’s and whether they even make sense for social science. It’s not that Gray doesn’t think there are ethical issues researchers must consider, but whether the medical model can ever work for projects that don’t follow the pattern of having a hypothesis designed to lead to the dispassionate creation of generalizable knowledge.

Gray said that “IRB fatigue” is discouraging researchers — especially graduate students — from even trying to get projects approved.

I can't say that I'd want a graduate student in any field asking minors about their sex lives without some kind of supervision. But it sounds as though UC San Diego's IRB lacked the expertise to give Gray meaningful guidance. That lack of expertise is built into the system of local IRB review, and it can produce decisions that are too lax as well as those that are too strict.

Evolving Research

Two recent blog postings raise the question of the awkward fit between IRBs' insistence on protocol review and research, such as ethnography and oral history, which begins with no set protocol.

University of Winnipeg professor of politics Christopher Leo poses and then answers positively the question, "Does the Ethics Bureaucracy Pose a Threat to Critical Research?" This provocative essay raises so many key questions that I plan to return to it in future postings. For now, note Leo's description about the evolving nature of his work:

Many researchers concerned with politics and policy stay in regular touch with politicians and public servants and, in the process, ask them questions the answers to which may well be used in future publications. That is an essential part of the research process because regular contact with well-informed people makes it possible for researchers to stay abreast of events and identify important issues as they arise.

So when does a query become a research question and a conversation an interview that requires ethics review? The guidelines are little help in answering that question, but, if we take them literally, they would appear to have taken from university researchers a right that every ordinary citizen enjoys, namely that of picking up the phone and talking to a politician or public servant without applying for bureaucratic permission to do so.

Meanwhile, over at Savage Minds, Alex Golub, assistant professor of anthropology at the University of Hawai’i Manoa (aka Rex), touches on the same question in the posting, "Using informed consent forms in fieldwork." He writes, "In some cases I interviewed people I’d known for years. I’d have breakfast or lunch with them and then schedule the official ‘interview’ for later on in the week."

IRBs that rigidly follow a biomedical model for ethics may insist that research protocols be spelled out in advance--even demanding sample questions. Such demands are inappropriate for the kind of work described by Leo and Golub: keeping in touch with knowledgeable people over a period of time.

An alternative appears in the University of Pennsylvania's Policy Regarding Human Subject Research in the Sociobehavioral Sciences. That policy accommodates such projects by freeing researchers from the requirement to submit "a fixed research protocol":

Evolving Research

Evolving research is a class of research in the sociobehavioral sciences in which the questions that are posed evolve in the course of investigation. An example is ethnography, where research questions may only be clarified after a period of observation and where current findings drive the next steps in the study. This class of research typically involves studying human behavior in non experimental settings, with or without active participation by the investigator; but it can also occur in more structured observational settings (e.g., oral histories, focus groups). In specific cases, such research does not pose more than minimal risk to human subjects and is considered to be “exempt from review,” as stated below. An approved mechanism is necessary for presenting to the IRB a research protocol that will evolve in the course of investigation. This policy institutes such a mechanism via certification.

4a. Research involving only non-interventionist observation of behavior occurring in public (including domains of the Internet clearly intended to be publicly accessible), for which no identifying information is recorded, is exempt from review.

4b. Investigators are allowed to use their certification, as per policy item 1, as a reference for describing evolving research activities to the IRB in lieu of a fixed research protocol.

This policy eliminates the need for investigators doing evolving research to spell out the details of a dynamic research protocol. The IRB can be assured that the research will be conducted in an ethically appropriate fashion, with full protection of human subjects, when certified investigators attest that their pre-registered research plan will be conducted within the ethical framework laid out in the training program for which they are certified.

In other words, if you have shown you know what you are doing, you don't have to get the IRB's approval for specific questions or topics.

I must note, however, that the Penn policy includes this disclaimer: "Note that different studies by the same investigator(s) must be submitted to the IRB as separate research protocols. These must not be viewed as a single study evolving from one investigation into another." I wonder what the Penn IRB would do with someone like Leo or Golub, who has the audacity to keep in touch with people for years.

Sunday, August 12, 2007

How Oral History Really Works

In "'If I See Some of This in Writing, I’m Going to Shoot You': Reluctant Narrators, Taboo Topics, and the Ethical Dilemmas of the Oral Historian," Oral History Review 34 (2007): 71-93, Tracy E. K’Meyer and A. Glenn Crothers present some of the challenges faced by oral historians in determining how much deference to give to a narrator's wishes. They describe a series of interviews they conducted with Marguerite Davis Stewart, a World War II Red Cross veteran, who contacted the Oral History Center at the University of Louisville offering to tell her stories.

Over three months, the interviewers recorded 32 hours of conversation, and found themselves wrestling with a number of questions. For example, they had to decide how much they should credit Stewart's dubious claim that she didn't think about race, how hard to press her for details of important but sensitive topics like her divorce, and how seriously to take Stewart's jests about not wanting some stories to be recorded. And since Stewart was blind and confined to a wheelchair, the interviewers had to decide how much time they could devote to helping her in daily life, and when to call in a qualified social worker.

Though the article does not mention IRB review, it suggests the futility of such review in resolving the real questions likely to confront oral history interviewers. It shows that the hard questions were not present at the start of the process but emerged only well after the point at which an IRB would have approved, modified, or rejected a proposal, and that the toughest questions were highly specific to the narrator. Thus, K'Meyer and Crothers write,

Conflict arose when, over time, Stewart sought our commitment to write a book according to her vision and outline. By that point in the interview process it had become clear to us that there would not be sufficient documentary resources to supplement her oral history and support a book-length manuscript. More important, because of her resistance there were gaps in the story that could not be filled. In short, we explained to her on frequent occasions that we could not write her book. We did agree to fulfill the original goal, to help her record the story, and to put an edited form of the transcript into the library for public use, organized according to the themes and chapters she identified. In effect, we promised separate products: her story deposited in the library and our interpretations in our academic work.

Even the most aggressive IRBs have not--as far as I know--demanded to review oral histories one narrator at a time, so they could not police such idiosyncratic concerns.

Finally, the questions raised in this article do not have clear right or wrong answers. It would be dreadful if an IRB could forbid the research or punish the interviewers because its members did not like the choices the interviewers made.

Saturday, August 11, 2007

James Weinstein's Anti-Intellectualism

In his contribution to the Northwestern symposium, "Institutional Review Boards and the Constitution," Professor James Weinstein defends the constitutionality of IRB review for both biomedical and non-biomedical research. Though he does not quite say that IRB review of journalism would be unconstitutional, he is clearly troubled by it, so he needs to distinguish journalism from the social sciences. He does so by denigrating scholarly research as largely irrelevant to democracy:

Although there is obviously a considerable area of overlap, social science research and journalism have distinct purposes and perform different societal functions. The primary purpose of research, at least at universities, is to discover knowledge both for its own sake and for the betterment of human kind, not to improve the practice of democracy by supplying the public with information to facilitate the “voting of wise decisions.” While some of this knowledge will facilitate public as well as private decisionmaking, much will not. And while academic researchers occasionally engage in research with the specific purpose of producing information to persuade others on matters of public concern, such ideological advocacy is not the primary ethos of academic research and, indeed, can be in tension with the primary academic goal of discovering truth regardless of its political or social implication. In contrast, a primary purpose of journalism is to inform people about matters of public concern and to act as a “watchdog” against governmental abuse and official malfeasance. Similarly, the function of the editorial side of journalism is precisely to influence public opinion.

This essential difference between social science research and journalism is reflected in the publications through which these two professions communicate with the public—scholarly journals and academic books versus newspapers and magazines. Scholarly publications are usually aimed at a narrow, specialized audience and address people in their professional capacity; journalistic media, in contrast, are typically aimed at a more general audience, often addressing people in their capacity as citizens (as well as consumers). Moreover, not only do general circulation newspapers and magazines usually contain an editorial page in which the publisher and editors try to persuade people on matters of public concern, these publications also often have an opinion page in which members of the public are invited to do the same. In contrast, while some scholarly publications publish editorials and even more commonly letters from scholars in response to an article or some academic issue, these publications do not generally solicit the views of the general public. Accordingly, although scholarly journals and books, on the one hand, and newspapers and magazines, on the other, both form part of the “structural skeleton that is necessary for public discourse to serve the constitutional value of democracy,” it is the newspaper and magazines that form the “backbone” of this structure. Scholarly publications, in contrast, contribute less central support for the structure (a “shin bone” perhaps, to continue Post’s orthopedic metaphor).

The Court might therefore take a more refined approach to a law that imposed IRB regulations directly upon social scientists at research institutions than it would to similar restrictions imposed on journalists. Borrowing a page from its defamation jurisprudence, the Court might hold that to the extent the interviews related to matters of public concern such as attitudes towards homosexuality, abortion, or the war in Iraq, the regulations could not be applied. With respect to interviews on subjects not of public concern, such as the language used by waiters and waitresses or whether people can match dogs to their owners, the Court might well apply a lesser degree of scrutiny. This distinction would reflect the more important role played by the press in our democracy. It would also take account of the related fact that unlike the typical journalistic interview or survey, many social science interviews and surveys will not contribute to democratic self-governance.

It is true that avoiding an overly-refined doctrine that is difficult to administer argues for deeming all interviewing techniques directed towards producing public information a unified medium warranting the same high level of First Amendment protection. Still, the lesser contribution that social science interviews generally make to the “constitutional value of democracy” suggests that regulations that burden these communications should trigger something less than the exacting scrutiny that would be applied if IRB regulations were applied to journalists’ communication with their sources.

The idea that the "typical journalistic interview or survey" involves great questions of war and peace and commerce and culture, while the typical social-science interview or survey involves dog owners, is, I suppose, a testable hypothesis, though not one that Weinstein tests by sampling news stories and journal articles. Absent such evidence, it is an anti-intellectual slur.

Weinstein concedes that "funding issues aside, it is possible that the Court would find IRB regulations unconstitutional as applied to research using only traditional interview techniques," especially if they were imposed on survey or interview research "on matters of public concern." His tolerance for IRBs must therefore rest on his belief that such research is so rare that it's OK if it is caught up in a system designed to protect human subjects from useless research.

I will let Dr. Woodrow Wilson reply:

There is the statesmanship of thought and there is the statesmanship of action. The student of political science must furnish the first, out of his full store of truth, discovered by patient inquiry, dispassionate exposition, fearless analysis, and frank inference. He must spread a dragnet for all the facts, and must then look upon them steadily and look upon them whole. It is only thus that he can enrich the thinking and clarify the vision of the statesman of action, who has no time for patient inquiry, who must be found in his facts before he can apply them in law and policy, who must have the stuff of truth for his conscience and his resolution to rely on. . .

The man who has the time, the discrimination, and the sagacity to collect and comprehend the principal facts and the man who must act upon them must draw near to one another and feel that they are engaged in a common enterprise. The student must look upon his studies more like a human being and a man of action, and the man of action must approach his conclusions more like a student.

[Woodrow Wilson, "The Law and the Facts: Presidential Address, Seventh Annual Meeting of the American Political Science Association," The American Political Science Review 5 (February 1911), 8.]

Friday, August 10, 2007

Symposium on Censorship and Institutional Review Boards

The long-awaited Northwestern University Law Review Symposium on Censorship and Institutional Review Boards has hit the Web. I have read and blogged about some of these articles in their SSRN incarnations, but I look forward to reading the rest.

Insider-Outsider-Down Under

David Hunter kindly alerted me to "An inside-outsider’s view of Human Research Ethics Review," posted on Culture Matters, a blog hosted by the Department of Anthropology at Macquarie University, Sydney, Australia. The posting is anonymous, but the author describes himself "as a sitting member of Macquarie University’s review board for human research," which I think identifies him as Greg Downey. Alas, another failed attempt at keeping an informant anonymous.

Downey's [?] essay is a rebuttal to Jack Katz, "Ethical Escape Routes for Underground Ethnographers," American Ethnologist 33 (2006): 499-506. In that essay, Katz argues that protocol-review, the basic tool of ethics committees, is inappropriate for ethnographic fieldwork because fieldwork is so unpredictable. As Katz puts it

when researchers participate in naturally occurring social life and write field notes on what they observe, they often encounter people and behavior they cannot anticipate. Indeed, one of the strongest reasons for conducting participant-observation research is the view that the current state of knowledge, as shaped by fixed-design research that prespecifies the kind of people to be studied and the ways to study them (sampling designs, formalized questions and protocols, and time- and space-delimited situations in which to observe), is artificial, a product not of the subjects’ social lives but of prejudice.

He also notes that some ethnographers draw from past experiences and observations of everyday life, neither of which can be reviewed by an ethics committee. He then suggests ways that researchers and universities might escape the regulatory boundaries that seem to require prior review of research.

Downey [?] seeks to rebut this argument by insisting that prior review can improve the ethical content of anthropological research. He writes,

The ethics review process should not be avoided, escaped, or ‘exempted’ away. Rather, ethics review boards can be educated about ethnographic research methods and encouraged to produce clear standards for our research. I worry that too many anthropologists inadvertently suggest that ‘ethics’ is a bureaucratic hoop, that the ‘politics of representation’ is a far more worthy consideration than the nuts and bolts of evaluating risk, minimizing dangers to participants (including researchers), balancing public interest against risks that can’t be eliminated, and thinking hard about our relationships to our subjects, our collaborators, the field, the public at large, our home institutions, and those who support our work.

This is unresponsive to Katz's critique. If anthropologists lack "clear standards for our research," by all means they should develop them, with or without the help of scholars in other fields. But I don't see how ethics committees can contribute to this effort by demanding from researchers that they get "preauthorization for observations and interviews," as Katz puts it. That's just a demand for information that doesn't exist.

Downey [?] also writes,

Katz’s suggestion that decisions be made public—for many reasons—seems to me an excellent one, but that can happen on the departmental level even without university boards being involved. That is, each student need not invent the application anew every time. The goal is not vacuous or self-righteous ‘boilerplate language’ for ethics applications, as one recent anthropology blogger suggested, but a legitimate attempt by the anthropology community to think about effective techniques for recurring issues such as oral informed consent, naturalistic observation in heavily trafficked settings, the use of photographs, the protection of populations under dangerous regimes, and the ethical requirements on those learning of illegal activity.

OK, so we have some movement toward compromise and consensus. I would like to suggest that if Downey [?] believes that departments are the appropriate organs to publicize ethics-committee success stories, the first department to do so should be the Department of Anthropology at Macquarie University. A listing of proposed ethnography projects and the improvements made to them by the Macquarie ethics committee could prove a model for researchers around the world.

Finally, I thank Downey [?] for drawing my attention to Australia's National Statement on Ethical Conduct in Human Research. This document is so shocking that I will save comments on it until I have more time.

Wednesday, August 8, 2007

To the Historian All Men Are Dead

Blogger's note: Since I find myself in dialogue with a bioethicist working in the United Kingdom (see IRBs vs. Departmental Review and its many comments), now seems like a good time to present the views of Sir John Kaye, a British historian of the nineteenth century who used correspondence and interviews, as well as documents, in his work. I first read this passage as an impressionable college freshman, and it shaped my views of what historians do and why.


Sir John Kaye, "Preface," 1870.

From Kaye's and Malleson's History of the Indian Mutiny of 1857-8 (1897-1898; reprint, Westport, Connecticut: Greenwood, 1971), vol. 2, xi-xiii.

Dealing with the large mass of facts, which are reproduced in the chapters now published, and in those which, though written, I have been compelled to reserve for future publication, I have consulted and collated vast piles of contemporary correspondence, and entered largely into communication, by personal intercourse or by letter, with men who have been individually connected with the events described. For every page published in this volume some ten pages have been written and compiled in aid of the narrative; and if I have failed in the one great object of my ambition, to tell the truth, without exaggeration on the one hand or reservation on the other, it has not been for want of earnest and laborious inquiry or of conscientious endeavour to lay before the public an honest exposition of the historical facts as they have been unfolded before me.

Still it is probable that the accuracy of some of the details in this volume, especially those of personal incident, may be questioned, perhaps contradicted, notwithstanding, I was about to say, all the care I have taken to investigate them, but I believe that I should rather say "by reason of that very care." Such questionings or contradictions should not be too readily accepted; for although the authority of the questioner may be good, there may be still better authority on the other side. I have often had to choose between very conflicting statements; and I have sometimes found my informants to be wrong, though apparently with the best opportunities of being right, and have been compelled to reject, as convincing proof, even the overwhelming assertion, "But, I was there." Men who are personally engaged in stirring events are often too much occupied to know what is going on beyond the little spot of ground which holds them at the time, and often from this restricted stand-point they see through a glass darkly. It is hard to disbelieve a man of honour when he tells you what he himself did; but every writer, long engaged in historical inquiry, has had before him instances in which men, after even a brief lapse of time, have confounded in their minds the thought of doing, or the intent to do, a certain thing, with the fact of having actually done it. Indeed, in the commonest affairs of daily life, we often find the intent mistaken for the act, in the retrospect.

The case of Captain Rosser's alleged offer to take a Squadron of Dragoons and a troop of Horse Artillery to Dehli on the night of the 10th of May . . . may be regarded as an instance of this confusion. I could cite other instances. One will suffice:--a military officer of high rank, of stainless honour, with a great historical reputation, invited me some years ago to meet him, for the express purpose of making to me a most important statement, with reference to one of the most interesting episodes of the Sipáhi War. The statement was a very striking one; and I was referred, in confirmation of it, to another officer, who has since become illustrious in our national history. Immediately on leaving my informant, I wrote down as nearly as possible his very words. It was not until after his death that I was able orally to consult the friend to whom he had referred me, as being personally cognisant of the alleged fact--the only witness, indeed, of the scene described. The answer was that he had heard the story before, but that nothing of the kind had ever happened. The asserted incident was one, as I ventured to tell the man who had described it to me at the time, that did not cast additional lustre on his reputation; and it would have been obvious, even if he had rejoiced in a less unblemished reputation, that it was not for self-glorification, but in obedience to an irrepressible desire to declare the truth, that he told me what afterwards appeared to be not an accomplished fact, but an intention unfulfilled. Experiences of this kind render the historical inquirer very sceptical even of information supposed to be "on the best possible authority." Truly, it is very disheartening to find that the nearer one approaches the fountain-head of truth, the further off we may find ourselves from it.

But, notwithstanding such discouraging instances of the difficulty of extracting the truth, even from the testimony of truthful men, who have been actors in the scenes to be described, I cannot but admit the general value of such testimony to the writer of contemporary history. And, indeed, there need be some advantages in writing of events still fresh in the memory of men to compensate for its manifest disadvantages. These disadvantages, however, ought always to be felt by the writer rather than by the reader. It has been often said to me, in reply to my inquiries, "Yes, it is perfectly true. But these men are still living, and the truth cannot be told." To this my answer has been: "To the historian all men are dead." If a writer of contemporary history is not prepared to treat the living and the dead alike--to speak as freely and as truthfully of the former as of the latter, with no more reservation in the one case than in the other--he has altogether mistaken his vocation, and should look for a subject in prehistoric times. There are some actors in the scenes here described of whom I do not know whether they be living or whether they be dead. Some have passed away from the sphere of worldly exploits whilst this volume has been slowly taking shape beneath my pen. But if this has in any way influenced the character of my writing, it has only been by imparting increased tenderness to my judgment of men who can no longer defend themselves or explain their conduct to the world. Even this offence, if it be one against historical truth, I am not conscious of having actually committed.

Friday, August 3, 2007

Study Finds IRBs Make Consent Forms Harder to Read

In "Human-Subjects Research: Trial and Error," (Nature, 2 August 2007), Heidi Ledford writes:

When [physician William] Burman, of the University of Colorado in Denver, joined in two studies run by the Tuberculosis Trials Consortium, he knew that the consent forms needed to cater to people with an eighth-grade reading level (comprehensible to an educated 13-year-old). The trials involved multiple institutions, and the forms were sent to 39 institutional review boards (IRBs) — committees designed to determine whether a proposed experiment is ethically sound. The final approvals came in 346 days later, but what the IRBs sent back, Burman found disturbing.

"The consent forms were longer. The language was more complex," Burman says. "And errors were inserted at a surprising frequency." In one case, a potential negative side effect of the treatment had been accidentally edited out. Burman responded to the problem as any researcher would: he studied it. He had an independent panel review the changes. The reviewers found that 85% of the changes did not affect the meaning of the consent forms, but that the average reading level had jumped from that of an eighth grader to that of a twelfth grader (around 17 years old). His results confirmed something he'd suspected for some time. "I started to think about what was happening and it just seemed like the system was flawed." It was time to change the system.

Though the article (and the accompanying editorial, "Board Games") does not mention non-biomedical research, it does highlight the problem of relying on local IRBs, which are essentially committees of amateurs, to handle specialized tasks like drafting consent forms and determining procedures for confidentiality. See In Search of Expertise.

Thursday, August 2, 2007

IRBs vs. Departmental Review

In comments on this blog's introduction, bioethicist David Hunter of the University of Ulster asked me about my preferred alternative to IRB review, and I mentioned my hopes for departmental review (hopes shared by the AAUP). Lest our conversation get lost in the comments, I am moving it to this new posting:


I'd disagree on departmental review being best for two reasons.

1. While a committee should have some knowledge and expertise in the area of the project, too much expertise and it becomes too close to the subject matter. This can mean that it misses significant ethical issues because they are standard practice within a specific discipline. To give one example, psychologists often want to give part of their students' grade (10%) for being involved in their research. Most RECs I am involved in don't allow this practice because it is felt to be unduly coercive. I imagine if a REC/IRB was entirely composed of psychologists they may disagree.

2. It is important for a REC to be substantially independent from the researcher, but this doesn't happen in departmental review; instead the REC has an interest in the research being allowed to go ahead.

My university presently runs on a departmental review model, and while I can't name names I have personally seen examples of both of the above issues coming up.

I've written about these problems here:
Hunter, D. 'An alternative model for research ethics review at UK universities' Research Ethics Review. (2006) Vol 2, No 2, 47-51.
(Which unfortunately isn't available online)

and here: Hunter, D. 'Proportional Ethical Review and the Identification of Ethical Issues,' Journal of Medical Ethics (2007); 33:241-245.

I certainly agree with you that IRBs shouldn't be dominated by medics and medical concerns, they instead should have a wide range of representation. I'm inclined to think though that the baseline ethical issues are similar and while different rules may be appropriate for different disciplines they flow out of the same background.

In terms of examples here are a few, I can't be too specific with details for reasons of confidentiality.

1. Study of sexual attitudes in school children. Asked very probing questions, as one might expect, but didn't intend to get parental consent to carry out the research. A parallel can be found here: India Research Ethics Scandal: Students made guinea pigs in sex study.
No consideration had been given to what might have been done if there was disclosure of harmful behaviour etc.

2. Historian was going to civil war stricken country to interview dissidents about the war, intended to publish identifying comments (without getting consent for this) which were likely to be highly critical of the current regime.

3. Social scientist wanted to understand children's attitudes towards a particular topic. As a blind, so that the participants would not know the questions they wanted answers to, they proposed to use the Beck Depression Inventory. This contains questions about self-harm and future worth, and was potentially very distressing; not at all appropriate as a blind.

4. Student wished to conduct interviews with employees of a company on an issue that could significantly damage the company's profitability. No consideration was given to how best to report this information to minimise harm to the company.

I'm inclined to think that any sort of research involving humans can lead to harm, whether that is physical, social, financial, psychological, or so on. As such, the benefits and the risks need to be balanced, and it needs to be considered how to minimise that harm. That, I take it, is the job of the researcher. However, having sat on RECs for a while, it is a job that sometimes the researchers fail at spectacularly; then it becomes the job of the IRB/REC. The difficulty is how, without full review by a properly constituted REC, do you identify those applications that have serious ethical issues?


Thanks for these examples.

First, let me state that I am primarily interested in projects that fit Pattullo's proposal of 1979: “There should be no requirement for prior review of research utilizing legally competent subjects if that research involves neither deceit, nor intrusion upon the subject’s person, nor denial or withholding of accustomed or necessary resources.” Under this formula, the projects involving children (who are not legally competent) and the project involving undergraduates (whose course credit is an accustomed or necessary resource) would still be subject to review.

That said, I have little confidence that IRBs are the right tool to review such research. As for child research, under U.S. regulations, and, I believe, the rules of most universities, the studies could be approved by three IRB members wholly lacking in expertise on child development. (The regulations encourage but do not require the inclusion of one or more experts when vulnerable populations are involved.) Were I the parent of a child involved in such studies (and I'm proud to say that both my children have furthered the cause of science by participating in language studies), I would greatly prefer that the protocols be reviewed not by a human subjects committee, but by a child subjects committee composed mostly or entirely of people expert in child research.

For the psychology course and the history project, the real question is whether a departmental committee can be trusted to enforce its own discipline's ethical code. The code of the British Psychological Society forbids pressuring students to participate in an experiment. And the ethical guidelines of the Oral History Society require interviewers "to inform the interviewee of the arrangements to be made for the custody and preservation of the interview and accompanying material, both immediately and in the future, and to indicate any use to which the interview is likely to be put (for example research, education use, transcription, publication, broadcasting)." So yes, those sound like unethical projects.

Perhaps some departments would fail to correct these mistakes, just as some IRBs and RECs get them wrong. At some level this is an empirical question that cannot be answered due to the uniform imposition of IRB review. In the U.S., at least one university (the University of Illinois) had a system of departmental review in psychology that worked without complaint until it was crushed by federal regulation in 1981. With the federal government imposing the same rules nationwide, we can only guess about how well alternatives would work.

Moreover, departmental review would allow committees to bring in considerations unknown to more general ethics committees. For example, the British and American oral history codes require attention to preservation of and access to recordings, something that an IRB/REC is unlikely to ask about.

I would also add that something close to departmental review is typical of the standard IRB, i.e., one in a hospital or medical school. It's true that the U.S. regulations require "at least one member whose primary concerns are in nonscientific areas" and "at least one member who is not otherwise affiliated with the institution and who is not part of the immediate family of a person who is affiliated with the institution." But the rest of the members can be biomedical researchers of one stripe or another. If that's good enough for the doctors, how about letting each social science discipline form an IRB of its members, with a community member and a non-researcher thrown in?

Still, if IRBs/RECs limited themselves to holding researchers up to the standards of the researchers' own academic discipline, I wouldn't be complaining.

Where we really disagree, then, is on project 4. You write that a "Student wished to conduct interviews with employees of a company on an issue that could significantly damage the company's profitability. No consideration was given to how to best report this information to minimise harm to the company."

That sounds a lot like this case:

Kobi Alexander's stellar business career began to unravel in early March with a call from a reporter asking why his stock options had often been granted at the bottom of sharp dips in the stock price of the telecom company he headed, Comverse Technology Inc.

According to an affidavit by a Federal Bureau of Investigation agent, unsealed in Brooklyn, N.Y., the call to a Comverse director set off a furious chain of events inside the company that culminated yesterday in criminal charges against Mr. Alexander and two other former executives. Federal authorities alleged the trio were key players in a decade-long fraudulent scheme to manipulate the company's stock options to enrich themselves and other employees.

After the March 3 phone call from a Wall Street Journal reporter, the FBI affidavit said, Mr. Alexander and the other two executives, former chief financial officer David Kreinberg and former senior general counsel William F. Sorin, attempted to hide the scheme. Their actions allegedly included lying to a company lawyer, misleading auditors and attempting to alter computer records to hide a secret options-related slush fund, originally nicknamed "I.M. Fanton." It wasn't until a dramatic series of confessions later in March, the affidavit said, that the executives admitted having backdated options. The trio resigned in May.

That's an excerpt from Charles Forelle and James Bandler, "Dating Game -- Stock-Options Criminal Charge: Slush Fund and Fake Employees," Wall Street Journal, 10 August 2006. As far as I can tell, Forelle and Bandler made no effort to minimize the harms to the companies they studied or the executives they interviewed. Their "Perfect Payday" series won the 2007 Pulitzer Prize for public service.

Your insistence that an interviewer minimize harm is a good example of an effort to impose medical ethics on non-medical research, and a good reason to get RECs away from social science.