Friday, December 25, 2009

After Human Terrain, Will AAA Debate IRBs?

Earlier this month, the American Anthropological Association's Commission on the Engagement of Anthropology with the US Security and Intelligence Communities (CEAUSSIC) issued its Final Report on The Army's Human Terrain System Proof of Concept Program.

The report argues that the Human Terrain System (HTS) combines scholarly research and military information-gathering in a way that muddles ethical issues:

HTS ethnographers attempt to juggle dual loyalties both to civilian populations and to their military units, under conditions which almost inevitably lead to conflicting demands. Potentially conflicting demands (between serving occupied, studied populations, and serving the needs of the military with whom [Human Terrain Teams] embed) almost necessitates that HTS social scientists choose between multiple interests in ways that stand to undermine basic ethical principles that govern research with human subjects among anthropologists and among government researchers. (52)

Significantly, the report more or less recognizes that the choice of interests could go either way. One possibility would be to bring HTS wholly into the realm of scholarly research, with all of its ethical codes and legal regulations, including IRBs:

If HTS carries out a research function as advertised, and if it encourages its social scientists to use ethical research practices, then it should comply with 32CFR219, regulations issued by the Office of the Secretary of Defense (OSD) that address human subjects protection. (47)

Alternatively, the report hints that the real problem is merely a poor choice of words. "We should consider the work of HTTs to be sharply different, in its goals, from conventional disciplinary ethnographic pursuits and not to be 'ethnography' in any credible sense." (54) If HTS were re-branded to avoid the terms "anthropology," "ethnography," and "social science," and instead present itself as a counterinsurgency program pure and simple, then--it seems--CEAUSSIC would not expect it to follow either the AAA ethics codes or the Common Rule. All of this points to the need for clear definitions when discussing ethical and legal obligations.

For the purposes of this blog, a more interesting document is the October 13 blog post, "Why not Mandate Ethics Education for Professional Training of Anthropologists?" by CEAUSSIC member Carolyn Fluehr-Lobban.

Fluehr-Lobban calls for "ethics education as a mandatory part of anthropology curricula." As she describes it,

A future standard ethics curriculum would minimally include a history of the discipline and ethics– this would help to correct misconstruing history, as has been the case in security engagement polemics where a standard of "voluntary informed consent" is often cited as 'traditional' or normative when, in fact, language on informed consent appears for the first time in the 1998 AAA code.  It would also include case studies representing a realistic spectrum of scenarios and dilemmas where mixed outcomes are the likely norm, and clear positive or negative outcomes are likely exceptions.

But while Fluehr-Lobban seems open to questioning such standards as "informed consent" and to exploring the nuances of real-world research, she is dismissive of comparable discussion of the legitimacy of IRBs:

There is still a tradition of resistance to the annoyance of having to go before an IRB. Part of this history rests with anthropology as the study of "the other," of "subjects," using "informants," whereby the anthropologist is ideally unfettered with unlimited freedom to conduct research. But, clearly, this is not the world we live in. As standard practice, all anthropological research is, or should be, subject to external review.

In other words, Fluehr-Lobban suggests that anyone who doesn't like IRBs wants unlimited freedom to study "the other." This is an insult to the many thoughtful critics who, over the decades, have shown that IRBs and their attendant apparatus can be a barrier to true ethical reflection. It is also an indicator of how entrenched the belief in IRBs has become within the AAA leadership. But has the organization ever really debated whether IRBs are the best way to promote its ethical standards? If not, CEAUSSIC should seize this opportunity for such a discussion within the profession.

Tuesday, December 22, 2009

Grad Student Needed 80 IRB Approvals

In an account apparently posted in July 2008, Jennifer M. Purcell describes what she went through to get approval for her dissertation research in education at the University of South Florida.

Purcell was investigating the apparent disparity between the knowledge and skills needed by college faculty, and the knowledge and skills taught in doctoral programs. She wanted to ask college professors what they thought faculty and students should know and who should teach it. A typical question asked how important these professors considered the ability to "appreciate the history and purposes of higher education." (Jennifer M. Purcell, "Perceptions of Senior Faculty Concerning Doctoral Student Preparation For Faculty Roles," Ph.D. diss., University of South Florida, 2007.)

Saturday, December 19, 2009

Is Documentary Film Human Subjects Research?

Kimberlianne Podlas, a lawyer and an assistant professor of media studies at the University of North Carolina, Greensboro, argues that "virtually all journalistic inquiry and nonfiction filmmaking . . . are not subject to IRB jurisdiction." ("This Film Has Been Rated 'Approved': Are Documentary Films Subject To Institutional Review Board Approval and Federal 'Human Subjects Research' Rules?")

To reach this result, Podlas argues that documentary films fail one or more of five tests necessary to trigger IRB jurisdiction:

First, the general type of undertaking must be one that is directly regulated by a federal agency. Second, the activity must be "human subjects research"; this requires the undertaking to conform to the regulatory definition of "research." Third, that research must collect information from or about living individuals. Fourth, that information must be either "data" or "private information." And finally, the "human subjects research" must be either biomedical or behavioral.

Let's take these in order.

Saturday, December 12, 2009


Google Alerts uncovered an April 2008 memo from the Army's chief of military history, explaining that while Army historians are obliged to obtain the informed consent of anyone they interview, the U.S. Army's Historical Program does not consider oral history to be under IRB purview:

Given that oral history is the collection of personal and unique insights on events, it does not fit the definition of scientific research as outlined in 45 Code of Federal Regulations 46 that is at the center of the Department of Health and Human Services regulations of the issue. Oral histories are not a "systematic" attempt to gather data from "human subjects" that can be used in any way to contribute to "generalizable" knowledge. They are therefore exempt from HRPP oversight and IRBs.

Happy Birthday, Institutional Review Blog!

Wednesday, December 2, 2009

Survey: Most IRBs Lack Sociologists

The Western Massachusetts Institute for Social Research kindly alerted me to its survey of sociologists, conducted in the summer of 2009. Of the 98 respondents who have conducted research in the past five years, 90 reported that they had undergone IRB review.

The survey found that IRBs are more likely than sociologists to judge a study risky. Only 13 respondents "said that they believe that some harm could have come to respondents as a result of their involvement in the research," but 20 reported that a member of the IRB believed there was such a risk.

This is not surprising. The premise of IRB review is that committees are better able to flag potential harms than are individual researchers, so the higher levels of risk seen by the IRBs could indicate that they are working well, or that they are overestimating the risks of research.

To distinguish the two possibilities, it would help to know why the IRB members saw risk. In 1979, for example, Lauren Seiler and James Murtha showed that IRB chairs commonly insisted on modifications even though most had never heard of harm coming to a participant in sociology research. [Lauren H. Seiler and James M. Murtha, "Federal Regulation of Social Research," Freedom at Issue, Nov-Dec 1979.] Is that still the case?

Another finding of the Western Massachusetts survey is that a minority (44 percent) of respondents reported that the IRB that reviewed their research included a sociologist. Federal regulations require IRBs to include members "with varying backgrounds to promote complete and adequate review of research activities commonly conducted by the institution." This was one of the few protections offered to social scientists worried that their research would be subject to the whims of people outside their field. But it appears that many or most IRBs have failed to meet this standard.

Friday, November 20, 2009

Draft TCPS Allows Critical Inquiry of Aboriginal Governments

In 2006, Canadian historian Nancy Janovicek complained that ethics policies designed to protect Aboriginal peoples could allow Aboriginal governments to silence their critics by denying researchers permission to speak with them.

A new draft revised version of Chapter 9 of the Tri-Council Policy Statement addresses that problem. While it calls for researchers to secure the permission of Aboriginal governments for most types of research, it recognizes that this may be inappropriate when those governments themselves are being critically examined:

Article 9.7.   Research that critically examines the conduct of public institutions or persons in authority may do so ethically, notwithstanding the usual requirement, in research involving Aboriginal peoples, of engaging representative leaders.

As I have written before, the draft TCPS is inconsistent in its respect for critical inquiry, and ethics committees may not give sufficient weight to disclaimers like this one. But such statements do give researchers a foothold in arguing for the freedom of inquiry.

Wednesday, November 18, 2009

Brown Pledges IRB Reform

The Brown Daily Herald reports "a series of reforms" at Brown University, intended to "streamline Institutional Review Board procedures." (Sydney Ember, "Reform in the Works for Research Review Board," 13 November 2009.)

The process started in 2007, when Brown faculty complained that IRB operations were inhibiting research, especially by undergraduates. Brown's Research Advisory Board convened a four-person, ad hoc subcommittee to investigate. That subcommittee released a draft report, "Undergraduate Research in the Social Sciences and the Institutional Review Board at Brown" in January 2009.

The January 2009 Report

The draft report found:

Many faculty members from the social sciences report some aspect of the IRB to be or to have been a burden, and a significant fraction feel that IRB practices are having or have had some dampening effect on the quality or availability of undergraduate research opportunities. There is the widely held belief that many social science projects typically consisting of interviews or surveys, have a low intrinsic risk. Most faculty members, however, do recognize the existence of some risk, depending on the nature of the activity, and the need for some type of oversight. Many faculty members believe that the formal and rigidly structured IRB process is not an optimal way to oversee and regulate undergraduate work, due to variable levels of organization, knowledge, and professionalism among the undergraduates, and the special time constraints associated with the undergraduate senior year.

The subcommittee offered three policy options:

A. "Brown adopts IRB review as the standard procedure for undergraduate theses and non-classroom projects dealing with human subjects."

B. "Continue the current system, but clarify several points and communicate the policy more explicitly to avoid faculty and student confusion."

C. "Adopt and communicate a policy in which non-federally-funded undergraduate work is not subject to IRB review, but rather to some other educational and oversight system tailored for undergraduates, to be defined."

It recommended Option B "as the best near-term solution," while keeping C open as a longer-term option.

Brown's Faculty Executive Committee (FEC) received the draft report at its January 2009 meeting. According to the minutes of that meeting, "The FEC was disappointed that the report did not address some of the larger issues. It appears that the definition of research is getting broader so that the IRB has their hand in every aspect of research." As far as I can tell, the report has still not been finalized.

Changes Since January 2009

According to the Daily Herald story, the university's Research Protections Office (RPO) claims that it has implemented many of the recommendations in the report, primarily by updating its website. Since many of the pages on that website are undated, it's hard to know how many have been changed since the release of the draft report in January.

Some of the report's recommendations do seem to have been implemented. For example, the report called for the prominent placement of the information that faculty advisors get to decide whether an undergraduate project needs IRB review. That information is now indeed prominently displayed.

In contrast, the report specifically objected to the assertion that "obvious examples of dissemination [and therefore generalizability] are publication in a scholarly journal, presentation at a professional conference, or placement of a report in a library." It recommended the deletion of the references to conferences and libraries, but as of today, they remain on the RPO website. Given that this was one of the most concrete, immediate proposals of the subcommittee, I suggest that the Daily Herald may have been premature in its announcement of reform.

Erroneous Assertions

Not mentioned in the faculty report are three significant misstatements about federal regulations in the RPO's document, Frequently Asked Questions.

1. "Federal Regulations are clear that it is not up to the investigator alone to determine if a project is exempt."

Federal regulations specify no such thing, as recently reiterated by OHRP.

2. "In certain situations, all involving no more than minimal risk, the IRB can waive the requirement that you obtain the participant's signature on the consent form."

In fact, 45 CFR 46.117(c)(1) also allows IRBs to waive the signature requirement when "the only record linking the subject and the research would be the consent document and the principal risk would be potential harm resulting from a breach of confidentiality." This is true even when the risk is greater than minimal.

3. "As long as your research involves collecting data or information from or about living individuals, you need to have it reviewed by the IRB."

This would put reading a newspaper under the jurisdiction of the IRB. Brown's RPO doesn't really believe this, but it hasn't been careful with its explanations.

Unfinished Business

It's great that a Brown faculty committee has taken a look at IRB operations, and that the administration has made some changes in response. But the report indeed failed to address some of the big issues at stake, and the administration has failed to implement some of the minor reforms suggested ten months ago. This case suggests the difficulty of restoring the principle of faculty governance when it comes to social science research.

Tuesday, November 10, 2009

Princeton IRB Delays Student Research

The Daily Princetonian reports a sociology major's difficulties getting IRB approval for her senior thesis on Brazilian immigrants' changing perceptions of gender roles.

"It's such a long process that it thwarts your field work efforts," [Christine] Vidmar said, noting that the review board does not meet to approve proposals during the summer. "I've been waiting since I got back to school. The first deadline that I could apply for was in October. It's November now, and I still can't officially go do my interviews."

. . .

Vidmar noted that a well-researched thesis may require up to a year of field work, adding that review board hurdles make it more challenging to complete sufficient research. "If you're a senior and you don't have a thesis chosen by the spring of junior year then you can't start field research until November or December of senior year, which is really late," she said. "You need to be in the field in order to know what questions you're going to ask, but in order to be in the field you need to have given the IRB your questions ahead of time."

As horror stories go, this one is mild. But consider the following:

  • While details are lacking, Vidmar's proposed research appears to be exempt under federal regulations; she's just interviewing adults about their perceptions of gender.
  • Princeton demands full board review for "almost all proposals," offering expedited review only on "an exception basis."
  • The IRB does not meet for three and a half months in the summer and requires proposals to be submitted two weeks in advance of the meeting. Hence, a student who misses the late-May deadline must wait almost four months until late September for review.

Put these together, and it seems that Princeton has built a substantial impediment to students who would like to interact with people as a capstone to their undergraduate training but are unable to write detailed research protocols six months in advance.

This is not to say that undergraduates should be sent into the field without training or supervision. But review at the department level, as suggested by Felice Levine and Paula Skedsvold; subcommittee review, as practiced at Macquarie University; or researcher certification, as permitted at the University of Pennsylvania, might well achieve the same or better levels of oversight as full-board review without delaying the work and discouraging the curiosity of a student researcher.

Friday, November 6, 2009

Former IRB Chair Decries Inconsistency

Jim Vander Putten, Associate Professor of Higher Education at the University of Arkansas at Little Rock, kindly alerted me to his essay, "Wanted: Consistency in Social and Behavioral Science Institutional Review Board Practices," Teachers College Record, 14 September 2009.

Vander Putten, who chaired his university's IRB for six years, complains that IRBs fail to make decisions consistently. He accuses them of both under- and over-protection, and then offers two suggestions for reform.

Saturday, October 31, 2009

AAHRPP Retreats from "Objective Data"

In July I posted my comments on AAHRPP's Proposed Revised Standards. At the time, I applauded Element I.5.B for insisting that "based on objective data, the Organization identifies strengths and weaknesses of the Human Research Protection Program, makes improvements, when necessary, and monitors the effectiveness of the improvements."

How disappointing, then, to find that the Final Revised Accreditation Standards omit the phrase about objective data. Are we to infer that AAHRPP considers objective data too difficult a standard, and wants institutions to base their programs on subjective impressions? Of course, most of the IRB regime is based on such guesswork, but I had thought that AAHRPP seeks to raise the level of IRB review.

Wednesday, October 28, 2009

AAHRPP Policy on FWAs Remains Blurry

Back in July, I reported on the AAHRPP's ambiguous position on whether the institutions it accredits may "uncheck the box" on their federalwide assurances.

AAHRPP's new Final Revised Accreditation Standards fail to resolve this ambiguity. They require that an accredited organization apply "its HRPP [Human Research Protection Program] to all research regardless of funding source, type of research, or place of conduct of the research," but do not state whether that HRPP must track federal regulations in all cases.

Interviewed for the October 2009 Report on Research Compliance, AAHRPP President Marjorie Speers had this to say:

We believe an organization must protect participants in all of the human research it conducts, whether or not it receives federal funding . . . As an accrediting organization, we don't have an opinion on whether or not an institution should 'check the box,' on their FWAs to OHRP. If an institution 'checks the box,' then we hold the institution to follow the regulations to all research to which 'the box' applies. If the boxes are unchecked, we hold the organization to have equivalent protections in place for all research.

This does little to clarify matters. What are "equivalent protections" to those specified in federal regulations? Were AAHRPP site visitors correct to tell the University of California "that in order for a human research protection program to be accredited, it must apply the Common Rule and its subparts to all human research at the institution, irrespective of funding"? Or can a university add new categories for exemption and expedited review, as advocated by Lisa Leiden, and consider those equivalent to the federal categories?

Unchecking the box is one of the leading proposed remedies for IRB overreach. It is a pity that AAHRPP has missed this opportunity to address this movement more directly.

Sunday, October 18, 2009

OHRP Grudgingly Okays Self-Exemption

In his May 14 speech, "The Legal Assault on the Common Rule," OHRP director Jerry Menikoff pledged that his office would issue new guidance on the Common Rule exemptions. While OHRP would still recommend that investigators not be empowered to decide for themselves whether their research is exempt, it would also emphasize that "it's just a recommendation. You don't have to follow it."

Five months later, OHRP has kept that promise, issuing a new document entitled FAQs: Exempt Research Determination. While the new guidance continues to recommend that "because of the potential for conflict of interest, investigators not be given the authority to make an independent determination," it makes clear that this is not a regulatory requirement. It even goes further, offering a somewhat detailed scenario that would satisfy regulatory requirements:

For example, an institution might craft a checklist for certain exemption categories, with questions that are easily answered "yes" or "no" by an investigator, with certain answers leading to a clear conclusion that the study is exempt. The institution might allow a researcher to immediately begin a study after having completed such a checklist and filed it, together with accompanying documents, with an appropriate institutional office, without waiting for or requiring any prior review of that filing. Similarly, a web-based form might be created that served the same purpose, allowing the researcher to begin the research immediately after submitting the required information using the web form. In both instances, the key issue would be whether these procedures lead to correct determinations that studies are exempt.

While this is certainly a step in the right direction, it leaves unanswered the question of why OHRP still deprecates such a system of "independent determination." In particular, the new guidance claims that "an institutional policy that allowed investigators to make their own exemption determinations, without additional protections, would likely risk inaccurate determinations." What is the basis of this claim? Has anyone done a study showing that investigators make poor determinations? What does it even mean to make an inaccurate determination, when federal officials themselves appear unable to apply the exemptions to hypothetical projects?

The truth is that OPRR's 1995 guidance was less a response to any misapplication of the exemptions than part of a larger effort to look busy amid national concern about human radiation experiments conducted decades before OPRR's creation. Rather than reconsidering its panicked advice from that period, OHRP has merely acknowledged that its recommendation has no basis in the regulations.

Note: As of this posting (18 October 2009), the bottom of the page with the new guidance reads "Last revised: April 20, 2009." An OHRP representative tells me this is an error, and that the new guidance was in fact posted on 14 October 2009.

Friday, October 9, 2009

Oral History Association Considers Guideline Revisions

At its annual meeting next week, members of the Oral History Association will vote on a set of General Principles for Oral History and Best Practices for Oral History.

The most striking feature of the new guidelines is that they avoid the confusing format of the existing Evaluation Guidelines, which pose dozens of questions without offering the proper answers or explaining whether answers might vary by project. Instead, the new guidelines present clear, declarative statements about how best to conduct oral history.

A more substantive change concerns harm. The existing guidelines state that "interviewers should guard against possible exploitation of interviewees and be sensitive to the ways in which their interviews might be used," and they suggest that interviewers must endeavor "to prevent any exploitation of or harm to interviewees." While the new guidelines offer many specific protections to narrators, they eliminate this vague language of exploitation and harm. And they caution that interviewers cannot guarantee control over the interpretation and presentation of interviews.

More generally, while the guidelines reflect historians' concerns with informed consent, they show the irrelevance to historical research of the biomedical concerns of risk/benefit analysis and equitable selection of subjects. There is more to research ethics than what is contained in the Belmont Report.

Friday, September 4, 2009

Internet Survey Sparks Outrage

Two newly PhD'd "cognitive neuroscientists"--Ogi Ogas and Sai Gaddam--got a book contract (rumored to be quite lucrative) with a popular press to write a book called "Rule 34: What Netporn Teaches Us About The Brain."

As part of their work, they launched an online survey aimed at authors of sexually explicit, online fan fiction. Many people who read the survey found it to be poorly designed and offensive, and anger grew as fan authors came to fear that the book would present erroneous information about their community.

The study was not IRB approved. Because the researchers had graduated from Boston University by the time they launched the survey, BU's IRB has disclaimed any authority over the matter, though it may have asked the researchers to stop presenting themselves as affiliated with the university. While some of the commentary on the event has included discussions about what the IRB might have done had it been presented with the protocol, we can only speculate about whether IRB review would have changed the project for better, worse, or not at all.

Moreover, the chief concern of critics seems not to be that individual survey respondents would be harmed, but that their community as a whole would be harmed by a mass-market book written by inept, ignorant authors. Since the National Commission, policy makers have generally agreed that IRBs should not try to defend whole communities against mischaracterization by scholars.

Still, readers of this blog may be interested in a case where researchers' lack of preparation irreparably alienated the very people whom they wished to study.

For a good introduction, see Alison Macleod's human element blog. Many links follow.

Thursday, August 27, 2009

Survey Seeks Ethnographers' Experiences with Ethics Oversight

Lisa Wynn of Macquarie University has posted an online survey asking for ethnographers' "subjective experience of ethics oversight – their memories of when and how they first became aware of ethics oversight, what they think and feel about it, whether and how they comply with it, and whether they think it makes ethnographic research more ethical or not."

Since I will publish Wynn's findings in the special issue of the Journal of Policy History I am editing, I naturally hope that researchers embrace this opportunity to help us understand the evolving role of IRBs and other ethics oversight bodies in the social sciences.

Note that Wynn defines ethnography broadly to include "any discipline that uses ethnographic research methods, including, but not limited to, anthropology, sociology, political science, history, geography, linguistics, Indigenous studies and area studies."

Sunday, August 16, 2009

Psychologist Blasts "Taxonomic Chaos"

John J. Furedy, Emeritus Professor of Psychology, University of Toronto, has posted, "Implications for Australian Research of the Taxonomic Chaos in the Canadian Bioethics Industry: Après Moi le Deluge," originally presented at a June 2009 ethics conference in Australia. Though Furedy's expertise is in experimental psychology--a field outside the scope of this blog--his paper is relevant to the social sciences and humanities as well.

Furedy, who himself served for decades on ethics committees, argues that Canadian research ethics boards worked pretty well until the early 1990s. But since then bioethicists "have created taxonomic chaos by conflating such distinctions as the distinction between ethical and epistemological issues, or the differences among medical drug evaluation studies, psychological experiments, and sociological surveys."

He offers three specific complaints:

1. "REBs have taken it upon themselves to judge not only whether the proposed research is ethical, but also whether it is scientifically valid. But research-design issues for a particular piece of research require a specific sort of epistemological expertise which most REB members do not possess."

2. "The Tri-Council committee has succeeded in persuading governments and universities to treat a sociological opinion survey and a drug evaluation study, as if they were all part of 'human subject research,' that can be evaluated by the same all-knowing REB, using criteria that may apply to medical treatment-evaluation studies, but that do not apply to most social science research."

3. Though the Tri-Council agreed to drop the term "code" (with its suggestion of mandatory rules), "it was made clear to REBs, that if a researcher did not follow the so-called "statement", the right to apply for funding would be denied, because the REB would refuse to accept the proposed research."

Furedy stresses that all of this is relatively new, but that new scholars may not understand that. He writes,

senior investigators are likely to be able get their research proposals through, even though they know, in their heart of hearts, the significance of distinctions such as the one between ethical and epistemological or research-design issues. But for younger researchers, and especially those who are currently students, the distinction between ethical and epistemological issues has been conflated, and so they lack a memory of how research used to be conducted. So researchers of the future are likely to succumb to the bioethics industry. They will, in the epistemological sense, be corrupted by these developments. Current senior researchers, then, who are in control to-day, are acting like France's Louis XV, who was said to have said "Après moi, le deluge."

As a historian, I applaud both the reference to the Bourbon monarchy and Furedy's emphasis on the need for historical consciousness. If younger researchers understand that scholars did not always operate under today's restrictive conditions, they are more likely to imagine alternatives.

Tuesday, August 11, 2009

UT Knoxville's IRB Joins "Collective Mobbing"

Over at Counterpunch, anthropologist David Price reports on the case of Janice Harper, an anthropologist recently dismissed from the University of Tennessee Knoxville.

According to Price, Harper's troubles began in 2007, when she reported sexual harassment by a colleague. Despite a unanimous vote from her college's tenure and promotion committee and strong outside letters of support, her associate dean opposed her bid for tenure. Worse still, she was accused of mental instability. As Price reports, "like a textbook discussion of collective mobbing behavior, the act of investigation brought more accusations," including student allegations that Harper planned to build a hydrogen bomb. This led to an FBI investigation, which found no criminal activity.

All of this would be bad enough, but then the IRB decided to make it worse. As Price explains,

Dr. Harper says that in early June, the University of Tennessee’s Institutional Review Board (IRB) revoked her standing research clearance on the grounds that the police and FBI investigations and the seizure of her research materials exposed her informants to risks. She was told that she "could not use my data until I had assurance from the FBI and university that I was no longer under surveillance." As these investigations continued, however, they found nothing to indicate that she had made threats or was somehow building a hydrogen bomb. Yet, Dr. Harper was caught in a classic double-bind. Although the FBI did not find that she had done anything wrong, she could not complete her work simply because this investigation had opened her private research records up to FBI scrutiny. This, of course, seriously imperiled her professional activity and development. Last fall, Dr. Harper learned that the faculty in her department voted to deny her tenure application.

Price suggests that the IRB's action was a major element in the collapse of Harper's career. He writes that "the loss of a scholar’s IRB clearance because of an FBI investigation that found no wrong doing ought to be an issue of central importance to such professional organizations, and I would hope that the AAUP, AAA and SFAA would recognize the need for them to weigh-in on this and other procedural aspects of her case. This is a case that impacts us all."

Price complains about the heavy hand of the "National Security State," and he titles his post "Trial by FBI Investigation." But in his account, the FBI was not Harper's biggest problem; it investigated a threat of nuclear terrorism and closed the case with reasonable efficiency. The IRB, by contrast, apparently offered no such resolution. Perhaps Price needs to worry less about the National Security State and more about the Human Subjects Protection State.

[Editor's Note: The Institutional Review Blog opposes letting anthropologists acquire thermonuclear weapons.]

Tuesday, July 28, 2009

A Defense of RECs

Professor Adam Hedgecoe of Cardiff University kindly pointed me to his article, "Research Ethics Review and the Sociological Research Relationship," Sociology 42 (2008): 873-886.

The article is a response to longstanding criticisms of British research ethics committees (RECs), especially those affiliated with the National Health Service (NHS). For example, Sue Richardson and Miriam McMullan surveyed "UK academic social researchers working in Health, or health services researchers, who had experience of using the NHS research ethics process prior to March 2004," in "Research Ethics in the UK: What Can Sociology Learn from Health?," Sociology 41 (2007): 1115-1132. Fifty-one percent of their respondents reported degrading their research design as a result of the committee approval process, while only 32 percent reported making changes for the better. Overall, 59 percent offered negative comments, while only 15 percent offered positive comments. And Richardson and McMullan set a pretty low bar for a positive comment, counting this: "It’s a lot of paperwork but once you know what is required, it’s acceptable." Overall, it seems, NHS RECs are inhibiting the sociological study of health care in the United Kingdom.

Hedgecoe seeks to rebut this impression, based on his observation of three NHS RECs in 2005 and 2006, and some follow-up interviews. He argues that "NHS RECs are not inherently hostile to social science research, especially qualitative research." (882) The double-negative construction of that thesis suggests Hedgecoe's problem: he's trying to prove that something doesn't happen, or at least not as often as ethics-committee critics believe. That's not an easy task, and I congratulate him for trying. But I find the article unpersuasive.

Saturday, July 25, 2009

Oral History Update

Linda Shopes revises and expands her 2007 essay, "Negotiating Institutional Review Boards" in a page on the Oral History Association website.

Shopes, who spent years negotiating with federal officials, now despairs of that route: "After more than a decade of largely ineffective advocacy vis-à-vis OHRP and its predecessor, oral historians are not likely to gain many concessions from federal regulators."

I must agree with Shopes's pessimism. As Michael Carome conceded in October 2008, OHRP has taken action on only a handful of the 147 recommendations put forward by the Secretary's Advisory Committee on Human Research Protections. If regulators cannot or will not implement the recommendations of their own official advisory body, they are unlikely to prove more responsive to the concerns of a group of scholars whom they have ignored for years.

Instead of looking to OHRP for relief, Shopes suggests that "if we must live within a regulatory system that is, at best, incongruent with our ways of working, perhaps the best we can do is work within our individual institutions to develop a measure of mutual accommodation." She notes the progress historians have made at Amherst, Columbia, UMKC, Michigan, and Nebraska. Here's hoping the next update of her essay has a longer list.

Monday, July 20, 2009

U. of California Shouldn't Avoid Debate

In my previous post, AAHRPP and the Unchecked Box, I mentioned a 2008 memo, "'Unchecking the Box' on the FWA – Issues and Guidance," by Rebecca Landes, research policy coordinator at the University of California's Office of Research and Graduate Studies.

The memo deserves a second look, since it shows the tensions within a university administration when faced with challenges from social scientists.

On the one hand, the memo acknowledges the complaints:

There is increasing pressure of late from social science, behavioral and humanities researchers to modify IRB review of research in these disciplines. While there may be good reasons to apply different review standards to different types of research, changes in the application of subject protection rules at UC should be effected through systemwide discussion and consensus. Campus by campus modifications to subject protection rules for nonfederally funded research would lead to confusion and chaos.

I do not see why campus-by-campus modifications in this area should sow more confusion than already exists. I doubt, for example, that UCLA's absurd policies were cleared with other campuses before being promulgated. But at least this portion of the memo calls for "systemwide discussion and consensus."

But continue reading, and you get to a section on "Pros and Cons" of promising to apply federal regulations to nonfunded research. And here's one of the "pros": "Avoids opening up the debate on differing protections for different disciplines, e.g., social science, behavioral and humanities research."

So which is the real goal of the University of California administration: to foster "systemwide discussion," or to avoid opening up a debate? Only one choice is worthy of a great university system.

Friday, July 17, 2009

AAHRPP and the Unchecked Box

Regular readers of this blog likely know that most United States universities submit "federalwide assurances" (FWAs) pledging to abide by the Common Rule for research funded directly by federal agencies that have adopted that rule.

Section 4 of the standard assurance includes an optional pledge to apply either the Common Rule alone, or the Common Rule plus subparts B, C, and D of 45 CFR 46, "to all of its human subjects research regardless of the source of support, except for research that is covered by a separate assurance." Institutions that check this box--as seems to have been common in the past--with one stroke of the pen eliminate one of the major concessions made by federal regulators in 1981, when they promised that non-federally-funded research would not be regulated.

Recently, however, at least 164 universities have "unchecked the box," declining to promise to apply the regulations to all research. The American Association of University Professors has strongly recommended that universities uncheck the box as a first step toward devising procedures less burdensome than those specified in the regulations.

Malcolm Feeley has noted that unchecking the box could also yield important empirical data:

If there are few reports of negative consequences . . . they might encourage national officials to rethink the need for such an expansive regulatory system . . . On the other hand, if opt-out results in increased problems, the findings might help convince Katz, Dingwall, me, and still others of the value of IRBs.

Nor are such comments confined to outsiders. At the July 16, 2008, meeting of the Secretary's Advisory Committee on Human Research Protections, committee member Lisa Leiden of the University of Texas system spoke of her own interest in freeing nonfunded research from direct federal regulation:

We have talked about limiting the federal wide assurances, unchecking the box, and I believe the position that we're going to be taking is to advocate in a gentle way thinking about doing that. We have heard both sides of the story or maybe just a few sides, but we think that there are certainly some advantages. And one of the advantages might be . . . what can we do with the expedited review level. It seems that there is a lot of flexibility in that, and we might be able to increase some of that by unchecking the boxes and adding different categories for that.

Unchecking the box is therefore one of the most promising incremental reforms now on the table. This is why I was disappointed to see that the AAHRPP's proposed revised standards, described in my previous post, seem to preclude this option.

A correspondent questioned this assertion, noting that AAHRPP president Marjorie Speers had mentioned unchecking the box in her presentation, "Finding Flexibility in the Regulations." But there's nothing in the slides to suggest that AAHRPP or Speers approves of such a practice, and a July 2008 memo from the University of California states that AAHRPP site visitors have told university administrators "that in order for a human research protection program to be accredited, it must apply the Common Rule and its subparts to all human research at the institution, irrespective of funding."

Either AAHRPP forbids accredited organizations from unchecking the box, or its policies are so unclear that its site visitors are giving out bad information. Either way, I suggest that the revised standards permit unchecking the box as a means of reform.

Wednesday, July 15, 2009

AAHRPP Proposes Revised Standards

Robert Townsend, PhD, kindly alerted me to the Proposed Revised Accreditation Standards of the Association for the Accreditation of Human Research Protection Programs (AAHRPP). The revisions are largely cosmetic, grouping many of the existing standards under new headings. As far as the review of social science and humanities research goes, I see no drastic departures from previous AAHRPP positions. This is a pity, since the standards need more substantive revision to meet the goals that AAHRPP has set for itself.

AAHRPP is accepting comments until July 30. My comment follows.

Saturday, July 4, 2009

The Systematic Threat to Academic Freedom

Lisa Rasmussen kindly alerted me to her essay, "Problems with Minimal-Risk Research Oversight: A Threat to Academic Freedom?" IRB: Ethics & Human Research 31 (May 2009): 11-16. The essay mostly seeks to rebut the AAUP's 2006 report, "Research on Human Subjects: Academic Freedom and the Institutional Review Board." Rasmussen identifies some important shortcomings in that report, and she raises key questions about the relationship between IRBs and academic freedom. But I am unpersuaded by her central arguments.

Before I address them, I should note the repeated disclaimers within the essay. "I will not settle here the fundamental issue of whether a convincing argument exists that IRB review poses a threat to academic freedom," Rasmussen writes. "A longer explanation of the [AAUP report's] failures is beyond the scope of this paper, but a brief outline is possible." I am disappointed by these limits. Rasmussen devotes significant space to matters peripheral to the question of academic freedom, such as her assertion that researchers whose work was approved by a department--rather than a central IRB--would necessarily merit less legal protection, a claim whose weakness she acknowledges in a footnote. Given only six pages, Rasmussen would have done better to focus on the question posed in her title.

Rasmussen's main argument is that the AAUP report "does not demonstrate that IRBs pose a threat to academic freedom." As she notes, such a demonstration would require a definition of academic freedom, something lacking in the AAUP report. So she offers a passage from the AAUP's "1940 Statement of Principles on Academic Freedom and Tenure": "Institutions of higher education are conducted for the common good and not to further the interest of either the individual teacher or the institution as a whole. The common good depends upon the free search for truth and its free exposition." Emphasizing the grounding of this argument in the search for the "common good," Rasmussen then concludes that "there is a prima facie claim that research can be subjected to assessment regarding whether it threatens to harm the common good via harm to individuals."

I believe this is a misreading of the 1940 Statement, for it suggests that any policy aimed at safeguarding the common good is consistent with academic freedom. For example, she could have written, "there is a prima facie claim that research can be subjected to assessment regarding whether it threatens to harm the common good via the promotion of communist overthrow of the government," and that therefore a prohibition on the use of Marxist analysis is consistent with academic freedom.

A more relevant definition of academic freedom can be drawn from the AAUP's "1915 Declaration of Principles on Academic Freedom and Academic Tenure":

The liberty of the scholar within the university to set forth his conclusions, be they what they may, is conditioned by their being conclusions gained by a scholar’s method and held in a scholar’s spirit; that is to say, they must be the fruits of competent and patient and sincere inquiry, and they should be set forth with dignity, courtesy, and temperateness of language . . .

It is, however . . . inadmissible that the power of determining when departures from the requirements of the scientific spirit and method have occurred, should be vested in bodies not composed of members of the academic profession. Such bodies necessarily lack full competency to judge of those requirements; their intervention can never be exempt from the suspicion that it is dictated by other motives than zeal for the integrity of science; and it is, in any case, unsuitable to the dignity of a great profession that the initial responsibility for the maintenance of its professional standards should not be in the hands of its own members. It follows that university teachers must be prepared to assume this responsibility for themselves.

As Matthew W. Finkin and Robert C. Post write in their new book, For the Common Good: Principles of American Academic Freedom, freedom of research depends on "a framework of accepted professional norms that distinguish research that contributes to knowledge from research that does not." (54) While these two experts on academic freedom decline to offer a firm opinion on the legitimacy of IRBs, they take the AAUP's concerns far more seriously than does Rasmussen (69).

The question, then, is whether IRBs, like the boards of trustees that concerned the authors of the 1915 statement, "lack full competency to judge of [scholarly] requirements." Rasmussen suggests that IRBs merely maintain scholarly standards: "The source of the threat to academic freedom via oversight by one’s colleagues is far from clear," she writes, "especially since researchers undergo peer review for research funding and when submitting their manuscripts for publication." But IRB review is not peer review, since it is conducted mostly by people ignorant of the scholarly methods they are reviewing. (See "Why IRBs Are Not Peer Review," and other posts tagged "peer review.")

To make this a bit more concrete, we can examine the exemplary "horror stories" included in the 2006 AAUP report. Rasmussen rejects these as "unelaborated anecdotes with no documenting citations," rather than examining their implications for academic freedom.

Here's one: "A Caucasian PhD student, seeking to study career expectations in relation to ethnicity, was told by the IRB that African American PhD students could not be interviewed because it might be traumatic for them to be interviewed by the student." Or another: "A campus IRB attempted to deny an MA student her diploma because she did not obtain IRB approval for calling newspaper executives to ask for copies of printed material generally available to the public." No peer review process would impose such conditions. If these are not infringements of academic freedom, then nothing is.

Rasmussen is quite right that we should not equate "inconvenience and hassle with abridgement of academic freedom." Yet nor should we dismiss the abridgement of academic freedom as mere inconvenience and hassle. When IRBs impose conditions on research that prevent researchers from conducting the basic tasks of scholarship--talking to people of varied backgrounds, recording interviews, or telephoning for information--they abridge academic freedom. The more interesting questions are how often this occurs, and why it happens.

Rasmussen presents IRB abuse as a somewhat random process: "IRBs can function well or poorly, and which is true for a given IRB depends on many factors, not least of which are institutional support and member training." This suggests that IRB abuses are individual anomalies, rather than a pattern.

By contrast, the AAUP detects a systematic bias toward the infringement of freedom. This is better developed in the AAUP's 2000 report (cited by Rasmussen), "Institutional Review Boards and Social Science Research." That report includes such observations as "no one is likely to get into trouble for insisting that a research proposal is not exempt" and "no university is likely to want to explain to either the government or the public why its commitment to avoid harming the human subjects of research is limited by the source of funding for the research." In these and other cases, the AAUP recognizes that the IRB system punishes individuals and institutions only for approving research, not for restricting it.

The design flaws in the system have yielded a pattern of abuse. Read Maureen Fitzgerald and Laura Stark, both of whom observed repeated abuses by the IRBs they studied. Read Linda Thornton, whose work was thwarted at 15 of 24 institutions she contacted. Read Jack Katz, who shows that IRBs are particularly likely to pounce on controversial topics. IRBs can function well or poorly, but the system is weighted toward poor function.

Rasmussen acknowledges that poorly designed systems can lead to systematic problems. She concedes that the "lack of an [IRB] appeals process may threaten academic freedom." She also details the way that departmental-level review might systematically hamper research. And she ends her essay with a promising proposal for "template review":

Disciplines at the national level might formulate templates to guide very common research approaches. For example, a research template for oral historians could stipulate that the researcher will interview individuals, record their answers, refer them to counselors if the questions have provoked strong emotions, procure consent forms, lock the transcripts securely, and identify what will happen to the transcripts at the close of research. IRBs at individual institutions would review the template once and approve it (or even decide to accept any templates from given professional societies). Thus, a researcher would simply submit a form to the IRB stating her agreement to abide by the format of the template. Upon receipt of the form, the IRB would approve the protocol.

If IRBs are not threatening academic freedom, why propose this reform? Inside this proposal is an acknowledgment that disciplinary experts and professional societies in the social sciences and humanities have been excluded from the present IRB system. While such exclusion does not automatically threaten academic freedom, we should not be surprised when it does. For all her skepticism of the AAUP report, Rasmussen has presented her own suggestion that the current system is rotten at the core.

Monday, June 29, 2009

Finnish Group Warns Against Unnecessary Bureaucracy

Klaus Mäkelä and Kerstin Stenius kindly alerted me to their paper, "A New Finnish Proposal for Ethical Review in the Humanities and Social Sciences," which they presented in London in April. The paper describes a draft report by a working group of Finland's National Advisory Board on Research Ethics, which examined the need for ethics review in the humanities and social sciences.

In its draft report, issued in January, the working group adopted some principles that would be familiar to ethics committees and regulators in the United States and other countries. The report stresses the importance of voluntary participation, informed consent, the confidentiality of information, the avoidance of "undue risk and harm," and the need for special care when researching minors. It sees ethics review committees as part of a process to effect these goals.

On the other hand, the working group recognizes that too much oversight presents its own problems:

5. It is important to respect the autonomy and good sense of research subjects. In social research, participants usually are fully competent to assess the risks involved without outside expertise. Ethics committees should avoid paternalism.

8. Clear criteria should be formulated for what kinds of projects require ethical review, but it should be up to the researcher to determine whether a project meets these criteria.

10. The work of ethics committees should be as transparent and open as possible and a system of appeals should be put in place.

The second part of principle number 8 is particularly significant. U.S. regulators have, since 1995, insisted that researchers cannot be trusted to determine when their research is subject to review under the Common Rule. Recently, Jerry Menikoff of OHRP noted that institutions are not legally required to strip researchers of the power to make these determinations, but OHRP will continue to recommend that they do so.

The Finnish working group, by contrast, sees a greater danger in giving that power to committee members and staffers who will likely err on the side of too much review:

It is a matter of judgement to decide what kinds of stimuli are 'exceptionally strong'. To avoid unnecessary bureaucracy, it nevertheless should be up to individual researchers to decide whether their project falls into the categories listed above and needs to be submitted to ethical review. It is highly unlikely that this will lead to transgressions, and ex post facto sanctions will be enough to keep any exceptions under control.

In short, the working group understands that in this case, the dangers of too much bureaucracy outweigh the dangers of too little.

I should note that the London conference at which Mäkelä and Stenius presented their work was the Third Working Meeting of the International Study of Ethical Codes and Ethical Control in the Social Sciences, the previous conferences having been held in London in 2007 and 2008. The meetings have brought together scholars from several northern European countries to discuss social science ethics and regulations across international borders. It is splendid that these scholars are at work on so important a topic, and I look forward to learning more from them.

Wednesday, June 17, 2009

Menikoff to Critics: "Yes, We Hear You"

Theresa Defino kindly alerted me to the streaming video feed of Dr. Jerry Menikoff’s May 14 address at the University of Michigan, “The Legal Assault on the Common Rule." The speech was an impressive acknowledgment of the widespread criticism of the foundations of the IRB system, and it ended with the promise of some substantive improvement. But by listing some of the most common critiques of the IRB system without attempting to rebut them, Menikoff fell short of the dialogue he seeks to foster.

Friday, June 5, 2009

Lisa Wynn's Words and Pictures

In April I commented on the ethics training program for ethnographers developed by Lisa Wynn of Macquarie University with some colleagues.

At Culture Matters, the blog of the Macquarie anthropology department, Wynn described the ideas that led her to develop the program.

Now, at Material World, a blog hosted by the anthropology departments of University College London and New York University, Wynn describes another aspect of the training program: the pictures.

Wynn explains that along with its medical-centered ethics and jargon-laden text, the standard NIH ethics training program suffers from clip art in which people are depicted as faceless cartoons--probably not the best way to get researchers thinking about others as autonomous individuals. So for her program, Wynn offers pictures of real researchers and research participants, from Laud Humphreys to Afghan school administrators.

Gathering these photos--about a hundred in all--wasn't easy, but they contribute meaningfully to the warmth and depth of the site. And it put Wynn in touch with some prominent scholars.

[Side note: Professor John Stilgoe tells his students that it's rare to have enough photos of yourself at work. That's a good admonition; you never know when someone will want to show you doing controversial research.]

In another posting on Culture Matters, Wynn describes her continuing research on research ethics. She notes that ethics-committee oversight of ethnography is a relatively recent phenomenon. While it was debated as early as the mid-1960s, only in the 1990s did it become widespread. Thus, in studying the effect of ethics committees,

We’ve got a perfect “natural” control: an older generation of researchers who spent most of their careers not seeking ethics clearance, a younger generation for whom it is standard operating procedure, and a “middle-aged” group of researchers like myself who started their research under one regime and now live under another (I swear, this is the first time I’ve thought of myself as middle-aged). By correlating responses with different regulatory regimes, we can ask questions like: do researchers who never got ethics clearance have different ideas about what is ethical than researchers who go through ethics review? Does one group consider itself more or less ethical than the other? Or do they feel like ethics oversight hasn’t made any difference to their research practice?

Wynn plans to contact scholars in Australia and the United States to see how the spread of ethics review affected ideas about research ethics. I'm quite excited by this work; in fact, I plan to publish it in a special issue of the Journal of Policy History I am editing on the general topic of the history of human research ethics regulation.

How many pictures should I demand?

Wednesday, May 13, 2009

A Horror Anthology

Mark Kleiman takes on IRBs at The Reality-Based Community. On April 14 he asked his readers for IRB horror stories, and on May 2 he posted some of the responses.

The saddest concerns a group of law students who wished "to send testers of different races in different styles of clothing to the restaurant over some period of time to test whether they enforced their dress code in a discriminatory manner." Law school administrators told them they would have to secure IRB approval. This discouraged the students, who did not want to go through the time and effort of the approval process.

This was not the intent of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. At its 15 April 1978 meeting, the commission discussed just such a scenario (pp. II-5 to II-21 of the transcript), and all the members seemed to agree that such testing for discrimination should not require IRB review. But, as I've noted before, the commission wrote a definition of human subjects research that plausibly includes a great deal of activity the commission did not seek to regulate. Thirty years later, justice suffers as a result of the commission's sloppiness.

NOTE: In honor of Professor Kleiman's search, I have gone back through this blog to add the "horror stories" tag to some posts that should have had it to begin with. Clicking on that tag now yields more than a dozen posts, with even more documented horror stories.

Friday, May 8, 2009

Journal of Policy History

The Journal of Policy History has published my article, "How Talking Became Human Subjects Research: The Federal Regulation of the Social Sciences, 1965–1991." As permitted by the transfer of copyright, I have posted a PDF on my personal website:

Not much has changed since I posted a version on SSRN in April 2008. The major changes come in the "medical origins" section; the earlier draft underestimated the strength of social scientists' opposition to IRB rules in the late 1960s. Also, the new version better explains the origins of Ithiel de Sola Pool's concern about IRBs (see p. 18).

Tuesday, May 5, 2009

DeGette Still Doesn't Get It

Representative Diana DeGette (D-CO) has introduced the Protection for Participants in Research Act (H.R. 1715), which would impose IRB requirements on all human subject research supported by the federal government or affecting interstate commerce.

As amp&rsand notes, this is the sixth time DeGette has introduced this bill. And I don't think that counts earlier submissions of similar bills by Senator John Glenn. None of these previous efforts went far, so there's no particular reason to fear this bill's passage.

Still, it is disappointing that DeGette has introduced this bill six times without understanding its potential consequences. Her press release states that “I think one thing we can all agree on in a bipartisan way is that we need to encourage medical experimentation but we need to do it in a way that both protects the patients and gives them informed consent about what they are getting into," as if the bill would affect only medical experimentation. It points to medical trials in 1999 and 2006 as evidence of insufficient oversight, and argues that "research is the key to innovation and discovery, including curing deadly disease." Nowhere in the press release is a hint that DeGette understands that her bill would outlaw most journalism, not to mention further inhibiting social science and humanities research.

Thirty-five years after the passage of the National Research Act, Congress still doesn't know what it has done.

Bonus question for Rebecca Tushnet's 43(B)log: Is the copyright claim on DeGette's press release--the work of a federal employee in her official capacity--illegal, or merely false?

Update 8 May 2009: A correspondent notes that the copyright statement is no longer on the site. If DeGette removed the statement in response to this blog, good for her. I have a PDF of the press release as it appeared on May 4, if anyone is interested.

Saturday, April 25, 2009

UMKC's Respectful Oral History Policy

The University of Missouri-Kansas City (UMKC) has posted a promising new policy: "Social Sciences IRB and Oral History."

The policy has a number of elements that set it apart from the typical university policy, which seeks to cram oral history into a system designed for medical experimentation. Instead, it adapts only those elements of the medical IRB system that encourage historians to follow their own discipline's ethics and best practices.

I suggest that readers of this blog read the whole policy, but here are some highlights:

1. Respect for Critical Inquiry

As I have written repeatedly on this blog, historians do not take the Hippocratic Oath, and should not promise not to harm the people they interview. Any IRB that imposes the Belmont Report on historians is asking them to forswear their own ethics.

UMKC understands this. Its policy notes that

akin to a journalist or lawyer, an historian is also responsible to a wider public to recover a shared past “as it really happened.” In keeping with the public role of an historian in a democratic society, these responsibilities, especially when conducting narrative interviews, can necessitate a confrontational style of critical inquiry. So while historians do not set out to hurt their interviewees, oral historians are expected to ask tough questions in their interrogation of the past.

2. Respect for Peer Review

UMKC neither subjects oral historians to the whims of board members unfamiliar with their field nor leaves them on their own. Instead, it offers scholars a number of relevant readings, including publications of the Oral History Association, and then encourages them to talk to colleagues knowledgeable about interviewing:

After reviewing these resources on their own, the researcher is strongly encouraged to discuss their research protocol with peers before implementing their research protocol. In some cases, peer review by members of one’s own department would be most useful; in other cases, a researcher might be better served by seeking review from a colleague in a different department.

To foster these kinds of conversations among the faculty, the Social Sciences IRB Subcommittee for Oral History will hold two meetings per semester . . . to discuss “Best Practices” in oral history. Faculty experts in oral history will guide these conversations . . . These meetings are designed to meet the needs of researchers seeking advice and peer review for their research protocols. They are also designed to meet the needs of Chairs and/or designees interested in learning how to advise researchers in their departments to make responsible decisions regarding oral history.

3. Respect for OHRP's Pledge

UMKC takes seriously the carefully negotiated 2003 agreement among the American Historical Association, the Oral History Association, and OHRP, even posting a copy on its website. The university elaborates on that agreement:

At UMKC, we draw a distinction between idiographic research that uses oral histories to describe the unique story of some particular social group or individual, which does not constitute “human subjects research”; and nomothetic research that employs oral histories in the hopes of contributing to a general theoretical or comparative debate about the human nature or behavior, which does fall under the category of “human subjects research."

While I confess that the terms idiographic and nomothetic are not in my working vocabulary, I believe they do express a real difference between the ethics of oral historians and those of other scholars. If one is interested in a general theoretical or comparative debate about the human nature or behavior--as many social scientists seem to be--then it makes less sense to single out individuals for potential honor or calumny. Writing about unique individuals or groups changes one's responsibility toward the individuals interviewed.

4. Respect for Researchers

Policies like UCLA's infantilize researchers, making them submit every judgment to an administrator. By contrast, UMKC trusts its scholars:

The bottom line is that the researcher makes these determinations in careful consultation with the Chair of the department or another official designee appropriate to the kind of study being planned. Together this determination is based on shared understanding of all relevant guidelines and their shared expertise in their specialized field of scholarship.

5. Respect for the IRB

Even as it empowers historians, the UMKC policy keeps the IRB involved, making it a resource rather than an obstacle. Researchers still have to learn something about human subjects regulations, and they must complete a form explaining why they have determined that their project does not fall under federal regulations.

(The form's demand for an explanation of "no more than 1500 characters" sounds suspiciously bureaucratic, but it's a good length for the presentation of a single idea--about the same as the 150-word limit for a New York Times letter to the editor.)

More importantly, the frequent meetings of the Social Sciences IRB Subcommittee on Oral History suggest that some scholars at UMKC have devoted their time to helping colleagues deal with the real ethical challenges of oral history.

The website explaining the policy notes that it was developed by "a group of faculty and administrators involved with the Social Science Institutional Review Board (SSIRB) . . . with input from members of the SSIRB, the College of Arts & Sciences, and the Faculty Senate at UMKC." I congratulate all the scholars and administrators who developed this innovative system, and I hope it works as well in practice as it reads on the screen.

With this policy, UMKC joins Amherst College, Columbia University, the University of Michigan-Ann Arbor, and the University of Nebraska-Lincoln on a small but growing list of schools that have adopted OHRP's 2003 position removing most oral history research from IRB jurisdiction. Five schools is not very many, but it's five more than the AHA could find in February 2006. Who will be number six?

Tuesday, April 21, 2009

Deregulation "Is Not Going to Happen"

Linda Shopes kindly alerts me to the April 20 issue of COSSA Washington Update, the newsletter of the Consortium of Social Science Organizations, which reports on an April 1 meeting of the National Academies’ Board on Behavioral, Cognitive, and Sensory Sciences, at which IRBs were discussed.

Here's the key passage:

Philip Rubin, CEO of Haskins Laboratories in New Haven, CT, and former director of the National Science Foundation’s (NSF) Division of Behavioral and Cognitive Sciences, chairs the Board. He began the session with a review highlighting the difficulties social/behavioral researchers have had with the current system under the Common Rule regulation and its interpretation by campus Institutional Review Boards (IRBs). Complaints have been loud, but mostly anecdotal . . . Once again the bottom line is that despite efforts by Joan Sieber and the Journal of Empirical Research on Human Ethics, which she edits, there are still large gaps in our empirical knowledge of how the system works for social and behavioral scientists.

Rubin was followed by Jerry Menikoff, new head of the U.S. government’s Office of Human Research Protections (OHRP). Menikoff announced that he was all for “flexibility” in the system and that “changes can be made.” He also endorsed conducting more research. He rejected the arguments of the American Association of University Professors and Philip Hamburger of Northwestern University Law School that IRBs violate researchers’ first amendment rights. He acknowledged the importance of expedited review, but stated quite clearly that “removing minimal risk research from the system is not going to happen.”

I don't want to make too much of these comments; an OHRP spokesperson tells me that they were an extemporaneous response to Rubin, and not prepared remarks. Still, I am disappointed. Menikoff's comments suggest a retreat from his earlier concession that "flexibility" often can be code for arbitrary power. And it's a pity for a public official to insist that a given policy "is not going to happen" even as he endorses more research. Wise governance depends on making policies after finding facts, not before.

Friday, April 17, 2009

Macquarie's Innovative Ethics Training

In previous posts and my 2007 essay, "Ethical Training for Oral Historians," I have complained about standardized, medicine-centric ethics training systems like the CITI Program and called for training programs better tailored to individual disciplines.

Lisa Wynn of Macquarie University (also known as MQ) has alerted me to just such a program she created with Paul H. Mason and Kristina Everett. The online module, Human Research Ethics for the Social Sciences and Humanities, has some elements that I find inappropriate. Overall, however, it is vastly superior to the CITI Program and comparable ethics programs I have seen, and it deserves attention and emulation.

Friday, April 10, 2009

Training Day

Peter Klein of the Organization and Markets blog offers a sad account of what it takes for a University of Missouri economist to gain permission to interview entrepreneurs or hand out surveys to corporate executives. Like many scholars across the country, he was directed to an online training system, which demanded that he provide correct answers to questions like the following:

32. The investigator is a 1/8th V.A. employee. She proposes to recruit MU outpatients into a study conducted exclusively at MU facilities. Which of the following groups must approve the research project before participants can be enrolled?

* The MU Health Sciences Center IRB
* The V.A. Research and Development Committee
* Both a. and b.
* Neither a. nor b.

While such knowledge may be of critical importance to health researchers at Missouri, it is irrelevant to social scientists not doing medical work. The lesson Klein takes away from such an experience is not that he must be sure to obey laws and ethics standards while doing his research, but that his campus IRB administrators do not respect him enough to provide relevant ethical training.

Administrators take note: you are making fools of yourselves, and earning your faculty's contempt.

See Comments Oppose New Regulations on Training.

Sunday, March 29, 2009

Deadline Extended for TCPS Comments

John Lowman kindly alerts me that Canada's Interagency Advisory Panel on Research Ethics has extended the deadline for comments on the draft second edition of the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS). Comments will now be accepted through 30 June 2009, though the PRE encourages comments by March 31, since the next round of revision will begin in April.

An official announcement of the deadline extension is online at the PRE's French-language website. I could not find an English-language version on the PRE website, but the University of Western Ontario has posted one.

A form for online comments, and instructions for submitting comments by mail, fax, or e-mail, are online.

I have sent in a version of the comments posted on this blog. As I prefaced my comments to the PRE, I write as a non-Canadian. But the regulation of research ethics is an international endeavor. Just as TCPS draws heavily from the Belmont Report and 45 CFR 46, so can we expect TCPS to influence American policy and guidance. I therefore consider myself to have some stake in the outcome of the TCPS revision.

Wednesday, March 18, 2009

Canadian Criminologists Decry TCPS Draft

Back in January, I mentioned the release of the Draft 2nd Edition of the Tri‐Council Policy Statement: Ethical Conduct for Research Involving Humans, prepared by Canada's Interagency Advisory Panel on Research Ethics, or PRE.

Ted Palys and John Lowman of the School of Criminology, Simon Fraser University, kindly alerted me to their critique of the draft, or TCPS-2, as they term it. (They even more kindly cited this blog in their work.) They find that TCPS-2 "poses a significant threat to academic freedom in Canada." (3)

Their 20-page critique, "One Step Forward, Two Steps Back: Draft TCPS-2’s Assault on Academic Freedom," is all meat and no fat, and I recommend that it be read in its entirety. But here are a few salient points.

Saturday, February 14, 2009

Less Flexibility, More Freedom

Defenders of the present IRB system often boast of the "flexibility" offered by current regulations. (See, for example, Dr. Jeffrey Cohen's report from the November PRIM&R meeting.)

But flexibility--when combined with the possibility of punishment--can actually empower censorship. Here is how Human Rights Watch describes an analogous system, China's censorship of the Internet:

The display of politically objectionable content can result in reprimands to company management and employees from the MII, the State Council Information Office, the Communist Party's Propaganda Department, and/or various state security organs, accompanied by warnings that insufficient controls will result in revocation of the company's license. In order to minimize reprimands and keep their licenses in good standing, BBS and blog hosting services maintain lists of words and phrases that either cannot be posted or which cause monitoring software to "flag" the content for manual removal by employees.

Search engines likewise maintain lists of thousands of words, phrases and web addresses to be filtered out of search results so that links to politically objectionable websites do not even appear on the search engine's results pages, even when those websites may be blocked at the backbone or ISP level . . . Such lists are not given directly to Internet companies by the Chinese government; rather, the government leaves the exact specifics and methods of censorship up to companies themselves. Companies generate their "block-lists" based on educated guesswork plus trial-and-error: what they know to be politically sensitive, what they are told in meetings with Chinese officials, and complaints they may receive from Chinese authorities in response to the appearance of politically objectionable search results.

But the complicity of companies is even more direct: they actually run diagnostic tests to see which words, phrases, and web addresses are blocked by the Chinese authorities at the router level, and then add them to their lists, without waiting to be asked by the authorities to add them. And because they seek to stay out of trouble and avoid complaints from the authorities, many businesspeople who run [Internet Content Providers] in China confess that they are inclined to err on the side of caution and over-block content which does not clearly violate any specific law or regulation, but which their instincts tell them will displease the authorities who control their license. In all these ways, companies are doing the government's work for it and stifling access to information. Instead of being censored, they have taken on the role of censor.

In other words, by keeping secret the exact terms that will trigger a license revocation, the Chinese government achieves more censorship than it could by publishing a list of forbidden terms, and it makes Google, Yahoo!, and other U.S. companies complicit in the censorship. Similarly, OHRP's vagueness about what will trigger a shutdown fails to assure universities that they can safely deregulate research, so universities restrict research that should be exempt from review.

Fortunately, OHRP's new director, Jerry Menikoff, understands this. In the Winter 2009 issue of AAHRPP Advance he writes,

We often hear that it’s better not to provide specific guidance—that the absence of guidance allows people greater flexibility in interpreting the regulations. In my experience, the opposite can be true. Guidance can empower individuals and advance both research and research protections. In the absence of guidance, people tend to be reluctant to take certain actions out of fear that they are violating the rules. In some instances, important research is not even attempted, all because of a misunderstanding. Guidance could eliminate the misconception and clear the way for research.

I'm delighted that Dr. Menikoff takes this approach, and I look forward to more specific guidance from OHRP that would clear the way for research.

[Thanks to Rob Townsend for alerting me to Menikoff's comments.]

Friday, February 13, 2009

AAUP's Rhoades Takes Soft Line on IRB Training

In an essay on compulsory sexual harassment training ("Sexual Harassment and Group Punishment," Inside Higher Ed, 12 February 2009), the new AAUP general secretary, Gary Rhoades, offers side comments on human subjects research training:

In research universities (where professors’ work routinely involves human subjects, though even there literary and some other scholars are not required to undergo such training), perhaps the most obvious example of this is the human subjects training surrounding research grants and activity. Prior to getting grants approved by the sponsored projects division of a university, an investigator must have undergone human subjects training. Although the training varies by university, there are common patterns nationally. Typically, for example, such training is online, and is not particularly rigorous, to put it mildly. Indeed, the format involves investigators taking an exam by reading some written passages and then answering questions about them. After each section or module the person finds out whether he or she missed too many questions in a section, and proceeds. If they have missed too many questions in a section they simply backtrack, get the same questions in a different order, and retake the quiz, until they pass. A widely used set of exams (which are specified to social/behavioral and biomedical research) are those offered by the Collaborative Institutional Training Initiative, which over 830 institutions and facilities (including a very large number of research universities, and indeed including the University of California at Irvine) utilize. The modules for the CITI quiz typically include three to six questions.

For the most part, although faculty complain about the inconvenience and irrelevance of the training, I do not know of anyone who would suggest that such training should be required only of investigators found to have violated the rights of human subjects. The more important questions of process and principle surround the institutional review board activities that regulate the approval of an investigator’s proposal. Here, serious questions have been raised about compromising investigators’ academic freedom to engage in certain types of research and to research certain subject matter. But the controversy is not, for the most part, about the human subjects training per se. Indeed, I would venture to say that for colleagues in the social and behavioral sciences, among the most common comments and complaints about human subjects training are that it is ineffective, that it does little by way of actually protecting human subjects and seems to be geared more to protecting the institution.

Apparently, Dr. Rhoades is unfamiliar with the widespread, principled opposition to CITI and other online training programs. That is worrisome, if it signals the retreat of AAUP from its longtime leadership in the fight against overly broad human subjects regulations and requirements.

Tuesday, February 10, 2009

More Comments on Maryland's IRB Law

In January 2008, I commented on Adil E. Shamoo and Jack Schwartz, "Universal and Uniform Protections of Human Subjects in Research," American Journal of Bioethics 7 (December 2007): 7-9. That essay applauded a 2002 Maryland law that seeks to impose federal regulations on all human subjects research conducted within the state, even if not conducted at an institution with an FWA. As I noted at the time, the law is of dubious constitutionality, and it has probably survived this long only because it has never been enforced.

The November 2008 issue of the same journal reprints the Shamoo/Schwartz essay as a "target article," along with eight invited commentaries. Several of the commentaries complain that the federal regulations--and therefore the Maryland law--are insufficiently protective, rather than overly broad, but none of these address social science.

Three commentators do address Shamoo and Schwartz's failure to consider the impact on social science research. Neil W. Schluger complains that "their solution will do very little to protect human subjects, and perversely, it may actually make the situation worse by simply piling more studies into an overburdened and flawed system." (13) He notes that

a very large percentage of studies that IRBs review are studies involving minimal risk to subjects. Many of these are observational studies, reviews of existing data, studies in which the only intervention is administration of a questionnaire, or other types of studies where there really is no reasonable expectation that any harm could result. Although such studies can be reviewed by the use of expedited review procedures without convening the full IRB, they still require considerable administrative and regulatory oversight by IRB staff. Further efforts should be made to reduce the work associated with these harmless studies. (14)

David B. Resnik notes that the idea of regulating all research nationwide has been kicking around since 1995, and has been the subject of six failed bills in Congress. But, he continues, "it is not obvious that society would gain much by requiring an organization, such as Gallup or ABC News, to submit a proposal to an institutional review board (IRB) to conduct an anonymous survey each time that it decides to gauge public opinion on a particular issue. Social resources might be better spent overseeing riskier research, such as clinical trials." (6) He concludes that "Any proposal that is made into a law should include adequate provisions for exempting some low risk research and clarifying the definitions of important terms." (8)

Lisa M. Rasmussen complains that the Shamoo/Schwartz position "makes no distinction between highly risky biomedical research and, on the opposite end of the spectrum, research that involves no more risk than we all accept daily. Does beneficence really require that human subjects be protected from answering questionnaires or being interviewed? If so, why does this protection not extend to marketing, polling, or journalism?" (18)

Instead, Rasmussen proposes

the one-time approval of research “templates.” Taking advantage of the fact that a great deal of research follows traditional disciplinary methods, this model suggests that IRBs could approve a variety of research templates (written, for example, by discipline-specific bodies such as the American Psychological Association (Washington, DC), or by a researcher whose classes may repeat experiments semester after semester), and grant automatic exemption to any researcher using such templates. Accountability and oversight could be ensured by requiring the researcher to submit a simple form to the IRB agreeing to use such a template (which would also include provisions for protecting confidential data). Were this form electronically based, research could proceed as soon as the form was submitted, without requiring submission of a protocol or awaiting approval. This achieves the goals of both minimizing bureaucracy and protecting human subjects. It meets our moral obligations to human subjects of research without uniformly requiring IRB oversight of research. (18)

Rasmussen concludes that "universal and uniform regulation of all human subject research is well-meaning but un-nuanced." (18)

It is a pity that the journal does not print a reply by Shamoo and Schwartz. As I noted in my post last year, Shamoo himself has suggested that what he terms "low-risk research" is overregulated. I remain puzzled why he favors state laws that promise even more regulation of such research.