Thursday, December 30, 2010

NIH Bioethicist Grady Questions IRB Effectiveness

JAMA has published an interesting exchange concerning the lack of data about IRB effectiveness.

[Christine Grady, "Do IRBs protect human research participants?," JAMA 304 (2010):1122-3; James Feldman, "Institutional Review Boards and Protecting Human Research Participants," and Christine Grady, "Institutional Review Boards and Protecting Human Research Participants—Reply," JAMA 304 (2010): 2591-2592.]

In the September 8 issue, Christine Grady of the Department of Bioethics, National Institutes of Health Clinical Center, quotes David Hyman's charge that "Despite their prevalence, there is no empirical evidence IRB oversight has any benefit whatsoever—let alone benefit that exceeds the cost." Grady is less blunt, but her message is the same:

Without evaluative data, it is unclear to what extent IRBs achieve their goal of enhancing participant protection and whether they unnecessarily impede or create barriers to valuable and ethically appropriate clinical research. This lack of data is complicated by the reality of no agreed-on metrics or outcome measures for evaluating IRB effectiveness. Although available data suggest a need for more efficiency and less variation in IRB review, neither efficiency nor consistency directly gauges effectiveness in protecting research participants. Protection from unnecessary or excessive risk of harm is an important measure of IRB effectiveness, yet no systematic collection of data on research risks, no system for aggregating risks across studies, and no reliable denominator of annual research participants exist. Even if aggregate risk data were easily available, it may be difficult to quantify the specific contribution of IRB review to reducing risk because protection of research participants is not limited to the IRB.

Serious efforts are needed to address these concerns and provide evidence of IRB effectiveness.

The December 15 issue features a reply by James Feldman of the Boston University School of Medicine. Feldman makes two points.

First, he doubts that IRBs cause that much trouble:

The critique of IRBs by Bledsoe et al, which was cited as evidence that they stifle research without protecting participants, is based on a single-site report of the results of an e-mail survey mailed to 3 social science departments with a total of 27 respondents. The evidence that IRBs have "disrupted student careers [and] set back tenure clocks" should also meet a reasonable standard of evidence.

OK, but what is that standard of evidence? In the absence of federal funding to study systematically a problem created by federal regulations, how much are frustrated researchers expected to do to demonstrate the problem? In other words, how many horror stories would Feldman need to change his views?

Having insisted that evidence is necessary to show the costs of IRB review, Feldman then asserts that no evidence is needed to show its benefit:

I believe that the effectiveness of IRBs in protecting human participants from research risks is analogous to preventive medicine. It is difficult to derive evidence that can quantify the effectiveness of a specific preventive intervention (new cases of HIV prevented? new injuries prevented?). However, evidence of preventable injury or illness makes a case for the need for effective prevention. Similarly, the tragic and prevalent cases of research abuse and injury make a compelling case for more rather than less review by IRBs that are independent, experienced, and knowledgeable.

As Grady points out in her reply to the letter, even if we accept the analogy, the IRB system does not meet the standards we impose on preventive medicine. She writes, "clinicians and public health officials do rely on evidence of the risks, benefits, and effectiveness of an intervention in preventing HIV or injuries or other conditions to justify adopting one particular preventive intervention rather than another and to defend the necessary investment of resources."

Exactly. As it stands, IRBs are the Avandia of ethics.

Sunday, December 26, 2010

First, Do Some Harm, Part III: Loosies in San Francisco

The third recent document illustrating the problem of applying the Hippocratic maxim to non-medical research is Leslie E. Wolf, "The Research Ethics Committee Is Not the Enemy: Oversight of Community-Based Participatory Research," Journal of Empirical Research on Human Research Ethics 5, no. 4 (December 2010): 77–86. It offers a clear example of the kind of valuable research that is impeded by simplistic medical ethics.

Thursday, December 23, 2010

First, Do Some Harm, Part II: The AAA Ethics Task Force

In mid-October, the Ethics Task-Force of the American Anthropological Association solicited comments on the following text, a section of a draft Code of Ethics now being written:

Do No Harm

Anthropologists share a primary ethical obligation to avoid doing harm to the lives, communities or environments they study or that may be impacted by their work. This includes not only the avoidance of direct and immediate harm but implies an obligation to weigh carefully the future consequences and impacts of an anthropologist’s work on others. This primary obligation can supersede the goal of seeking new knowledge and can lead to decisions not to undertake or to discontinue a project. Avoidance of harm is a primary ethical obligation, but determining harms and their avoidance in any given situation may be complex.

While anthropologists welcome work benefiting others or increasing the well-being of individuals or communities, determinations regarding what is in the best interests of others or what kinds of efforts are appropriate to increase well-being are complex and value-laden and should reflect sustained discussion with those concerned. Such work should reflect deliberate and thoughtful consideration of both potential unintended consequences and long-term impacts on individuals, communities, identities, tangible and intangible heritage and environments.

As of December 13, 33 people (presumably all anthropologists, but I'm not sure) had posted comments. The comments are often nuanced, making it hard to say whether they endorse the language or not. But they broke down roughly as follows:

Do No Harm

Significantly, the most wholehearted supporters of the "do no harm" proposal are those who uncritically embrace the Belmont Report and the Common Rule. "'Do no harm' is an IRB principle, and so it should be in our code," writes Bethe Hagens. Four other responses, from Chip Colwell-Chanthaphonh, mkline, Robert T Trotter II, and Simon Craddock Lee, all seem to suggest that the AAA code should conform to those documents, without asking much about their origins or their fit to the practices and beliefs of anthropologists.

Four other responses--from Barbara Rose Johnston, Seamus Decker, socect, and Vicki Ina F. Gloer--endorse Hagens's idea that anthropologists should "intend no harm." Despite the Belmont Report's description of "the Hippocratic maxim 'do no harm' [as] a fundamental principle of medical ethics," this form is more faithful to the Belmont Report's overall section on beneficence.

Do Some Harm

Eight responses--almost as many--appear to reject the "do no harm" idea on the grounds that neutrality is impossible, and anthropologists should not hesitate to harm those who deserve it. "A blanket edict to 'Do No Harm' could easily lead to a professional paralysis when one considers that a few steps away from the person giving you this interview is someone who will not like, will want or need to fight, or will suffer consequences for what is said much further down the line," writes Benjamin Wintersteen. Murray Leaf concurs. "Do no harm is fine as principle of medical practice," he writes, "where you are working with a single individual. It is nearly meaningless when you (we) work with human communities, in which what is good and what is harm is usually in contention. As some of these posts suggests, what we do is often a matter of helping some while undermining the position of others. No harm at all, in such a context, would almost always be also no help at all–and no effect at all."

Bryan Bruns offers an example. "I work, in conjunction with communities and a government agency, to design and support a process in which communities are likely to, in a reasonably democratic way, act to restrain the behavior and thereby (harm) reduce the benefits of a few people (upstream irrigators, large landowners) who currently take advantage of others, it’s not clear how a principle of 'do no harm' would allow any practical engagement."

I would say that the responses by Dimitra Doukas, Joan P Mencher, Moish, Noelle Sullivan, and Ray Scupin all fall in this general category of respecting critical inquiry. Margaret Trawick's comment is harder to categorize. "I have been teaching 'Do no harm' to my students as the first ethical principle for anthropological fieldwork, for many years," she writes. "It is a difficult principle to follow, precisely because you never know what might cause harm, and therefore you have to THINK about what you are doing in the field more carefully than you might in everyday life. Good intentions are not enough. Additionally, 'harm to whom' is a good question . . . Sometimes to protect and advocate for one party (.e.g. Untouchables in India) is to, at the least, offend some other party – e.g. high caste Hindus." Given her understanding of this problem, I'm not sure why she teaches "do no harm" rather than something like "think about whom you are harming."

It's the Wrong Question

An even greater number of responses suggest that, in the words of Carl Kendall, "This principle is way too vague and self-directed to be practically useful." Kendall hints, perhaps cynically, that anthropologists need one set of ethical principles to "pass IRB muster" and a second set "to protect communities and fieldworkers." Carolyn Fluehr-Lobban argues that "'Harm' should be problematized—are there agreed upon universal standards of harm, and where is there discussion of reasonable disagreement."

James Dow rejects the medical language of IRBs: "'Do no harm' is a good ethical principle to be applied to individual social relationships, which we hope that we understand; however, there is a problem when applying it to larger societies and cultures." Likewise, David Samuels writes that "The place where you need to get informed consent is at the point at which you have turned people into characters in your story. The medicalized pre-framing of the IRB process doesn’t cover that at all."

Taken as a whole, the responses suggest that only a minority of those commenting embrace the Belmont Report and the IRB process as enthusiastically as the AAA did in its 2004 statement that presents the active involvement of IRBs as a positive good. I hope the Task Force recognizes this, and takes the opportunity to reconsider the AAA's overall position in regard to IRB review.

[Hat tip to Alice Dreger. For a historical perspective on another discipline's efforts to craft a research ethics code, see Laura Stark, "The Science of Ethics: Deception, the Resilient Self, and the APA Code of Ethics, 1966–1973," Journal of the History of the Behavioral Sciences 46 (Fall 2010): 337–370.]

Wednesday, December 22, 2010

First, Do Some Harm, Part I: Denzin's Qualitative Manifesto

Three recent documents demonstrate the confusion that arises when people try to apply medical ethics to non-medical fields. I will describe them in individual entries.

In June 2010, Norman Denzin, Research Professor of Communications at the University of Illinois at Urbana-Champaign, published The Qualitative Manifesto: A Call to Arms (Left Coast Press). Chapter five seeks

to outline a code of ethics, a set of ethical principles for the global community of qualitative researchers. I want a large tent, one that extends across disciplines and professions, from anthropologists to archeologists, sociologists to social workers, health care to education, communications to history, performance studies to queer and disability studies.

Part of the impetus for this effort is Denzin's recognition that IRB guidelines may not match "guidelines grounded in human rights, social justice considerations" or disciplinary codes. He is familiar with the debate concerning IRBs, having read the Illinois White Paper, the AAUP reports, and "even a humanities and IRB blog where complaints are aired."

Denzin is also familiar with oral historians' concerns that IRBs impose inappropriate requirements, as well as statements of ethics from other qualitative researchers. He seeks to synthesize what he has learned in a footnoted dialogue, part of a "one-act play" entitled "Ethical Practices":

SCENE FOUR: Oral Historians

. . .

Speaker Two: We do not want IRBs constraining critical inquiry, or our ethical conduct. Our commitment to professional integrity requires awareness of one's own biases and a readiness to follow a story, wherever it may lead. We are committed to telling the truth, even when it may harm people (Shopes, 2007a, p.4).

Speaker One: When publishing about other people, my ethics require that I subject my writing to a fine-mesh filter: do no harm (Richardson, 2007, p. 170).

Speaker Two: So there we have it. A set of methodological guidelines. (83)

No. What we have is a debate between Linda Shopes, a historian, and Laurel Richardson, a sociologist, about the ethical responsibility of an interviewer to a narrator. Their perspectives reflect important differences between their professions. They also reflect the particulars of the book in which Richardson's statement appears, an account of the last months of a dying friend--hardly the typical oral history or sociological study.

Denzin turns a blind eye to this debate, instead seeming to endorse both sides. In the play, Speaker Two states that "Beneficience, do no harm, is challenged in the oral history interview, for interviews may discuss painful topics, and they [sic] have the right to walk away at any time." That seems to endorse Shopes's position. But the book closes with a proposed ethical code that leans toward Richardson, calling on all qualitative researchers to "strive to never do harm." (122)

How can Denzin read and reprint historians' arguments, then reject them without even realizing he is doing so? Is the historians' position so hard to understand? Or is the lure of innocuity so powerful?

Sunday, December 12, 2010

Van den Hoonaard Reviews Ethical Imperialism

Will van den Hoonaard reviews Ethical Imperialism for the Canadian Journal of Sociology. The CJS blog asks that readers not quote or cite the advance version now online, but I suppose linking is OK.

Happy Fourth Birthday, Institutional Review Blog!

Friday, December 10, 2010

George Mason University Posts Consultant Report

The George Mason University Office of Research & Economic Development has posted the following:

The Huron Consulting Group's final report on the Office of Research Subject Protections and the Human Subjects Review Board is now available. The results are available in two formats: a comprehensive Final Report (PDF) and a Faculty Report Presentation (PPT).

As a Mason faculty member, I am involved in the continuing discussions of human research protections at the university. I will therefore refrain from comment except to applaud the university administration for posting these documents where they can inform decisions here and at other universities.

For comparable reports, see Institutional Review Boards at UC [University of California]: IRB Operations and the Researcher’s Experience, Report of the University Committee on Research Policy (UCORP), endorsed by the Academic Council, April 25, 2007, and University of Cincinnati, Final Report: Task Force on IRB and Compliance Roles and Responsibilities, May 2009.

Tuesday, December 7, 2010

Menikoff Passes the Buck

Joseph Millum, bioethicist at the National Institutes of Health, and Jerry Menikoff, director of the Office for Human Research Protections, acknowledge the widespread dissatisfaction with present human subjects regulations and wish that "ethics review could be streamlined under the current regulations if institutions, IRBs, and researchers adhered strictly to the definition of human subjects research and used the available options for exemptions, expedited review, and centralized review—options that remain underused in biomedical research." But they put too much blame for this overregulation on IRBs and research institutions rather than on their own agencies.

[Joseph Millum and Jerry Menikoff, "Streamlining Ethical Review," Annals of Internal Medicine 153, no. 10 (November 15, 2010): 655-657.]

Tuesday, November 30, 2010

Friday, November 26, 2010

Survey: One-Third of UConn Researchers Dislike CITI Program

A 2007 survey of researchers at the University of Connecticut found that more than one third were dissatisfied with the Collaborative Institutional Training Initiative (CITI) program in human subjects research.

The UConn IRB and Office of Research Compliance offered the survey to about 350 researchers, of whom 114 (33 percent) returned it. Part of the survey asked respondents about the CITI Program:

7 Questions asked respondents to rate different aspects of the CITI course on a scale of 1-7 (1=least, 7=most). 4 out of these 7 questions asked if the CITI course increased understanding of risks and protections for human subjects in research. There were no statistical differences in the answers received on this group of 4 questions.

53% rated this group 5 or above
16% rated this group 4, moderate
31% rated this group 3 or below

Similar rates were received for overall satisfaction with the CITI course:

54% rated it 5 or above
9% rated it 4, moderate
37% rated it 3 or below

The course did appear to have an impact on the respondents' understanding of the Federal Regulations. On this criterion,

72% rated it 5 or above
4% rated it 4, moderate
24% rated it 3 or below

The course had a negative impact on the respondents' willingness to join an IRB:

29% rated it 5 or above
13% rated it 4, moderate
58% rated it 3 or below

These figures suggest wider dissatisfaction with CITI than one of its founders, Paul Braunschweiger, admitted in a 2006 presentation. That presentation (slide 60) reported that principal investigators gave the program an average of about 7.8 on a 10 point scale on overall satisfaction. Though the presentation did not show the distribution of researchers' responses, it would be difficult to get so high a mean if 37 percent of researchers offered negative assessments. We need more data.

The UConn survey also offered researchers the chance to write open-ended comments. The most common suggestions were that the training should be shorter, and that the course content "should be limited to a researcher's area of research." Researchers were happy with the online form of the course, with 74 asking for no change, and only 12 choosing the next most popular option: video instruction.

All of these results suggest the potential for online courses that are shorter than CITI and targeted to a specific research discipline, such as Macquarie University's Human Research Ethics for the Social Sciences and Humanities.

UConn also surveyed researchers on their views of the UConn IRB. But the university has only reported the mean ratings, not the distribution of responses, so it is impossible to say if the IRB earned as many unsatisfactory grades as did the CITI program.

Sunday, November 21, 2010

La Noue Reviews Ethical Imperialism

Political scientist George La Noue terms Ethical Imperialism "a powerful indictment of the IRB regime."

[George R. La Noue, review of Zachary M. Schrag, Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965-2009, Law & Politics Book Review 20, no. 11 (October 2010): 616-618.]

La Noue, who has himself written about the IRB controversy, notes that "Universities might seem to be a most unlikely place to welcome and implement a process that is in effect a form of prior censorship. Reconciling the IRB process with legal or professorial concepts of academic freedom is extremely difficult." He finds that "Schrag provides a carefully researched and well written historical perspective providing all members of the academy with essential information to reconsider the role of IRBs."

La Noue calls for more study of the constitutionality of current IRB regulations and practices, a subject I would prefer to leave to the law professors. He also concludes that

What is missing is comparative empirical research about the standards and procedures of a variety of IRBs in different settings. While it seems intuitively unlikely that the process is always fair, objective and consistent, from IRB committee to committee, campus to campus, beyond anecdotes what proof exists? Without an appropriate factual basis, courts would struggle with both the compelling interest and narrow tailoring prongs that constitute the strict scrutiny test that should apply to censorship. Schrag’s book provides a necessary and very carefully researched historical context for the debate about IRBs, but the next step needs to be taken by professional associations and social scientists to study their actual practice to see if the current system can be improved.

I concur, though I would suggest that we in fact need two separate branches of such empirical study. One would continue the work of Maureen Fitzgerald and Laura Stark, both of whom have observed committees in action without finding huge variation from campus to campus, or even--in Fitzgerald's case--country to country.

A second branch would look at the development of human research protections policies. Read through this blog, and you will find enormous variation in the casts of characters involved in shaping university policies on human subjects, from research offices that feel free to make up whatever rules they want, to the participation of university-level faculty committees, to the involvement of departments most affected by a given policy. I don't know of any scholarship that has examined this variation in depth.

I hope that other scholars heed Professor La Noue's call.

Thursday, November 18, 2010

Is Facebook Data Mining Human Subjects Research?

Recent law-school graduate Lauren Solberg finds that "data mining on Facebook likely does not constitute research with human subjects, and therefore does not require IRB review, because a researcher who collects data from Facebook pages does not 'interact' with the individual users, and the information on Facebook that researchers mine from individual users' pages is not 'private information.'"

[Lauren Solberg, "Data Mining on Facebook: A Free Space for Researchers or an IRB Nightmare?" article under review, University of Illinois Journal of Law, Technology & Policy 2010 (2). The article has been accepted for publication, but the journal is still soliciting comments.]

Solberg challenges policies now in place at Indiana University and the University of Massachusetts Boston, where researchers must get Facebook's written permission or the written permission of every individual who is studied. These policies, she argues, impose unnecessary burdens on researchers and IRBs alike. (The two policies are identical, but it's not clear which university borrowed from the other.)

She argues that most data mining projects do not meet the regulatory definition of human subjects research. Reading existing profiles is not interaction with an individual. Nor is a Facebook profile that is open to strangers private information, i.e., "information which has been provided for specific purposes by an individual and which the individual can reasonably expect will not be made public (for example, a medical record)." If a college admissions officer or a potential employer can read your profile, you've lost little by having an anthropologist read it as well.

This analysis seems sound, but it's not clear to me that anyone disagrees. In particular, the third university Solberg mentions, Washington University in St. Louis, applies its policy only to "[a]ny activity meeting the definition of 'human subject research' which is designed to recruit participants or collect data via the Internet." It then lists several examples, most of which involve interaction with living individuals. Thus, I doubt Solberg's claim that "researchers at Washington University need only inform Facebook users that they are recording information that is posted on their pages." Rather, if the project does not meet the definition of human subject research, then Wash U. researchers need not do even that much.

Solberg's article skirts some interesting questions. One concerns the boundaries of a reasonable expectation of privacy. Thus, Michael Zimmer gives the example of a study by Harvard graduate students of the Facebook profiles of Harvard undergraduates. If an undergraduate had made some information visible only to other Harvard students (a choice Facebook's software allows), and a Harvard student-researcher sees it, does that change Solberg's analysis?

A second question concerns the authority of university research offices and IRBs to insist that researchers abide by website terms of service. Notably, the Indiana and UMASS policies do not cite federal human subjects regulations as their authority. Rather, they claim that Facebook and Myspace "explicitly state that their sites are not intended for research but for social networking only."

Solberg writes that evaluating such claims is "outside the scope of this article," but they are interesting in three ways. First, they may be factually false; I could find no such explicit statements in the Facebook or Myspace terms of service. Second, they are divorced from federal regulation. For example, the Facebook terms of service do not distinguish between living and dead Facebook members, whereas federal human subjects protections apply only to the living. Finally, they are internally inconsistent. If Facebook and Myspace did prohibit the use of their sites for research, would not researchers still be violating the terms of service even if they got signed consent from individual members, as allowed by the policies? Just who are these two universities trying to protect?

Solberg concludes that "Unfortunately, and somewhat surprisingly, the OHRP has issued no guidance pertaining to Internet research in general, let alone guidance specifically relating to the issue of data mining on the Internet." To give the feds some credit, in summer 2010 (after Solberg wrote her article), SACHRP did sponsor a panel on the Internet in Human Subjects Research. It can take a long time from a SACHRP presentation to OHRP guidance, but the wheels may be moving on this one.


Note, 19 November 2010: The original version of this post identified Ms. Solberg as a law student. She has in fact graduated. I have also changed the link about Michael Zimmer's work from his SACHRP presentation to his article, "'But the data is already public': on the ethics of research in Facebook," Ethics and Information Technology 12 (2010): 313-325.

Wednesday, November 10, 2010

Comments: FWA Forms Should Reflect Common Rule

On October 4, I reported that OHRP was inviting comments on drafts of new FWA form and FWA Terms of Assurance.

Prior to the October 25 deadline, OHRP received comments from only five individuals and two professional organizations, all of which have been posted online.

Of these seven comments, three (including mine, of course) complained that the draft Terms of Assurance, like the existing ones, violate the Common Rule's pledge that an institution's statement of principles "may include an appropriate existing code, declaration, or statement of ethical principles, or a statement formulated by the institution itself."

No one made a case for retaining the discrepancy between the regulations and the forms.

Friday, November 5, 2010

IRBs and Procedural Due Process

A law student finds that "current IRB regulations fail to provide procedural due process as guaranteed by the Fifth and Fourteenth Amendments of the United States Constitution."

[Daniel G. Stoddard, "Falling Short of Fundamental Fairness: Why Institutional Review Board Regulations Fail To Provide Procedural Due Process," Creighton Law Review 43 (June 2010): 1275-1327]

Stoddard notes a number of measures that might protect researchers against capricious IRBs but which are not currently required:

Federal IRB regulations are silent . . . regarding a number of specific aspects of IRB function including public attendance of IRB functions, a researcher's opportunity to hear and cross-examine information opposing that researcher's research, and a researcher's right to privacy with regard to an IRB's media interaction. IRB regulations additionally fail to address whether an IRB should base its decision exclusively on evidence presented to it, whether a researcher should have a right to a hearing before the IRB suspends research, whether a researcher has a right to judicial review of an IRB decision, and whether a researcher has a right to an attorney. Federal IRB regulations also fail to include a researcher's right to have informal communications with an IRB, a researcher's right to present further evidence to an IRB following a rejection, a researcher's right to consult with personnel opposing that researcher's research in an effort to understand and prepare to challenge them, and an IRB's obligation to evaluate its own functioning procedures periodically. (1290)

All of these measures could be helpful, but the question for Stoddard is whether their absence violates procedural due process. To answer that question, he turns to Mathews v. Eldridge (424 US 319 - Supreme Court 1976), a 1976 Supreme Court case named for the same HEW secretary who was sued for violating IRB procedures in Crane v. Mathews, 417 F. Supp. 532 - Dist. Court, ND Georgia 1976.

Mathews states that

the specific dictates of due process generally requires consideration of three distinct factors: First, the private interest that will be affected by the official action; second, the risk of an erroneous deprivation of such interest through the procedures used, and the probable value, if any, of additional or substitute procedural safeguards; and finally, the Government's interest, including the function involved and the fiscal and administrative burdens that the additional or substitute procedural requirement would entail.

Stoddard, in turn, argues that "The inability to contest or appeal an IRB decision is a substantial procedural shortcoming when evaluated under the three prong Mathews [v. Eldridge] balancing test."

The problem with this argument is that Mathews does not require the right to appeal, nor do other key precedents. In particular, Stoddard would be more persuasive had he addressed head-on what I take to be the federal court decision that most directly addressed due process and IRBs: Halikas v. University of Minnesota. Though Stoddard cites the district court's denial of a preliminary injunction to the plaintiff in that case, an aggrieved researcher, he does not analyze the court's reasoning behind that denial: "An IRB proceeding is, simply, not a federal criminal prosecution. Such a proceeding is governed by contracts and federal regulations which do not require, or provide, the full panoply of criminal procedural rights . . . Dr. Halikas voluntarily entered into an employment contract and conducted his research under the aegis of the University and its research-regulatory regime. He received the process which is his due." [Halikas v. University of Minnesota, 856 F. Supp. 1331; 1994 U.S. Dist.]

Nor does Stoddard analyze the final judgment in that case, which was not published. [Case number 4-94-CV-448, Federal District Court, Fourth Division, District of Minnesota; filed 18 May 1994; Judgment entered 9 June 1996. I am very grateful to Dr. Dale Hammerschmidt, one of the named defendants in the Halikas suit, for providing me with a copy of this document. I have posted it on my website (see previous link) so it will be easier to find in the future.]

In that judgment, the court found that "as the Eighth Circuit Court of Appeals has determined in similar cases, the Constitution requires only that Dr. Halikas receive: (1) clear and actual notice of the charges against him; (2) notice of the names of those bringing the charges and the specific nature and factual basis for the charges; (3) a reasonable time and opportunity to respond; and (4) a hearing before an impartial board or tribunal." It did not include the right to appeal as a component of procedural due process under the Constitution.

The "similar cases" which Judge James Rosenbaum used to reach this result were two cases in which employees of public universities contested their firing: Riggins v. Board of Regents of Univ. of Neb., 790 F. 2d 707, 712 (8th Cir. 1985) and King v. University of Minn., 774 F. 2d 224, 228 (8th Cir. 1985), cert. denied, 475 U.S. 1095 (1986).

This comparison casts doubt on Dr. Hammerschmidt's claim that Judge Rosenbaum "formally recognized the concept that the opportunity to conduct research upon human subjects is a privilege, rather than a right." ["'There is no substantive due process right to conduct human-subject research': The Saga of the Minnesota Gamma Hydroxybutyrate Study," IRB: Ethics and Human Research 19 (May-Aug. 1997): 13-15. This is amplified in Steven Peckman, "A Shared Responsibility for Protecting Human Subjects," in Institutional Review Board: Management and Function, ed. Robert J. Amdur and Elizabeth A. Bankert (Jones & Bartlett Learning, 2006), 17.]

To the contrary, Riggins specifically states that "Public employees may have a property right in continued employment." And King involved the dismissal of a tenured professor. By invoking these precedents, the Halikas decision suggests that while research is not a substantive due process right, researchers may have procedural due process rights comparable to those enjoyed by public employees and tenured professors. Halikas makes no mention of research as a "privilege."

Though the Halikas judgment was sufficient to decide the case before the court, it left unanswered many questions about the rights of researchers who face IRBs. Are the procedural protections set forth in King and Riggins adequate to protect the right to research, academic freedom, or a property right in continued employment? Do professors in their capacity as researchers deserve greater, lesser, or equivalent protections than professors in their capacity as teachers? What rights, if any, might student-researchers claim? Would Halikas or King have been decided differently had the plaintiffs offered free-speech claims? Does the "human research" in the judgment refer to social research as well as the medical research that was the subject of the IRB proceedings against Halikas? (The final judgment describes the IRB as a "medical research review body.") Does a board or tribunal have to be competent as well as impartial?

Halikas leaves all these questions unanswered. A careful analysis of that case would be a good starting place for further legal scholarship on the due process implications of IRB policies.

Thursday, November 4, 2010

Hear Me Talk About My Book

Online Programming for All Libraries (OPAL) has posted recordings of my October 27 discussion of Ethical Imperialism as full streaming audio with text chat and a downloadable MP3 audio recording. The presentation lasts 64 minutes.

Thursday, October 28, 2010

Ohio State Restricts LGBTQ Research, Ponders Reforms

Two Ohio State University professors, James Sanders and Christine Ballengee-Morris, complain about IRBs' impacts on research and teaching in their fields and report on efforts at reform.

[James H. Sanders III and Christine Ballengee-Morris, "Troubling the IRB: Institutional Review Boards' Impact on Art Educators Conducting Social Science Research Involving Human Subjects," Studies in Art Education: A Journal of Issues and Research in Art Education 49 (2008): 311-327. Yes, it's two years old, but I just found out about it recently.]

Many of the complaints are familiar enough. The authors--one in arts policy, the other in art education--lament biomedical models, delays in approvals, and "lengthy boiler-plate consent forms." Yet the article advances the conversation about IRBs in two interesting ways.

First, the article highlights the difficulty of getting IRB approval to study lesbian, gay, bisexual, transgender, and queer self-identified youth. The authors would like to know "how LGBTQ students experience the World Wide Web, art and culture, and their self-image, or how they establish resilient behaviors." But, they find, "Conservative IRB interpretations of federal regulations requiring parental consent of all human subjects under 18, may have failed to protect the rights and welfare of LGBTQ adolescent research participants, and further dissuade researchers from studying all but (safe) consenting adult heterosexual subjects."

Second, the article describes reform efforts at Ohio State. Advised by colleagues to "be intentionally vague . . . speak in generalities, or simply not tell what we were actually doing," the authors did consider "lying to a repressive and controlling body that claims to care about human subjects' protections and then denies autonomy or voice to those living with repression." Instead, they joined 160 faculty members to petition their Office of Research to reconsider its policies.

The result was the issuance in 2007 of the "Report of the IRB Working Group for Research in the Social and Behavioral Sciences." That report offers a number of constructive suggestions. For example,

  • Relaxing the requirement that all changes to a protocol be reported to the IRB, even if they "have absolutely no material impact on a human subject's participation in a study."
  • Accepting that interviewers cannot foresee in advance all the topics they may raise in a conversation.
  • Informing investigators about their right to appeal decisions to the IRB chair, the full board, or the institutional official.
  • Listing approved protocols, so researchers do not have to reinvent the wheel when submitting their own projects.

The 2007 report ends with a strong call for IRB policy to be shaped by the faculty. It specifically recommends the active participation of the University Research Committee, which is composed mostly of regular faculty:

It is also important that there be continuing transparency and communication of IRB policy and procedure development among the faculty, the Office of Research, the IRB Policy Committee, and ORRP [Office of Responsible Research Practices]. To ensure that such consideration and implementation occurs, we recommend that an ad-hoc subcommittee of the University Research Committee be appointed for this purpose. This subcommittee should receive regular reports from the IRB Policy Committee regarding the development of new policies related to the Working Group's recommendations and suggestions, and from the ORRP staff regarding progress in staffing, website development, and electronic submission procedures.

In the longer term, it is important for the University Research Committee to participate actively in the human subject protection program at Ohio State, and to assess and suggest additional improvements to the operations of ORRP and the IRB. We strongly encourage the University Research Committee to set up a means to do so.

As of their writing, however, Sanders and Ballengee-Morris had yet to see improvement:

In short, one is required to think through every possible contingency and clearly communicate how such contingencies would be addressed. While the process itself strengthens the research design, the unreasonableness of some alternative scenarios posed by those unfamiliar with the researchers' field of study have been stifling. In response, many students and colleagues have chosen to change methods or abandon their research problems, rather than be subjected to this arduous, frustrating, and at times, humiliating process.

Tuesday, October 26, 2010

Dreger Wants to Scrap IRBs

On the heels of Laura Stark's Los Angeles Times op-ed calling for the replacement of local IRBs with centralized boards of experts, historian Alice Dreger has published her own call for a national system of ethics review based on expertise and transparency.

[Alice Dreger, "Nationalizing IRBs for Biomedical Research – and for Justice," Bioethics Forum, 22 October 2010.]

Troubled by her IRB's approval of a project she considers unethical, and by Carl Elliott's White Coat, Black Hat: Adventures on the Dark Side of Medicine, Dreger concludes that the system of local review is ineffective:

We’ve reached the point where many people in medicine and medical ethics don’t even expect IRBs to act as something other than liability shields for their universities. But do patients who come to us only to be turned into subjects know that? Do they know that there is literally a price on their heads put there by research recruiters?

I’ve come to believe we need a radical solution. Maybe what we need is a nationalized system of IRBs for biomedical research, one that operates on the model of circuit courts, so that relationships cannot easily develop between the IRBs and the people seeking approval. This system could be run out of the Office for Human Research Protections and involve districts, similar to the federal courts system. Deliberations would be made transparent, so that all interested parties could understand (and question) decisions being made.

Think of the advantages: the possibility of actually focusing on the protection of human subjects first and foremost, free of conflicts of interest; the possibility of having nothing but trained professionals (not rotating unqualified faculty and staff) sitting on review panels; the possibility of marking biomedical research as clearly different from the social science and educational research unreasonably managed by many IRBs; the possibility of much greater transparency to those interested in seeing what’s going on; the possibility of having multi-center trials obtain a single approval from one centralized IRB, rather than trying to manage approvals from multiple local institutions. And the possibility of shutting down the deeply opaque, highly questionable private IRBs Elliott describes as being increasingly used by universities. (Go ahead, call me a Communist for caring about the Common Rule.)

Her Communist leanings aside, I don't know why Dreger presents her argument as a defense of the Common Rule, which fails to distinguish between biomedical and social research, puts ethics review in the hands of rotating unqualified faculty and staff, and keeps deliberations opaque. But her wish for the kind of coordination and transparency provided by the court system has a long lineage. I've quoted it before, and I'll quote it again:

The review committees work in isolation from one another, and no mechanisms have been established for disseminating whatever knowledge is gained from their individual experiences. Thus, each committee is condemned to repeat the process of finding its own answers. This is not only an overwhelming, unnecessary and unproductive assignment, but also one which most review committees are neither prepared nor willing to assume.

[Jay Katz, testimony, U.S. Senate, Quality of Health Care—Human Experimentation, 1973: Hearings before the Subcommittee on Health of the Committee on Labor and Public Welfare, Part 3 (93d Cong., 1st sess., 1973), 1050].

It is not lack of good intentions or hard work that leads IRBs to restrict ethically sound surveys while permitting unethical experimental surgery. It is the ignorance and isolation identified by Katz in 1973 and still in place today.

Wednesday, October 13, 2010

Stark Wants to Scrap IRBs

Sociologist Laura Stark is a careful observer of the IRB system, having based her dissertation on archival research and direct observations of three university IRBs. In 2008, I complained that the dissertation, "Morality in Science," reported but failed to condemn bad IRB behavior. In a newly published essay, Stark takes a more critical stance.

[Laura Stark, "Gaps in Medical Research Ethics," Los Angeles Times, 8 October 2010.]

In her essay, Stark traces today's IRB system back to systems established in the 1960s at the NIH Clinical Center, which performed experiments on "hundreds of healthy prisoners, conscientious objectors, unemployed people and students living in their hospital as subjects." She finds that system included two basic flaws: it failed to inform the public about what was going on, and it gave no voice to dissenting members of ethics boards. These flaws, she argues, remain in today's system of ethics review.

To remedy them, Stark proposes that the new Presidential Commission for the Study of Bioethical Issues "rebuild the regulations from the ground up." She writes,

New rules should include these changes:

Replace the thousands of local review boards that labor independently at universities and hospitals here and abroad with a small number of ethics-review networks organized around specific research methods rather than around institutions. The networks would be better equipped to handle multi-site studies that are now commonplace, and would remove the political biases of some outlier institutions.

Consider the advantages and disadvantages of outsourcing ethics review to private companies, which review research for a fee.

Finally, empower research participants by posting the results of ethics reviews online. The current system includes community representatives who presumably speak on behalf of research participants, but that's not good enough.

Though the essay does not specifically mention IRB review of research in the social sciences and humanities, the dissertation gives examples in which the lack of transparency and lack of expertise impeded such projects. Thus, Stark's call for expert boards and published results of ethics review could address non-biomedical research as well as the medical research that is her chief concern. And while Stark presents her proposals as ways to ensure better protection for research participants, they could also benefit those researchers who now fall victim to inexpert boards. Because the current system fails both researchers and participants, reform can benefit them both.

Stark should realize that the changes she proposes would require more than "rebuild[ing] the regulations from the ground up," since the requirement for local IRBs is encoded in federal statute, not just regulations. But a wholesale reconsideration of the IRB system by the presidential commission would be a fine first step.

Monday, October 4, 2010

Tell OHRP Belmont Isn't Everything

On September 23, OHRP posted drafts of a new FWA form and FWA Terms of Assurance. It is collecting comments on the forms until October 25.

Here is what I have come up with so far. I would welcome comments on this draft for the next couple of weeks; I'd like to submit this comment by October 15 to be sure I make the deadline.


To the Office for Human Research Protections:

Thank you for the opportunity to comment on the draft revision of the "Terms of the Federalwide Assurance for the Protection of Human Subjects." I have two comments on this draft.


I am disappointed that the current draft fails to correct a longstanding discrepancy between the Common Rule and OHRP's forms. 45 CFR 46.103(b)(1) requires that each institution receiving funding from a Common Rule agency submit an assurance that includes

A statement of principles governing the institution in the discharge of its responsibilities for protecting the rights and welfare of human subjects of research conducted at or sponsored by the institution, regardless of whether the research is subject to Federal regulation. This may include an appropriate existing code, declaration, or statement of ethical principles, or a statement formulated by the institution itself.

By contrast, the draft Federalwide Assurance requires U.S. institutions to pledge that they will be guided either by the Belmont Report, the Declaration of Helsinki, or "other appropriate international ethical standards recognized by U.S. federal departments and agencies that have adopted the Common Rule." I am unaware of any documents in this third category, nor of any element of the Common Rule that requires federal approval of a statement of principles.

Thus, while the Common Rule offers institutions complete freedom in their choice of ethical principles, the current and proposed Terms of the Federalwide Assurance limit them to one or two documents. This is like guaranteeing the freedom of religion, then requiring every citizen to adhere to either the Lutheran Book of Concord or the Articles of Religion of the Methodist Church.

Instead, the first paragraph should reflect the provisions of the Common Rule. I suggest the following language:

"All of the Institution's human subjects research activities, regardless of whether the research is subject to the U.S. Federal Policy for the Protection of Human Subjects (also known as the Common Rule), will be guided by a statement of principles governing the institution in the discharge of its responsibilities for protecting the rights and welfare of human subjects of research conducted at or sponsored by the institution, regardless of whether the research is subject to Federal regulation. This may include an appropriate existing code, declaration, or statement of ethical principles, or a statement formulated by the institution itself. This requirement does not preempt provisions of this policy applicable to department- or agency-supported or regulated research and need not be applicable to any research exempted or waived under §46.101(b) or (i)."


The current draft allows non-U.S. institutions to comply based on "The 1998 (with 2000, 2002, and 2005 amendments) Medical Research Council of Canada Tri-Council Policy Statement on Ethical Conduct for Research Involving Humans."

This statement has two inaccuracies. First, the Medical Research Council no longer exists; it was replaced in 2000 with the Canadian Institutes of Health Research (CIHR). Second, the TCPS is authored not only by the CIHR but also by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Social Sciences and Humanities Research Council of Canada (SSHRC). (That is what makes it a tri-council policy.)

Moreover, the Panel on Research Ethics plans to release a second edition of the TCPS in December 2010, and may amend it further while the new Terms of the Federalwide Assurance are still in effect. Rather than limit institutions to an outdated version of the TCPS, OHRP should allow non-U.S. institutions to abide by the current version.

Friday, October 1, 2010

U.S. Apologizes for 1940s Human Subjects Research

This is a little off topic for the blog, but today Secretary of State Hillary Clinton and Health and Human Services Secretary Kathleen Sebelius apologized to Guatemala for experiments done there in the late 1940s. Researchers led by a Public Health Service doctor conducted various studies of syphilis, some of which included the deliberate infection of people without their consent.

The story was brought to light by Susan M. Reverby, Marion Butler McLean Professor in the History of Ideas and Professor of Women's and Gender Studies at Wellesley College. Her article, "'Normal Exposure' and Inoculation Syphilis: A PHS 'Tuskegee' Doctor in Guatemala, 1946-48," will appear in the Journal of Policy History in 2011, in a special issue on human subjects research that I edited.

UPDATE: The New York Times has a more complete story, including a nice mention of the Journal of Policy History.

Wednesday, September 29, 2010

OHRP Issues Guidance on Withdrawal

On September 21, OHRP posted new "Guidance on Withdrawal of Subjects from Research: Data Retention and Other Related Issues." The document makes it clear that if a research participant in a social science study wishes to withdraw, the researcher is not obliged under federal regulations to throw out information that has already been collected:

May an investigator retain and analyze already collected data about a subject who withdraws from the research or whose participation is terminated by the investigator?

OHRP interprets the HHS regulations at 45 CFR part 46 as allowing investigators to retain and analyze already collected data relating to any subject who chooses to withdraw from a research study or whose participation is terminated by an investigator without regard to the subject's consent, provided such analysis falls within the scope of the analysis described in the IRB-approved protocol. This is the case even if that data includes identifiable private information about the subject.

Of course, in some cases, researchers may still choose to discard such data:

For research not subject to regulation and review by FDA, investigators, in consultation with the funding agency, certainly can choose to honor a research subject's request that the investigator destroy the subject's data or that the investigator exclude the subject's data from any analysis. Nothing in this document is intended to discourage such a practice. For example, an investigator studying social networks in a community may agree to omit all of the data they have collected from a subject of the study at the request of that subject.

(The clause about the FDA is due to the FDA's concern that withdrawals can skew the findings of clinical trials.)

This guidance strikes me as helpful. When reading sample consent forms, e.g., Cornell's, I am often left wondering what is meant by the boilerplate, "you are free to withdraw at any time," especially when it comes to interviews. If a researcher does a great interview and writes a dissertation chapter around it, can the narrator show up at the dissertation defense and pull the information? (This is apparently the case with undergraduate research at Bard College.)

Fortunately, OHRP says no.

Tuesday, September 28, 2010

Thanks, Law Professors!

Concurring Opinions, which describes itself as "a multiple authored, general interest legal blog," features an interview with your humble blogger, while the Legal History Blog also takes notice of Ethical Imperialism.

Sunday, September 26, 2010

Unfair IRBs Provoke Misbehavior

"Researchers who perceive that they are being unfairly treated are less likely to report engaging in 'ideal' behaviors and more likely to report misbehavior and misconduct," according to a survey of faculty at fifty top research universities in the United States.

[Brian C. Martinson, A. Lauren Crain, Raymond De Vries, Melissa S. Anderson, "The Importance of Organizational Justice in Ensuring Research Integrity," Journal of Empirical Research on Human Research Ethics 5, no. 3. (Sep 2010): 67–83.]

As the authors note, this is mostly a quantitative confirmation of earlier findings. Most relevant for this blog, they cite a 2005 article by Patricia Keith-Spiegel and Gerald P. Koocher that found that "The efforts of some institutional review boards (IRBs) to exercise what is viewed as appropriate oversight may contribute to deceit on the part of investigators who feel unjustly treated."

Like the Singer and Couper article in the same issue, this article presents a mass of quantitative data in a difficult form. Let me suggest that the Journal of Empirical Research on Human Research Ethics invest some money in decent graphs.

Monday, September 20, 2010

Survey Consent Form Language May Not Matter Much

Eleanor Singer and Mick P. Couper of the Survey Research Center of the Institute for Social Research at the University of Michigan find that the wording used to describe the confidentiality offered to survey participants may not play a big role in their decision to participate.

[Eleanor Singer and Mick P. Couper, "Communicating Disclosure Risk in Informed Consent Statements," Journal of Empirical Research on Human Research Ethics 5, no. 3 (Sept. 2010): 1–8.]

Singer and Couper sent out more than 150,000 e-mails to get 9,206 responses to a questionnaire about willingness to participate in a hypothetical survey. Respondents were significantly more likely to say they'd be willing to answer questions about work and leisure than about the more sensitive topics of money and sex. In contrast,

the precise wording of the confidentiality assurance has little effect on respondents’ stated willingness to participate in the hypothetical survey described in the vignette. Nor does adding a statement on the organization’s history of assuring confidentiality appear to affect stated willingness. However, these experimental manipulations do have some effect on perceptions of the risks and benefits of participation, suggesting that they are processed by respondents. And, as we have found in our previous vignette studies—and replicated in a mail survey of the general population—the topic of the survey has a consistent and statistically significant effect on stated willingness to participate.

Singer and Couper hint that researchers and IRBs should spend less time fretting about the wording of consent forms used by survey researchers, since it does not affect decisions and since it is hard to estimate the risk of disclosure. Rather, the real burden on survey organizations is to take precautions once they have collected the data.

Sunday, September 5, 2010

IRB is No Substitute for Shield Law

Education Week reports that researchers are dismayed by the release of data about teachers and students.

[Sarah D. Sparks, "L.A. and Ariz.: Will Data Conflicts Spur a Chill Effect?," Education Week, 3 September 2010.]

The article discusses the decision by the University of Arizona to release some data in response to a subpoena. It claims that "The Code of Federal Regulations for the Protection of Human Subjects delegates confidentiality decisions to university institutional review boards, or IRBs, but in Arizona, the IRBs released the full data over the researchers' opposition." I believe this is incorrect on three counts:

  1. The Common Rule gives power to IRBs to review and approve research. Once the research was complete, it was up to the universities to decide whether to comply with the subpoenas, not the IRBs. Indeed, the open letter from the researchers states that "lawyers at the University of Arizona," not the IRB, turned over information. (The letter does complain that "researchers have received little or no support from their campus IRB, lawyers, or administration," but that's a different thing.)

  2. The use of the plural "IRBs" suggests that more than one university released data. As Education Week itself made clear, Arizona State did not release any data, and the Arizona State professor involved withdrew as an expert witness.

  3. Also as reported by Education Week, the University of Arizona did not hand over "full data," but rather only the names of schools and school districts, not individuals.

That said, Gary Orfield, one of the researchers in the Arizona case, hits on a larger truth when he complains of the University of Arizona's behavior:

"I think it's tragic and very dangerous," Mr. Orfield said. "I was shocked at the way the [State of] Arizona people went after this data and that the universities just went along with it. It really calls into question not just the access to schools but the integrity of the IRB process." Mr. Orfield, Ms. Hannaway, and other researchers suggested researchers may need a federal shield law similar to state laws that protect reporters from being compelled to name sources. "We thought the IRBs served that purpose for us, but we were wrong," Mr. Orfield said.

Indeed, since the 1970s, social scientists have argued that shield laws make more sense for protecting the participants in social science research than do IRBs. [James D. Carroll and Charles R. Knerr, Jr., "A Report of the APSA Confidentiality in Social Science Research Data Project," PS 8 (Summer 1975): 258-261 and James D. Carroll and Charles R. Knerr, Jr., "The APSA Confidentiality in Social Science Research Project: A Final Report," PS 9 (Autumn 1976): 416-419.]

I haven't figured out how a shield law would apply to expert witness testimony. (Anybody looking for a good law review topic?) And even without such a law, Judge Collins's order seems to strike a good balance between the rights of research participants and those of parties to the lawsuit.

Still, it seems that in this case the IRB process left Orfield with a dangerously false sense of security.


The Education Week article also mentions an analysis of teacher effectiveness published by the Los Angeles Times based on 1.5 million test scores. It quotes Felice Levine, the executive director of the American Educational Research Association, on the L.A. Times study: "I think it would really have a crippling effect on all social science, education, and health enquiry if public employees in the sector couldn't be guaranteed the same confidentiality as any other research participant . . . In this economy, people are feeling pressed in a number of ways, and being a participant in a voluntary study is probably lower on one's list of priorities than is providing for oneself and one's children."

But the newspaper analysis was based not on a voluntary study, but rather on scores obtained under the California Public Records Act. Making the scores public in this manner may have been bad policy or bad journalism for other reasons, but I don't see what it has to do with voluntary participation in research.

Friday, September 3, 2010

Oral Historians Open Discussion on Principles and Best Practices

As noted on this blog, in October 2009, the Oral History Association replaced its Evaluation Guidelines with a new set of Principles and Best Practices. The new guidelines are considerably clearer in format, and they distance oral history from the biomedical assumptions of the Belmont Report.

Now the Oral History Association is further distancing itself from the Belmont Report by opening an ongoing discussion of the principles, including suggestions for additional revisions. Whereas the Belmont Report was prepared by a small group of people and has not been amended since 1978, the OHA Principles can remain a living document, revised in response to a discussion that is open to all.

Hat tip: AHA Today.

Wednesday, August 11, 2010

After Lawsuit, Arizona State IRB Hindered Native American Interviews

Kimberly TallBear, assistant professor of science, technology, and environmental policy at Berkeley, describes her encounters with IRBs there and at Arizona State University. At the latter, the IRB imposed conditions that made her abandon plans to interview Native Americans.

["Interview with Kimberly TallBear," GeneWatch, May/June 2010.]

As she puts it:

IRBs vary from university to university, and some are much stricter than others. For example, the Arizona State University IRB is, after the Havasupai lawsuit, incredibly strict where tribes are concerned. If you're going to do research with native populations, whether it's biological research or even social science research, you have to get approval from the tribal council before the university will even look at your protocol. On the other hand, I'm doing a project at Berkeley where I'm interviewing both genetic scientists and tribal government people, and Berkeley didn't look twice at my interview with indigenous people. I asked if they require some sort of documentation that I got approval from the tribe, and they said, "No, no, no, that's not a problem." So there are differences between IRBs as well as between disciplines . . .

I'm not an expert on IRBs, but I can speak from personal experience—I have worked at both Arizona State and Berkeley, so I have seen the huge differences in IRBs. In short, the difference is that ASU has been sued. Before the Havasupai suit, ASU was lax as well.

I was at ASU in 2006 and 2007. As a social scientist, I was interviewing a range of people—native people, scientists, regulators—and the IRB was very strict about allowing me to talk to tribes. I had interviewees at five or six tribes, which meant I would have had to go through each one of those tribes to get approval for those interview questions. So, in order to get approval for my science piece, I backed out of the Native American community member questions.

This was also really interesting: I study the culture and politics of genetic science, and I think they should have been more strict and careful about my research questions for scientists. In my work, scientists are potentially vulnerable subjects. Now, I don't actually think they are very vulnerable—I think they actually have a lot more cultural authority than I do in the broader world—but I'm a potential critic. While the native populations were seen as potentially vulnerable subjects, it didn't seem to have crossed the IRB's minds that scientists could be potentially vulnerable subjects, too.

It was the opposite at Berkeley, actually: they were much, much more concerned about my questions for scientists and protecting their confidentiality, and they seemed not at all concerned about my questions for indigenous people, at least from my perspective.

TallBear does not appear angry that the ASU IRB's strictness forced her to "back out" of planned interviews. Rather, she seems to wish that IRBs were even stricter: "What IRBs require is a bare minimum of the standards that you have to meet to conduct ethical research. IRB approval doesn't constitute a thorough process." And, later, "you see people who have just decided they don't want to work with tribes, because they don't want to have to go through a tribal research review board, they don't want to let a tribal council or a tribal IRB have a say over whether they can publish something or not. I think that's a good thing . . . Go do something else!"

It is not clear from the published interview whether she believes that such discouragement is appropriate only for geneticists and other biomedical researchers, or if she is happy to let tribal governments control the writings of social scientists and journalists as well.

Friday, August 6, 2010

More Universities Uncheck Their Boxes

In 2006, the American Association of University Professors filed a Freedom of Information Act request for a list of all U.S. colleges and universities whose Federalwide Assurances (FWAs) did not check the box on the form pledging to apply federal regulations to all human subjects research, regardless of funding. The list contained 174 entries, though 12 of those were duplicates. (See "IRB Documents" for these lists.)

In March 2010, I reported that OHRP estimated that 26 percent of U.S. institutions had unchecked their boxes, up from only about 10 percent in the late 1990s. Curious about this trend, I requested an updated list, and in April 2010 I received a spreadsheet showing all institutions (including hospitals, health departments, commercial labs, and other health institutions) with unchecked boxes.

Making sense of this list took some processing, which accounts for the delay between my receiving the spreadsheet and this post. I did my best to extract institutions of higher learning, and came up with a list of 207 colleges and universities. Then I compared that list to the 2006 list sent to the AAUP.

Only 60 institutions appear on both the 2006 and 2010 lists. One hundred and two had unchecked boxes in 2006 but not 2010, while 147 unchecked their boxes between 2006 and 2010.

Major research universities appear on both lists. Between 2006 and 2010, William & Mary, Johns Hopkins, Princeton, and the University of Connecticut went from unchecked to checked. Meanwhile, those unchecking boxes included Arizona, Boston University, Brandeis, Emory, George Washington University, Illinois at Urbana-Champaign, Indiana, Michigan State, Minnesota, Northwestern, Notre Dame, Ohio State, University of Pennsylvania, Texas at Austin, Tufts, UCLA, and the University of Southern California. This suggests that the trend is for major research institutions to uncheck. (Apologies to major universities not mentioned; this is my eyeball list, not an effort to correlate the list to Carnegie rankings or anything.)

An unchecked box minimizes a university's exposure to federal oversight and sanction. It does not, however, necessarily change anything for a university's researchers. My own institution, George Mason University, unchecked its box sometime between 2006 and 2010, but the administration has told faculty that it intends to apply all federal regulations to all research, regardless of funding. I imagine the same is true at many of the institutions that have unchecked their boxes.

Update, 15 May 2012, to fix link to "IRB Documents."

Monday, August 2, 2010

Like the blog? You'll love the book!

I am proud to announce the publication of my book, Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965-2009.

The book and the blog are complementary. The former traces the history of IRB review of research in the social sciences and humanities from its origins in the mid-1960s through last year, while the latter documents the ongoing debate over such review. I hope that everyone with an interest in the present debate will share my interest in its past.

The Johns Hopkins University Press has graciously offered a 25 percent discount to readers of this blog: please download the "Now Available" flyer. Books should begin shipping by the end of next week.

Friday, July 30, 2010

Smithsonian Frees Oral History, Journalism, and Folklore

The Smithsonian Institution has posted a document entitled "HUMAN SUBJECTS RESEARCH FAQs." Although undated, the document appears, from its metadata, to have been last modified on 11 June 2010.

The document makes the Smithsonian the latest in a growing number of prestigious research institutions to provide oral historians, journalists, and folklorists explicit permission to do their work without contacting the IRB. Here are the key questions and answers:

6. Are there any examples of activities that aren't considered Human Subjects Research?

The following are specifically excluded from the definition of Human Subject Research and do not need to be reviewed by the IRB:
• interviews used to provide quotes or illustrative statements, such as those used in journalism;
• collection(s) of oral histories and cultural expressions (e.g., stories, songs, customs, and traditions and accounts thereof) to document a specific historical event or the experience of individuals without intent to draw statistically or quantitatively-based conclusions or generalizations;
• gathering of information from a person to elucidate a particular item (or items) in a museum collection;
• gathering of information from a person to assess suitability for and/or supplement a public program, publication, or cultural performance; or
• survey procedures, interview procedures, or observations of public behavior that are conducted for Smithsonian internal purposes only, the results of which will not be published or presented in a public setting (e.g., at conferences or professional meetings).

7. I think my project is an "oral history" and doesn't need to be reviewed by the IRB. How can I be sure?

The hallmark of an oral history is that it stands alone as a unique perspective rather than an item of data that can be qualitatively analyzed to reach a general conclusion or explanation. If your intention is to interview people who have a unique perspective on a particular historical event or way of life, and you also intend to let the individuals' stories stand alone, with no further analysis, the research is most likely oral history and you do not need to have the research reviewed by the IRB. However, if the surveys or interviews are conducted with the intention of comparing, contrasting, or establishing commonalities between different segments or among members of the same segment, it is safe to say your research will be regular survey/interview procedures, because you will be generalizing the results and your research may need IRB review.

While it is welcome, I can't say this is the most elegant policy. A policy that turns on a researcher's intentions and post-interview decisions, rather than on his or her conduct of the interviews themselves, is hard to administer. And wouldn't a journalist gathering reactions to an event be "comparing, contrasting, or establishing commonalities between different segments or among members of the same segment"?

By contrast, Princeton University distinguishes among types of interviews based on the likelihood that the people being interviewed will understand that they are speaking for the record.

Wednesday, July 28, 2010

Hospital IRB Forbids Interviews with Patients

The Yale Interdisciplinary Center for Bioethics offers an interview project as one of its Cases in Research Ethics, which describe choices faced by hospital IRBs in Connecticut.

Case 3 concerns a nurse who was also a divinity school student, and who gained approval from her hospital's IRB to interview fifteen "hospital patients who were suffering from a progressive and/or life-threatening disease such as cancer" about their religious beliefs and practices and the role of religion in their feelings about their illnesses. Patients agreed to participate after "a thorough review of the purpose of the study, the nature of the questions and the time involved for participation."

Eleven interviews went fine. Then the twelfth patient "became agitated and demanded the researcher leave immediately. The researcher spoke with the hospital nurses and was informed that this subject had 'fallen away' from her prior religious involvement and had wondered if her malignancy was divine retribution for her lapse."

The researcher dutifully reported this as an adverse event. The IRB then reconsidered the project and voted 10 to 1 to forbid the researcher from interviewing the three additional patients.

As described in the case study, the IRB recognized that, collectively, it knew little about this kind of research. "While this IRB was routinely accustomed to addressing the standard types of adverse medical events seen in oncology drug trials, it did not consider the possibly significant adverse psychological consequences of asking these same subjects about their religious and spiritual beliefs vis-à-vis their disease."

Yet the IRB's awareness of its ignorance did not prevent it from stopping the research project. The case study does not give the reason for this decision.

Monday, July 19, 2010

SACHRP to Discuss Internet Research

The July 21 meeting of the Secretary's Advisory Committee on Human Research Protections will sponsor a panel entitled "The Internet in Human Subjects Research," featuring Elizabeth Buchanan, Montana Miller, Michael Zimmer, and John Palfrey. It should be an interesting session. Unfortunately (and ironically), SACHRP has stopped posting transcripts of its meetings, so it's not clear how much of the content will be available to Internet researchers. I am told the meeting minutes will be posted at some point.

Friday, July 16, 2010

Librarian Urges Cooperation with IRBs

Maura Smale, information literacy librarian at the New York City College of Technology, suggests that librarians "embrace research involving human subjects" and seek IRB approval to do so.

[Maura A. Smale, "Demystifying the IRB: Human Subjects Research in Academic Libraries," portal: Libraries and the Academy 10 (July 2010): 309-321, DOI: 10.1353/pla.0.0114]

Smale notes that librarians can interact with IRBs in two ways. First, they can serve as IRB members or consultants, helping researchers and reviewers inform themselves about a proposal. Better library research, she suggests, could have prevented the 2001 death of Ellen Roche, a volunteer in a Johns Hopkins University asthma study. Smale could also have mentioned that better library research might prevent unreasonable IRB demands.

Second, librarians can act as researchers. Smale offers as examples two of her own studies of student and faculty users of her library. She found value in the approval process:

While it was a lengthy and labor-intensive process, obtaining IRB approval was an experience with real value, not simply a bureaucratic hurdle to overcome. Applying to the IRB required us to think deeply and critically about the goals for our research project while still in the early planning stages of the study; navigating the IRB approval process helped us make our research project both stronger and more relevant. Additionally, because we created all of our materials for the IRB application, we were ready to get started on our project as soon as the IRB approval came through, which saved us time at the beginning of our study. (317)

Smale does note that approval took five months, leading the skeptic to ask whether the same deep thinking could have been achieved in less time by another form of review.

Most of Smale's article is less of an argument than an introduction to IRBs for librarians new to the concept. (309). While it serves reasonably well for this purpose, the article unfortunately includes some factual errors that deserve correction:

  • "Any study involving human subjects that meets the definition of research in the Belmont Report requires review by the IRB." (312) In fact, the Belmont Report has no legal force, and it is the definition of research in 45 CFR 46 that determines the need for IRB review. That this definition does not match the definition in the Belmont Report suggests the imprecision of the work of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (More on this in Ethical Imperialism.)

  • "There are three levels of IRB review—exempt, expedited, and full. The IRB evaluates each research project and determines the level of review required; researchers may not make this determination on their own." (312) Exempt means exempt; it is not a level of IRB review. The regulations do not forbid researchers from making the exempt determination. And not even OHRP's recommendations insist that an IRB be involved in that determination.

  • "Certain types of studies automatically meet the criteria for exemption set forth in the Common Rule, including research on 'normal educational practices' such as curriculum design, instruction, and assessment. Research involving use of previously collected data is also usually exempt. In both cases the subjects' anonymity must be preserved." (313) The "normal educational practices" exemption, 45 CFR 46.101(b)(1), imposes no requirement of anonymity. The existing data exemption, 45 CFR 46.101(b)(4), does not require anonymity if the data are publicly available.

  • "Library research projects that include procedures in which the researcher is in direct contact with the subject will usually be required to undergo expedited review by the IRB." (315) Perhaps this is the practice at Smale's institution, but the regulations exempt this kind of research unless "any disclosure of the human subjects' responses outside the research could reasonably place the subjects at risk of criminal or civil liability or be damaging to the subjects' financial standing, employability, or reputation." [45 CFR 46.101(b)(2)]. This would not seem the case in the kind of research Smale proposes concerning "the use of space in the library" or "collaboration between the library and the campus writing center." (318)

  • "It is worth noting that the underlying principles used by the IRB to evaluate projects involve ethical treatment of subjects and preservation of privacy and are similar to the recommendations of many discipline-specific professional organizations, including the Oral History Association and the American Anthropological Association." (316). For over a decade, the Oral History Association has been fighting IRB requirements and insisting on the differences between the ethics of medical research and the ethics of oral history. Smale does cite the CITI Program in support of this assertion, but she fails to notice that the CITI Program offers no support for its statement.
  • [See comments for a correction.]

I am grateful to Smale for sharing her experience and for her kind citations to this blog and to my scholarship. But I fear that she has too readily accepted the claims of IRB administrators and training programs, leading her to advise librarians to tolerate months of delay when they should be demanding swift exemption.

Monday, July 12, 2010

Social Work Research Gets Few Exemptions

Stephanie Valutis and Deborah Rubin, both of Chatham University, sought "to explore the attitudes toward, knowledge about, and practices of IRBs across colleges and universities as reported by BSW [bachelor of social work] and MSW [master of social work] program directors as they pertain to faculty and student research."

[Stephanie Valutis and Deborah Rubin, "IRBs and Social Work: A Survey of Program Directors’ Knowledge and Attitudes," Journal of Social Work Education 46 (Spring/Summer 2010): 195-212, DOI 10.5175/JSWE.2010.200800059.]

They sent a survey to social work programs around the country, receiving 201 responses. They asked both factual questions about the composition and operations of the IRBs, and questions about the program directors' attitudes.

Among the key findings:

  • Familiarity improves attitudes. "Respondents who reported higher levels of knowledge about their IRBs had more positive responses to several attitude questions." (201)

  • IRBs grant few exemptions for three types of social work research: closed case files (28 percent of IRBs consider them exempt from review), satisfaction surveys (23 percent), and staff interviews (16 percent). The article does not go into depth about what each type of research entails, why an IRB might choose to require review, or whether social work program directors believe such research should be exempt. (205)

  • IRBs take a long time to approve research. While about half of program directors reported that the exempt and expedited reviews took less than two weeks, 17 percent reported exempt reviews taking one month or longer, and 11 percent reported expedited reviews taking that long. Thirty-seven percent reported full reviews taking one month or longer. And this question produced many "Do not know" responses, so the true level of delay may be much higher. (205)

  • Some students aren't allowed to do research with human subjects. Seven percent of program directors reported that "social work students were not permitted to do research that required IRB approval." (206)

I have my doubts about the usefulness of this survey, for two reasons. First, the survey posed factual questions (e.g., "How long does it take for initial review of an expedited submission?") to program directors who had no easy way of finding out this information. The authors rightly note that "the many 'don't know' responses" suggest a lack of transparency in IRB operations. But a better survey would have reached IRB administrators or chairs as well, allowing for some comparison. [For an example of this type of survey, see Robert E. Cleary, "The Impact of IRBs on Political Science Research," IRB: Ethics and Human Research 9 (May-June 1987): 6-10.]

As for the attitudinal questions, they only allowed respondents to agree or disagree with positive statements about IRBs, e.g., "The IRB process helps students learn research ethics." I can't credit the conclusion that "We did not find the frustration with the process and scope of IRB reviews discussed in the broader social science literature," when the survey offered no opportunity to register such frustration. In his pioneering IRB survey of 1976, Bradford Gray understood the need to give respondents a chance to react to more critical statements, e.g., "The review procedure is an unwarranted intrusion on an investigator's autonomy--at least to some extent." [Bradford H. Gray, Robert A. Cooke, and Arnold S. Tannenbaum, "Research Involving Human Subjects," Science, new series, 201 (22 September 1978): 1094-1101] This survey should have done the same.

Indeed, while Valutis and Rubin cite a fair amount of IRB-related scholarship, it is not clear that they read any previous surveys of this sort before designing their own. Rather, they report concerns about the use of "a new survey instrument." (209)

The article also shows some confusion about federal regulations. It states that "Calling research 'exempt' by federal guidelines means that the research poses no risk to human subjects." While it is true that the 1981 Federal Register announcement of the exemptions describes them as exempting "broad categories of research which normally present little or no risk of harm to subjects," little risk is not the same as "no risk." And the regulations themselves exempt some research, e.g., interviews with public officials, regardless of risk. Later, the article claims that "an example of criteria for exemption by federal guidelines is research that does not pose more than minimal risk to human subjects." Actually, that's the criterion for expedited review, not exemption. Finally, the article claims that "Federal regulations require that IRBs make IRB membership available by name, role on the board, and earned degrees, but this information may not be widely disseminated." Indeed, that information is included on federal assurances, but those assurances are rarely made public.

Valutis and Rubin have raised important questions about how IRB oversight affects the education of social work students. But complete answers will require further research.

Friday, July 2, 2010

Librarians Ponder IRB Resolution

On June 29, at the American Library Association's Annual Conference, Melora Ranney Norman proposed a "Resolution on Institutional Review Boards and Intellectual Freedom."

Norman, a former chair of the ALA's Intellectual Freedom Round Table, noted that

Despite the fact that walking down the street is more dangerous than any conversation could ever be, on some college and university campuses, assertions of liability or vague, unproven risk are allowed to trump any actual proof of risk or danger, to the detriment of the preservation of knowledge and the human record.

Libraries are all about preserving and providing access to the human record with all its pimples, bumps, and bruises. Many of us have heard a quote attributed to Jo Godwin asserting that "A truly great library contains something in it to offend everyone." If the human record is not created to begin, how can we collect, preserve, and provide access to it?

She then called for the ALA to "[support] the American Historical Association in its position on oral history Institutional Review Board exemption, and [join] with the American Association of University Professors in recommending that 'research on autonomous adults whose methodology consists entirely in collecting data by surveys [or] conducting interviews . . . be exempt from the requirement of IRB review—straightforwardly exempt, with no provisos, and no requirement of IRB approval of the exemption.'"

The ALA Council voted to refer the resolution to the Intellectual Freedom Committee, the Library Research Roundtable, the Library History Roundtable, and the Committee on Professional Ethics. The American Historical Association hopes "they will reconsider the decision and support our efforts after further review."

Norman has also posted a Q & A, IRBs and Intellectual Freedom.

Hat tip: Rob Townsend.