Thursday, December 25, 2008

The Costs of Ethical Review

In his article on "Regulatory Innovation," discussed here earlier, Scott Burris complains that

the core problem with the Common Rule is the IRB’s power to treat its insights and risk–benefit calculations as “right answers” that may be imposed at no cost to the IRB upon researchers whose own ethical reflection may have led to different, equally defensible conclusions.


Robert Dingwall concurs in his essay, "The Ethical Case Against Ethical Regulation in Humanities and Social Science Research," 21st Century Society 3 (February 2008): 1-12. Though Dingwall writes about the British system, he notes that it looks "very like US Institutional Review Boards, and their analogues in Canada and Australia." (4) British boards, and British rules in general, fail to account for the costs of ethical review.

This has real consequences. Dingwall relates his own experience:


A colleague and I were recently commissioned by the NHS [National Health Service] Patient Safety Programme to study the national incidence and prevalence of the reuse of single-use surgical and anaesthetic devices, and to consider why this practice persisted in the face of strict prohibitions. Part of this involved an online survey, using well-established techniques from criminology to encourage self-reporting of deviant behaviour, so that relevant staff in about 350 hospitals could complete the forms without us ever needing to leave Nottingham. However, a change in NHS ethical regulation meant that we needed approval from each site, potentially generating about 1600 signatures and 9000 pages of documentation. Although we never planned to set foot in any site, it would also have required my colleague to undergo around 300 occupational health examinations and criminal record checks. As a result, we were unable to carry out the study as commissioned and delivered a more limited piece of work. Other estimates suggest that the practice we were studying leads to about seven deaths every year in the UK and a significant number of post-operative infections. The ethical cost of the NHS system can be measured by the lives that will not be saved because our study could not investigate the problems of compliance as thoroughly as it was originally designed to. (10)


This is a stark example, but Dingwall sees it as emblematic of a general drag on social research that has consequences for the future of free societies. Ethical regulation of humanities and social science research, he argues, contributes to "a waste of public funds, serious information deficits for citizens, and long-term economic and, hence, political decline . . . " (10)

Dingwall discounts the need for oversight, arguing that humanities and social science researchers "do nothing that begins to compare with injecting someone with potentially toxic green stuff that cannot be neutralised or rapidly eliminated from their body if something goes wrong. At most there is a potential for causing minor and reversible emotional distress or some measure of reputational damage." (3) I think this takes the case too far. See Sudhir Venkatesh’s Gang Leader for a Day for a recent example of a social scientist who seriously hurt people by breaking their confidences. (The book is recent; the incident took place in the early 1990s.) Dingwall's own research, had it exposed a physician who was illegally re-using devices, would have done irreversible harm to that physician. Rather than arguing that such harms are impossible, Dingwall would be better off arguing that they are a) rare, and b) not likely to be prevented by the forms of prior review now in place.

The Belmont Report calls for "systematic, nonarbitrary analysis of risks and benefits . . . This ideal requires those making decisions about the justifiability of research to be thorough in the accumulation and assessment of information about all aspects of the research, and to consider alternatives systematically." If we were to hold regulatory regimes to the same standard, we would find ample risks, few documented benefits, and no consideration of alternatives.

Sunday, December 14, 2008

Burris on Compliance vs. Conscience

Scott Burris, a law professor and author of at least three earlier articles on IRBs and human subjects regulation, takes on the Common Rule in "Regulatory Innovation in the Governance of Human Subjects Research: A Cautionary Tale and Some Modest Proposals," Regulation & Governance 2 (March 2008): 65-84.

Wednesday, November 26, 2008

IRBs vs Law and Society

The Law and Society Association (LSA) has posted “The Impact of Institutional Review Boards (IRBs) on Law & Society Researchers," a 2007 report by the association's Membership and Professional Issues Committee.

In the spring of 2007, the committee put out a general call for comments from association members, receiving 24 replies. Committee members also interviewed association members, taking special interest in those who had served on IRBs. This produced some accounts of frustrating encounters that led to the destruction of research, especially cases involving research on crime and punishment, a special concern of the association.

One particularly sad story came from "Respondent 019":


I knew when completing my [questionnaire] that I would have some difficulty getting my topic approved because it related to a protected population. I completed the [questionnaire] honestly and accurately and finally heard back that I had to make substantial revisions to my proposal. Not only did I take all of the IRB’s recommendations for review into account when completing a second questionnaire, but I even had frequent contact with the compliance coordinator and the secretary to make sure I was doing everything correctly. I submitted a second [questionnaire] that covered all the areas that caused problems with my first proposal. Finally, I heard back, and I was rejected on a whole other set of criteria that IRB never mentioned when they rejected me the first time. Finally I ended up changing my topic enough so that I no longer had to deal with IRB because they had pushed back the start date of my research so far that I couldn’t risk not getting approved again.


Other accounts describe IRB blockage of research concerning mental health facilities that house sex offenders, as well as prisons in the United States and Turkey.

The report concludes with three sets of recommendations:

First, it wants to restrict IRB jurisdiction: "the LSA should strive to minimize the scope of the IRBs' regulations over non-behavioral studies and make the procedures of approving behavioral studies as smooth and expedient as possible."

Second, the report calls for a program of research and education, ranging from conference panels and publication to a statement of best practices for research. The goal is "a nuanced and contextual view of the IRB process, one that moves away from hard and fast 'rules' for most social science research, allowing for optimal protection of human subjects without inhibiting research goals."

Finally, the report claims that "the most successful IRBs (in terms of 'customer satisfaction') are those with a decentralized 'sub-board' system," citing UCLA and Macalester as examples. I find this unpersuasive, given that the report is based in part on the work of Jack Katz of UCLA, who seems pretty unsatisfied with his IRB.

While not directly inconsistent, these three recommendations are in some tension with each other, as well as with the evidence presented in the report and positions taken by the LSA. The report's collection of horror stories--unrelieved by any reports of IRB contributions to ethical research--calls into question the propriety of any IRB review of social research, and the report's analysis suggests that there is no legal requirement for such review. Put together, these sections of the report support Katz's conclusion that "the optional decision to push all ethical review of social science and humanistic research through a prior review sieve is not only massively inefficient, it is also counterproductive where risks are most serious." ["Toward a Natural History of Ethical Censorship," Law & Society Review 41 (December 2007), 807.]

By contrast, the calls for additional research, a more flexible IRB process, and decentralization all resemble the stance of Felice Levine and others, who advocate working within the current system.

In particular, the report raises the question of LSA's position on the scope of the "behavioral research" mentioned in the National Research Act, as distinct from other forms of interaction with human beings. "There has been no mention, or expressed intention regarding human subject research in law and the social sciences," the report notes, "and clearly not all research in law and society is 'behavioral.'" If this is the case, then many of OHRP's policies and interpretations of the Common Rule lack a statutory basis. Yet in December 2007, five months after the report was presented to the association, the LSA signed onto Levine's comment to OHRP, in which she wrote of "social and behavioral sciences (SBS)," conflating the very categories the report wishes to keep distinct.

The July 2008 newsletter of the association mentions the report, but it does not state whether the association has accepted its committee's position or taken any action in response. I hope the LSA will continue work in this field, and that it will consider further the question of whether all survey, interview, and observational research should be made subject to a law governing "behavioral research."

Saturday, November 22, 2008

OHRP Continues Indiana-Bloomington Investigation

In September I reported on the OHRP investigation of Indiana University-Bloomington, ably covered by the Bloomington Herald-Times. Along with several stories, an editorial, and at least two op-eds, the newspaper posted heavily redacted copies of the OHRP letter and the IU reply.

On October 3, I submitted a Freedom of Information Act request for the unredacted OHRP complaint. This week, I received a reply, dated November 17, stating that "the subject matter of your request is the subject of an open and ongoing investigation. Release of any additional information at this time could reasonably be expected to interfere with ongoing proceedings." Hence, I received no information. We still don't know what Indiana-Bloomington did to bring down the federal hammer, or if the hammer will strike again.

Meanwhile, the crackdown has disrupted research, especially for social scientists. As the Herald-Times reported on October 8 (Nicole Brooks, "IU Research Oversight Office Has More Staff, But Projects Still Delayed"),

The need to “attend to the compliance issue speedily” led to Bloomington researchers using the same proposal forms as IUPUI faculty, according to [Research Affairs Committee chairman Stephen] Burns. These forms are designed for medical research, and are “more complex than needed,” especially for social science researchers.

This has caused some faculty — and students — to not bother with some research projects, Burns said. And some students are changing their thesis topics so they don’t include human subjects research, he said.

Before the compliance issue came into play, when Bloomington campus faculty used their own form and not IUPUI’s, this was a problem, Burns said. Research topics are becoming more and more diverse, and the divide between what information is necessary for different kinds of research is widening, he said.


The upshot is that faculty and students at a major research university are abandoning their research because of secret allegations against their university's administration. It's all very well for OHRP to claim (as Ivor Pritchard did at the October SACHRP meeting) that OHRP enforcement actions are rare. But even the rare crackdown, if as severe as this one, is enough to have IRBs nationwide quaking in fear, and putting regulatory compliance above all other considerations.

Friday, November 21, 2008

Can IRBs Handle the Nuances of Anthropological Ethics?

In a November 14 essay in the Chronicle of Higher Education ("New Ethical Challenges for Anthropologists"), Carolyn Fluehr-Lobban reports on the work of the Commission on Engagement of Anthropology with the U.S. Security Community, of which she is a member. The commission was established by the American Anthropological Association in 2006, in response to complaints that anthropologists had acted unethically by participating in the Department of Defense's Human Terrain System and other national security programs.

Fluehr-Lobban offers a carefully nuanced discussion of the questions of secret research and research that harms its subjects. Both require subtlety. As she notes,

even in agencies well known for their secrecy, like the Central Intelligence Agency, such terms as "transparency" and "disclosure" have become more common, and secrecy less easy to define; "classified" government documents can be accessed by scholars and journalists, while truly top-secret materials deal with intelligence research rather than anthropology. Moreover, scholars may not be able to discern whether their work contains secret material when projects are compartmentalized, and when their contribution is only a segment of a project whose wider mission is unknown. In short, anthropologists who provide "subject matter expertise" may not know the direct or indirect impact of their engagement.


Likewise, harm can be hard to predict, or even define. As Fluehr-Lobban explains,

Anthropologists have been deployed to Iraq and Afghanistan as part of Human Terrain Teams embedded with combat troops. Part of the work they do unquestionably causes harm to some people -- but it may prevent harm to others. In addition, we know so little about what the teams do, or the projects they are part of, that objective evaluation is impossible at present.


What does this tell us about IRB review of social science research? Well, that's not clear either.

On the one hand, the commission asks anthropologists to "be assured that adequate, objective review of the project has been conducted, ideally by external reviewers."

On the other hand, Fluehr-Lobban suggests that such review would require expertise not found on the typical IRB. She concludes,

Ideally, decision making occurs in a group process where the relevant disciplinary, cultural, and government-agency stakeholders are at the table . . . Consultation with professionals in related disciplines who have been grappling with issues of engagement -- for example, psychologists who have debated their role in identifying what would constitute "soft torture" and their alleged involvement in interrogations in Abu Ghraib and Guantánamo -- is also recommended, as well as with those who have been historically engaged without serious controversy -- for example, political scientists working as consultants on terrorism for defense and intelligence agencies.

Perhaps the best advice will come from one's own disciplinary colleagues. To that end, a group, "Friends of the Committee on Ethics," may be set up to offer informal, private advice about research ethics. As we think through the various issues that secrecy and doing no harm demand of us, such a committee will have an important role to play in helping define increasingly complex anthropological practice.


These latter recommendations do suggest a role for interdisciplinary consultation, but they are no endorsement of the current IRB system, which makes no provision for assuring review by relevant stakeholders, professionals who have grappled with the issues, or disciplinary colleagues. I hope that Fluehr-Lobban's commission will explore the implications of its findings for the appropriate mechanisms of ethical review.

(Thanks to the Research Ethics Blog for bringing this to my attention.)

Saturday, November 15, 2008

Comments Oppose New Regulations on Training

As reported by the Chronicle of Higher Education, OHRP recently released the replies it had received in response to its July call for comments on education and training requirements. I thank Chronicle reporter David Glenn and OHRP associate director Michael Carome for supplying me with copies of the comments.

As I see it, the comments pose three main questions.

Tuesday, November 4, 2008

Chronicle of Higher Education on OHRP Training Comments

The Chronicle of Higher Education quotes your faithful blogger in "Scholars Mull Rules for Training in Research Ethics," by David Glenn, 4 November 2008.

The story concerns the eighty or so comments received in response to OHRP's July call for comments on education and training requirements. Glenn notes that by and large, the comments were skeptical about the need for new guidance, and particularly skeptical about regulations. As he reports, "the American Association for the Advancement of Science, the Association of American Universities, the Association of American Medical Colleges, and a consortium of large social-science organizations [all] said that before the federal government issues new rules, it should carefully study whether training actually improves researchers' conduct."

I will offer some of my own comments on the comments in coming posts.

Monday, October 27, 2008

Report from SACHRP, October 2008

Today I attended the public meeting of the Secretary's Advisory Committee on Human Research Protections (SACHRP). Most of the day's discussion concerned the slow pace at which SACHRP's recommendations have been implemented; some have been sitting for years without any action. OHRP officials offered detailed explanations of the complexity of the policy-making process and OHRP's lack of resources. While these issues help explain OHRP's neglect of questions important to the social sciences and humanities, today's discussion had little of direct importance to scholars in those fields.

Here are a few tidbits of potential significance.

What is the difference between guidance and regulation?



Christian Mahler, a lawyer with HHS's Office of General Counsel, explained that only regulations can be enforced; as a legal matter, institutions are free to ignore OHRP guidance documents. But he and other officials acknowledged that in practice, the difference may not be great. As acting OHRP director Ivor Pritchard conceded, "when we issue guidance . . . people look at every phrase, clause, use of punctuation to see what was meant by OHRP.” He noted that institutional officials may believe the safest course is to comply with all OHRP guidance.

Pritchard said that OHRP tries to differentiate, when drafting guidance, between "must" statements (that indicate OHRP's interpretations of the regulations) and "should" statements that can be ignored if an institution has good reason. He also stated that the best course might be to issue guidance documents that offer multiple ways to comply with the regulations, though it's not clear that OHRP has ever issued such guidance.

Finally, Mahler pointed to the Office of Management and Budget's 2007 "Final Bulletin for Agency Good Guidance Practices.” That bulletin notes that


The courts, Congress, and other authorities have emphasized that rules which do not merely interpret existing law or announce tentative policy positions but which establish new policy positions that the agency treats as binding must comply with the [Administrative Procedure Act]’s notice-and-comment requirements, regardless of how they initially are labeled. More general concerns also have been raised that agency guidance practices should be better informed and more transparent, fair and accountable. Poorly designed or misused guidance documents can impose significant costs or limit the freedom of the public.


Despite that last sentence, the bulletin seems designed more to limit regulation of economic affairs than to safeguard the freedom of the public. Still, if its approach were followed, OHRP might not be able to get away with so many arbitrary decisions.

What is research?



In February 2007, then-OHRP director Bernard Schwetz told the New York Times that OHRP would, by the end of 2007, issue guidance on what is and is not research under the regulations. OHRP is almost a year overdue in keeping that promise, but apparently it is still at work. Pritchard noted that "We have been working on a guidance document on the definition of ‘research’ for several years now.” Pritchard did not indicate how far along that process is, or whether OHRP will solicit public comment before promulgating that document.

What is an ideal consent process?



In her presentation, Elizabeth Bankert, co-chair of SACHRP's Subpart A Subcommittee, addressed the problem of IRB insistence on long, complex consent forms. She noted that while the regulations requiring consent forms have not changed since 1991, IRBs have been demanding ever more detailed forms, and have gained a reputation for "wordsmithing" and "nit-picking." Not only does this erode investigator trust and respect for IRBs, but it also "diminishe[s] the consent process for subjects."

Bankert--drawing on work by subcommittee member Gary Chadwick--challenged the presumption that "the form must contain every piece of information and in the same detail as required in the consent process," which leads to such long forms. She called upon OHRP and FDA to endorse the use of shortened consent forms: about 3-4 pages for a clinical trial, and just one for "surveys, etc."

Bankert offered these suggestions as a way to decrease some burdens on IRBs and investigators, but she then suggested that IRBs would still need to review the consent process. She did not explain how the same incentives for nit-picking wouldn't lead to endless fretting about the consent process, rather than the consent form. As SACHRP member Jeffrey Botkin pointed out, IRBs and investigators turn to long forms out of self-preservation. And as David Foster noted, FDA inspectors and AAHRPP accreditors have demanded ever longer forms. Before Bankert and Chadwick can address the problem of silly forms, they need to understand the system that produces them.

Predictably, almost all the discussion of consent forms centered on clinical trials. For example, Bankert offered sample executive summaries for a drug trial and a request to store blood or tissue for future research, but no comparable document for a social science project. And one committee member, expressing her concerns, forgot to speak of "human subjects" and instead started talking about protecting "patients." Given the complexity of the issue, and the dominance of SACHRP by medical researchers and bioethicists, I would expect that any change will be designed for clinical trials, and then imposed on social researchers.

Sunday, October 26, 2008

Menikoff to Head OHRP

The Department of Health and Human Services has announced the appointment of Dr. Jerry Menikoff as the director of the Office for Human Research Protections (OHRP). Menikoff holds degrees in medicine, law, and public policy, and has served as a law professor and public official.

A quick glance at Dr. Menikoff's CV suggests that while he has published extensively on the ethics of medical experimentation, he has written little on questions of the regulation of social science. The exception is his article, "Where’s the Law? Uncovering The Truth About IRBs and Censorship," Northwestern University Law Review 101 (2007): 791-799, which I described in a January 2007 blog entry. In that article, Menikoff suggested that the 46.101 exemptions were sufficient to avoid any conflict with the First Amendment.

Menikoff does understand that many IRBs have abused social researchers. "There are surely too many instances in which IRBs and others fail to understand, and properly administer, the regulations," he writes. (793, n. 9) Later, in a response to an Inside Higher Ed article about this blog, he wrote, "As to social science and behavioral studies, I do support reforms to relax the rules somewhat, though the claims that the system constitutes censorship under the U.S. Constitution are overkill."

It is not clear, however, that Menikoff understands how rarely the 46.101 exemptions are applied, and how much responsibility OHRP bears in their disappearance. And in his Northwestern piece, Menikoff praises OHRP for "concluding . . . that much of the work performed by oral historians, in sitting down with people and getting information from them, does not fall within the category of doing 'research.'" (798) He seems not to know that OHRP so contradicted itself that its pronouncement had no effect on most universities.

If Menikoff uses his position to revive 45 CFR 46.101, to restore the agreement with oral historians, and to correct other lapses between the regulations as written and as enforced by OHRP, his accession to the OHRP directorship will be great news not only for scholars in the social sciences and humanities, but also for participants in medical research, since resources can be reprogrammed for their protection. Under the leadership of a lawyer, perhaps OHRP will begin to obey the law.

Wednesday, October 22, 2008

45 CFR 46.101 Is Still Dead

At the September AAAS meeting, I learned of the June 2008 report, "Expedited Review of Social and Behavioral Research Activities," put out by the Social and Behavioral Research Working Group, Human Subjects Research Subcommittee, Committee on Science, National Science and Technology Council. The brief report (11 pages, including a great deal of white space) offers thirteen "brief illustrative examples" of "social and behavioral research activities [that] qualify for expedited review . . . assuming that they also meet the standard of minimal risk."

While presenting itself as an effort to help researchers, administrators, and reviewers "avoid needless misunderstanding and delays in the review process," the document threatens to add misunderstandings and delays by suggesting review for many projects that should be exempted from IRB oversight.

Here are the thirteen examples, to which I have added numbers for discussion purposes. The categories listed at the end indicate the reasons the working group believes the examples are eligible for expedited review, based on the 1998 list of categories.

Sunday, October 19, 2008

AAAS Hears Clashing Views on IRBs

The American Association for the Advancement of Science (AAAS) has posted a summary of a September 22 meeting on IRBs and the social sciences, in which I took part.

As the press release, "AAAS Meeting Explores Ways to Improve Ethics Panels that Oversee Social Science Research," notes, "from both researchers and administrators alike, there was general agreement that the system can and should work better," but participants disagreed about what a better system would look like.

Social researchers recounted some horror stories, the most vivid of which was Gigi Gronvall's:

Gigi Gronvall, a senior associate at the Center for Biosecurity of the University of Pittsburgh Medical Center, recalled her efforts to do a survey of scientists who do dual-use research in biology, such as work on viruses that might have military as well as civilian application. The IRB at Johns Hopkins University, where the biosecurity center was located at the time, asked Gronvall to give a three-page consent form to all those she interviewed. It included a warning that the respondent, in agreeing to answer Gronvall's questions, ran the risk of being investigated by government agencies, being "exploited by hostile entities," or even being kidnapped.

The IRB's members, Gronvall said, "were totally over-identifying with my subject population." The result was a six-month delay in the survey project, during which Gronvall almost lost her funding. "The IRB should be on your side," she said. "That's not how I felt during this."

Gronvall said that blocking or delaying research on a controversial topic can mean that it will be explored only in the news media, without any IRB-style protections for those being interviewed.


In addition to such personal experiences, some participants warned of fundamental flaws in the IRB system. I described the origins of IRB review as the work of physicians, psychologists, and bioethicists who had no understanding of the methods and ethics of social scientists, and assembled no evidence of widespread abuse by them. Joan Sieber, editor of the Journal of Empirical Research on Human Research Ethics, suggested that the horror stories are not aberrations, but common.

IRBs had their defenders, particularly scholars and consultants with a background in medicine. Anne N. Hirshfield, associate vice president for health research, compliance and technology transfer at the George Washington University, claimed that "IRBs can only work if there is mutual attention to a common goal—conducting ethical research that protects the rights and welfare of participants." But she also argued that "it is the obligation of the P.I. to know what the IRB needs and to stage the argument. Give the citations to show that your work is not risky." In other words, researchers must prove a negative, while IRBs are free to conjure up scientist-kidnappers.

Perhaps the most impressive presentation was that of Janet DiPietro, associate dean for research at Johns Hopkins University's Bloomberg School of Public Health, who described her efforts to reform the IRB there. If all IRBs were managed by someone as thoughtful as she, we'd have far fewer complaints. But there aren't enough DiPietros to go around, so her achievements do not make a good case for granting administrators nationwide such power.

The press release concludes with a statement from Mark Frankel, staff officer for the AAAS's Committee on Scientific Freedom and Responsibility:

Some Committee members believe that the balance is awry because IRB's are imposing unwarranted and arbitrary demands on proposed research by social and behavioral scientists. This raises serious issues related to scientific freedom insofar as such actions lead to potentially valuable research that is inappropriately altered, unduly delayed, or not done at all.


While the statement is noncommittal about specific policy recommendations, its recognition that IRB review is a threat to scientific freedom is in itself an important finding. I look forward to further AAAS efforts to explore this problem and contribute to its resolution.

Wednesday, October 15, 2008

Treeroom Trade

John Mueller kindly alerted me to Gautam Naik, "Switzerland's Green Power Revolution: Ethicists Ponder Plants' Rights," Wall Street Journal, 10 October 2008.

Naik reports that the government of Switzerland has required researchers to "conduct their research without trampling on a plant's dignity," based on an April 2008 treatise, The Dignity of Living Beings With Regard to Plants.

The treatise itself notes that the members of the Federal Ethics Committee on Non-Human Biotechnology could not reach consensus on most of the moral questions they considered, and some worried about over-regulation. Nevertheless, "The Committee members unanimously consider an arbitrary harm caused to plants to be morally impermissible. This kind of treatment would include, e.g. decapitation of wild flowers at the roadside without rational reason," though even then the committee members disagreed over why such decapitation is impermissible. Nor does the report lay out criteria for a "rational reason." What if it's fun to decapitate wildflowers?

[Disclosure: As a boy, I liked to knock the heads off dandelions with a stick. Guess I won't be getting a visa to Switzerland any time soon.]

Naik's article focuses on the threat to genetic research, but if there's anything that four decades of human subjects regulation has taught us, it's that government officials refuse to recognize distinctions between lab experimentation and social science. Perhaps we should expect restrictions on the observation, surveying, and interviewing of plants as well. In six months Washington's cherry blossoms will be out, and I pity all those poor trees, left helpless as hundreds of thousands of people come to gawk at their exposed genitals.

Monday, October 6, 2008

A Conscientious Objector

In a column in the Hastings Center's Bioethics Forum, historian and ethicist Alice Dreger explains why she declines to submit oral history proposals to IRBs:


To remain “unprotected” by my university’s IRB system—to remain vulnerable—is to remain highly aware of my obligations to those I interview for my work. Without the supposed “protection” of my IRB, I am aware of how, if I hurt my interviewees, they might well want to hurt me back. At some level, I think it best for my subjects that I keep my kneecaps exposed.


Compare this stance to the position put forward by Charles Bosk in 2004:


Prospective review strikes me as generally one more inane bureaucratic requirement in one more bureaucratic set of procedures, ill-suited to accomplish the goals that it is intended to serve. Prospective review, flawed a process as it is, does not strike me as one social scientists should resist.


Who takes research ethics more seriously: the researcher who submits to inane requirements, or the researcher who resists?

For more on Dreger's work, see The Psychologist Who Would Be Journalist.

Tuesday, September 30, 2008

Crackdown at Indiana University

The Bloomington Herald-Times reports (August 10-12) on problems with human subject reviews at Indiana University in Bloomington (IUB). Though the paper does not give details of what went wrong, it does state that in the summer of 2008, two whistleblowers in the human subjects office successfully appealed the negative evaluations they had received after airing complaints. Their immediate supervisor, Carey Conover, has been reassigned, but Conover's boss, Eric Swank, has been promoted to executive director for research compliance for Bloomington and other Indiana campuses, with a salary bump from $79,000 to $119,600.

Moreover, the university has moved to protect itself by layering on more administration. According to an August 17 Herald-Times column by the university provost, Bloomington's new president "has expanded the budget for research compliance by $4.3 million - the single largest addition to his budget - in order to create a university-wide organization of well over 100 people, professionals whose sole mission is to preserve and protect the university's research mission." And starting July 1, all IUB studies have been sent to the Indiana University-Purdue University Indianapolis (IUPUI) IRB, where, the university promises, they will be met with an "AAHRPP-accredited HRPP" and legions of "CIP-certified staff members." Meanwhile, Bloomington IRB members and staff were sent for reeducation by Jeffrey Cohen, who, no doubt, told them to review oral history.

None of this is reassuring to social scientists back at Bloomington. Writing in the Herald-Times on September 14, Noretta Koertge, a specialist in research ethics, urged "the university to take this opportunity to resist bureaucratic mission creep." Lower on the chain, informatics PhD student Kevin Makice frets that the dust-up will delay his research to the point that he will have to rely on theory and public data to meet a conference deadline. He writes, "The human-computer interaction crowd often goes to [the Computer/Human Interaction conference] talking about the woes of the research approval process only to hear how much simpler it is on other U.S. campuses and seemingly non-existent off the continent. Now, with IUPUI overburdened by serving multiple campuses--which apparently is in the long-term restructuring plans anyway--we miss the days of it just being too complicated."

Back in 2005, the Illinois White Paper on IRBs complained about the "death penalty" of shutting down all research at a university in response to a single IRB violation. This penalty, the paper warned, was largely responsible for IRBs' terrified emphasis on regulatory compliance. It looks like Indiana researchers will suffer for the sins of the research administrators.

Monday, September 29, 2008

AHA Comments on IRB Training

The American Historical Association has posted a copy of the comments on IRB training and education it sent to OHRP in response to the July notice in the Federal Register. The AHA letter states that historians "are concerned that the proposed training program will reinforce the tendency to treat all research as if it was conducted in the experimental sciences" and that "the proposed training program would only cover what should be assessed by the review boards, and does not include room for discerning among different types of research methods."

Wednesday, September 24, 2008

Final Comments on Training Requirements

A couple of readers told me my Draft Comments on Training Requirements proposed an unrealistically burdensome training regime. I don't think that asking researchers and IRBs alike to learn about the documented ethical challenges of a given line of research is unduly burdensome, and neither did the architects of the present system. I have added the following to my comments, which I submitted today:

---

Lest all this sound like too much work, let me quote the Belmont Report's own recommendations:

the idea of systematic, nonarbitrary analysis of risks and benefits should be emulated insofar as possible. This ideal requires those making decisions about the justifiability of research to be thorough in the accumulation and assessment of information about all aspects of the research, and to consider alternatives systematically. This procedure renders the assessment of research more rigorous and precise, while making communication between review board members and investigators less subject to misinterpretation, misinformation and conflicting judgments . . . The method of ascertaining risks should be explicit, especially where there is no alternative to the use of such vague categories as small or slight risk. It should also be determined whether an investigator's estimates of the probability of harm or benefits are reasonable, as judged by known facts or other available studies.

Standardized training does not produce "the accumulation and assessment of information about all aspects of the research," it does not make explicit the "method of ascertaining risks," and, most significantly, it does not provide the IRB or the investigator with "known facts or other available studies." What it does produce is a false sense of expertise among IRB board members, leading to the very "misinterpretation, misinformation and conflicting judgments" against which the National Commission warned.

Wednesday, September 10, 2008

Draft Comments on Training Requirements

Back in July I reported that OHRP was seeking comments on its requirements for human subjects training for investigators and IRB members. The deadline for comments is September 29.

Here is a draft of my comments. I would appreciate comments on the comments prior to the deadline.

---

Dear Dr. Carome,

Thank you for the Request for Information and Comments on the Implementation of Human Subjects Protection Training and Education Programs, published in the Federal Register on July 1. I would like to offer some brief comments on this issue.

In tracing the debate over IRB review of the humanities and social sciences as it developed over the past forty years, I have yet to come across anyone who suggests that scholars should conduct research without first receiving training of some sort. The whole purpose of a university is to teach researchers to form their inquiries along lines that will produce the best results, in ethics as well as knowledge. When it comes to human subjects protections, the question is what form of training will produce those results. So far, two general models have been proposed, and I would like to offer a third.

One model demands that every university researcher, regardless of her scholarly discipline or her subject of study, complete a basic, online course in medical ethics and regulatory compliance. The CITI Program, founded by a medical researcher and a medical research administrator, exemplifies this approach. The CITI Program has the great virtue of administrative convenience. A university research office can, in a single memo, declare that all investigators must complete the program, and it can easily monitor that they have done so. But it is not clear that such mandates serve the cause of ethics, particularly when researchers are not conducting medical or psychological experiments. While the program makes an effort to include non-biomedical perspectives, the sections on such disciplines as oral history and anthropology are written by people with no expertise in those fields. The result is that much of the material in those sections is irrelevant, inaccurate, or highly dubious in its interpretations. Such programs also reduce complex ethical problems to simplistic statements to be chosen on a multiple-choice test. While I cannot offer published citations or hard data, I know anecdotally that the requirement to complete such training breeds contempt for the whole review process in many researchers.

In 2002, the Social and Behavioral Sciences Working Group on Human Research Protections pioneered an alternative approach. Rather than preparing the same curriculum for all fields, it devised reading lists specific to each discipline. For example, materials prepared for the American Sociological Association included that association's code of ethics and essays written by sociologists (see http://www.aera.net/humansubjects/courses/asa_notebook.htm). Scholars asked to complete such training are likely to take it far more seriously than a program whose medical origins cannot be disguised. On the other hand, devising a single training regime for an entire discipline will still subject some researchers to a great deal of irrelevant material. For example, an ethnographer may not need nuanced instructions on forming survey questions, nor a survey researcher instructions about participant observation.

Finally, I would like to propose a third model that goes beyond the Working Group's approach. When scholars describe the training that most influenced their ethical decisions, they are less likely to cite general codes and principles than the work of other researchers who faced very similar challenges. Criminologist Michael Rowe put this very well in his essay, "Tripping Over Molehills: Ethics and the Ethnography of Police Work," International Journal of Social Research Methodology 10 (February 2007): 37-48. Rowe wrote,


It is the nature of ethnographic research that the principles contained in methodological textbooks or professional codes of conduct will be stretched and perhaps distorted as they are applied in dynamic situations. Since policing is unpredictable, the ethical dilemmas police researchers might face cannot be easily anticipated . . . If an absolute code of ethics is not feasible, researchers must be prepared to be reflexive in terms of ethical dilemmas and the methodological difficulties experienced in securing informed consent and meaningful access to research subjects. (48)


The best preparation for observing police work, he explains, is reading other accounts of observing police work. I believe this emphasis on specificity would hold true for most qualitative research (and a great deal of quantitative work as well).

I would suggest, then, that rather than impose universal, top-down training like the CITI Program, or even more specific top-down training like the Working Group curricula, OHRP empower researchers to devise their own ethical reading lists of materials most relevant to their work, just as they choose their own methodological models. A researcher seeking IRB certification could present an annotated bibliography, showing that he had investigated the problems he was most likely to encounter and the ways that other scholars had dealt with those problems. Researchers should also be able to use courses and seminars they have completed as evidence of their preparation.

I assume the goal of any training requirement would be to get researchers to think seriously about the ethical problems they will face. Asking them to research those problems themselves will be far more effective than any multiple-choice test.

Monday, August 25, 2008

Can Satire Match IRB Reality? Comment on Lederman's "Modest Proposal"

The last entry in the PoLAR symposium for which I will offer comments is Rena Lederman's "Comparative 'Research': A Modest Proposal concerning the Object of Ethics Regulation."

Lederman, an anthropologist and sometime member of the Princeton University IRB, challenges the regulatory definition of research: "a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge." She correctly notes that the definition was crafted to distinguish the practice of medicine from biomedical research. As she puts it,

U.S. human subjects research regulations (known since 1991 as the “Common Rule” but formally set in place in the early 1970s) derive from earlier National Institutes of Health guidelines based on specifically biomedical experience and ethical problematics. Their logic goes something like this: First, medical therapy is appropriately evaluated in terms of individual patient interests, because its central concern is the direct improvement of individual patient well-being. Second, medical research is appropriately evaluated in terms of society’s and science’s interests, because its central concern is the production of knowledge “generalizable” beyond individual cases. And third, although physical risks to persons are inherent in both medical research and therapy, the risks to individuals are qualitatively greater in research (where individual persons are not the central concern) than in therapy (where they are). Consequently, research needs special oversight. (312)


[It's worth noting that the research definition entered the regulations as a result of congressional mandate; the National Research Act of 1974 instructed the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research to consider "the boundaries between biomedical or behavioral research involving human subjects and the accepted and routine practice of medicine." The result was the definition of research now encoded in regulations.]

Lederman then explains that the boundaries between research and therapy remain fuzzy even for biomedical topics. A case study of an individual patient offers some contribution beyond therapy, but is it generalizable? What about quality improvement, like the Johns Hopkins checklist?

Finally, Lederman produces her "modest proposal":

If there are indeed other ways of knowing the world that are similarly entangled in the everyday but not yet benefiting from IRB oversight, doesn’t fairness dictate that all of these modes be surveilled in the same manner? What would happen if ethnographers made common cause—all in (or all out)—not just with ethnographically inclined sociologists, political scientists, religion scholars, and folklorists but also with urban planners, architects, engineers, literary and cultural studies scholars, and colleagues in college and university writing programs—all of whom are engaged in varieties of research-with-human-participants? (320)


She then goes on to note similarities between the methods of ethnographers and novelists, asking why the latter have not been swept into the IRB dragnet.

The problem with Lederman's self-described "parodic" comparison is that some regulators and IRBs are already enacting parody as policy. As Lederman notes, "IRBs are already involving themselves in the lives of writing teachers, journalists, and others who had not heard of “human subjects research” until their own work came under scrutiny." (324, n. 10)

Other countries have taken this further. The Australian National Statement on Ethical Conduct in Human Research (2007) observes that the British definition of human subjects research "could count poetry, painting and performing arts as research," then fails to offer a definition of human subjects research that clearly excludes those endeavors. It then goes on to state that "the conduct of human research often has an impact on the lives of others who are not participants," raising the possibility that a novelist might violate Australia's ethical standards without even talking to anyone. (p. 8) More recently, in February 2008, a Canadian committee proposed adding a chapter on Research Involving Creative Practices to Canada's Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. The draft chapter suggests that some creative processes are outside the purview of ethics boards, but it does not make a good case for any review of creative work, except that "researchers from a wide range of disciplines" would feel cheated if artists got special treatment.

It's too late for reductio ad ridiculum; the ridiculous is at the gates.

Like Bosk's essay, Lederman's expresses frustration with IRB oversight of ethnography without clearly calling for an end to such oversight, much less offering a strategy to achieve that goal. She reports, offhand, that when oral historians seemed to have escaped IRB jurisdiction, the anthropologists she knew "were briefly thrilled, but there was no notable effort to follow suit." (318) Why this passivity? I am less interested in why ethnographers have not made common cause with novelists than in why they have not made common cause with themselves.

Tuesday, August 19, 2008

Oral History Association Plans Revised Guidelines

The Oral History Association has posted a call for suggestions for revision of its Principles and Standards. Oral historians who find themselves trying to explain their work to IRBs often rely on this statement, so it needs to be as clear as possible. I encourage concerned oral historians to join in the revision process.

Monday, August 18, 2008

Critiques of Consent Forms

Two items in the November 2007 PoLAR symposium question the utility of written consent forms. Both offer important insights about the difficulty of relying on written consent, though neither presents a persuasive alternative.

Wednesday, August 13, 2008

AHA Calls for Comments on Training

At AHA Today, the American Historical Association's blog, Rob Townsend calls for responses to OHRP's recent invitation to comment on training for IRBs. As Townsend notes, the very phrasing of OHRP's questions suggests a continued inability to remember that the office's policies have effects beyond medical research. On the other hand, he notes, some kind of training mandate is likely, so it would be best for historians not to remain silent.

Sunday, August 10, 2008

Reform or Revolution? Comment on Bosk, "The New Bureaucracies of Virtue"

Following the introduction, the first substantive piece in the PoLAR symposium is Charles L. Bosk, "The New Bureaucracies of Virtue or When Form Fails to Follow Function."

In the past, Bosk has advocated living with IRB review. As he wrote in 2004:

Prospective review strikes me as generally one more inane bureaucratic requirement in one more bureaucratic set of procedures, ill-suited to accomplish the goals that it is intended to serve. Prospective review, flawed a process as it is, does not strike me as one social scientists should resist. After all, we agree with its general goals: that our informants should not be subject to untold risks, that they be treated with decency, that their confidentiality and anonymity be safeguarded, when feasible. Given this, we should not waste our energies resisting a process that has an enormous amount of both bureaucratic momentum and social consensus behind it. Instead, we should focus our energies on reforming and revising procedures; we should fix the system where it is broken.

[Charles Bosk, "The Ethnographer and the IRB: Comment on Kevin D. Haggerty, 'Ethics Creep: Governing Social Science Research in the Name of Ethics'," Qualitative Sociology 27 (December 2004), 417.]


But at some point in 2005 or 2006, an IRB seems to have really pissed him off. In this essay, he writes that "having now been on the receiving end of IRB objections that I find incomprehensible, I appreciate my colleagues' multiple frustrations." (204) He now seems to think the system is not merely broken, but so defectively designed that it cannot be repaired:

The presumption of prospective review—that our subjects are in need of protection— has embedded within it an insulting distrust of our integrity and motives. The insult inherent in a regulatory regime based on distrust deepens when the barriers the review system places between us and the doing of our research appear to protect powerful institutions from close scrutiny more than they guarantee the well-being of our research subjects. For me, the most serious defect of the current regulatory system is that the requirements of policy reduce and trivialize the domain of research ethics. In the process, our ability to conceptualize, discuss, and make sense of the ethical problems of ethnographic work is dulled. As we do our work, we face ethical dilemmas aplenty, almost none of which have to do with the dual mandate of prospective research review—the adequacy of the consent process, which is invariably reduced to concern about a "formal document" or potential risks to subjects. (194)


Bosk's essay is rich in ideas--too many, really, for an essay of this length. I will do my best to unpack them.

The New Bureaucracies of Virtue

Pity the poor blogger. When I started this blog, I expected to report on the occasional journal article on IRB review of the social sciences. Instead, I find that journals insist on publishing special symposium issues with several articles on the topic. Rather than sipping a beer I must down a six-pack.

In the coming weeks I plan to take on the November 2007 issue of PoLAR: Political and Legal Anthropology Review (Vol. 30, Number 2), which includes eight articles totaling 147 pages, some of them based on an October 2006 symposium at Cornell. In their introduction to the symposium, organizers Marie-Andrée Jacob and Annelise Riles write,

Although we certainly do not defend the current regulatory framework of research, we also wanted to press the pause button on the ambient criticism of IRBs and accompanying expressions of fears and anxieties about their impact on research and free speech. Instead, we wanted to trigger a discussion that would harness, among other things, these practical anxieties in the service of a larger theoretical and epistemological inquiry. (183)


As someone more interested in the practical than the theoretical or epistemological, I'm not sure this is my thing, but I'll do my best. And while I can't promise to comment on all the essays, here's the table of contents:

SYMPOSIUM: Papering Ethics, Documenting Consent: The New Bureaucracies of Virtue


  • Marie-Andrée Jacob and Annelise Riles, "The New Bureaucracies of Virtue: Introduction"

  • Charles L. Bosk, "The New Bureaucracies of Virtue or When Form Fails to Follow Function"

  • Amy Swiffen, "Research and Moral Law: Ethics and the Social Science Research Relation"

  • Jennifer Shannon, "Informed Consent: Documenting the Intersection of Bureaucratic Regulation and Ethnographic Practice"

  • Marie-Andrée Jacob, "Form-Made Persons: Consent Forms as Consent’s Blind Spot"

  • Stefan Sperling, "Knowledge Rites and the Right Not to Know"

  • Adriana Petryna, "Experimentality: On the Global Mobility and Regulation of Human Subjects Research"

  • Rena Lederman, "Comparative 'Research': A Modest Proposal concerning the Object of Ethics Regulation"

Sunday, August 3, 2008

Social Scientists Debate Defense Department Funding

Today's Washington Post reports contrasting reactions to a Department of Defense plan to give $50 million in grants to social scientists to study such issues as China's military and political violence in the Islamic world. [Maria Glod, "Military's Social Science Grants Raise Alarm," Washington Post, 3 August 2008]

Some anthropologists quoted in the story seem to reject any military sponsorship as unethical. David Price, whose book on anthropologists during World War II is on my reading list but not yet in my library, objects that the program "sets up sort of a Soviet system, or top-down system. If you look at the big picture, this will not make us smarter -- this will make us much more narrow. It will only look at problems Defense wants us to in a narrow way." By contrast, Rob Townsend of the American Historical Association notes that "hopefully, a project like Minerva will provide some historical perspective before, rather than after, it is needed."

The Post correctly explains that this debate is a replay of controversies in the 1960s, when the Pentagon and CIA sponsored studies of Latin America and Southeast Asia, including the infamous "Project Camelot." Throughout the 1960s and 70s, scholars struggled to find ways to lend their skills and insight to sound public policy without sacrificing their intellectual independence and integrity. Obviously, this is not an easy thing to do, and questions of sponsorship remain among the most difficult ethical problems faced by social scientists.

For a blog about IRBs, the salient point is the irrelevance of the Belmont Report to such questions. The authors of that report were steeped in the history of medical experimentation, and the report reflects their concerns about past abuses of poor ward patients, Nazi concentration camp prisoners, and the rural black men enrolled in the Tuskegee syphilis study. They knew nothing of Project Camelot, anthropology's "Thai affair," or less spectacular concerns about corporate sponsorship. As a result, the Belmont Report, while getting rather specific about such medical concerns as selection of subjects, says nothing about the conflicting duties to sponsors, subjects, and the truth. When applied to social science, the report gives the wrong answers to some questions, and no answers to others. And if anyone were to attempt to write a Belmont-style report on the ethics of social science, they would find various scholarly disciplines clashing over programs like this one.

Tuesday, July 29, 2008

The Dormant Right to Ethical Self-Determination

The Common Rule, in 45 CFR 46.103(b)(1), requires that each institution receiving funding from a Common Rule agency submit an assurance that includes


A statement of principles governing the institution in the discharge of its responsibilities for protecting the rights and welfare of human subjects of research conducted at or sponsored by the institution, regardless of whether the research is subject to Federal regulation. This may include an appropriate existing code, declaration, or statement of ethical principles, or a statement formulated by the institution itself.


In contrast, OHRP's Federalwide Assurance requires U.S. institutions to pledge that


All of the Institution's human subjects research activities, regardless of whether the research is subject to federal regulations, will be guided by the ethical principles in: (a) The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, or (b) other appropriate ethical standards recognized by federal departments and agencies that have adopted the Federal Policy for the Protection of Human Subjects, known as the Common Rule.

Saturday, July 26, 2008

Report from SACHRP, Part 3: When Consent Means Censorship

A third item of interest from this month's SACHRP meeting concerns rules about research on Indian reservations.

According to a handout provided at the meeting, in March 2008, Dr. Francine Romero--an epidemiologist and former member of SACHRP--proposed that the Common Rule be amended to specify that


For human subject research to be conducted within the jurisdiction(s) of federally recognized American Indian or Alaska native (AIAN) Tribal government(s), the IRB shall require documentation of explicit Tribal approval for the research. This approval shall come from the Tribal Council or other agency of the Tribal government to whom such authority has been delegated by the Council.


The Subpart A Subcommittee decided that amending the Common Rule would be neither "efficacious, expeditious, nor appropriate," but it apparently thought the overall idea a good one, and it recommended that OHRP develop guidance to assure that researchers get permission from Tribal governments to do research within their jurisdiction. In the general discussion, various SACHRP members and other federal officials debated whether OHRP was the right office to handle the task, and they modified the recommendation to include other HHS agencies.

As I pointed out during the public comment period, similar rules in Canada have deterred historians from including First Nations Canadians in their research and have given Band Councils veto power over who in their communities gets to talk with a university researcher. And in California, a Tribal government used an IRB to suppress research on labor conditions in casinos. But at no point during the SACHRP discussion did anyone consider the effect the recommendation would have on social science research.

Since 1966, IRB policies have been determined by bodies dominated by medical researchers, and SACHRP is just the latest in a long list. However much medical researchers and administrators may want the trust and respect of social researchers, they simply cannot keep in mind the rights and responsibilities of social scientists when something like this comes up. For medical researchers, it seems, more consent is always better, and they forget that one person's consent is another's censorship.

In related news, today's New York Times reports that the U.S. military has suppressed photographs of American casualties in Iraq by insisting that photojournalists obtain written consent from the troops they photograph:


New embed rules were adopted in the spring of 2007 that required written permission from wounded soldiers before their image could be used, a near impossibility in the case of badly wounded soldiers, journalists say . . . Two New York Times journalists were disembedded in January 2007 after the paper published a photo of a mortally wounded soldier. Though the soldier was shot through the head and died hours after the photo was taken, Lt. Gen. Raymond T. Odierno argued that The Times had broken embed rules by not getting written permission from the soldier.

[Michael Kamber and Tim Arango, "4,000 U.S. Deaths, and Just a Handful of Images," New York Times, 26 July 2008]

Friday, July 25, 2008

Report from SACHRP, Part 2: The Calcified Common Rule

Part of the SACHRP discussion last week concerned a provision of the Common Rule to which I had not paid much attention. As the Subpart A subcommittee noted, 45 CFR 46.117(c)(1) provides that


An IRB may waive the requirement for the investigator to obtain a signed consent form for some or all subjects if it finds . . . that the only record linking the subject and the research would be the consent document and the principal risk would be potential harm resulting from a breach of confidentiality. Each subject will be asked whether the subject wants documentation linking the subject with the research, and the subject's wishes will govern . . .


Several committee members noted that this last bit--about asking the subject if she wants the documentation that an IRB has determined will put her at risk--is pretty stupid. David Forster noted that offering a signed document can create unnecessary distrust. Neil Powe and Daniel Nelson suggested that it would be a significant burden for a researcher to devise and gain approval for a consent form on the off chance that a subject will demand one. Everyone seemed to agree that this provision is never enforced, and that it would be a bad idea if it were.

But what to do about it? As members of an official body, the committee members were clearly uncomfortable recommending that IRBs ignore a provision of the Common Rule. Yet they all seemed to think that amending the Common Rule was impossible.

This kind of defeatism distresses me. Since the Common Rule was promulgated in 1991, we've amended the Constitution, added an executive department to the cabinet, and brought professional baseball back to Washington, D.C. I'm sure it's a pain in the neck to bring together all the Common Rule signatories, but can't it be done every seven years, or ten? Or are we to endure these kinds of errors for a century?

I have not yet figured out who put in the provision that subjects be offered documentation even when it threatens them. The National Commission recommended no such requirement, yet it appeared in the draft regulations of August 1979. Someone in the Department of Health, Education, and Welfare made a mistake thirty years ago, and now we're stuck with it.

Wednesday, July 23, 2008

Report from SACHRP, Part 1: A Systems Level Discussion

On July 16 I attended the second day of the open meeting of the Secretary's Advisory Committee on Human Research Protections (SACHRP, pronounced sack-harp) in my home town of Arlington, Virginia. This was the first time I have observed such a meeting, and I am sure there is much I missed for want of context. But in this and following posts, I will record a few impressions.

The most interesting part of the meeting came at the end, when the committee's chair, Samuel Tilden, invited committee members to participate in "a systems level discussion" of today's human subjects protection regime. Not all committee members offered comments, and I was disappointed that anthropologist Patricia Marshall, the sole social scientist on the committee, did not do so. But the members who did speak displayed a range of viewpoints.

The most enthusiastic advocates of the status quo were Jeffrey Botkin and Daniel Nelson. Botkin described himself as an "unabashed advocate of current system." He noted that IRBs rose in response to documented abuses in medical research, such as those detailed by Henry Beecher in 1966 ["Ethics and Clinical Research," New England Journal of Medicine 274 (16 June 1966): 1354-1360]. Today, he noted, most researchers know the rules. While the system may let an occasional unethical project slip through, there is no "hidden underbelly of unethical research."

This is an important point, and I remain agnostic about whether IRBs are appropriate for medical research. But I am also sure that Dr. Botkin understands that even beneficial drugs can have nasty side effects, and that he would not prescribe the same drug to treat all ailments. I would be interested to know what he considers the social science analogue to Beecher's article. For if we are to judge today's system by its ability to avoid documented problems of the past, we need to know what we are trying to avoid for every type of research we regulate.

Nelson declared that the "Subpart A Subcommittee" he co-chairs decided early in its existence that "there is general consensus that the Common Rule is not 'broken.'" Yet in his system-level talk, he conceded that the power granted by the Common Rule to local IRBs results in arbitrary decisions (he called this "variability") and "well-intended overreaching." He noted that the only sure way to eliminate all risky research is to eliminate all research.

Other committee members, while not calling for changed regulations, were more explicit about current problems. Lisa Leiden, an administrator at the University of Texas, has heard from a lot of upset faculty, and she is looking for ways to relax oversight. This would include "unchecking the box," that is, declining to promise to apply federal standards to research not directly sponsored by a Common Rule agency. Without going into specifics, she suggested that the federal standards are too stringent, and that the University of Texas system, if freed from them, would craft exemptions beyond those now offered by the Common Rule. Overall, she is looking for ways to move from a "culture of compliance to one of conscience."

Liz Bankert, Nelson's co-chair of the subcommittee, also showed her awareness of the overregulation of social research, and her frustration with IRBs' emphasis on regulatory compliance. "I've gone to IRBs all over the country," she reported. "They are thoughtful, sincere, really intelligent groups. To have all this brainpower sucked into the vortex of minimal risk research is not efficient." It also contributes to what Bankert sees as a lack of mutual respect between IRBs and researchers. She blamed the problems on a "fear factor which has been developing over the past several years."

Both Leiden and Bankert implied that it was the interpretation of the regulations, not the regulations themselves, that caused the problems they have identified. Without saying so explicitly, they seemed to blame the OPRR of the late 1990s for scaring IRBs all over the country into letter-perfect regulatory compliance, at the expense of research ethics.

In contrast, two committee members seemed willing to reconsider the regulations themselves. David Strauss hoped for a system that was "clinically and empirically informed," terms that no one could apply to the regulation of social research. And he recognized that the regulations are not divine revelation. "We shouldn't be reviewing research that we don't think needs to be reviewed because some folks 30 years ago, at the end of a long, hot day, decided to use the word 'generalizable,'" he explained. "We have to have language that makes sense to us."

Finally, Tilden himself described the Common Rule as largely broken. He noted that the 1981 regulations--which have changed only slightly since--were accompanied by the promise that most social research would not have to undergo IRB review. The fact that so few social science projects escape review, he concluded, showed that the exemption system has collapsed. Rather than try to shore it up again, he suggested that concerns about confidentiality be separated from other risks, and that projects whose only risks involved breaches of confidentiality be evaluated only for the adequacy of their protections in that area.

This last proposal interests me, because when scholars talk seriously about the wrongs committed by social science researchers, they almost always come back to questions of confidentiality. If IRBs were restrained from making up other dangers--like interview trauma--and instead limited to more realistic concerns, they could potentially do some good.

In sum, I did not get the impression that, in Nelson's words, "there is general consensus that the Common Rule is not 'broken.'" Strauss and Tilden, in particular, seem to understand that the present system has wandered far from the stated intentions of the authors of the regulations, and from any empirical assessment of the risks of research or the effectiveness of IRBs. I hope they will continue to think about alternative schemes that would keep controls on medical experimentation without allowing federal and campus officials free rein to act on their fears.

Thursday, July 17, 2008

Political Scientists to the Rescue?

The final essay in the PS symposium is Sue Tolleson-Rinehart, "A Collision of Noble Goals: Protecting Human Subjects, Improving Health Care, and a Research Agenda for Political Science."

Tolleson-Rinehart addresses the question of quality improvement in health care, such as the checklist publicized by Atul Gawande. As she notes, her "essay is not about the influence of IRBs on political science research" and is therefore largely outside the scope of this blog. That said, she makes some observations relevant to the regulation of social science research.

While sympathetic to individual IRB members, Tolleson-Rinehart takes a dim view of the system as it now operates:


IRBs are understandably, and necessarily, driven by their limiting case: the possibility of invasive or dangerous procedures performed with vulnerable individuals or populations, without adequate regard for the hallmarks of protection, respect for persons, beneficence and nonmaleficence, and justice elucidated in what is widely known as the Belmont Report, and adopted from the time of the Belmont Report’s release as our fundamental ethical principles. It is not surprising, given IRBs’ role as the protector of the vulnerable, that the general IRB perspective on “minimal risk” and risk-benefit comparisons is a conservative one.

IRBs are also terrified. All IRB professionals I know work in real fear that their IRBs could be the next ones to cause an entire university’s research to be shut down. Shutdowns in recent years at Harvard, Duke, and Johns Hopkins give IRBs every reason to be fearful of making what OHRP considers to be an error. The reasonable suspicion that researchers regard IRBs as obstacles to, rather than facilitators of, research, must further IRB professionals’ sense of being embattled.

Reviewers on IRB committees are our very hardworking colleagues who are usually not given adequate release time to meet their committee responsibilities, and who are not able to benefit from truly extensive and nationally standardized training, nor do they have anything like a consensus metric for evaluating the spectrum of risk in different research contexts and for different populations.

All these sources of strain might determine the conservative approach to human subject protections. When social science research (including quality-improvement research) occurs in a biomedical context, or when health care and health policy require evaluation, the conservative stance can become dysfunctional. IRB assessments of my own students’ work provide a clear example of one of the ironic and unintended consequences of the absence of agreed upon and broadly understood metrics for assigning risk in different research contexts. IRBs have a comparative lack of familiarity with how social science methods—such as those used in quality-improvement research—may differ from some other methods of clinical research in the risks they pose to subjects. (508)


She adds that while her own students submit "substantially similar" protocols, their


IRB determinations range from declarations that the project is “not human subjects research” at all to “research that requires full consent,” with every intermediate determination. I frequently have students working simultaneously on very similar projects, one of whom must go through a tedious consenting process taking as much as four to five minutes of the beginning of telephone interviews (with the busy elites at the other end chafing at these long and unnecessary prefaces to the first question), while another student researcher is not required to secure any kind of consent at all. The single source of variation across these cases is not the research question or the method, but the IRB reviewers and their familiarity (or lack thereof) with in-depth interviewing and the unique protections already available, via one’s position, to the “powerful research subject.” (509)


This is damning enough, but Tolleson-Rinehart insists that the "point of this vignette is not to criticize IRBs." (509) Rather, she argues that


political science is well prepared to analyze and make normative (but evidence-based) recommendations about the politics of human subjects research. We can help define what it is, and the circumstances under which it is generalizable knowledge, even though it may not necessitate a conservative approach to protections. We can construct frameworks to achieve a more precise understanding of how to balance risks and benefits. Those frameworks might even lead us to formulate what would amount to a multidimensional risk scale. Finally, political science can contribute to the construction of theoretical and methodological underpinnings for the content of truly national standards for IRB training curricula. These would improve IRB reviewers’ understanding of different research methods to go beyond mere compliance with federal regulations and become real resources and decision aids for hard-pressed reviewers who may have to evaluate research they aren’t familiar with. (509)


Finally, Tolleson-Rinehart notes that while the Association for the Accreditation of Human Research Protection Programs and Public Responsibility in Medicine and Research mean well, both emphasize regulatory compliance over actual research ethics. She argues that political scientists can go beyond compliance questions to work on "a common epistemology of the philosophical, ethical, and political foundations of human subjects research." (510)

All of this sounds fine, and I hope that Tolleson-Rinehart and her colleagues get to work on her agenda. But as my recent exchange with Levine and Skedsvold suggests, the most immediate question for political scientists may be to figure out how to make the regulatory system more responsive to developments in research. We seem stuck with a 1974 law and 1991 regulations that cannot be changed, even when everyone agrees they need updating.

Monday, July 14, 2008

Can We Patch This Flat Tire?

The fourth article in the PS symposium is Felice J. Levine and Paula R. Skedsvold, “Where the Rubber Meets the Road: Aligning IRBs and Research Practice.” Both authors have been involved in IRB debates for several years, and this article reflects their sophisticated understanding of some of the issues involved. But for an article published in a political science journal, it is disappointingly insensitive to the power dynamics that govern IRB-researcher relationships.

Unlike symposium participants Tony Porter, Dvora Yanow and Peregrine Schwartz-Shea, Levine and Skedsvold do not question the premise that IRBs help promote ethical research. Instead, they assert that there is no fundamental conflict between IRBs and social science researchers: "federal regulations, professional ethics codes, and research practice may have shared goals but tend to speak with different languages—creating frustration and skepticism in a system that could potentially work quite well if transformations are made." (502) Based on that assertion, they suggest four such transformations, ranging from the bold to the timid.

Friday, July 11, 2008

The Biomedical Ethics Juggernaut

The third contribution to the PS symposium is Tony Porter, "Research Ethics Governance and Political Science in Canada."

Porter laments that "the history of research ethics governance in Canada reveals recurrent concerns expressed by political scientists and other SSH [social sciences and humanities] researchers that indicate the inappropriateness of the [ethics] regime for SSH research, and that also create the impression that the regime is a juggernaut that continues on its trajectory, relatively impervious to criticism." (495)

Porter then offers a helpful capsule history of the debates leading up to Canada's present policy statements. From an American perspective, they look pretty good. In contrast to the Belmont Report, which calls for informed consent and harms-benefit assessment without specifying the types of research to which it applies, Canada's Tri-Council Policy Statement declares:

certain types of research—particularly biographies, artistic criticism or public policy research—may legitimately have a negative effect on organizations or on public figures in, for example, politics, the arts or business. Such research does not require the consent of the subject, and the research should not be blocked merely on the grounds of harms-benefits analysis because of the potentially negative nature of the findings. (496)


Unfortunately, Porter finds that in practice, research ethics boards ignore such guidance. For his own article, he was asked to specify questions in advance, destroy data, and write long explanations of his research plans. And he warns of even stricter regulation ahead.

Porter attributes the imposition of biomedical ethics and regulation on non-biomedical research to the clout that biomedical researchers have in government and universities. There are more of them, they have more money, and they care more about ethics--since they face more serious ethical challenges. As a result, "the growth of a biomedically oriented but unified research ethics regime has appeared as a seemingly unstoppable trend in Canada." (498) Rather dismally, Porter suggests that the only thing that will stop that trend is its own ability to alienate researchers until "opposition on the part of SSH researchers will increase and the legitimacy of the arrangements will be damaged, as will the ability of the regime to elicit the degree of voluntarism and acceptance that is needed to sustain it." (498)

Perhaps for lack of space, Porter does not consider another possibility: that the social sciences will internalize the medical ethics implicit in the "unified research ethics regime." The American Anthropological Association took a big step in this direction in 1998, with the adoption of a code of ethics that comes close to rejecting the idea that research "may legitimately have a negative effect on organizations or on public figures." If the ethics regime grows stronger in Canada and elsewhere, and more social scientists follow the AAA's line, it may be that young people interested in "critical research," as Porter puts it (496), will seek careers in journalism, rather than in university scholarship. To use a Canadian example, if Russel Ogden were writing for a newspaper, no one would be blocking his research.

Monday, July 7, 2008

Ideas on Fieldwork Are Oldies but Goodies

The second article in the PS symposium on IRBs is Dvora Yanow and Peregrine Schwartz-Shea, "Reforming Institutional Review Board Policy: Issues in Implementation and Field Research."

The authors argue that "the character of its implicit research design model, embedded in its historical development, . . . renders IRB policy problematic for ethnographic and other field researchers." (483) Specifically, they contend that ethnographers are likely to have trouble meeting IRB demands that their protocols spell out procedures for selecting subjects, obtaining informed consent, disguising the identity of participants, balancing risks and benefits, and protecting the data they collect. (489)

Fieldwork, they argue, is just too unpredictable to be planned out so thoroughly in advance. They note,

Field researchers must enter others’ worlds, and are expected to do so with care and respect, and these worlds can be complex, unbounded, and in flux. Instead of rigidly delimited, predesigned protocols laying out research steps that are invariable with respect to persons and time, which subjects can be handed as they step into the world of the medical researcher, field research often requires flexing the research design to accommodate unanticipated persons and personalities and unforeseen conditions.


And, they find,

extending [the Belmont] principles to other, non-experimental research settings without making the underlying mode of science and its methodology explicit and without exploring their suitability to non-experimental scientific modes and methodologies has resulted in a hodgepodge of ethical guidance that is confused and confusing. Those guidelines do not give the many serious ethical problems of field research design and methodologies the sustained attention they deserve. (491)


All of this sounds perfectly sensible. What surprises me a bit is the authors' belief that they are the first to make these arguments:

The proposals that we have seen to date for reforming IRB policy (e.g., Carpenter 2007) all tinker with the existing system. None of them, to the best of our knowledge, has yet identified and engaged the underlying methodological frame—experimental research design—shaping that policy and its implementation. Policy reforms that address resource, organizational, and other features of the existing policy leave that framing and its prosecution in place. The impact of these policies on field research is, however, serious, extending IRB policy to these other forms of research in the absence of systematic evidence of their having harmed research participants. If we are to have policies to ensure the protection of human participants in all areas of research, those policies need to be suited to other than just experimental research designs in ways that are commensurate with their own potential for harms. It is vital that recognition of the misfit between existing experimentally based policy and field research design and methodologies also be on the table in discussions of IRB policy reform. (491)


In fact, ethnographers have been complaining about the imposition of experimental research ethics on non-experimental research for thirty or forty years. Anthropologist Murray Wax, in particular, eloquently distinguished experimental research from fieldwork in just the way that Yanow and Schwartz-Shea do. See, for example, his essay, "On Fieldworkers and Those Exposed to Fieldwork: Federal Regulations and Moral Issues," Human Organization 36 (Fall 1977): 321-28. Indeed, despite a long bibliography, Yanow and Schwartz-Shea cite none of the many IRB critiques written in 1978-1980, when the IRB regulations were being overhauled.

I don't fault Yanow and Schwartz-Shea too much for not knowing this history. It is one of the tasks of the historian to save others from having to reinvent the wheel, and I hope my book, when finished, will make such a contribution.

Yanow and Schwartz-Shea end their article with "A Call for Action," most of which is fairly vague. IRB critics are split between those who seek to "tinker with the existing system," and those who seek to exclude large categories of research from any IRB jurisdiction. Yet it's not even clear on which side of this divide these authors fall. For example, they want APSA to "Issue a statement calling for reform of IRB policy in a substantive way that protects the interests of APSA members." (492) Lovely, but what should such a statement say? They demand reform without defining it.

More promising is their call for more research. They note,

There is much that we do not know about the kind(s) of field research political scientists are doing today . . . We need more systematic, policy-oriented research about members’ field research practices, and we call on APSA to take the lead in conducting or facilitating it . . . (491)


They mention the possibility of an APSA handbook on ethical issues and current regulations.

This sounds a bit like the effort undertaken by the American Psychological Association in the preparation of its 1973 Ethical Principles in the Conduct of Research with Human Participants. As described in the first chapter of that book, rather than sit together and lay down some rules, the drafting committee surveyed the APA membership and assembled thousands of descriptions of real research projects that had raised ethical issues. The descriptions became the basis for an ethical guide directly relevant to the needs and values of the APA's members.

Around the same time, APSA itself undertook a similar effort, on a smaller scale, by conducting a study of actual cases in which researchers faced problems with confidentiality. Unfortunately, the full study seems not to have been published. A brief summary was published as James D. Carroll and Charles R. Knerr, "The APSA Confidentiality in Social Science Research Project: A Final Report," PS 9 (Autumn 1976): 416-419.

Whether or not a detailed ethical study would help ethnographic political scientists with their IRBs, it would be a great resource for scholars who want to do right by the people they study. I hope APSA--and other scholarly societies--will consider such a project.

Saturday, July 5, 2008

Human Subject of Biomedical Research Angry!

Peter Klein at Organizations and Markets notes a brief dialogue concerning medical research ethics in The Incredible Hulk. Interestingly, the scientist involved suggests not the weighing of autonomy, beneficence, and justice demanded by the Belmont Report, but rather a prioritization of autonomy, allowing the subject, rather than an ethics committee, to decide whether the potential benefits justify the risks. Some ethicists of the 1970s proposed such a prioritization, but the National Commission rejected it.

The only movie I can think of off the top of my head, in which a comparable scene depicts an ethical debate in the social sciences and humanities, is Songcatcher. I haven't seen the movie, but even in the trailer they're arguing about when research becomes exploitation. Maybe I should watch the whole thing.

Friday, July 4, 2008

When Seligson is Non-Seligson

The first article in the July 2008 PS symposium is Mitchell A. Seligson's “Human Subjects Protection and Large-N Research: When Exempt is Non-Exempt and Research is Non-Research." While it's great to have someone interested in the contradictions of IRB regulations, the absurdity of the present regime seems to have left Seligson hopelessly confused, and his incoherent essay calls for both expansion and contraction of IRB authority.

Rather than trying to outline his argument, let me just list some of the questions to which he poses contradictory answers.

1. Should social science and humanities research follow the Belmont Report?



Early in his essay, Seligson attacks the Belmont Report as irrelevant to social science research, especially survey research. He particularly dislikes its call for an assessment of risks and benefits, noting


the problem of assessing risk is especially vexing for all of those who rely on large-N studies, typically in the field of survey research. Ironically, when only a handful of subjects are used in a campus laboratory-based experiment, the IRB is likely to approve the project with no objection. But survey research, which invariably relies on large-N studies, is viewed with suspicion by many IRBs simply because the risk, however small, is seen as being replicated 1,000 or more times, since most samples strive for confidence intervals of ±3% or better. Protocol analysts, who are used to seeing laboratory experiments and focus groups with samples of fewer than 100, are often taken aback when they confront the large sample sizes inherent in most survey research. And when they do, they question why such a large sample is needed. As a result, it is not at all uncommon to have IRB protocol analysts ask survey researchers to cut down their sample sizes. (479)


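As an aside, the arithmetic behind those alarming sample sizes is easy to reconstruct. The calculation below is my own back-of-the-envelope illustration, not anything in Seligson's article; it assumes a simple random sample, a 95 percent confidence level, and a worst-case proportion of 0.5:

\[
n \;=\; \frac{z^2\,p(1-p)}{E^2} \;=\; \frac{(1.96)^2(0.5)(0.5)}{(0.03)^2} \;\approx\; 1067
\]

That is, getting within plus-or-minus three percentage points takes roughly 1,100 respondents no matter what is being asked, which is why survey protocols inevitably reach IRBs with sample sizes that dwarf those of laboratory experiments.
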
He is also skeptical of the Common Rule, especially its protections for pregnant women, which he finds irrelevant and damaging when applied to survey research. And he quotes--seemingly with approval--the AAUP's 2006 recommendation "that research whose methodology consists entirely of collecting data by surveys, conducting interviews, or observing behavior in public places be exempt from the requirement of IRB review.”

But then Seligson turns around, lamenting that "historians are not only exempt from IRB control, they have no requirement or even need to take human subjects protection training and pass tests on their knowledge of the principles and rules. Literature faculties often have no knowledge at all of human subjects protection." (480) He wants "faculty members in a broad range of institutions to familiarize themselves with the IRB regulations and to take the tests to demonstrate their knowledge of same," including "the Belmont principles." (482)

Why? Why should faculty members be required to familiarize themselves with guidelines that Seligson has told us are inapplicable to their work? Does he just want company in his misery?

2. Can researchers be trusted?



Seligson thinks that IRB regulations did not help survey research, because


Long before human subjects regulations and the invention of IRBs, survey researchers in all fields instinctually knew that by guaranteeing anonymity they would encourage frankness on the part of respondents. . . . Political scientists who carry out surveys have been aware for decades of the importance of guaranteeing anonymity to their subjects. (480)


If this track record weren't enough, he notes that governments and universities trust political scientists to behave ethically in other aspects of their work.


Even though political scientists conducting educational tests and surveys are exempt from federal regulation, they are not, after all, exempt because the federal government believes we cannot be trusted. What is so strange here is that in countless other important ways, we are trusted by that same federal government. When we grade tests taken by our students, we are not allowed to discriminate on the basis of race, creed, national origin, sexual preference, etc. Yet we are not asked to sign a statement saying that we will not discriminate before (or indeed after) we grade each exam or before we determine final grades. We hold office hours, but are not asked to submit an application prior to each office hour, not even prior to the start of each term, to the affirmative action offices on our campuses that we will not sexually harass students. We submit articles to conferences but are not asked to submit signed statements saying that we did not plagiarize the material. (481)


Since political scientists have proven more or less trustworthy in these areas, Seligson wants IRBs "to stop assuming . . . that we are all guilty of violations of human subjects rights unless we can prove otherwise." (482)

That's all very nice, but he's unwilling to extend the trust to researchers in other fields. He writes,


some humanists may be naive about the risks involved in disclosing names of subjects. One can imagine many kinds of risk to respondents. One such risk is dismissal of employment from an employer who either might not like the views expressed in the oral history or testimonio or deems them harmful to the company’s welfare. Potential employers might look at the oral history information and deny a position based on the statements contained therein. Another risk could be ostracism at work or in one’s neighborhood for expressing politically unpopular views. One can even imagine law enforcement officials using oral histories to prosecute individuals for revelations that suggest criminal behavior. (480)


In other words, Seligson does not trust interview researchers to have the same instinctual knowledge of ethics he ascribes to survey researchers, he ignores oral historians' sixty-year record in favor of hypothetical abuses, and he assumes historians are guilty of violations of human subjects rights unless we can prove otherwise. Perhaps he wants us to get approval before grading tests as well.

3. Can IRBs be trusted?



Overall, Seligson takes a dim view of those in charge of human subjects regulations, whom he terms "overzealous bureaucrats, both federal and on campuses," and whom he wants retrained. (482) He even relays the following anecdote:

A very senior IRB official at one university, in order to impress upon a political science faculty member his omnipotence, asked, “Do you ever use the library to read books about President Bush?” When the response was affirmative, he said, “Unless you file for IRB approval before opening those books, you will be held in violation, since Bush is a human, is living, and the books almost certainly contain personal information.” (480)


I'm willing to believe a lot of bad things about IRBs, but even I can't swallow a story like this without names and dates attached.

Yet while portraying IRB officials as power-mad bureaucrats, Seligson wants to expand their jurisdiction "to cover all studies of any kind that obtain data on living humans." (482) Wouldn't that include a book about President Bush?

Seligson concludes that "the roadmap to the future should be clear." Maybe it should be, but this article isn't helping. Fortunately, the other essays in the symposium are better researched and reasoned.