Sunday, December 14, 2008

Burris on Compliance vs. Conscience

Scott Burris, a law professor and author of at least three earlier articles on IRBs and human subjects regulation, takes on the Common Rule in "Regulatory Innovation in the Governance of Human Subjects Research: A Cautionary Tale and Some Modest Proposals," Regulation & Governance 2 (March 2008): 65-84.

Burris begins with a bleak picture of the status quo:

The IRB is not purely or even primarily a body for deliberating ethical questions. Though ill-equipped for the task, the IRB is now an oversight agency expected to spot bad actors and monitor researcher behavior. The IRB may engage in rich ethical deliberation, but it is not a dialogic institution with respect to the researcher, who generally takes no part in the IRB’s deliberations. No mechanism informs IRBs of the real cost of the ‘‘small’’ changes they demand. From the researcher’s point of view, the IRB can be a faceless bureaucracy at its most unchecked, not required to give reasons for arguably arbitrary decisions from which there is no clear right to appeal.

Nor is it a particularly efficient or effective bureaucracy. The level of paperwork required to conduct research has grown steadily, even as it is widely agreed that researchers, IRBs, and the OHRP spend too much time documenting routine compliance. Agonizing over the paperwork problem does serve one purpose: it takes attention away from more fundamental problems. Foremost among these is the sparse evidence that the system is actually protecting research subjects. (67)

Burris blames these troubles on what he sees as fundamental flaws in the Common Rule.

The Common Rule system was not designed by people thinking about regulatory technique. To the extent there was a design at all, the Common Rule reflected a dream of radical regulatory purification. This was to be a legal system without lawyers, a political system without politics. Values would be aired, and fair decisions made, by sensible people of good will. Consistency and fairness would arise naturally from the process and principles deployed. (68)

Such expectations, he suggests, were naive, for they left IRBs with two incompatible missions: "virtue promotion and oversight" (78). In other words, he doubts that the current system can follow Greg Koski's call to "move beyond the culture of compliance, to move to a culture of conscience and responsibility." [Philip J. Hilts, "New Voluntary Standards Are Proposed for Experiments on People," New York Times, 29 September 2000.]

Instead, Burris writes,

As a deliberative body, the IRB has been deeply compromised by its authority. If we leave aside the apparently rare cases of extreme conduct, reasonable minds can and generally do differ on the application of principles such as justice, beneficence, and autonomy . . .

Ethical principles are about illumination, not adjudication . . . Yet the IRB, in spite of its roots in philosophical deliberation, is structurally required to act as an adjudicatory body. At the end of whatever thoughtful discussion takes place, it must produce a ‘‘right answer,’’ telling the investigator whether the protocol must be changed or abandoned. Even to the extent that deliberation does illuminate key issues, the typical absence of the researcher, the key moral agent in the matter, undermines the value of the exercise in promoting virtue.

The same features that might, in the absence of authority, make the IRB a fruitful deliberative body – its diverse composition and informal decision-making style – are toxic to its capacities as an overseer of research. (70)

Burris makes several interesting proposals, including "breaking the current regulatory system in two, building one regulatory approach for biomedical experimentation and another for social, behavioral, and epidemiological research." (74) But the key recommendation is "to keep ethics separate from the power to control the design and conduct of research." Burris wants to

deprive IRBs of the power to independently stop or alter a study at all, placing a burden on the IRB to make a case for changes to a higher authority (such as a university administrator). Such constraints would, in practical terms, require the IRB to persuade the investigator through discussion that a study had ethical problems. This would not only be likely to reduce erroneous changes but also give the researcher the opportunity to take part in ethical deliberation as an autonomous agent. As side benefits, an IRB speaking directly with the researcher might be better able to make credibility determinations or uncover mistakes, and would certainly get a better idea of the costs its proposed alterations would impose. (76)

In January 2007, I proposed an even bolder shifting of the burden, by making IRB review voluntary, at least for social science projects. And Burris might well agree; he isn't sure that any regulation is necessary for "social, behavioral, and epidemiological research, where there is virtually no risk of death or serious injury." (77)

I am intrigued by Burris's analysis of the tensions inherent in the IRB enterprise, but I am left with three questions.

First, how different is the IRB regime from what Burris describes as "traditional top-down regulation and hard law"? (66) Presumably regulators with other missions, like food safety and pollution control, would like to promote virtue as well as overseeing behavior. For example, the Code of Federal Regulations includes many requirements that manufacturers use "good engineering judgment." Is it any easier to reach consensus on engineering judgment than on matters of research ethics? It may be; the Common Rule's demands that "risks to subjects are minimized" and that "risks to subjects are reasonable in relation to anticipated benefits" are quite possibly vaguer and more ambitious than anything that engineers face. But if this is the case, it would be nice for Burris to show that the internal tensions of the Common Rule are worse than those afflicting other regulatory regimes.

Second, does Burris think that IRBs ever functioned well? The federal government has required IRB review of some research since 1966, yet almost all of Burris's citations concerning the problems of IRB review were written after 1998. Robert Levine and Jonathan Moreno argue that before the crackdown of 1998 there were some "good old days" of "moderate protectionism." [Robert J. Levine, "Empirical Research to Evaluate Ethics Committees' Burdensome and Perhaps Unproductive Policies and Practices: A Proposal," Journal of Empirical Research on Human Research Ethics 1 (September 2006), 3; Jonathan D. Moreno, "Goodbye to All That: The End of Moderate Protectionism in Human Subjects Research," Hastings Center Report 31 (May-June 2001): 9-17.] Does Burris agree? If so, it's a little harder to blame today's problems on flaws inherent in a decades-old system. Harder, but perhaps not difficult. Burris writes of a "one-way ratchet" that "increases the number of reviews, the paperwork, and required training with each major scandal." (73) So he may believe that the ratchet was always waiting, and that the good old days were inevitably numbered.

Third, what is to be done with Burris's insights? Burris concedes that he doesn't know how to put his proposals into effect. "Ideas are a good start," he writes, "but the political and social forces that produced the current system continue to exert their powerful influence. People seem to like regulatory systems that purport to prevent all bad outcomes. They like to have institutions to blame when unavoidable harms transpire, and they want to be told that changes have been made to make us all safe again." (80) And "stopping this one-way ratchet . . . would require what has so far been absent: support of the political leadership in the face of public intolerance of even a small rate of error." (73) Without quite saying it, Burris is calling for legislative overhaul.

Burris's article makes an interesting contrast to Felice J. Levine and Paula R. Skedsvold, "Where the Rubber Meets the Road: Aligning IRBs and Research Practice," PS: Political Science & Politics 41 (July 2008): 501-505. In their reply to my comments on that article, Levine and Skedsvold wrote that "short of significant regulatory reform (which is not likely) or other creative solutions, social scientists will find themselves at this same place years from now. Even Congressional action (e.g., at least one bill is being redrafted now) is not likely to help matters."

While not as pessimistic as Levine and Skedsvold, I do agree that regulatory or legislative change would be difficult. Whatever the merits of Burris's plan to strip IRBs of their coercive power, his proposals are anything but modest.
