Friday, January 18, 2013

Armchairs vs. Evidence in the Journal of Law, Medicine & Ethics

Last week I promised some comments on the Winter 2012 issue of the Journal of Law, Medicine & Ethics, which features a symposium entitled "Research Ethics: Reexamining Key Concerns."

The contributions reinforced my sense that the IRB debate is in part a contest between evidence-based approaches and armchair ethics.

The most armchair-bound piece seems to be Alex John London's "A Non-Paternalistic Model of Research Ethics and Oversight: Assessing the Benefits of Prospective Review." Relying on such classic thought experiments as Hardin's "Tragedy of the Commons" and Akerlof's "Market for 'Lemons,'" he claims that IRB review

helps to provide a credible social assurance to the American people that social institutions, funded by their tax dollars and empowered to advance their health and well-being, work to: respect and affirm the moral equality of all community members; prevent the arbitrary exercise of social authority; and help to create a “market” in which the diverse stakeholders, often working to advance diverse ends, collaborate in a way that advances the common good.

Yet his analysis depends on unsupported claims like "If all researchers were ideally rational and knowledgeable . . . almost all protocols would be submitted in a form that would be acceptable with, at most, minor revisions. In this environment, IRBs would be able to quickly approve most protocols and their actual review would add little marginal value." In other words, he assumes that IRBs are competent at approving ethical proposals.

Such claims can be challenged with both armchair and empirical evidence. On the armchair side, London ignores the system of incentives operating on IRBs (above all, the imperative to avoid federal suspension of research funds), which leads them to impose onerous restrictions on innocuous protocols. If we were to take seriously the kind of economic modeling London offers, I suspect it would lead us to believe that IRB behavior fits models of rational ignorance, or possibly rational irrationality. But I lack the incentive to investigate this fully.

London also ignores empirical works such as those of Carl Elliott. London frets that bad studies will drain "the reservoir of public trust," but so do bad IRBs. Does London suppose that readers of White Coat, Black Hat or Elliott's opinion pieces finish with warm, trusting feelings about medical research approved by IRBs?

Steven Joffe's contribution, "Revolution or Reform in Human Subjects Research Oversight," is also largely speculative. "What might the consequences of abandoning the requirement for prospective oversight of research be?" he asks. "Given the lack of relevant data, it is not possible to provide an evidence-based answer to this question."

But relevant data do exist, in the form of before-and-after comparisons of countries that have expanded or contracted review requirements, and comparisons among countries with different regulatory regimes. For example, I have argued that in the fourteen years following the 1981 revision of 45 CFR 46 (1981-1995), social scientists in the United States faced few IRB restrictions yet did not produce the wave of scandalous research that IRB advocates might predict.

In "More Than Cheating: Deception, IRB Shopping, and the Normative Legitimacy of IRBs," Ryan Spellecy and Thomas May present analysis more grounded in experience. They argue that "the current IRB system is flawed at a very fundamental level," basing this argument in part on the incentives operating on IRBs "to err on the side of not approving research," but also on Abbott and Grady's empirical work and on their own experience as IRB members. An IRB system this bad, they warn, teaches researchers to "ignore, avoid, or outright violate policies aimed at protecting research participants."

The loudest, clearest call for evidence-based reform is "IRB Decision-Making with Imperfect Knowledge: A Framework for Evidence-Based Research Ethics Review" by Emily Anderson and James DuBois. They recommend five steps:

1. Translation of Uncertainty to an Answerable Question
2. Systematic Retrieval of the Best Available Evidence
3. Critical Appraisal of Evidence for Validity, Relevance, and Importance
4. Application of Evidence to Make a Decision
5. Evaluating Performance

They then offer examples of how this process might be applied. For instance, an IRB worried that paying heroin addicts to participate in interviews would use the money to buy drugs might review the empirical research on the subject and learn that "existing evidence, although limited, suggests that six $70 cash payments over the course of five years will not contribute to an increase in drug use." Yet they note that "the gap between empirical research on research ethics and the application of evidence to IRB review is still quite vast," and that for decisions to be based on evidence, "the culture of IRB review and decision-making must change."
