Monday, March 25, 2013

Report from the National Academies Workshop

Last week I attended the Revisions to the “Common Rule” in Relation to Behavioral and Social Sciences Workshop sponsored by the National Academies.

I live-tweeted the event on my @IRBblog account, and I have collected those tweets on Storify.

What follows are what I consider some of the key messages from selected presenters. The statements following each name represent my summary of the remarks, not necessarily a quotation or paraphrase.

Connie Citro: Hey, We Did This Ten Years Ago


Connie Citro of the National Research Council was one of the editors of the 2003 report, Protecting Participants and Facilitating Social and Behavioral Sciences Research. She explained the origins of that report; while the 2002 report on medical research, Responsible Research: A Systems Approach to Protecting Research Participants, got funding from HHS, the National Research Council itself had to pay for the companion volume on social and behavioral science from its own funds. Citro suggested that the 2003 report could still guide policy, and she seemed miffed that the ANPRM paid so little attention to it. Social and behavioral research, she argued, remains the neglected stepchild of human subjects regulation.

While I see flaws in the 2003 report (see Ethical Imperialism, 176-177), I agree that the ANPRM writers could have paid it more attention. And, as I noted recently, I wish the National Academies had paid more attention to the report's recommendation that "any committee or commission that is established to provide advice to the federal government on human research participant protection should represent the full spectrum of disciplines that conduct research involving human participants."

Jeffery Rodamar: What, Me Worry?


Jeffery Rodamar of the Department of Education began with a quotation from Montaigne: "Nothing is so firmly believed as what is least known." But given what followed, he would have done better to start with Doctor Pangloss.

Rodamar's first major claim was that IRBs do not place a great burden on researchers. Looking at statistics on turnaround time, he noted that expedited reviews take only about a month from submission to final approval, and full review takes, on average, a month and a half. Few studies are rejected outright.

Such analysis obscures a great deal. First, it does not consider the impact of these delays. For a multiyear drug study, a one-month wait may not mean much. For a political scientist seeking to react to events, or a student hoping to get a project done during a one-semester research seminar, one month can be fatal. Nor would Rodamar's statistics pick up studies gutted by the requirements imposed by IRBs, studies withdrawn after the researcher abandoned hope, or studies never even attempted because the researcher knew it would be too difficult to get approval.

Rodamar then claimed that the University of Michigan survey showed that social scientists are not much more dissatisfied than medical researchers. I believe this to be a misreading of the data, which did not break down researchers by discipline.

Finally, Rodamar offered a grab-bag of studies he considered harmful to participants. He did not argue that the harms of these could have been foreseen by IRBs, and he even threw in some long-range consequences that IRBs are forbidden to consider.

Brian Mustanski: IRBs Don't Speak for Our Participants


Brian Mustanski reiterated the lessons drawn from his powerful 2011 article, "Ethical and Regulatory Issues with Conducting Sexuality Research with LGBT Adolescents: A Call to Action for a Scientifically Informed Approach."

He noted the divergence between IRBs' demands and the lives of the people he studied. If an IRB tells participants that they might need psychological services, and that's not true, how is that informed consent? If participants report that being asked questions made them feel important, why does the IRB prohibit the researchers from telling that to other prospective participants? Why can't social researchers note that participating may provide useful information to avoid HIV, if that is the case?

Charles Plott: Sometimes, There Really Is No Risk


Charles Plott of CalTech described lab experiments run by behavioral economists, political scientists, and others interested in decision making. This research has enormous consequences, since it can help policy makers devise efficient markets for things like airport landing rights, pollution permits, and radio spectrum licenses. And the most serious adverse impacts reported, among hundreds of thousands of participants, are things like being irritated by a question. To understand the risk of harm, he argued, you need to recognize when it doesn't exist.

Roxane Cohen Silver: IRBs Should Approve Generic Protocols


Silver, of the University of California, Irvine, is a psychologist who studies random, unpredictable events, e.g., natural disasters and school shootings. Ideally, she wants research to start almost immediately, since people can't accurately reconstruct their emotional experiences weeks or months after an event.

She has persuaded her IRB to approve a generic disaster protocol, which could allow her to get exempt or expedited approval within 24 hours of a request. As she described it, this is based on her IRB's knowledge that she and her team have been trained to do research with sensitivity, and to refer participants to help when appropriate.

Though Silver did not put it this way, the IRB has in effect gone from a protocol-review model to a researcher-certification model, along the lines proposed by Greg Koski. And Silver seems to like this idea; she disapproves of the idea of researchers descending on a traumatized area (e.g., post-Katrina New Orleans) without adequate training.

Silver noted that the ANPRM's proposal to define trauma research as necessarily "emotionally charged" would impede scientific progress; so would a one-week waiting period.

George Alter: Regulations May Not Keep Pace with Technology


Alter, director of the Interuniversity Consortium for Political and Social Research (ICPSR), explained that since technology changes rapidly, it would be better for regulations to focus on risk, not specific technologies.

He also noted that the ICPSR is developing online training specific to data privacy. Could this be an alternative to the mortifyingly stupid CITI Program?

Laura Stark: Alternative Models Can Address Local-Precedent Problem


Stark, author of Behind Closed Doors, noted that the ANPRM seeks to reduce variability among IRBs, a problem she attributes to their use of "local precedents" to decide cases. She presented some of the findings of her work; when she noted that IRBs often judge researchers' ethics based on grammatical errors and typographical mistakes, the audience giggled.

Stark noted three alternatives: study networks that allow centralized review; collegial review, such as the devolution of review to the department level; and the collection and dissemination of decisions and applications, e.g., Otago's TEAR.

Stark described variability as a problem for multi-site studies. When I asked her if the fact that two IRBs on opposite sides of a river could produce wildly different judgments on an identical study might indicate that at least one of the IRBs was simply wrong about regulations, ethics, or both, she said no, any result can be explained by community attitudes.

Thomas Coates: No Clear Line Between Social and Medical


Coates, whose PhD is in psychology and who directs the UCLA Program in Global Health, argued that there are no clear lines to be drawn among social, behavioral, and medical research in a project designed, for example, to ask why some people resist medical regimens prescribed for them. He stressed the importance of local knowledge; for example, asking about homosexuality will bring greater risks in countries where homosexuality is illegal.

Lois Brako: An Unchecked Box Can Do a Lot of Good


Brako, Assistant Vice President, Regulatory and Compliance Oversight at the University of Michigan, described a long list of measures her university has taken to reduce problems for investigators. An automated system tells them if their research is free from review. Exemptions can be delivered within a day or two; expedited approval in about 14 days. Her office looks for types of research that deserve exemption, and gives two-year approvals to reduce the workload of continuing review.

Some of this can be done within current regulations, but much of it only works for non-funded research. For example, current regulations specify elements of a consent process that are inapplicable to much social research. Inconsistent guidance from agencies is a big problem, and she thinks that some IRBs would benefit from OHRP telling them what they don't need to review.

[Editor's note: OHRP has never been good at this.]

In the Q&A, NAS Committee Member Richard Nisbett, also of the U of Michigan, told everyone that Brako had really turned things around there; before her arrival, the IRB could debate semicolons vs. commas, take weeks to grant exemptions, and reject consent forms identical to ones it had earlier approved.

Neither Brako nor Nisbett addressed the survey's findings that researchers don't think the IRBs are good at explaining their actions, and that only 44 percent of researchers believed that the changes made to their projects improved the protection of participants.

Committee member Robert Levine stated that the National Commission had recommended that each institution be able to decide which procedures could be expedited, and only later did HHS regulators change that to a fixed, federal list. As I pointed out to him later, the National Commission in fact recommended that any list crafted by an institution would need federal approval.

Rena Lederman: Regulations Should Embrace Their Inner Clinician


Reiterating the ANPRM response she helped prepare for the American Anthropological Association, Rena Lederman of Princeton argued that biomedical assumptions are embedded in the Common Rule, and that minor tinkering cannot fix that.

Consent requirements assume a thin, contractual relationship with subjects, she argued, whereas ethnographers seek thick relationships, not with "subjects" but with hosts, interlocutors, neighbors, collaborators, and friends. They do not find IRBs to be a welcoming space, since IRBs don't understand that ethnographers work in heterogeneous, dynamic communities different from their own.

Adapting IRB regulations to the social sciences and humanities would require a regulatory revolution, probably beyond the capacity of the National Academies committee. What is really needed is a commission made up of experts from disciplines not served by the current system of oversight.

Cheryl Crawford Watson: Longer Consent Forms, Less Informed Consent


Watson, of the National Institute of Justice, noted that informed consent is serious business when asking people about their criminal behavior. But when IRBs insist on consent forms that discuss nonexistent possibilities of physical injury from surveys, or inapplicable alternative treatments, participants may miss the important information.

Richard Nisbett: People Can Die for Want of Research


Nisbett, of the University of Michigan, is a member of the NAS Committee. His comments were the most forceful denunciation of IRB abuses.

Nisbett doesn't think that IRBs are effective at preventing unethical research by social scientists. Yes, people can name examples of inappropriate projects, but many of those have been approved by IRBs. IRBs can't teach people not to be stupid.

What they can do is block important research. Some of that research, even though not biomedical, can save lives, if it teaches city planners how to prevent violent crime, or helps people avoid obesity. And some research, he believes, should never be reviewed. He noted history as an example. It is not enough for regulators to tell IRBs what to review, he argued. They must tell IRBs what not to review, with examples.

My Thoughts


I left the workshop with a few major observations.

First, most of the people attending seem to believe that the regulatory revision process is not as stalled as Tom Puglisi believes. Maybe we won't get a Notice of Proposed Rulemaking this spring, but perhaps by the end of the year.

Second, I did not hear anyone express support for the ANPRM's proposal of using the HIPAA privacy rule to govern human subjects research. The consensus is that the rule would be overprotective in some areas, underprotective in others.

Third, money talks. One participant--I think it was Plott--noted that the committee and the panelists almost all come from major research universities, not smaller colleges and universities that may have trouble affording competent IRB staffs. I would add that most of the projects discussed were major, grant-funded quantitative studies. Who will speak for the undergraduate?

The workshop's agenda cautioned that "observers who draw conclusions about the committee's work based on today's discussions will be doing so prematurely." That's just as well, since I would not want to predict where the committee will go from here. The committee members heard from some who want only tinkering around the edges of the current system, some who want wholesale rethinking, and some who would like relatively bold reform (such as greatly expanded lists of exemptions) within the current system. I can't guess which will prove the most persuasive.
