Monday, June 30, 2014

A Bit of Historical Perspective on the Facebook Flap

IRBs and behavioral research are all over the news, as a result of a paper that manipulated the news feeds of 689,003 Facebook users.

[Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks.” Proceedings of the National Academy of Sciences 111, no. 24 (June 17, 2014): 8788–90. doi:10.1073/pnas.1320040111.]

Michelle Meyer has posted a detailed analysis of the regulatory context, explaining multiple ways a project like this could have been approved. She concludes that "so long as we allow private entities freely to engage in these practices, we ought not unduly restrain academics trying to determine their effects."

[Meyer, Michelle N. “How an IRB Could Have Legitimately Approved the Facebook Experiment—and Why That May Be a Good Thing.” The Faculty Lounge, June 29, 2014.]

I have little to add to Meyer's excellent post, except a bit of historical perspective. Psychological experiments—whether in the lab, in the field, or online—fall outside my main area of concern, but perhaps I can offer a few relevant points.

1. Psychological Field Experiments Have a Long History

Over at Slate, Katy Waldman presents the Facebook experiment as a human rights violation, quoting James Grimmelmann, who in turn claims that "informed consent [is] the ethical and legal standard for human subjects research."

If that were true, we'd need to reconsider not only Facebook's latest manipulation, but a line of research (social psychology field experiments) dating back roughly half a century, in which researchers staged some kind of performance for unwitting subjects to see how they'd react.

In looking for examples from the 1970s (when the human subjects regulations were crafted), I came across this peach: Peter Suedfeld, Stephen Bochner, and Deanna Wnek. “Helper-Sufferer Similarity and a Specific Request for Help: Bystander Intervention During a Peace Demonstration.” Journal of Applied Social Psychology 2, no. 1 (March 1972): 17–23. doi:10.1111/j.1559-1816.1972.tb01260.x.

According to the abstract (why bother reading the article, we're doing Facebook today),

Eighty randomly selected male participants in the April 1971 peace demonstration in Washington, D.C. were approached by a young women E who asked them to help her friend who was feeling ill. The “friend” was a young male E, in either conventional or “hip” clothing, who was displaying either a “Support Nixon” or a “Dump Nixon” sign. The dependent variable was a 5-point ordinal scale of cooperation with a series of specific requests, which ranged from going over to the distressed E to providing bus fare and help for both Es to leave the area and go home. All 80 Ss went to the E and 79 helped to some extent. There was more helping behavior in the morning than in the afternoon, when the program of activities had intensified; with Ss who were tested in the afternoon, the E displaying a “Support Nixon” sign attracted less helping behavior than the “Dump Nixon” condition. The dress manipulation (implicit attitudinal similarity) had no effect.

I doubt that Facebook's algorithms served up a more depressing news feed message than "Support Nixon."

2. Controversy Over Field Experiments Has a Long History

Research projects like these cannot obtain informed consent of the sort Grimmelmann believes is required by federal law. But federal law has never required informed consent in all human subjects research.

Members of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research were at least somewhat aware of the ethical challenges of field experiments. In their December 1977 and January 1978 meetings, they discussed the problem. For example, in December 1977, staffer Bradford Gray told the commission,

There is a lot of field experimentation in social psychology, where something is fit into a public place, and observations are made. It might be something that is put in a store window, or it might be a wallet left on the street, or it could be any number of things. It could be walking a person dressed in a particular way down the street and observing the response. There are all sorts of things like this that are done. There are whole journals that publish this stuff.

Commissioner Joseph Brady interjected, "testimony from Alan [sic] Funt, it seems to me."

Perhaps as a result of this discussion, the Commission's 1978 Institutional Review Boards: Report and Recommendations stated that

An IRB may waive the informed consent requirement in [social science] research when it finds a number of factors to be present. The behavior to be studied must in some sense be public, e.g., responses of businesses or institutions to members of the public, or social behavior in public places. Nondisclosure must be essential to the methodological soundness of the research, and must be justified by the importance or scientific merit of the research. Further, the research must present no more than minimal risk and be unlikely to cause embarrassment to the subjects.

After some debate, this led, in 1981, to the current provision [46.116(d)] allowing IRBs to waive informed consent requirements. (More on that provision in Meyer's post.)

This is not to say people have to like field experiments, or even that the American Psychological Association's Code of Conduct permits them. (See Standard 8.05.)

But critics of the Facebook experiment should at least be aware that we are talking about a mode of research that existed long before Facebook, and that federal ethics advisors and regulators specifically decided that it should proceed.

3. It's Nice to Hear from the IRB

When the news first hit, both Grimmelmann and Meyer were forced to guess whether the paper had been reviewed by an IRB and, if so, how that IRB had reached its decision to approve the study, since the original article offered no explanation of whether or how it had received IRB approval.

That mystery has been cleared up a bit, with a Cornell press release explaining that the Cornell IRB had concluded that the Cornell researchers were "not engaged in human subjects research." But this was exceptional. Most IRBs do not publish their rulings in any form.

That's a pity. Back in 1973, Yale law professor Jay Katz, a leading expert on human experimentation, told Congress of “the current uninformed and secretive climate which pervades research decision-making. At present, decision-making remains divorced from pertinent prior decisions of other committees or from scholarly and public evaluation and criticism.”

Katz insisted that important decisions needed to be published, so they could be read and discussed nationwide. “The result,” he predicted, “would not only be better thought out decisions, but also a more complex system of controls which, in effect, [would take] into account much broader sources of information as to societal values . . . I regard such a development, analogous to the experience of the common law, as the best hope, for ultimately providing workable standards for the regulation of the human experimentation process.”

If Congress had required the publication of important IRB decisions, all the folks now outraged about the Facebook paper would be able to read not only the brief press release about the reasoning of the Cornell IRB, but also the reasoning of other IRBs that have approved even more annoying social experiments. Then we'd be having a more informed debate over the ethics of this kind of research.
