[Bernstein, Michael. “The Destructive Silence of Social Computing Researchers.” Medium, July 7, 2014. https://medium.com/@msbernst/9155cdff659.]
Writing about the controversy over the recently published study of Facebook users' posts after their news feeds had been altered, Bernstein laments the degree to which the debate has been dominated by "communication scholars, sociologists, policy folks and other really smart researchers." (Oddly, he leaves bioethicists off that list.) While acknowledging their "insightful analyses and critiques," he calls for social computing researchers to push back against those who take clinical medical trials to be the norm.
Informed consent seems to be the crux of the issue. Should we require it? It could take many forms: an opt-in for each study, a one-time “opt in to science!” button on each site, or recruitment through advertisements. And what about debriefing participants afterwards?
Regardless of the moral imperatives, let me start by saying as a designer of social systems for research that any such requirement will have an incredibly chilling effect on social systems research. IRB protocols are not the norm in online browsing, and so users are extremely wary of them. Have you ever tried putting a consent form inline on your social site? I have, and I can tell you that it drives away a large proportion of interested people who would probably actually want to participate if the interface were different. It looks scary. It’s opt-in, and defaults are powerful. Forget that it’s there to protect people — it makes the entire site look like something underhanded is going on. “I just came to check out this site, I don’t want to be agreeing to anything weird.” It’s the wrong metaphor for today.
Indeed, the lab experiment has been the wrong metaphor for other kinds of research since 1965.
The real problem, Bernstein suggests, is users' misunderstanding of the websites they use. "Thousands of online experiments are being run every day by product managers at Google, Facebook, Starbucks, Microsoft, and the Obama Campaign. Let’s take a user-centered approach and understand what peoples’ expectations are."
That's going to be an uphill climb, since it seems that people's expectations are rather confused. My favorite illustration so far is a Washington Post editorial:
Users agree to terms and conditions when they join the social network. In-house experiments, called “A/B testing,” are routine, too. They observe how users react to small changes in format and content, such as a bigger icon or a different shade of blue. The purpose is to improve user experience on the site.
But this crossed an important line: Unlike typical A/B testing, Facebook tried to directly influence emotions, not behaviors. Its purpose was not to improve user experience but rather to publish a study.
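For readers unfamiliar with the mechanics the editorial alludes to, here is a minimal sketch of how a site might bucket users into an A/B test. All names here are hypothetical; real systems layer targeting, logging, and statistics on top, but the core assignment step often looks like this:

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a variant by hashing.

    Hashing the (experiment, user) pair, rather than drawing a random
    number, keeps each user in the same bucket across visits without
    storing any per-user state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# e.g. deciding which "shade of blue" a given visitor sees
bucket = assign_bucket("user-42", "blue_shade")
```

Note that nothing in this machinery distinguishes a "format" experiment from an "emotion" experiment; the assignment code is identical, which is part of why the editorial's line is hard to draw.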
So it's OK to manipulate people's behavior without their knowledge, but not their emotions? Why is that so, and how could one possibly tell the difference?