Sunday, July 5, 2015

Exemption by the Numbers?

Two computer researchers describe a system at Microsoft Research designed to provide automatic approval for low-risk studies. Rather than follow the Common Rule’s exemption model of requiring IRB review if any of a series of conditions is met, the Microsoft system assigns numerical values to aspects of a proposal that bear some risk to participants. Proposals with a low total get immediate approval from an Excel spreadsheet.


[Bowser, Anne, and Janice Y. Tsai. “Supporting Ethical Web Research: A New Research Ethics Review.” In Proceedings of the 24th International Conference on World Wide Web, 151–61. WWW ’15. Republic and Canton of Geneva, Switzerland: International World Wide Web Conferences Steering Committee, 2015. doi:10.1145/2736277.2741654.]
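The mechanism, as I understand it, amounts to a weighted checklist with cutoffs: each risky answer adds to a total, and the total decides the review track. Here is a rough sketch of that kind of triage; the questions, weights, and thresholds are my own inventions for illustration, not anything taken from the paper.

```python
# Hypothetical sketch of numeric risk triage, loosely modeled on the mechanism
# described above. The factors, weights, and cutoffs are invented for
# illustration; they are not the rubric Bowser and Tsai used.

RISK_WEIGHTS = {
    "collects_identifiable_data": 2,
    "asks_sensitive_questions": 2,
    "involves_minors": 3,
    "deceives_participants": 3,
}

AUTO_APPROVE_MAX = 2   # total at or below this: automatic approval
EXPEDITED_MAX = 5      # total at or below this: human expedited review


def score_proposal(answers):
    """Sum the weights of every risk factor the proposal answers 'yes' to."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if answers.get(factor))


def route_proposal(answers):
    """Assign a review track based on the proposal's total risk score."""
    total = score_proposal(answers)
    if total <= AUTO_APPROVE_MAX:
        return "automated expedited approval"
    if total <= EXPEDITED_MAX:
        return "human expedited review"
    return "full board review"


print(route_proposal({"asks_sensitive_questions": True}))
# -> automated expedited approval
```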


The authors, Anne Bowser and Janice Tsai, reviewed 358 research papers and identified “124 cases where the ethics of a research study may be questionable.” Of these 124, only 13 involved elements so dubious (e.g., conducting research not in accordance with a website’s policy) that they merited ethical scrutiny on their own; the remaining 111 “flags” were raised only “when each of two questions are answered in a particular way.” The authors don’t give vivid examples, but the idea seems to be that sensitive questions alone might not merit review, nor would loose confidentiality protections. But sensitive questions combined with loose confidentiality protections demand attention.
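If that reading is right, each of these conjunctive flags is just a logical AND over a pair of answers, something like the sketch below; the particular pairing is my guess at the kind the authors mean, not an example from the paper.

```python
# Sketch of a conjunctive flag: neither answer alone triggers review,
# but the combination does. The question pair is invented for illustration.

def needs_review(asks_sensitive_questions, confidentiality_is_loose):
    """Raise a flag only when both risky answers occur together."""
    return asks_sensitive_questions and confidentiality_is_loose


assert not needs_review(True, False)   # sensitive questions, strong protections
assert not needs_review(False, True)   # innocuous questions, loose protections
assert needs_review(True, True)        # the combination demands attention
```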


It’s an interesting concept, but not one that this paper describes in detail. For instance, here is the authors’ description of how they evaluated the system:


Nine participants from our Redmond lab submitted 10 distinct research proposals… . Of the 10 proposals submitted, five were approved through automated expedited review; three were approved through human expedited review; and, two were designated for full board review. Though a few usability issues were noted, the majority of authors characterized the system as easy to use. Additionally, most appreciated the use of logic to avoid answering unnecessary questions: the order “makes sense.”


This sounds as though they learned more about interface design than about whether the automated scoring correctly sorted the ten proposals by degree of risk. The paper does not explain how closely the numerical scoring matches the existing Common Rule exemptions or the judgment of researchers or ethicists, or how it would have scored such controversial studies as the Facebook mood experiment.
