Sunday, September 28, 2014

Briefly Noted: Whitney, Bell, Elliott

Lacking time for full comment, I briefly note the publication of these two important, critical essays. Citations omitted from the quoted passages.

The Shell Game

[Whitney, Simon N. "The Shell Game: How Institutional Review Boards Shuffle Words." Journal of Translational Medicine 12, no. 1 (August 14, 2014): 201. doi:10.1186/1479-5876-12-201.]

Popular IRB guides ignore . . . subtleties and mangle the standard definitions. One handbook claims that "coercion means that a person is to some degree forced, or at least strongly pushed, to do something that is not good for him or her to do. In discussions of research regulation the term 'undue influence' is often used to describe the concept of coercion". This manual thus expands the narrow concept of coercion to include persuasion.

A second handbook agrees: "Coercion can be subtle: persuasion, argument, and personality can be used to compel an individual to act in a certain way.... Coercion—including all the subtle forms—has no place in research". There is, of course, no such thing as subtle coercion. A guide to IRB management and function claims that in recruitment for clinical trials, "the possibilities for misinforming or disinforming potential subjects abound" and "the possibilities for inadvertent, unintentional coercion, or undue influence are also high". Inadvertent or unintentional coercion is oxymoronic.

With encouragement from these guides, IRBs reject the standard meaning of the word and use "coercion" to refer to any statement, however innocuous, that might encourage trial participation. Some IRBs believe, for instance, that it is coercive for a consent form to mention that a study is funded by the National Institutes of Health.

Censorship in the Name of Ethics

[Bell, Kirsten, and Denielle Elliott. "Censorship in the Name of Ethics: Critical Public Health Research in the Age of Human Subjects Regulation." Critical Public Health 24, no. 4 (September 3, 2014): 385–91. doi:10.1080/09581596.2014.936727.]

Although the extent of the problems continues to be debated, the last few years have witnessed a growing institutional awareness that change is indeed necessary. For example, in December 2010, Canada's Interagency Panel on Research Ethics released revised national human ethics research guidelines that aimed to be more social science 'friendly'. Similarly, the US Office of Human Research Protections is currently toying with the possibility of sweeping changes to its national regulations. The proposed framework specifically highlights the over-regulation of social and behavioral research and the 'unwarranted variability across institutions... in how the requirements are interpreted and implemented'. Under the proposed regulations, many types of social science and behavioral research with 'competent adults' would be exempt from review.

However, somewhat ironically, just as those tasked with oversight have started to talk of scaling back research ethics regimes (or at least reining in their scope), elsewhere we see movement in entirely the opposite direction. Beyond the ways requirements for ethics review have become tied up with publication (and funding), an ever-expanding array of organizations have begun to develop their own procedures around ethics review. Although their impetus is typically a desire to ensure the research needs of the populations they serve are met, their proliferation illustrates the ways in which the existing problems have tended to produce more oversight and regulation rather than less. In many respects, this speaks to the self-perpetuating aspect of audit culture, whereby its rituals of verification create the very mistrust they are designed to dispel.

[2015-11-13. Edited to correct the link to the Bell-Elliott paper.]
