In his PS contribution, Brian Calfano worries that the Montana postcard fiasco will lead IRBs to impede political science field experiments, especially at teaching institutions.
[Brian R. Calfano, “‘I’ Does Not Mean Infallible: Pushing Back against Institutional Review Board Overreach,” PS: Political Science & Politics 49, no. 2 (April 2016): 304–8, doi:10.1017/S1049096516000251.]
Field experiments are perhaps the most controversial of political science research methods, and my intention in this article is to encourage those at institutions where IRBs have proven hostile to the methodology. Although the Montana mailer controversy has made life difficult for scholars using field-based interventions, fallout may not be evenly distributed across institutions when sorted by type (e.g., R1s versus more teaching-oriented institutions). Although I am unaware of any available statistics on this issue, it is reasonable to assume that those at institutions where IRBs lack experience with field-experiment methodology (and, presumably, refuse to be educated) may be more likely to face a capricious denial of their research proposal. Given their increased classroom instruction and service responsibilities, IRB members at teaching institutions—on average—have less time and motivation to be familiar with cutting-edge research techniques versus those at research-intensive schools (exceptions notwithstanding). Unless these IRBs are willing to be educated on the method, a “perfect storm” of IRB ignorance and/or recalcitrance and the desire of newer faculty to make tenure and/or “move up” the institutional ladder means that bias against field experiments is likely to affect scholars at “teaching” institutions (i.e., liberal arts colleges and Master’s-granting institutions) the most. This means field experiments may become an off-limits methodology for a large swath of political scientists whose IRBs are less aware of discipline-specific research trends, although there is no guarantee that IRBs at R1 institutions are always supportive of field-based interventions either. And, it is why simply allowing IRBs to pass unfair judgment on field experiments as if the “I” stands for “infallible” is intolerable.
As Calfano’s repeated use of the terms “may” and “likely” indicates, he is speculating about possible future trends rather than reporting empirical findings.
I do question the assumption that IRBs at research institutions are less inclined to hamper research. It is equally plausible that IRBs at R1 institutions, like Mayer’s University of Wisconsin-Madison, are more likely to see their role as protecting the stream of research funding, and are therefore more risk averse, while IRBs at teaching colleges, like Macalester, allow room for innovation.