Oakes and Silver "encourage researchers to investigate scientifically IRB oversight of behavioral and social science research. Such research could determine whether IRBs are consistent in applying the federal regulations, whether IRBs are taking advantage of the flexibility that’s built into the regulations, and whether relationships between IRBs and social scientists are less strained on campuses that have separate IRBs to review behavioral and social science research. The resulting data could shed light on ways to relieve tensions between these two groups."
That sounds good, but it's a bit disappointing that the article does not acknowledge the considerable research already completed on this topic, much of it already cited on this blog. Nor does it remark on the curiosity that IRB oversight has continued for four decades without anyone knowing if it does any good.
Moreover, the article states as fact some beliefs that should be investigated with just the sort of research it calls for. I hope Oakes, Silver, and AAHRPP will allow data to challenge some of their own presuppositions.
Here are some questions that could be answered by further research.
1. When did IRB review of social science go bad, and why?
AAHRPP thinks it already knows the answer to this one. The article claims that "Tensions began building in the late 1990s in response to increased government scrutiny of research involving human participants," and that "the regulations have not changed. What’s new is their enforcement and, in many instances, that enforcement is overdue." I am at work on a history of IRB review of social science and humanities research, and I think more research can challenge this view.
The first part of the AAHRPP claim is doubtful; social scientists have protested IRB regulations since 1966, and tensions have waxed and waned since then. If we take this longer view, then the assertion about the immutability of the regulations is wrong; the regulations--first promulgated in 1974--changed twice, in 1981 and 1991. And the 1991 revisions greatly expanded the reach of IRBs. The 1981 regulations exempted survey, interview, and observational research unless it "deals with sensitive aspects of the subject’s own behavior, such as illegal conduct, drug use, sexual behavior, or use of alcohol" and if "the subject’s responses, if they became known outside the research, could reasonably place the subject at risk of criminal or civil liability or be damaging to the subject’s financial standing or employability." The 1991 regulations, in contrast, eliminated the "sensitive aspects" clause and added potential harms to reputation to the list of triggers for IRB review. These changes were made over the objection of social scientists. And they set the stage for the conflict of the 1990s and today.
The claim that "what's new is [the regulations'] enforcement," is only half-true. Also new is the guidance issued by OPRR/OHRP since 1995 that reversed previous policies.
Finally, the article claims that "enforcement is overdue." Really? What errors did social scientists commit in the 1980s--a decade of relatively light regulation?
2. Why do IRBs sometimes delay or prohibit social science research?
Dr. Oakes states, “IRB members are not those folks who are looking to thwart your study. They are peer researchers who have a job to do.” But clearly some IRB members are looking to thwart studies, or else studies wouldn't get thwarted as often as they do. The question is how many IRB members do this, and why.
One part of this question concerns membership. Oakes's claim that IRB members "are peer researchers" depends on an odd definition of peers. In the NIH peer review process, for example, proposals are reviewed by study sections whose members are chosen for their expertise. The NIH's Center for Scientific Review requires, among other things, that
"* Expertise is the paramount consideration when developing/updating a study section roster.
"* Each scientific area reviewed by the study section needs appropriate expert representation."
IRBs theoretically must include experts on each type of research reviewed, but Oakes knows as well as I that this requirement is often ignored. Additional research might indicate how often a researcher faces an IRB with no expertise in the methods under review.
Then, of course, some IRB members are not researchers at all, but the "one member whose primary concerns are in nonscientific areas" required by the regulations. As Laura Stark's dissertation suggests, these members can be particularly undisciplined in their meddling.
3. What types of research now fall subject to IRB review?
Like PRIM&R, AAHRPP thinks that IRBs review only two kinds of scholarship: biomedical research, and something called "behavioral and social science research." The article states, "AAHRPP’s Founding Members, Board of Directors, Council on Accreditation, and Supporting Members all include representatives of organizations engaged in behavioral and social science research."
This statement suggests the fallacy of the undistributed middle term:
* Ethnographers are represented by organizations engaged in behavioral and social science research.
* Organizations engaged in behavioral and social science research have a voice in AAHRPP.
* Therefore, ethnographers are represented by organizations that have a voice in AAHRPP.
The middle term is undistributed in the second premise, since it is not true that all organizations engaged in behavioral and social science research have a voice in AAHRPP.
Here's a counterexample:
* Countries in South America, Africa, and South Asia are not in North America or Europe.
* Countries from parts of the world other than North America and Europe are permanent members of the UN Security Council and the G-8.
* Therefore, countries in South America, Africa, and South Asia are permanent members of the UN Security Council and the G-8.
In committing this fallacy, AAHRPP lumps together a dozen or more scholarly disciplines--each with its own history, methods, and ethics--into a single category: "behavioral and social science research." The AAHRPP website does not list the disciplinary affiliations of members of its Board of Directors, Council on Accreditation, or list of site visitors, but if there's a journalist, historian, or folklorist in the lot, I'll be surprised.
To take the example I know best, oral historians do not expect psychologists, social workers, or education researchers to understand or represent their interests. AAHRPP (like PRIM&R) should find out how many disciplines are now subject to review, and include representatives from all of them.
4. What models of ethical review exist, and what models might we imagine?
The article asks "whether relationships between IRBs and social scientists are less strained on campuses that have separate IRBs to review behavioral and social science research." But that is only one of several alternative systems in place on various campuses. For example, Macquarie University delegates ethical review to a number of subcommittees with special expertise in certain fields. And the University of Pennsylvania allows researchers using some social science methods to forgo "a fixed research protocol." And we can imagine even more models, some of which would require redrafting present regulations, others of which might not.
I appreciate AAHRPP's call for research, and I hope it agrees that research is most valuable when the answers are not predetermined.