In response to concerns that researchers could not be trusted to watch other researchers, in 1981 the Department of Health and Human Services added the requirement that "each IRB shall include at least one member whose primary concerns are in nonscientific areas; for example: lawyers, ethicists, members of the clergy." The language of the provision shows that the department did not imagine applying these rules to nonscientific researchers (such as lawyers, ethicists, or members of the clergy), but that's not the point here. The point is that the regulations' drafters expected IRBs to be dominated by experts "possessing the professional competence necessary to review specific research activities," with one or two lay members added to represent the conscience of the community.
Recently, some have called for a greater role for non-expert members. In 2001, for example, the National Bioethics Advisory Commission recommended that "members who represent the perspectives of participants, members who are unaffiliated with the institution, and members whose primary concerns are in nonscientific areas . . . should collectively represent at least 25 percent of the Institutional Review Board membership," rather than the single member now required.
But a bigger problem may be not a lack of lay representation, but a lack of expertise. In their 2002 paper, "The Crisis in Human Participants Research: Identifying the Problems and Proposing Solutions," Anne Wood, Christine Grady, and Ezekiel J. Emanuel described this problem in IRB review of clinical research:
A single IRB often reviews research on a wide variety of scientific topics and research settings, some of which are not aligned with the scientific expertise of the board members. For instance, IRBs may need to review research studies using drugs or interventions that lie outside the expertise of any of the members. Similarly, IRBs may review research studies being conducted in a developing country when none of the IRB members have first hand experience much less professional expertise concerning the health care infrastructure or social and cultural practices of that country. Further, vital information about experimental drugs or interventions may not be published but only known by experts in the research area through conference presentations or word of mouth. To remedy these deficiencies in information, IRBs can consult experts or investigate external materials some of which may not even be published. This requires a commitment of IRBs limited resources and time, and therefore may not be readily done. Indeed, it is fair to say that IRBs are relatively passive, responding to the information provided rather than actively seeking information in addition to that submitted in the research protocol. Under these circumstances, IRBs can make poor decisions about the permissibility of a study that can sometimes result in avoidable harms to participants.
In non-clinical research, the problem may be greater. Even when non-biomedical research is reviewed by a board nominally specializing in "social and behavioral" research, those categories are too broad to assure researchers that their projects will be reviewed by anyone in the same discipline. Many IRB horror stories arise from reviews by boards without relevant expertise. For example, Jennifer Howard's Chronicle of Higher Education article, "Oral History Under Review," noted that "no historians currently sit on Purdue's social-sciences IRB, which is chaired by Richard D. Mattes, a professor of foods and nutrition." That background led Mattes to declare, bizarrely, that talking to someone is "not different from creating a database of DNA." Whatever Mattes's knowledge of nutrition, when it comes to history, he is clearly a layperson.
I have seen three types of proposals to remedy this problem.
First, some call for frustrated researchers to join their local IRBs, thus bringing their expertise to the board (see various comments on the article, "Reviewing the Reviewers," at Inside Higher Ed). If the board is truly intransigent, this is obviously not going to help, since one member cannot sway a board. More likely, however, boards will delegate the task of review to the most expert member. This is what happened to anthropologist Rena Lederman when she joined Princeton's review board, as she reports in her essay, "The Perils of Working at Home." She could not understand the ethics of the social psychologists who dominated the board, and they could not understand ethnography. Rather than hash out the differences, members avoided evaluating proposals in other disciplines:
The potential for cross-disciplinary conflict latent in the panel's work was . . . systematically defused by an explicit etiquette—characteristic of the university, generally—of disciplinary autonomy. Those of us on the panel would, from time to time, remind ourselves that "we're not here to evaluate the research"; that is, technical evaluations of research design and significance were—within rather broad limits—understood to be the proper concern of disciplinary peers (e.g., departmental thesis advisors and external grant reviewers), not of the IRB. In this way, overt expressions of our respective disciplinary worldviews were muted.
In other words, each discipline left the others alone, which suggests that the whole concept of a board was abandoned. It sounds like a criminal waste of the time of eminent scholars.
A second suggestion is that ethical review of non-biomedical research be handled by departments. The National Science Foundation, in its "Frequently Asked Questions and Vignettes," suggests that individual departments create human research committees to review classroom exercises, which are not covered by federal regulations. Likewise, the AAUP's 2006 report on "Research on Human Subjects," while declining to "recommend alternatives to imposing the requirement of IRB approval on research that is not federally funded," nevertheless suggests that "schools might consider an alternative under which the approval required is limited to approval by the researcher's department or other appropriate academic unit."
Full devolution to departments might work well in most cases, but it provides no clear path for departments lacking the expertise to review potentially unethical proposals. For example, since most historians work primarily with documents, many history departments may have a single faculty member—or even none—who regularly conducts interviews. Or an anthropology department might lack an expert in conducting research in countries with authoritarian regimes and low rates of literacy. Should a graduate student present a proposal for such work, these departments might not be able to give the needed advice.
A third suggestion would replace the 4,000 to 6,000 local IRBs now in existence with some kind of centralized system more likely to match expert reviewers to particular proposals. This suggestion has come primarily from clinical researchers, such as Wood, Grady, and Emanuel, cited above. Another version, with references to several more, is Rita McWilliams, Carl W. Hebden, and Adele M. K. Gilpin, "Concept Paper: A Virtual Centralized IRB System." I won't comment on these proposals' fitness for biomedical research. For the humanities and the social sciences, I would say they seem extremely cumbersome and likely to result in needless paperwork for routine proposals.
On the other hand, a centralized system might prove very helpful in special cases beyond the competence of departmental reviewers. Were present regulations clarified or changed, centralization would not necessarily take the coercive forms envisioned by the clinical researchers. A history department ethics committee seeking outside expertise could, for instance, post a query on H-Oralhist, which already serves as a way to tap the collective wisdom of oral historians on matters ethical, technical, and methodological. Or, just as scholarly journal editors maintain lists of experts ready to review manuscripts, professional associations could compile lists of researchers with experience in various difficult situations, who could be called upon to review proposals. Thus, combining proposals for devolution with proposals for centralization could yield a system of review that eased the path for routine research, while quickly matching hard cases to expert advisers.
Too many defenses of IRB review rely on the fallacy of the false dilemma, arguing that because some interview and survey projects present ethical challenges, all such projects require IRB review or formal exemption by an IRB delegate. Proposals for devolution to the departments, or centralization to national bodies, show that review by local IRBs is not the only way to get another set of eyes on a researcher's plans. Indeed, when it comes to finding reviewers who understand the proposals sent to them, it may be the worst way.