In comments on this blog's
introduction, bioethicist David Hunter of the University of Ulster asked me about my preferred alternative to IRB review, and I mentioned my hopes for departmental review (hopes shared by the AAUP). Lest our conversation get lost in the comments, I am moving it to this new posting:
DAVID HUNTER: I'd disagree on departmental review being best, for two reasons.
1. While a committee should have some knowledge and expertise in the area of the project, too much expertise brings it too close to the subject matter. This can mean it misses significant ethical issues because they are standard practice within a specific discipline. To give one example, psychologists often want to award part of their students' grade (10%) for being involved in their research. Most RECs I am involved in don't allow this practice because it is felt to be unduly coercive. I imagine that a REC/IRB composed entirely of psychologists might disagree.
2. It is important for a REC to be substantially independent of the researcher, but this doesn't happen in departmental review; instead, the REC has an interest in the research being allowed to go ahead.
My university presently runs on a departmental review model, and while I can't name names, I have personally seen both of the above issues arise.
I've written about these problems here:
Hunter, D., 'An alternative model for research ethics review at UK universities', Research Ethics Review 2, no. 2 (2006): 47-51. (Which unfortunately isn't available online.)
and here: Hunter, D., 'Proportional Ethical Review and the Identification of Ethical Issues', Journal of Medical Ethics 33 (2007): 241-245.
I certainly agree with you that IRBs shouldn't be dominated by medics and medical concerns; they should instead have a wide range of representation. I'm inclined to think, though, that the baseline ethical issues are similar, and while different rules may be appropriate for different disciplines, they flow out of the same background.
In terms of examples, here are a few; I can't be too specific with details for reasons of confidentiality.
1. A study of sexual attitudes in school children. It asked very probing questions, as one might expect, but the researchers didn't intend to get parental consent to carry out the research. A parallel can be found here:
India Research Ethics Scandal: Students made guinea pigs in sex study
No consideration had been given to what might be done if there was disclosure of harmful behaviour, etc.
2. A historian was going to a civil-war-stricken country to interview dissidents about the war and intended to publish identifying comments (without getting consent for this) that were likely to be highly critical of the current regime.
3. A social scientist wanted to understand children's attitudes towards a particular topic. As a blind, so that participants would not know which questions they actually wanted answers to, they proposed to use the Beck Depression Inventory. This contains questions about self-harm and future worth, and was potentially very distressing; it was not at all appropriate as a blind.
4. A student wished to conduct interviews with employees of a company on an issue that could significantly damage the company's profitability. No consideration was given to how to best report this information to minimise harm to the company.
I'm inclined to think that any sort of research involving humans can lead to harm, whether physical, social, financial, psychological, and so on. As such, the benefits and the risks need to be balanced, and it needs to be considered how to minimise that harm. That, I take it, is the job of the researcher. However, having sat on RECs for a while, I know it is a job that researchers sometimes fail at spectacularly; then it becomes the job of the IRB/REC. The difficulty is: without full review by a properly constituted REC, how do you identify those applications that have serious ethical issues?
ZACHARY SCHRAG: Thanks for these examples.
First, let me state that I am primarily interested in projects that fit Pattullo's proposal of 1979: “There should be no requirement for prior review of research utilizing legally competent subjects if that research involves neither deceit, nor intrusion upon the subject’s person, nor denial or withholding of accustomed or necessary resources.” Under this formula, the projects involving children (who are not legally competent) and the project involving undergraduates (whose course credit is an accustomed or necessary resource) would still be subject to review.
That said, I have little confidence that IRBs are the right tool to review such research. As for child research, under U.S. regulations, and, I believe, the rules of most universities, the studies could be approved by three IRB members wholly lacking in expertise on child development. (The regulations encourage but do not require the inclusion of one or more experts when vulnerable populations are involved.) Were I the parent of a child involved in such studies (and I'm proud to say that both my children have furthered the cause of science by participating in language studies), I would greatly prefer that the protocols be reviewed not by a human subjects committee, but by a child subjects committee composed mostly or entirely of people expert in child research.
For the psychology course and the history project, the real question is whether a departmental committee can be trusted to enforce its own discipline's ethical code. The code of the
British Psychological Society forbids pressuring students to participate in an experiment. And the ethical guidelines of the
Oral History Society require interviewers "to inform the interviewee of the arrangements to be made for the custody and preservation of the interview and accompanying material, both immediately and in the future, and to indicate any use to which the interview is likely to be put (for example research, education use, transcription, publication, broadcasting)." So yes, those sound like unethical projects.
Perhaps some departments would fail to correct these mistakes, just as some IRBs and RECs get them wrong. At some level this is an empirical question that cannot be answered due to the uniform imposition of IRB review. In the U.S., at least one university (the University of Illinois) had a system of departmental review in psychology that worked without complaint until it was crushed by federal regulation in 1981. With the federal government imposing the same rules nationwide, we can only guess about how well alternatives would work.
Moreover, departmental review would allow committees to bring in considerations unknown to more general ethics committees. For example, the British and American oral history codes require attention to preservation of and access to recordings, something that an IRB/REC is unlikely to ask about.
I would also add that something close to departmental review is typical of the standard IRB, i.e., one in a hospital or medical school. It's true that the U.S. regulations require "at least one member whose primary concerns are in nonscientific areas" and "at least one member who is not otherwise affiliated with the institution and who is not part of the immediate family of a person who is affiliated with the institution." But the rest of the members can be biomedical researchers of one stripe or another. If that's good enough for the doctors, how about letting each social science discipline form an IRB of its members, with a community member and a non-researcher thrown in?
Still, if IRBs/RECs limited themselves to holding researchers up to the standards of the researchers' own academic discipline, I wouldn't be complaining.
Where we really disagree, then, is on project 4. You write, a "Student wished to conduct interviews with employees of a company on an issue that could significantly damage the company's profitability. No consideration was given to how to best report this information to minimise harm to the company."
That sounds a lot like this case:
Kobi Alexander's stellar business career began to unravel in early March with a call from a reporter asking why his stock options had often been granted at the bottom of sharp dips in the stock price of the telecom company he headed, Comverse Technology Inc.
According to an affidavit by a Federal Bureau of Investigation agent, unsealed in Brooklyn, N.Y., the call to a Comverse director set off a furious chain of events inside the company that culminated yesterday in criminal charges against Mr. Alexander and two other former executives. Federal authorities alleged the trio were key players in a decade-long fraudulent scheme to manipulate the company's stock options to enrich themselves and other employees.
After the March 3 phone call from a Wall Street Journal reporter, the FBI affidavit said, Mr. Alexander and the other two executives, former chief financial officer David Kreinberg and former senior general counsel William F. Sorin, attempted to hide the scheme. Their actions allegedly included lying to a company lawyer, misleading auditors and attempting to alter computer records to hide a secret options-related slush fund, originally nicknamed "I.M. Fanton." It wasn't until a dramatic series of confessions later in March, the affidavit said, that the executives admitted having backdated options. The trio resigned in May.
That's an excerpt from Charles Forelle and James Bandler, "Dating Game -- Stock-Options Criminal Charge: Slush Fund and Fake Employees," Wall Street Journal, 10 August 2006. As far as I can tell, Forelle and Bandler made no effort to minimize the harms to the companies they studied or the executives they interviewed. Their "Perfect Payday" series won the 2007 Pulitzer Prize for public service.
Your insistence that an interviewer minimize harm is a good example of an effort to impose medical ethics on non-medical research, and a good reason to get RECs away from social science.