Monday, February 18, 2008

Peter Moskos on What to Tell Your IRB

Sociologist Peter Moskos is author of Cop in the Hood, a book about police work he wrote based on his fieldwork as a Baltimore police cadet and police officer. He maintains a blog concerning the issues raised in the book (a splendid marriage of old media and new), and on February 12 he commented about the ethical issues raised by Sudhir Venkatesh's Gang Leader for a Day. In that posting, "Outing the insiders," Moskos wrote,

I’ve never been a fan of the I.R.B. Few professor are. I don’t think that overt non-experimental academic researchers should need approval to observe and interact with most human subjects. We’re not giving out experimental drugs. We’re not running experiments. We’re watching and talking and living. I don’t even like the term “human subjects.” It’s dehumanizing. They're people, damnit! It’s condescending to think that adults aren’t smart enough to make their own decisions about what to say to whom. And if they’re not, well, such is life.

Nor am I convinced that research subjects who harm others deserve institutional protection. I believe academics should act under a code similar to journalists. But federal law disagrees with me. And the press has explicit constitutional protection that professors don’t.

In a comment on his blog, I pressed him to elaborate on these points, and he graciously responded with a second posting, "More on IRBs." I recommend reading the whole post, but here are some key points:

1. An IRB doesn't have to reject a proposal to stifle research. As Moskos notes, "the simple nuisance and fear of conflict with an IRB limit social-science research."

2. IRBs are set up to review protocols in advance, but that's not how ethnography works. Moskos is grateful to Harvard's IRB for requiring him, at the outset of his fieldwork, to announce to his academy classmates who he was and what he was doing. But as his work progressed, he found keeping his IRBs informed about every change to be so tedious that it wasn't worth the effort. And now that his book is out, he is still in touch with some friends from the department. When does oversight end?

As I noted in August, the University of Pennsylvania has addressed some of these concerns in its policy on evolving research. I would like to learn how that policy is working.

3. IRBs apply the wrong ethical standards. They seek to ensure that no harm comes from research, when, in cases like Venkatesh's, the "risk of some harm from his research was so great as to be virtually inevitable." And IRBs "want a guarantee of confidentiality," which is not appropriate in all circumstances; Moskos did not witness any serious crimes, but he writes that "if, hypothetically, I witnessed a police officer rob and kill, or sexually abuse a 10-year-old child, or anally violate an innocent man with a plunger, I would feel little compunction legally and ethically to violate a vow of confidentiality."

This third argument is the one that most intrigues me, since it accepts the notion that good research can be harmful. Theoretically, IRBs can approve such research, so long as the benefits of the research are commensurate. But Venkatesh eloquently describes the ethnographer's uncertainty that his research will matter, either to the individuals studied or to society as a whole. Thus, it would be hard for most ethnographers to present the kind of cost-benefit analysis required by the Belmont Report.

Moskos's acceptance of harm seems fully consistent with the ethics of the American Sociological Association, but not with the code of the American Anthropological Association, which states that "anthropological researchers must do everything in their power to ensure that their research does not harm the safety, dignity, or privacy of the people with whom they work, conduct research, or perform other professional activities." Thus, Moskos's eloquent statement of his principles illustrates why folklorists, historians, sociologists, and others should reject being lumped together with anthropologists as "social scientists," lest they find themselves subjected to ethical standards not their own.

At the end of his post, Moskos shares some of the language that he used to get IRB approval without compromising his ethical principles, and he encourages other researchers to use this language to avoid promising written consent forms or absolute confidentiality. Whether or not a researcher is subject to IRB oversight, such thoughtful statements about the ethics of research in specific contexts are valuable--far more so than the standardized ethical training mandated by most IRBs. I hope that other researchers will both read Moskos's statement and consider writing their own.


Anonymous said...

I am still sorting out some of my reactions to this post. But, I feel that this issue is important enough to try to comment now.

I am disturbed by the last part of his statement; "They're people, damnit! It’s condescending to think that adults aren’t smart enough to make their own decisions about what to say to whom. And if they’re not, well, such is life."

I do agree that people are generally capable of interacting with other humans while maintaining a sense of self-protection. However, this seems to say to me: if they can't, oh well, so what. It implies that it is their fault for being stupid or naive or trusting.

My concern is that there is the implication in all of this that the researcher (or whatever label she/he would prefer) is the only one capable of judging her/his own actions and intent. And that outside interference is, in some fashion, bad.

I cannot help but feel that there is a certain level of hubris in this. To imply that the effort taken to ensure that people are protected from our nosiness into their lives is somehow constraining seems to me to miss the whole point. I don't 'enjoy' every law that is placed on the books, but I try to obey them nonetheless because constraints are a price I pay for living with others. If everyone can choose to obey only those parts of the legal corpus that please them and ignore the rest, what kind of society do we end up with?

I don't wish to get into a flame war here. I am simply concerned that Dr Moskos seems to reject out of hand the need for oversight. I am firmly convinced that some oversight is needed even if, as currently structured, it needs to be modified to accommodate other forms and methods.

Bill Hart
Rogers State University

Zachary M. Schrag said...

Thank you for your comment.

I still do not understand what problems in social research you think IRBs can solve. Here you suggest that people merit protection "from our nosiness," as if nosiness itself is a problem. But as an ethnographer, Moskos is paid to be nosy--just as he was as a cop.

You charge Moskos with "reject[ing] out of hand the need for oversight." But if you read his blog, you'll see that he began by cooperating with his IRB at Harvard. Only after years of fieldwork, and disappointing interactions with two university IRBs, did he come to his present position. That trajectory--of trying to work with IRBs before giving up on them--is shared by many social researchers, myself included.

By contrast, you seem to have come to these questions relatively recently, yet you are eager to proclaim what you "can not believe" and of what you are "firmly convinced." It is this putting of findings before research that worried me about the AAHRPP article. If we are to avoid hubris, wouldn't it be better to follow Moskos's example of drawing conclusions from experience?

Anonymous said...

Thank you for your reply. As I said at the beginning, my original thoughts were less digested than I would have preferred but contained the essence of what I meant.

I have been involved with IRBs for about 30 years, most recently as the chair of the brand-new IRB here at Rogers State. And, I too have had my own horror stories of unthinking, irrational and just plain silly requests. However, I still see the value of oversight into the process to protect the people with whom we work.

I used the terms 'I believe' and 'I am convinced' since that summed up my experience with IRBs both as a submitter and as a reviewer.

There is a burden to filling out the forms and keeping up with the required paperwork. The question is whether this burden is so onerous as to limit valid research.

There is always a tension between allowing research to proceed in hopes of answering substantive questions about important issues and the need to preclude harm. I would be the last person to say that the current system is optimal--by any measure. However, I have not seen any evidence presented that would compel the elimination of some system of safeguard.

Bill Hart
Rogers State University

Zachary M. Schrag said...

Thanks for your reply. I have three responses.

First, I appreciate your distinction between the current system and your interest in "some system of safeguard." I don't know Moskos's position on this, but on this blog I have explored efforts to present alternatives to the IRB system defined in 45 CFR 46. The two most developed alternatives are what I'll call the peer review system and the certification system. In the peer review system, in place at Macquarie University in Australia, projects are reviewed by specialists in the methods under review, so that ethnographers review ethnographers and survey researchers review survey researchers. In the certification system, in place at the University of Pennsylvania, researchers who complete training relevant to their field may submit proof of such training "in lieu of a fixed research protocol." Would either regime satisfy you?

Second, you "have not seen any evidence presented that would compel the elimination of some system of safeguard." What evidence would suffice to convince you that the current system is fundamentally flawed? How many examples of poor decisions by IRBs are necessary? What kinds of abuses must they commit? What level of outrage must researchers attain?

Third, to turn the last question around, what evidence do you have that IRBs are helpful to survey, interview, and observation research? Earlier I asked you for examples of cases in which a participant in an oral history interview was wronged by the interviewer, but I have gotten none. In your 30 years of experience, how often was an IRB able to steer a social researcher away from unethical methods? I would like to suggest that in a free society, the burden of proof lies on those who would restrict speech between consenting adults, not the speakers.


Anonymous said...

Quick note: I am supposed to be attending meetings at a conference.

I have not read it thoroughly, but the Penn solution seems similar to ideas that we have tossed around here at RSU. I haven't had a chance to read it yet; I am basing this on your short description. I look forward to reading both ideas and bouncing them off our faculty--both IRB members and faculty who have raised these issues with us.



PCM said...

Thank you Bill and Zachary for such interesting comments (even if I am a little late getting in this game).

Zach answers for me rather perfectly. I sense that Bill and I might actually be in agreement, by and large. I do think there should be some oversight (but not the current system). Part of my fear is that the language used by the IRB actually makes things worse. Lofty abstract theory helps dehumanize the "subjects." The last thing we want is for researchers (I don't object to that term) thinking of people the same way research scientists think of lab rats.

I am all for rational and non-cumbersome oversight (I think the best is talking to people in your field and talking to friends not in your field). Ultimately I don't trust the effectiveness of IRB oversight (for all the reasons Zach mentions in his comments).

Even worse, I’m afraid that the IRB makes it a game. The harder the game is, the more people will cheat to pass. Once people “pass” the IRB test, they may be less inclined to use their natural common sense and more inclined to hide things. The more things get pushed underground, the greater the potential for damage. Being open is key: with subjects, with colleagues, with friends.

I should mention that my "they're people!" line is a (perhaps obscure) reference to the 1973 movie Soylent Green. Too often I find myself substituting style for substance. The point I was trying to make comes before that: my objection to the terms "human subjects" and "research subjects."

The first step to ethical research is treating those with whom you study and interact (I must say that even I am tempted to use "subjects" as convenient shorthand) as people.

I learned from and with my subjects in part because I didn’t think of them as “subjects.” They weren't simply objects I observed, clipboard in hand, as they ran through the maze of life.

All this being said, I am very happy that the IRB made me be overt about my research. Telling a class full of Baltimore police recruits on Day One of the police academy that you're from Harvard and you're doing research isn't the easiest or quickest way to be a "fly on the wall" (or make friends). But in my research, it was the best thing I ever did.

Being covert may have been the path of least resistance. I don't know what I would have done if I had a choice between being overt or covert. Had I gone in covert, how would I ever have outed myself without betraying friends? What if word got out and I denied it? What would people be saying now that my book is coming out? Being overt made my life and my research better. If that was solely because of the IRB (and I honestly don't know), then I thank the IRB.

Anonymous said...

PCM, I really enjoyed your comments (and your comments to a later post). I do think that we are in agreement to at least some extent. I agree that far too often the IRB becomes one more political game rather than truly fostering a concern to respect and honor the people who help us in our research. I am perhaps overly influenced by a medical model, but I recognise the shortcomings of most committees in practice. As the first-ever head of our newly formed IRB, I am determined to 'do it right,' whatever that really means. I cannot get away from the issue of protection--which I think is really being respectful of the people with whom we work. Also, I am not sure that it is a 'good thing' to leave the whole decision of what constitutes adequate protection to the researcher alone. Independent oversight is valuable, as you noted in your work. Finally, even if we all decide that all IRBs should be scrapped, it is still someone else (President or VPAA, usually) who is legally responsible for ensuring that we comply with all appropriate rules and regulations.

BTW, they do hire Republicans in academia; they are all in the Business School.

Bill Hart
Rogers State University

David Hunter said...

On the use of "subject" to refer to research participants, which PCM raises: I agree this is potentially problematic, but it is at least better than it was. I've recently finished reading Maurice Pappworth's 1960s book about ethical abuses in scientific research, where he points out that at the time research participants were referred to as "material".