Between 2001 and 2005 she and her associates observed the workings of 29 ethics committees in Australia, Canada, New Zealand, the United Kingdom, and the United States, finding no significant differences across national boundaries. They have published several papers based on this work, most of which are available on the project's website. Since the themes of many of these publications overlap, I will focus my comments on three articles that I found particularly helpful.
M. H. Fitzgerald, "Punctuated Equilibrium, Moral Panics and the Ethics Review Process," Journal of Academic Ethics 2 (2005)
This article builds on Will C. van den Hoonaard, "Is Research-Ethics Review a Moral Panic?," Canadian Review of Sociology and Anthropology 38 (February 2001). In that article, van den Hoonaard deployed the concept of a "moral panic," introduced in 1972 by Stanley Cohen. In van den Hoonaard's words, "a moral panic is indicated by hostility and sudden eruption of measured concern shared by a significant segment of the population, with disproportional claims about the potential harm moral deviants are able to wrought . . . Moral panics involve exaggeration of harm and risk, orchestration of the panic by elites or powerful special-interest groups, the construction of imaginary deviants, and reliance on diagnostic instruments." Van den Hoonaard went on to show how Canadian ethics committees' treatment of qualitative research fits that definition.
In her article, Fitzgerald stresses that the ethics review process is not in a constant state of moral panic, but rather expands in spurts that resemble the punctuated equilibrium of evolutionary biology. More significantly, the moral panic that van den Hoonaard identified on the national level can also take place within a single institution:
local committees have their own folk devils and moral panics. These too sensitize committee members to consider 'worst case scenarios' in their discussions of applications so that a case of moral panic evokes further moral panic. Some committees referred to particular researchers who they saw as having been 'problems' in the past, commonly referred to in one meeting as "that researcher." Particularly long-standing committees, especially those that have members who have been on the committee for significant periods of time, evoke memories of these in their deliberations. Sometimes this is done with a kind of code known only to local members. A particular name or phrase is used in the discussion of an application to remind members of previous cases, the problems associated with it, and how they were or were not addressed. Decisions related to the earlier case or cases are then used as historical precedent to make a ruling for a new case that may have, or at least appears to have, similar dimensions. In doing this, they deal with the moral panic evoked by the new case and try to prevent problems associated with the precedent from occurring again.
Moral panics, Fitzgerald continues, can take place even within a single meeting of a committee, which may stretch to seven hours:
there are regular and predictable periods of heightened activity during a meeting and periods where members are more likely to engage in slow and deliberate scrutiny and debate in relation to the applications being reviewed. These periods are not generally related to the actual applications, although a particularly interesting or problematic case, at least in the minds of some committee members, can create periods of static in the process and, on rare occasions, form a transition point in the level of activity or speed of review. In between the periods of heightened scrutiny there are short bursts of accelerated activity where applications are reviewed with great swiftness, often only a few minutes per application.
The review process is so uneven that
applications that normally would not be subjected to in-depth scrutiny may be less likely to get through the process without questions to the applicant if they are reviewed at particular points in the review process or juxtaposed against a critical case that evoked a kind of moral panic among committee members at that meeting or some time in the past. Thus it is not always the quality of the application or the issues it raises that are necessarily the most critical to the nature of or experience with the ethics review process.
M. H. Fitzgerald, P. A. Phillips, & E. Yule, "The Research Ethics Review Process and Ethics Review Narratives," Ethics & Behavior 16 (2006): 377-395
This article traces in somewhat more detail the evolution of a moral panic. The authors explain that ethics committee meetings consist of exchanges of narratives, some of which have little to do with the proposal under review. That is, it is not the researcher's version of the project that gets debated, but the primary reviewer's. On its own, that sounds pretty benign, and no different from the work of a peer review committee or a job search committee looking for scholarly merit. But the authors hint that the process does not always work well:
Sometimes the discussion gets off on tangents generated by the narrative, and sometimes people lose sight of how this does or does not relate to the actual application. Thus, at some point, as a result of this discussion, the committee came to think that an applicant had not addressed an issue when the applicant actually had, or that the applicant had said something he or she had not. In one meeting, a member said that the applicant had not addressed a point in the information sheet and that the applicant needed to do so before the person would approve the information sheet. Just as the person finished saying this, with the ethics officer writing it down, another member said, “But they did say that. It is right here on page x of y.” The committee had lost sight of what it was doing, not only because of a long and complicated information sheet (in this case an eight-page form), but because the discussion itself took the committee off on a tangent where they lost sight of what the applicant had actually provided. A common comment from researchers who have received letters that might fall into this category is, “Did anyone actually read the application?” If this committee member had not picked this up during the meeting and the ethics officer or chair had not checked it before the letter to the applicant was sent out, the applicant could reasonably wonder if anyone had read the application or information sheet.
The authors do not indicate how often committees flew off on such tangents, nor what types of proposals were more likely to receive such sloppy review.
Other types of narratives are more dangerous. There is the "what if" or "worst-case scenario" narrative, which leads committee members to compete to see who can dream up the most dreadful outcome:
each potential version of the story builds on the one before it, and each version becomes more and more serious until it gets to the worst scenario they can come up with, one that may significantly overestimate the kind, potential for, probability of, or seriousness of the risk . . . In some cases, the discussion focused on the likelihood of that scenario, but more often the scenarios seemed to take on lives of their own. They can develop to the point where an uninformed listener might wonder if this research was worth the risk because as the versions of the hypothetical develop they become more and more believable and members with a moral conscience are placed in a position where they feel they have to raise questions about whether or not the project should be approved or approved as presented. Again, often this situation was resolved by sending the issue back to the researcher to address. Given the low probability of some of these worst case narratives actually happening, it is not surprising that researchers, not having heard the discussion, wonder where such ideas come from.
Related to these narratives are what Fitzgerald et al. call the "personal experience" narrative:
a narrative based on the committee members’ own experiences, particularly in relation to research, or a story of a known other’s experience (friends, relatives, research participants). These narratives serve to contextualize, explain, or introduce a matter of some interest or concern in relation to the discussion. The narrative may or may not have direct relevance to the application being discussed, but becomes attached to it because of the context in which it is offered. These narratives are regularly related to the what if narratives to help support the possibility of such situations happening. In some cases they are, or take on the character of, urban myths or contemporary legends. They are told as true; they often cannot be attributed to any known person; there are often multiple versions, but in spite of the variations, they are told in such a way and with enough detail to be plausible.
The authors note that their "objective here is not to suggest whether or not such narratives should be a part of the process. Narratives are a natural part of verbal discourse and are always going to be a natural part of human gatherings such as ethics committee meetings. Our objective is to inform people that they are a part of the process and that they can, and do, affect the nature of the review process." That's too neutral for my taste. As a historian, I must agree that narratives are central to human life, but some narratives are better than others for certain tasks. Tellingly, the authors seem not to have observed the opposite of the personal-experience narrative: the scholarly narrative, in which a committee member cites published evidence to suggest the ethical dangers of a proposal under review. For a process that is nominally part of an academic enterprise, is it too much to ask that ethics committees cite sources, rather than legends?
Thrown into this article is a comment that does not directly relate to narratives but that helps explain some committee decisions: "Few applications are approved as submitted. In some places, no applications are approved as submitted. There seems to be some need among committee members to make some comment or request some action."
This compulsive interference reminds me of George Orwell's Down and Out in Paris and London:
It is not a figure of speech, it is a mere statement of fact to say that a French cook will spit in the soup—that is, if he is not going to drink it himself. He is an artist, but his art is not cleanliness. To a certain extent he is even dirty because he is an artist, for food, to look smart, needs dirty treatment. When a steak, for instance, is brought up to the head cook's inspection, he does not handle it with a fork. He picks it up with his fingers and slaps it down, runs his thumb round the dish and licks it to taste the gravy, runs it round and licks again, then steps back and contemplates the piece of meat like an artist judging a picture, then presses it lovingly into place with his fat, pink fingers, every one of which he has licked a hundred times that morning. When he is satisfied, he takes a cloth and wipes his fingerprints from the dish, and hands it to the waiter. And the waiter, of course, dips his fingers into the gravy—his nasty, greasy fingers which he is for ever running through his brilliantined hair. Whenever one pays more than, say, ten francs for a dish of meat in Paris, one may be certain that it has been fingered in this manner. . . Roughly speaking, the more one pays for food, the more sweat and spittle one is obliged to eat with it.
Martin Tolich and Maureen H. Fitzgerald, "If Ethics Committees Were Designed For Ethnography," Journal of Empirical Research on Human Research Ethics 1 (2006): 71-78
Fitzgerald's investigations included hospital and general university ethics committees, with only three nonbiomedical committees among the 29 she observed, and most of her papers do not differentiate between the review of medical and non-medical research. This article is the clearest exception.
The authors report that "In our own research projects, as well as Fitzgerald’s extensive study of ethics committees in five countries (Australia, Canada, New Zealand, United Kingdom, and United States), we have yet to find an ethics committee that reflects qualitative epistemological assumptions."
They continue,
Too often qualitative researchers report their experiences of the positivistic ethics review system as antagonistic and quantitative. For example, the typical form of communication between the researcher and the ethics committee underscores this disjuncture: the fill-in-the-boxes-oriented questionnaire does not correspond to qualitative researchers’ open-ended data-gathering approach.
They cite numerous studies that document the frustration of ethnographers and other qualitative researchers faced with ignorant ethics committees. But the more interesting part of the article involves their proposed solution.
Ethics committees, they suggest, should ask qualitative researchers four open-ended questions:
1. What is the research project about?
2. What ethical issues does the researcher believe are raised by this project?
3. How does the researcher plan to address these ethical problems? . . .
[4.] What contingencies are in place if the research project changes its focus after the research has been approved and has begun?
These questions, they explain, "examine the researcher’s knowledge of ethics and her or his ability to predict possible changes and how they may be dealt with in the field." They avoid preconceptions drawn from medical experimentation, like the idea that every project requires written consent forms, or that every interaction with another person has a projected duration. I would like to suggest a fifth question: "What have you read about the ethical challenges posed by this kind of research?" If a committee were to accept a range of answers to this question, researchers would be relieved of the standardized, and often irrelevant, ethical training now imposed.
Tolich and Fitzgerald note that for ethnographic and other qualitative researchers, the best time to spot ethical hazards is usually not at the outset of an investigation, but near its end, just prior to publication. By this point, researchers no longer have to guess about whom they will meet and what questions they will ask, and they, along with ethical reviewers, can figure out what potentially harmful information has been collected. They call for manuscript reviewers and thesis examiners to engage in a kind of "ethical proofreading," to use Carole Gaar Johnson's term, to determine if a researcher has adequately identified and addressed the ethical concerns of the work.
Fitzgerald's Recommendations
Fitzgerald presents a grim assessment of the present state of affairs:
With few exceptions, the people involved are truly concerned about the ethical conduct of research, the enhancement of knowledge that can affect the human condition, and protection of the people involved from risks greater than those of everyday life. Despite these shared concerns, the review process does not always adequately address them . . . Long meetings with far too many applications covering a range of topics by a group of people with good but limited expertise cannot address the concerns. ("Punctuated Equilibrium")
She and Tolich doubt whether current ethics committees are flexible enough to enact their recommendations for review of ethnography:
Some committees already employ a procedure similar to what we describe. Some will move toward such a procedure. Some will not. Some will have purchased expensive online review software that will offer a challenge to any attempts to introduce an appropriate procedure for ethnographers and qualitative researchers. Second, the two-stage system would require ethics committees to address qualitative research by training both academic and lay members of an ethics committee. This requires strong leadership by persons who are knowledgeable about qualitative methods in general, and ethnography specifically. Unfortunately, it is easier to continue to use the positivist or medical model because it essentially fits the risk management exercise that ethics committees provide for institutions. When the risks to persons are known in advance, ethics review can be more structured and less ambiguous. Most people, including ethics committee members, have difficulty dealing with ambiguity. ("If Ethics Committees Were Designed")
They hint at review by units both smaller than university-wide ethics committees (i.e., academic departments) and larger ("websites might be developed based on the experiences of committees that do exemplary reviews of ethnographic research"). And in "Punctuated Equilibrium," Fitzgerald calls for "greater understanding of group processes that affect decisionmaking," as if a chair can eliminate irrationality by pointing it out.
Nowhere in these articles does Fitzgerald explain why, given the many alternatives, university-wide ethics committees should be expected to play any positive role in the review of qualitative research. It is as if, having explained the difficulty of inserting a machine screw with a claw hammer, she expects the analysis alone to make the job easier. Wouldn't it be easier to replace the hammer with a screwdriver?
Though she shies away from saying so, Fitzgerald's arguments suggest that if ethics committees were designed for ethnography, they would not exist at all.
4 comments:
That’s very interesting. I’m glad to know there are some alternatives to an IRB. I wish them luck.
I have three other thoughts which are hopefully relevant. Keep in mind that none of these thoughts was ever discussed with the IRB before my research began.
1) Sometimes it’s hard to even say who your research subjects will be. My research was about police. My research wasn’t really about those I policed, but of course at some level it was. I quote criminals and drug dealers and people I locked up (and normal good citizens, too). What was my duty to them? What about the harm I do to them? I’m actually arresting people in the course of my research.
How can a police officer with badge, handcuffs, and gun (and pen and paper) promise not to hurt somebody? I could have killed somebody. I don’t think the IRB would be too cool with that. Or was everything I did automatically OK because I was, by definition, on the right side of the law? Was my research ethical as long as I was playing “good” cop? What about being a cop in the war on drugs? How is that ethical if you think drugs should be legalized? But then should arresting criminals ever be considered a harm?
2) Of course I’m favorably biased toward my style of research because it’s what I did, but perhaps active-participant-observation research should be encouraged on ethical grounds. By actually being part of the police group I was studying, I was in a better position to judge the ethics of my research because I could apply it to myself (but this wouldn’t apply to those I policed).
I would argue that active-participant-observation research is by its nature both more ethical and harder to get past an IRB (because you can’t describe events beforehand). There are many academic concerns about being active in the group you’re studying. But here I’m just focusing on the IRB (for some discussion related to P.O. research and “objectivity,” there’s a bit more at www.copinthehood.com).
3) There’s the issue of researcher ethics conflicting with police officer ethics. Cops have their own moral obligations (professionally and socially, legally and informally). Had there ever been a time when I had to choose between my obligation to the police and my obligation to the IRB, I would have gone with the police (of course, in the real world such dilemmas are never as clear as debating them in theory). I was paid to be a police officer, after all. To me, that trumped abstract theory about being a researcher.
Given that sentiment, how could an IRB ever approve a project where the researcher admits the committee is second to “other” concerns (even if these other concerns include an oath to defend the constitution)? I don’t think any IRB proposal to become a cop would ever pass muster. I don’t know of any.
I don’t see how being a police officer can be reconciled with the IRB. And yet what was so wrong with my research that it should have been prevented on principle? Who is to say that police officers can’t do ethical research? I sure hope not any university IRB. I hope nobody would advocate a blanket ban prohibiting police officers from conducting academic research. What’s wrong with being a cop?
Of course these questions were avoided entirely in my interaction with the IRB. Any good system of review should have had me discuss these issues. Not necessarily to provide answers, but at least to make me ask these questions.
Perhaps some anthropologist will weigh in, but it seems to me that your work does bear similarities to the military's Human Terrain System project, in which anthropologists worked alongside American troops. For just the reasons you mention--power relationships, conflicts of interest, the possibility of doing harm--anthropologists' participation in that project was condemned by the American Anthropological Association. You should be glad you're a sociologist, not an anthropologist, I guess.
I quite agree that researchers, especially graduate students, should be encouraged to think through the ethical questions raised by their work, both in the planning stage and over the course of the research. Neither departments nor IRBs have shown themselves particularly good at this, which is why the search for alternative models is important.
Zach
I wonder if political biases influenced the objective decision-making processes of the AAA. Would its position be the same if an anthropologist wanted to conduct a study with, say, the Zapatista Front for National Liberation?
It makes me think of a story my father told me when his academic department was trying to increase faculty diversity. He raised his hand and asked, "Why don't we hire a Republican?" They still haven't.
[Originally posted March 19, 2008, but later corrected for egregious typos.]