Saturday, September 29, 2007

Roberta S. Gold, “None of Anybody’s Goddamned Business”?

Blogger's note: On September 3, Christopher Leo posted a query to the H-Urban list, asking about the effect of ethics review on urban research. Roberta Gold's response hinted that she had thought hard about the issue, so I asked her to share her thoughts on this blog. She has graciously agreed.

Friday, September 21, 2007

Bledsoe et al., Regulating Creativity

I am still working my way through the Northwestern University Law Review symposium on IRBs. Today's comments focus on Caroline H. Bledsoe, Bruce Sherin, Adam G. Galinsky, Nathalia M. Headley, Carol A. Heimer, Erik Kjeldgaard, James T. Lindgren, Jon D. Miller, Michael E. Roloff & David H. Uttal, "Regulating Creativity: Research and Survival in the IRB Iron Cage."

The article, based largely on events at Northwestern itself, is particularly effective at challenging three myths of IRBs and the social sciences:

Myth #1: Reports of IRB interference with research are overblown, since few projects are rejected and few researchers disciplined.




An example of this myth is Jerry Menikoff's contribution to the same symposium, in which he claims, "social and behavioral scientists who maintain appropriate communication with their institution's IRBs need not be shaking in their boots, fearing some career-ending enforcement action is about to come down from Washington."

Unlike Menikoff, Bledsoe et al. talked to some researchers, asking their colleagues about their experiences with Northwestern's IRB. They report,



As a number of our colleagues have emphasized . . . both in person and in their responses to our email query, they alter their course not because of any real risk they perceive to their subjects but simply to pass IRB muster. Trying to reduce their own professional risk, they divert their work, choosing topics or populations selectively, or adapting methods that will entail less demanding IRB review and lessen the probability that they will have to make substantial changes before proceeding. IRB procedures, that is, can snuff out ambition even before the project begins.

The disturbing point is that it is the mere anticipation of onerous IRB review that can result in some alteration of the proposed protocol. Because of the potential for delays and the IRB tendency to intrude into each step of the research process, many social science faculty report that they think twice about taking on research topics, methods, and populations that IRB frames in the mode of risk. One respondent described the impact thus:

"The IRB has become a nightmare over the years that I have been a researcher. I'm sure most of this pressure is coming from the federal government, but the rigidity of the model (based on the medical sciences) and the number of hurdles/ forms, and the scrutiny (to the point of turning back projects for mispagination or other pithy errors, as has happened for some of my students) is just terrible. It is very discouraging, and I find myself thinking of EVERY new research project as it relates to the possibility of IRB approval."

Two respondents indicated that faculty had moved toward non-field projects in large part because of IRB. One faculty member even pointed specifically to concerns about IRB in a decision to make a career shift away from field-project themes and methods that might jeopardize the researcher's career:

"Since last year, my research became more theoretical in large part because of IRB requirements. I simply try not to do any research which would involve Panel E [the social science review panel at Northwestern]. . . . I no longer interview people during my trips abroad and try to limit the data gathering to passive observation or newspaper clippings."



An IRB that approves every social science project submitted to it (and many, no doubt, do) may still crush research by making review so burdensome that researchers give up submitting proposals.



Myth #2: Medical IRBs are the problem, so an IRB devoted only to non-medical research is the solution.


This suggestion gets floated from time to time; for example, it appears as one of Dale Carpenter's admittedly "modest proposals for reform" in his own Northwestern University Law Review piece. But Bledsoe et al. report that Northwestern already has a separate non-medical panel, and it doesn't sound pretty:



even a separate social science IRB enterprise suffers from internal tensions between the need for standardization, whether imposed by OHRP rules or by our own desires to ensure equity, and the need to allow the very stuff of novelty that studies are supposed to produce. We have observed that social scientists who confront their review assignments can be no less critical of their fellows' studies than a biomedical panel might be. Indeed, IRB staff have sometimes had to step in diplomatically to rescue a project from a zealous social science faculty panelist threatening to dismember it altogether. In this regard, we have observed a typical life cycle for social science panel members. The typical panel member begins his or her tenure by making it known that a great deal of harmless social science research is delayed without any reasonable cause, and that henceforth the reckless invasiveness of the IRB must be tempered. Yet this same panel member, when given projects to review, is often the most critical.

This pattern reflects a broader impulse among social scientists. We think of ourselves first and foremost as academics. Our business is to read research proposals, journal articles, student papers, and to find fault. Turning to IRB protocols, we become fastidious reviewers. When we read consent forms, it is hard for us to refrain from editing them. When we read with an eye toward possible risk, whether large or small, our expertise itself will unmask it. As social science panel members, we will inevitably find problems with social science IRB submissions; we cannot help ourselves. Importing our own disciplines' ethical dilemmas, the concerns that we raise often go far beyond those imagined by the federal legislators. They also hand the IRB, seeing our plight, both our fears and our language of expressing them to incorporate into its already overburdened repertoire. Over time, such impulses are tempered, and we learn to see the big picture again. In the meantime, however, the damage to the research enterprise is done.

In retrospect, giving the social sciences a separate review channel and letting them into the review process was helpful in that the social sciences gained mediators who could explain studies to their panel colleagues and attempt to buffer the power of the medical model. At the same time, our social science panel's own efforts to help both added to the layers of regulatory stratigraphy and intensified the regulatory flux. All this has undoubtedly provided further grounds for investigators to conclude that the IRB was capricious and inconsistent.



The authors are wrong, however, to suggest that Northwestern has a "social science" panel. According to "Schools, Departments and Programs Served by Panel E of the Institutional Review Board," Panel E has jurisdiction over "research projects involving human subjects that use social and behavioral science methodologies." The same document claims,


Federal guidance defines social and behavioral science methodologies as those that include research on individual or group characteristics or behavior (including, but not limited to, research on perception, cognition, motivation, identity, language, communication, cultural beliefs or practices, and social behavior) or research employing survey, interview, oral history, focus group, program evaluation, human factors evaluation, or quality assurance methodologies.


The range of methods included in this list means that far from letting ethnographers review ethnographers and experimental psychologists review experimental psychologists, Northwestern has locked all its non-medical researchers in a room and told them to fight it out. Such an arrangement makes no allowance for the wide variation of methods and ethics within non-medical research. (See "My Problem with Anthropologists.")

Moreover, the claim that "federal guidance defines social and behavioral science methodologies" is incorrect. The list of methodologies is taken from OPRR's 1998 "Protection of Human Subjects: Categories of Research That May Be Reviewed by the Institutional Review Board (IRB) Through an Expedited Review Procedure." That document does just what its title suggests: it lists categories of research eligible for expedited review. It does not define social and behavioral science methodologies, nor, to my knowledge, has the federal human subjects apparatus ever defined social or behavioral science.

In reality, therefore, Northwestern's Panel E exists solely to provide full IRB review for projects that even the federal government admits do not require full IRB review. No wonder it doesn't work well.


Myth #3: If social scientists were to join IRBs and learn about their workings, they wouldn't complain so much.



Take this statement by J. Michael Oakes, "Risks and Wrongs in Social Science Research: An Evaluator's Guide to the IRB," Evaluation Review 26 (October 2002): 443-479:

"Investigators well versed in the Belmont Report and more technical IRB procedures rarely need to dispute decisions, and when they do it concerns how known rules are interpreted or what is best for the subjects. It follows that a great deal of frustration may be eliminated by careful study of basic IRB regulations and issues. Education seems to modify frustration in the researcher-IRB-subject chain."

Nonsense. Bledsoe herself chaired a subcommittee of the Northwestern University IRB Advisory Committee, and several of her coauthors served on, chaired, or staffed IRBs at Northwestern or elsewhere, in addition to dealing with IRBs as applicants. They are about as educated and experienced in these issues as one could hope for, and they are as frustrated as anyone by the current system.






Beyond busting myths, the article seeks to document the changes in IRB work since the 1990s. Based on their personal experience, Bledsoe and her co-authors describe the expansion of both OHRP and IRB jurisdiction:



The university's Office for the Protection of Research Subjects spiraled from two professionals to what is now a staff of 26, of whom 21 support the IRB operation. Review panels went from one to six—four were created simultaneously in September 2000, with one for the social sciences created a year later, and another medical panel added subsequently— and appointing their membership became the duty of the university's vice president for research. The length of the basic protocol template for new projects went from two pages to its present length of twelve for the social sciences, and fifteen for biomedical research. In addition, the number of supplementary forms and documents required for each submission went from one or two to far more than that, depending on the nature of the study. Many protocols are now better measured in inches of thickness than in number of pages. The level of bureaucratic redundancy, inconvenience and aggravation increased dramatically: Unreturned phone calls, dropped correspondence, and administrative errors on forms became routine.



They also report some good news:



For several years after the IRB ramp-up began, our IRB panel expected detailed interview protocols from everyone. Now, an ethnographer who intends to employ participant observation does not need to provide a detailed specification of what is to be said to participants, and is not asked for it. Without such collusion, ethnographic studies would not survive under the IRB system. As much as social scientists complain about the ill fit their projects pose in IRB review, their own protocols are now spared this level of scrutiny.



As I reported earlier, Northwestern has exempted oral history from review, though Bledsoe et al. do not explain when or why that happened.

The authors conclude that "one could scarcely imagine a better example of a bureaucracy of the kind that so fascinated and infuriated Weber than the contemporary IRB system." It is indeed crucial to examine the systemic pressures on IRB members and administrators, for those pressures help explain why the same IRB abuses show up at such diverse institutions around the country.

But while Weber can explain some long-term trends, analyzing bureaucracies, rather than people, obscures the role of individual decisions. In this lengthy account of events at Northwestern, the authors decline to blame, credit, or even name a single individual administrator, researcher, IRB member, consultant, or federal official. Typical is this passage:



When the ratcheting up of the IRB bureaucracy at Northwestern was occurring, administrators were working in an environment in which suspension of federal funding to other institutions had produced considerable anxiety. It was no secret that the Northwestern IRB director was under pressure to bring the university into full compliance as quickly as possible.



Who was ratcheting? Who felt considerable anxiety and why? Who communicated with the federal government? Who was the Northwestern IRB director? Who pressured him or her? Who knew the secret? And, on the other end, who ruled that interviewers did not have to submit detailed protocols?

Because the authors decline to ask such questions, they can hold no one to account for sudden and important decisions. They instead conclude, "the odd history of IRB and its effects have been no one's fault; no one's intention. No convenient villains or victims emerge anywhere we look." But there is nothing to indicate that they looked terribly hard.

Friday, September 14, 2007

Study Finds IRBs Exaggerate Risks of Survey Questions

Michael Fendrich, Adam M. Lippert, and Timothy P. Johnson, "Respondent Reactions to Sensitive Questions," Journal of Empirical Research on Human Research Ethics 2 (September 2007): 31-37.

Perhaps because they are punished for being too lax but never for being too strict, IRBs tend to err on the side of what they consider caution, exaggerating the risks of proposed research. It's easy to do so when, as these authors put it, "board members often rely on their 'gut' feeling in determining the potential for survey questions to effect adverse reactions."

To replace that gut feeling with some evidence, Fendrich, Lippert, and Johnson asked survey respondents who had been questioned about illegal drug use whether they had felt threatened or embarrassed by those questions. Not much: the average score was less than 2 on a 7-point scale. But when asked whether other people would feel threatened by those questions, respondents' ratings shot above 5. Thus, survey respondents are as bad as IRBs at guessing how other people will feel about being questioned.

The authors conclude:

Consent documents often summarize potential adverse subject reactions to questions. For example, in the current study, the University of Illinois at Chicago’s REC [research ethics committee] approved consent document contained the following two sentences under the heading: “What are the potential risks and discomforts?”

There is a risk that you may feel anxious, uncomfortable or embarrassed as a result of being asked about drug use and drug testing experience. However, you are free not to answer any question, and you are free to withdraw from the study at any time.


If our findings can be generalized to other studies asking questions about drug use, the first sentence may inappropriately convey an exaggerated sense of a drug survey’s risk. Even though voluntary participation is a non-contingent right, the second sentence seems to link the right of refusal and the voluntary nature of participation to this exaggerated risk.

The first author’s experience as a member and Chair of a behavioral science REC leads him to conclude that paragraphs like those cited above are common in survey consent documents. Researchers may pair statements about rights with statements about risk in order to appease REC concerns about study interventions to address risk. In the absence of empirical data, RECs should be cautious about recommending and approving consent documents that include clauses suggesting that questions about drug use cause emotional discomfort. Furthermore, RECs should recommend that consent documents decouple important reminders about subject rights from statements about potential risk (whether or not those risks are valid). While it may be important to reinforce rights in a consent document, we believe it is contrary to best practice to even imply that voluntary participation (and the right to withdraw or refuse to answer questions) should be contingent on adverse reactions. The type of text described above, however, would be obviated if RECs adopted a more realistic view of subject perceptions regarding drug use surveys.

Laud Humphreys Remembered

Scott McLemee's essay, "Wide-Stance Sociology" (Inside Higher Ed, 12 September 2007), uses Senator Larry Craig's arrest as a news hook for a discussion of the life and career of sociologist Laud Humphreys. Humphreys's 1960s research on men who found male lovers in public restrooms is a touchstone for advocates of IRB review of observational research. But as McLemee and some of the comments make clear, the case was far more nuanced than the medical-research scandals that inspired the federal requirement for ethical review, and Humphreys's use of deliberate deception was in no way typical of the work of the social scientists who now find themselves constrained by IRBs.