
Tuesday, August 25, 2015

Gentle Regulation May Be More Effective

Law professor Samuel Bagenstos argues that recent Title IX excesses follow the pattern of IRB horror stories: the feds threaten drastic action, so university administrators hyper-regulate. He offers disability rights as an example of a less punitive regulatory effort that has produced good results.


[Samuel R. Bagenstos, “What Went Wrong With Title IX?,” Washington Monthly, October 2015.]

Thursday, May 26, 2011

Sex Researcher Calls for "An Evidence-Informed Process"

Brian Mustanski, Associate Professor, Department of Medical Social Sciences, Feinberg School of Medicine, Northwestern University, calls for "moving the IRB process of risk/benefit assessment from being entirely subjective to being evidence-based."

[Brian Mustanski, "Ethical and Regulatory Issues with Conducting Sexuality Research with LGBT Adolescents: A Call to Action for a Scientifically Informed Approach," Archives of Sexual Behavior, published online 29 April 2011.]

Tuesday, October 26, 2010

Dreger Wants to Scrap IRBs

On the heels of Laura Stark's Los Angeles Times op-ed calling for the replacement of local IRBs with centralized boards of experts, historian Alice Dreger has published her own call for a national system of ethics review based on expertise and transparency.

[Alice Dreger, "Nationalizing IRBs for Biomedical Research – and for Justice," Bioethics Forum, 22 October 2010.]

Troubled by her IRB's approval of a project she considers unethical, and by Carl Elliott's White Coat, Black Hat: Adventures on the Dark Side of Medicine, Dreger concludes that the system of local review is ineffective:


We’ve reached the point where many people in medicine and medical ethics don’t even expect IRBs to act as something other than liability shields for their universities. But do patients who come to us only to be turned into subjects know that? Do they know that there is literally a price on their heads put there by research recruiters?

I’ve come to believe we need a radical solution. Maybe what we need is a nationalized system of IRBs for biomedical research, one that operates on the model of circuit courts, so that relationships cannot easily develop between the IRBs and the people seeking approval. This system could be run out of the Office for Human Research Protections and involve districts, similar to the federal courts system. Deliberations would be made transparent, so that all interested parties could understand (and question) decisions being made.

Think of the advantages: the possibility of actually focusing on the protection of human subjects first and foremost, free of conflicts of interest; the possibility of having nothing but trained professionals (not rotating unqualified faculty and staff) sitting on review panels; the possibility of marking biomedical research as clearly different from the social science and educational research unreasonably managed by many IRBs; the possibility of much greater transparency to those interested in seeing what’s going on; the possibility of having multi-center trials obtain a single approval from one centralized IRB, rather than trying to manage approvals from multiple local institutions. And the possibility of shutting down the deeply opaque, highly questionable private IRBs Elliott describes as being increasingly used by universities. (Go ahead, call me a Communist for caring about the Common Rule.)


Her Communist leanings aside, I don't know why Dreger presents her argument as a defense of the Common Rule, which fails to distinguish between biomedical and social research, puts ethics review in the hands of rotating unqualified faculty and staff, and keeps deliberations opaque. But her wish for the kind of coordination and transparency provided by the court system has a long lineage. I've quoted it before, and I'll quote it again:


The review committees work in isolation from one another, and no mechanisms have been established for disseminating whatever knowledge is gained from their individual experiences. Thus, each committee is condemned to repeat the process of finding its own answers. This is not only an overwhelming, unnecessary and unproductive assignment, but also one which most review committees are neither prepared nor willing to assume.

[Jay Katz, testimony, U.S. Senate, Quality of Health Care—Human Experimentation, 1973: Hearings before the Subcommittee on Health of the Committee on Labor and Public Welfare, Part 3 (93d Cong., 1st sess., 1973), 1050].


It is not lack of good intentions or hard work that leads IRBs to restrict ethically sound surveys while permitting unethical experimental surgery. It is the ignorance and isolation identified by Katz in 1973 and still in place today.

Monday, June 30, 2008

The Psychologist Who Would Be Journalist

Back in August 2007, I mentioned the controversy surrounding the book The Man Who Would Be Queen (Washington: Joseph Henry Press, 2003) by J. Michael Bailey, Professor of Psychology, Northwestern University. At the time, Professor Alice Domurat Dreger, also of Northwestern, had just posted a draft article on the controversy. Now that article, along with twenty-three commentaries and a reply from Dreger, has appeared in the June 2008 issue of the Archives of Sexual Behavior.

Dreger's article, the commentaries, and Dreger's response focus on big questions about the nature of transsexuality, the definitions of science, power relationships in research, and the ground rules of scholarly debate. Only a handful take up the smaller question of whether, as a matter of law and as a matter of ethics, Bailey should have sought IRB approval prior to writing his book. But that's the question that falls within the scope of this blog.

Friday, September 21, 2007

Bledsoe et al., Regulating Creativity

I am still working my way through the Northwestern University Law Review symposium on IRBs. Today's comments focus on Caroline H. Bledsoe, Bruce Sherin, Adam G. Galinsky, Nathalia M. Headley, Carol A. Heimer, Erik Kjeldgaard, James T. Lindgren, Jon D. Miller, Michael E. Roloff & David H. Uttal, "Regulating Creativity: Research and Survival in the IRB Iron Cage."

The article, based largely on events at Northwestern itself, is particularly effective at challenging three myths of IRBs and the social sciences:

Myth #1: Reports of IRB interference with research are overblown, since few projects are rejected and few researchers disciplined.




An example of this myth is Jerry Menikoff's contribution to the same symposium, in which he claims, "social and behavioral scientists who maintain appropriate communication with their institution's IRBs need not be shaking in their boots, fearing some career-ending enforcement action is about to come down from Washington."

Unlike Menikoff, Bledsoe et al. talked to researchers, asking their Northwestern colleagues about their experiences with the university's IRB. They report,



As a number of our colleagues have emphasized . . . both in person and in their responses to our email query, they alter their course not because of any real risk they perceive to their subjects but simply to pass IRB muster. Trying to reduce their own professional risk, they divert their work, choosing topics or populations selectively, or adapting methods that will entail less demanding IRB review and lessen the probability that they will have to make substantial changes before proceeding. IRB procedures, that is, can snuff out ambition even before the project begins.

The disturbing point is that it is the mere anticipation of onerous IRB review that can result in some alteration of the proposed protocol. Because of the potential for delays and the IRB tendency to intrude into each step of the research process, many social science faculty report that they think twice about taking on research topics, methods, and populations that IRB frames in the mode of risk. One respondent described the impact thus:

"The IRB has become a nightmare over the years that I have been a researcher. I'm sure most of this pressure is coming from the federal government, but the rigidity of the model (based on the medical sciences) and the number of hurdles/ forms, and the scrutiny (to the point of turning back projects for mispagination or other pithy errors, as has happened for some of my students) is just terrible. It is very discouraging, and I find myself thinking of EVERY new research project as it relates to the possibility of IRB approval."

Two respondents indicated that faculty had moved toward non-field projects in large part because of IRB. One faculty member even pointed specifically to concerns about IRB in a decision to make a career shift away from field-project themes and methods that might jeopardize the researcher's career:

"Since last year, my research became more theoretical in large part because of IRB requirements. I simply try not to do any research which would involve Panel E [the social science review panel at Northwestern]. . . . I no longer interview people during my trips abroad and try to limit the data gathering to passive observation or newspaper clippings."



An IRB that approves all social science projects submitted to it (and many, no doubt, do) may still crush research by making the application process so burdensome that researchers give up submitting proposals.



Myth #2: Medical IRBs are the problem, so an IRB devoted only to non-medical research is the solution.


This suggestion gets thrown out from time to time; for example, it appears as one of Dale Carpenter's admittedly "modest proposals for reform" in his own Northwestern University Law Review piece. But Bledsoe et al. report that Northwestern already has a separate non-medical panel, and it doesn't sound pretty:



even a separate social science IRB enterprise suffers from internal tensions between the need for standardization, whether imposed by OHRP rules or by our own desires to ensure equity, and the need to allow the very stuff of novelty that studies are supposed to produce. We have observed that social scientists who confront their review assignments can be no less critical of their fellows' studies than a biomedical panel might be. Indeed, IRB staff have sometimes had to step in diplomatically to rescue a project from a zealous social science faculty panelist threatening to dismember it altogether. In this regard, we have observed a typical life cycle for social science panel members. The typical panel member begins his or her tenure by making it known that a great deal of harmless social science research is delayed without any reasonable cause, and that henceforth the reckless invasiveness of the IRB must be tempered. Yet this same panel member, when given projects to review, is often the most critical.

This pattern reflects a broader impulse among social scientists. We think of ourselves first and foremost as academics. Our business is to read research proposals, journal articles, student papers, and to find fault. Turning to IRB protocols, we become fastidious reviewers. When we read consent forms, it is hard for us to refrain from editing them. When we read with an eye toward possible risk, whether large or small, our expertise itself will unmask it. As social science panel members, we will inevitably find problems with social science IRB submissions; we cannot help ourselves. Importing our own disciplines' ethical dilemmas, the concerns that we raise often go far beyond those imagined by the federal legislators. They also hand the IRB, seeing our plight, both our fears and our language of expressing them to incorporate into its already overburdened repertoire. Over time, such impulses are tempered, and we learn to see the big picture again. In the meantime, however, the damage to the research enterprise is done.

In retrospect, giving the social sciences a separate review channel and letting them into the review process was helpful in that the social sciences gained mediators who could explain studies to their panel colleagues and attempt to buffer the power of the medical model. At the same time, our social science panel's own efforts to help both added to the layers of regulatory stratigraphy and intensified the regulatory flux. All this has undoubtedly provided further grounds for investigators to conclude that the IRB was capricious and inconsistent.



The authors are wrong, however, to suggest that Northwestern has a "social science" panel. According to "Schools, Departments and Programs Served by Panel E of the Institutional Review Board," Panel E has jurisdiction over "research projects involving human subjects that use social and behavioral science methodologies." The same document claims,


Federal guidance defines social and behavioral science methodologies as those that include research on individual or group characteristics or behavior (including, but not limited to, research on perception, cognition, motivation, identity, language, communication, cultural beliefs or practices, and social behavior) or research employing survey, interview, oral history, focus group, program evaluation, human factors evaluation, or quality assurance methodologies.


The range of methods included in this list means that far from letting ethnographers review ethnographers and experimental psychologists review experimental psychologists, Northwestern has locked all its non-medical researchers in a room and told them to fight it out. Such an arrangement makes no allowance for the wide variation of methods and ethics within non-medical research. (See "My Problem with Anthropologists.")

Moreover, the claim that "federal guidance defines social and behavioral science methodologies" is incorrect. The list of methodologies is taken from OPRR's 1998 "Protection of Human Subjects: Categories of Research That May Be Reviewed by the Institutional Review Board (IRB) Through an Expedited Review Procedure." That document does just what its title suggests: it lists categories of research eligible for expedited review. It does not define social and behavioral science methodologies, nor, to my knowledge, has the federal human subjects apparatus ever defined social or behavioral science.

In reality, therefore, Northwestern's Panel E exists solely to provide full IRB review for projects that even the federal government admits do not require full IRB review. No wonder it doesn't work well.


Myth #3: If social scientists were to join IRBs and learn about their workings, they wouldn't complain so much.



Take this statement by J. Michael Oakes, "Risks and Wrongs in Social Science Research: An Evaluator's Guide to the IRB," Evaluation Review 26 (October 2002): 443-479:

"Investigators well versed in the Belmont Report and more technical IRB procedures rarely need to dispute decisions, and when they do it concerns how known rules are interpreted or what is best for the subjects. It follows that a great deal of frustration may be eliminated by careful study of basic IRB regulations and issues. Education seems to modify frustration in the researcher-IRB-subject chain."

Nonsense. Bledsoe herself chaired a subcommittee of the Northwestern University IRB Advisory Committee, and several of her coauthors have served on, chaired, or staffed IRBs at Northwestern or elsewhere, in addition to dealing with IRBs as applicants. They are about as educated and experienced in these issues as one could hope for, and they are as frustrated as anyone by the current system.






Beyond busting myths, the article seeks to document the changes in IRB work since the 1990s. Based on their personal experience, Bledsoe and her co-authors describe the expansion of both Northwestern's human subjects office and its IRB jurisdiction:



The university's Office for the Protection of Research Subjects spiraled from two professionals to what is now a staff of 26, of whom 21 support the IRB operation. Review panels went from one to six—four were created simultaneously in September 2000, with one for the social sciences created a year later, and another medical panel added subsequently— and appointing their membership became the duty of the university's vice president for research. The length of the basic protocol template for new projects went from two pages to its present length of twelve for the social sciences, and fifteen for biomedical research. In addition, the number of supplementary forms and documents required for each submission went from one or two to far more than that, depending on the nature of the study. Many protocols are now better measured in inches of thickness than in number of pages. The level of bureaucratic redundancy, inconvenience and aggravation increased dramatically: Unreturned phone calls, dropped correspondence, and administrative errors on forms became routine.



They also report some good news:



For several years after the IRB ramp-up began, our IRB panel expected detailed interview protocols from everyone. Now, an ethnographer who intends to employ participant observation does not need to provide a detailed specification of what is to be said to participants, and is not asked for it. Without such collusion, ethnographic studies would not survive under the IRB system. As much as social scientists complain about the ill fit their projects pose in IRB review, their own protocols are now spared this level of scrutiny.



As I reported earlier, Northwestern has exempted oral history from review, though Bledsoe et al. do not explain when or why that happened.

The authors conclude that "one could scarcely imagine a better example of a bureaucracy of the kind that so fascinated and infuriated Weber than the contemporary IRB system." It is indeed crucial to look at the systematic pressures on members and administrators, for that can explain why the same IRB abuses show up in such diverse institutions spread around the country.

But while Weber can explain some long-term trends, analyzing bureaucracies, rather than people, obscures the role of individual decisions. In this lengthy account of events at Northwestern, the authors decline to blame, credit, or even name a single individual administrator, researcher, IRB member, consultant, or federal official. Typical is this passage:



When the ratcheting up of the IRB bureaucracy at Northwestern was occurring, administrators were working in an environment in which suspension of federal funding to other institutions had produced considerable anxiety. It was no secret that the Northwestern IRB director was under pressure to bring the university into full compliance as quickly as possible.



Who was ratcheting? Who felt considerable anxiety and why? Who communicated with the federal government? Who was the Northwestern IRB director? Who pressured him or her? Who knew the secret? And, on the other end, who ruled that interviewers did not have to submit detailed protocols?

Because the authors decline to ask such questions, they can hold no one to account for sudden and important decisions. They instead conclude, "the odd history of IRB and its effects have been no one's fault; no one's intention. No convenient villains or victims emerge anywhere we look." But there is nothing to indicate that they looked terribly hard.

Tuesday, August 21, 2007

Northwestern IRB: Unsystematic Interviews Are Not Subject to Review


Today's New York Times features a story, "Criticism of a Gender Theory, and a Scientist Under Siege," about the case of J. Michael Bailey, Professor of Psychology, Northwestern University, and his controversial book, The Man Who Would Be Queen. The book provoked several complaints, including the charge by "four of the transgender women who spoke to Dr. Bailey during his reporting for the book . . . that they had been used as research subjects without having given, or been asked to sign, written consent."

As reported by the Times, the case was investigated by Alice Domurat Dreger, Associate Professor of Clinical Medical Humanities & Bioethics at Northwestern, who has posted a draft article on the subject, "The Controversy Surrounding The Man Who Would Be Queen: A Case History of the Politics of Science, Identity, and Sex in the Internet Age" [PDF].

Dreger finds that Bailey did not commit serious ethical violations, nor did he violate the requirements for IRB review:


the kind of research that is subject to IRB oversight is significantly more limited than the regulatory definition of “human subject” implies. What is critical to understand here is that, in the federal regulations regarding human subjects research, research is defined very specifically as “a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge” (United States Department of Health and Human Services, 2005, sect. 46.102, def. “b”). In other words, only research that is truly scientific in nature—that which is systematic and generalizable—is meant to be overseen by IRBs. Thus, a person might fit the U.S. federal definition of “human subject” in being a person from whom a researcher gains knowledge through interpersonal interaction, but if the knowledge she or he intends to gain is unlikely to be generalizable in the scientific sense, the research does not fall under the purview of the researcher’s IRB.

It is worth noting here, for purposes of illustration of what does and doesn’t count as IRB-qualified work, that I consulted with the Northwestern IRB to confirm that the interviews I have conducted for this particular project do not fall under the purview of Northwestern’s IRB. Although I have intentionally obtained data through interpersonal interaction, the interview work I have conducted for this historical project has been neither scientifically systematic nor generalizable. That is, I have not asked each subject a list of standardized questions—indeed, I typically enjoyed highly interactive conversations during interviews; I have not interviewed all of my subjects in the same way; I have negotiated with some of them to what extent I would protect their identities. This is a scholarly study, but not a systematic one in the scientific sense. Nor will the knowledge produced from this scholarly history be generalizable in the scientific sense. No one will be able to use this work to reasonably make any broad claims about transsexual women, sex researchers, or any other group.

When I put my methodology to the Northwestern IRB, the IRB agreed with me that my work on this project is not IRB-qualified, i.e., that, although I have obtained data from living persons via interactions with them, what I am doing here is neither systematic nor generalizable in the scientific sense.


Clearly Bailey's work hurt the feelings of some people he wrote about, but, as Dreger notes, "scholarship (like journalism) would come to a screeching halt if scholars were only ever able to write about people exactly according to how they wish to be portrayed." Indeed, that's what social scientists have been arguing for three decades.