Monday, June 30, 2008

The Psychologist Who Would Be Journalist

Back in August 2007, I mentioned the controversy surrounding the book The Man Who Would Be Queen (Washington: Joseph Henry Press, 2003) by J. Michael Bailey, Professor of Psychology, Northwestern University. At the time, Professor Alice Domurat Dreger, also of Northwestern, had just posted a draft article on the controversy. Now that article, along with twenty-three commentaries and a reply from Dreger, has appeared in the June 2008 issue of the Archives of Sexual Behavior.

Dreger's article, the commentaries, and Dreger's response focus on big questions about the nature of transsexuality, the definitions of science, power relationships in research, and the ground rules of scholarly debates. Only a handful take up the smaller question of whether, as a matter of law and as a matter of ethics, Bailey should have sought IRB approval prior to writing his book. But that's the question that falls within the scope of this blog.

Sunday, June 29, 2008

Oral Historians Draw Conclusions, Inform Policy, and Generalize Findings

In the lead story of today's New York Times ("Occupation Plan for Iraq Faulted in Army History"), Michael R. Gordon reports on a new 700-page official history of the early occupation of Iraq, produced by the Army's Combined Arms Center at Fort Leavenworth. As Gordon reports, "the study is based on 200 interviews conducted by military historians and includes long quotations from active or recently retired officers." He notes that "the study is an attempt by the Army to tell the story of one of the most contentious periods in its history to military experts — and to itself." It draws important conclusions with policy implications, finding, for example, that "the military means employed were sufficient to destroy the Saddam regime; they were not sufficient to replace it with the type of nation-state the United States wished to see in its place."

This sounds suspiciously like the kind of project that constitutes generalizable research as defined by OHRP's Michael Carome in his October 2003 discussion with the UCLA Office for Protection of Research Subjects (as reported by UCLA). In that conversation, Carome noted that


Systematic investigations involving open-ended interviews that are designed to develop or contribute to generalizable knowledge (e.g., designed to draw conclusions, inform policy, or generalize findings) WOULD constitute "research" as defined by HHS regulations at 45 CFR 46.

[Example]: An open ended interview of surviving Gulf War veterans to document their experiences and to draw conclusions about their experiences, inform policy, or generalize findings.


Except for the fact that it's the wrong Gulf War, the Army study nicely fits Carome's example of research requiring review.

Fortunately for federal historians, no one else in the federal government seems to share Carome's view on this matter. I know of no federal agency, executive or legislative, that requires IRB review for oral histories conducted by its employees. As reported on this blog, even OHRP officials did not submit to IRB review when conducting oral history research.

Maybe Dr. Carome will try to discipline the researchers at Fort Leavenworth. Him and what army?

Friday, June 13, 2008

IRB Disciplines and Punishes a Qualitative Researcher

Tara Star Johnson reports her experiences in "Qualitative Research in Question: A Narrative of Disciplinary Power With/in the IRB," Qualitative Inquiry 14 (March 2008): 212-232.

Johnson left teaching high school to pursue a PhD in Language Education at the University of Georgia. As she completed her preparatory work, she found "no qualitative studies investigating the phenomenon of sexual dynamics in the classroom." She decided, for her dissertation work, "to address this void in educational research through in-depth interviewing of teachers who have experienced desire for and/or from students to trace how these attractions happen and open the door for dialogue about embodiment, desire, and sexuality in education." Her professors were encouraging, and her advisor accompanied her to her appointment with the IRB.

After waiting an hour and a half beyond their scheduled appointment, Johnson and her advisor finally met with about twenty members of the IRB. The chair listed several restrictions, which Johnson found disappointing, but "not unreasonable or completely unexpected." Then the fun began.

Tuesday, June 3, 2008

Music Educator Finds IRBs Inconsistent, Restrictive, and Burdensome

Rhoda Bernard kindly alerted me to Linda C. Thornton, "The Role of IRBs in Music Education Research," in Linda K. Thompson and Mark Robin Campbell, eds., Diverse Methodologies in the Study of Music Teaching and Learning (Charlotte, North Carolina: Information Age, 2008), 201-214.

Thornton (along with co-author Martin Bergee) wanted to survey music education majors at the 26 top university programs to ask why they had chosen music education as a profession. She writes, "no personal information regarding race, habits, or preferences was being collected—only descriptive data such as each student's major instrument (saxophone, voice, etc.), age, and anticipated year of graduation." She dutifully submitted her proposal to her local IRB, and then the trouble began.

Thornton's own IRB forbade the researchers from surveying students at their own institutions, then imposed requirements suitable for a survey on sexuality or criminal activity. Most significantly, it required Thornton to seek permission from the IRBs at the 24 universities remaining in her pool.

Nine of the 24 accepted the proposal as approved by Thornton's IRB, including one that noted it had a reciprocity agreement in place. Of the remaining 15, several imposed burdensome requirements, ranging from small changes to the informed consent letter (which then needed to be re-approved by the original IRB) to the requirement that the instructor at the local institution, who was merely going to distribute and collect questionnaires, be certified in human subjects research. Application forms ranged from two pages to eight; at least one IRB demanded to know the exact number of music education majors at every school to be surveyed. In the end, the researchers dropped many of the schools they had hoped to study, cutting their sample from several thousand to 250.

This sad story touches on two points: inconsistency and regulatory exemptions.