Tuesday, June 3, 2008

Music Educator Finds IRBs Inconsistent, Restrictive, and Burdensome

Rhoda Bernard kindly alerted me to Linda C. Thornton, "The Role of IRBs in Music Education Research," in Linda K. Thompson and Mark Robin Campbell, eds., Diverse Methodologies in the Study of Music Teaching and Learning (Charlotte, North Carolina: Information Age, 2008), 201-214.

Thornton (along with co-author Martin Bergee) wanted to survey music education majors at the 26 top university programs to ask why they had chosen music education as a profession. She writes, "no personal information regarding race, habits, or preferences was being collected—only descriptive data such as each student's major instrument (saxophone, voice, etc.), age, and anticipated year of graduation." She dutifully submitted her proposal to her local IRB, and then the trouble began.

Thornton's own IRB forbade the researchers from surveying students at their own institutions, then imposed requirements suitable for a survey on sexuality or criminal activity. Most significantly, it required Thornton to seek permission from the IRBs at the 24 universities remaining in her pool.

Nine of the 24 accepted the proposal as approved by Thornton's IRB, one of them noting that it had a reciprocity agreement in place. Of the remaining 15, several imposed burdensome requirements, ranging from small changes in the informed consent letter (which then needed to be re-approved by the original IRB) to a requirement that the instructor at the local institution, who was merely going to distribute and collect questionnaires, be certified in human subjects research. Application forms ranged from two pages to eight; at least one IRB demanded to know the exact number of music education majors in every school to be surveyed. The result was that the researchers dropped many of the schools they had hoped to study, cutting their sample from several thousand to 250.

This sad story touches on two points: inconsistency and regulatory exemptions.

Since their creation in the 1960s, IRBs have been making decisions based on guesswork, with little attempt at developing a consistent system of best practices for research and for the review of research. As Jay Katz testified in 1973,


The review committees work in isolation from one another, and no mechanisms have been established for disseminating whatever knowledge is gained from their individual experiences. Thus, each committee is condemned to repeat the process of finding its own answers. This is not only an overwhelming, unnecessary and unproductive assignment, but also one which most review committees are neither prepared nor willing to assume.

[U.S. Senate, Quality of Health Care—Human Experimentation, 1973: Hearings before the Subcommittee on Health of the Committee on Labor and Public Welfare, Part 3 (93d Cong., 1st sess., 1973), 1050].


Katz's testimony helped inspire Congress to pass the National Research Act of 1974, requiring broader use of IRBs and establishing the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research to make further recommendations--recommendations that remain the basis of today's system in the United States and elsewhere. But neither the law nor the commission addressed the problem of disseminating knowledge. In 1982, Jerry Goldman and Martin D. Katz tested the system by submitting identical, flawed proposals for medical research to 32 IRBs. They found "substantial inconsistency in the application of ethical, methodological, and informed-consent standards for individual review boards." (Jerry Goldman and Martin D. Katz, "Inconsistency and Institutional Review Boards," Journal of the American Medical Association 248 (1982): 197-202.) Thornton did not set out to replicate the Goldman-Katz test (she wanted to learn about music students, not IRBs), but she did so by accident, and got similar results.

Perhaps no one will be surprised that Thornton's sample showed so much inconsistency. Indeed, even Laura Stark, something of a defender of the present system, encourages us to think of inconsistency as a feature, not a bug. "The local character of board review does not mean that IRB decisions are wrong so much as that they are idiosyncratic," she writes, suggesting that "the application of rules is always an act of interpretation and that sometimes this discretion can have positive, as well as negative, effects." (Laura Stark, "Victims in Our Own Minds? IRBs in Myth and Practice," Law & Society Review 41 (December 2007), 782.) Perhaps, but I suspect the negative effects are far more common. I doubt Stark would defend the treatment Thornton received.

I was more surprised by the one respect in which the 24 IRBs were consistent. Federal regulations offer exemptions for


(2) Research involving the use of educational tests (cognitive, diagnostic, aptitude, achievement), survey procedures, interview procedures or observation of public behavior, unless:
(i) information obtained is recorded in such a manner that human subjects can be identified, directly or through identifiers linked to the subjects; and (ii) any disclosure of the human subjects' responses outside the research could reasonably place the subjects at risk of criminal or civil liability or be damaging to the subjects' financial standing, employability, or reputation. (45 CFR 46.101)


Thornton's research clearly fits this exemption.

Universities may apply their own rules on top of federal regulations, and we know from a 1998 study that less than 40 percent of survey research eligible for exemption actually receives it. [James Bell, John Whiton and Sharon Connelly, Evaluation of NIH Implementation of Section 491 of the Public Health Service Act, Mandating a Program of Protection for Research Subjects (Arlington, Virginia: James Bell Associates, 1998), 29.] Still, I would have expected some IRBs to tell Thornton that her research required no review. No such luck. Thornton's own institution requires review of all research involving college students, and all or almost all of the other universities seem to have applied similar, non-federal rules.

Last year, Jerry Menikoff argued that social scientists had exaggerated the dangers of IRBs. He claimed that "most institutions assume that any study which falls within one of the exemption categories would automatically be in compliance with the Belmont Report criteria," and therefore "such studies, in a properly functioning IRB system, should receive relatively rapid and nonburdensome review." ("Where's the Law? Uncovering the Truth About IRBs and Censorship," Northwestern University Law Review 101 (2007), 794-795.) Maybe they should receive such review, but Thornton's experience suggests they do not. If research universities that excel in music education are any indicator, overly restrictive IRBs are the rule and not (as Menikoff suggests) the exception.

The exemptions in 45 CFR 46 resemble the bill of rights in the old Soviet constitution. They may look good on paper, but don't count on them to protect the freedom of inquiry.
