As I see it, the comments pose three main questions.
1. What kind of training?
The chief substantive question is what kind of training should investigators, IRB members, and administrators receive.
Few if any of the comments suggested that these folks should act in ignorance, but their authors disagreed over what types of training are appropriate and how to judge the current system, which most typically consists of online readings followed by a multiple-choice test. While the most widely used system is the CITI Program, the comments mentioned some variants of it.
Many seem satisfied with such training. The CITI Program itself submitted a comment citing a presentation, "Instruction in The Protection of Human Research Subjects: A web based model," that finds broad satisfaction among those receiving the training. For example, when asked if they agreed with the statement "The time spent doing the course was well justified," "social-behavioral" researchers gave an average response of about 7.5 on a 1-10 scale. I'd like to see this figure disaggregated. How does it break down by discipline? And does an average of 7.5 mean that everybody gives the program a 7 or 8, or that 75 percent give a 9 or 10 while 25 percent rate it a 1?
Indeed, some of the comments (my own included) mocked the idea of multiple-choice tests for ethics training. As Jeffrey Spike of the Florida State College of Medicine put it,
All that is required [is] to read a few paragraphs and then parrot back the words on a multiple choice quiz . . . This is a shameful failure to take the material seriously, and one which would NEVER be acceptable for the teaching of clinical ethics to future doctors.
Jonathan Baron, a University of Pennsylvania psychology professor, writes,
I did the PI training that was required by NIH. I found that it was more like brainwashing than education. I had to take a test, and, in order to pass the test, I had to express agreement with ethical statements that I thought were wrong. I did it, but it did not endear me to the process. It got me angry.
Two former federal officials are also skeptics. Greg Koski, former director of OHRP, writes on behalf of the Academy of Pharmaceutical Physicians and Investigators,
Many academic institutions mistakenly believe that a physician's ability to pass the [CITI] exam is sufficient to assure that an investigator has a comprehensive knowledge of Good Clinical Practices. The CITI exam and other limited training courses and examinations cannot serve as a substitute for an exam that evaluates all clinical research topics, including human research protections.
Likewise, Charles McCarthy, one of the architects of the present IRB system, writes that online courses "tend to promote the mistaken assumption that the required level of ethical conduct of research can be mastered by taking a three or four hour on-line training session repeated every two or three years. IF SUBJECTS ARE TO BE ADEQUATELY PROTECTED, THE RESEARCH COMMUNITY MUST DO BETTER THAN THAT!"
Even fans of CITI seem to recognize its ability to antagonize researchers. Michael Gillespie, the IRB Coordinator for California State University, San Bernardino, believes that implementing the CITI program led to improved applications, yet he also claims that CSU faces "an increasing problem with faculty understanding the requirements . . . including those that ignore the rules."
What, then, are the alternatives?
Some want training to be more like a college course. Spike suggests 28 hours of course time, ideally taught by real bioethicists. McCarthy also wants college level courses for at least some members of each institution. Howard Stone, of the University of Texas Health Science Center at Tyler, describes 1-2 day courses as "wildly successful." The IRB Sponsor Roundtable wants the flexibility to offer retreats, role-playing exercises, or other formats.
Other comments (including mine) stress the need to match the training to the type of research to be conducted. The American Psychological Association reminds OHRP of the National Bioethics Advisory Committee's recommendation that academic and professional societies be included in developing curricula. The American Historical Association agrees.
2. Regulations or guidance?
Beyond the substantive question about what kind of training is needed is the procedural question of whether OHRP should mandate training, recommend it, or leave institutions alone. Unsurprisingly, the universities and their associations resist the idea of a formal regulation, which could be enforced against them.
For researchers, however, such distinctions may not matter, if university administrations impose OHRP guidance as university policy. As the University of West Florida put it, "the UWF IRB treats OHRP recommendations as regulations and routinely adds them to the UWF IRB Policy and Procedures."
I am frustrated by letters like the joint comment of the Council on Governmental Relations, the Association of American Universities, and the Association of American Medical Colleges. These organizations claim that "institutions are best able to determine the content and extent of relevant training according to an individual's role in the research process." But they also point to the CITI Program as an example of improved training. If institutions are best able to devise training for their people, why have so many delegated the task to CITI? These organizations aren't opposed to inflexible training per se; they just don't want to be legally responsible for imposing it.
3. Guesswork or empirical research?
The final question, also procedural, is whether OHRP should base its policies on empirical research. As the American Psychological Association put it,
As a scientific organization APA values decisions based on empirical research. Thus, the question of whether additional guidance or new regulations for institutional training and education programs on compliance with federal human research protection regulations are required might be answered best by first undertaking a systematic and comprehensive analysis of objective data that are collected by OHRP in the course of its compliance activities, to determine if the underlying cause of noncompliance is a lack of education and training or a combination of these and other factors.
Writing on behalf of the American Educational Research Association, the American Political Science Association, the American Sociological Association, and several other organizations, Felice Levine argues that "before determining what forms education should take and what needs to be required of whom, having a base of knowledge on which to determine educational and other needs is critical in order to promote ethically sound research and review practices and to avoid wasting limited resources." The American Association for the Advancement of Science concurs, noting "a lack of evidence-based studies documenting where and how further training could be most effectively implemented. . . . In the absence of such evidence, it would be premature to issue a regulation."
Sympathetic as I am to such arguments, I must note that a lack of evidence has never deterred regulators from setting human subjects policy. Neither OHRP, its predecessors, nor Congress has ever conducted an investigation of the need to regulate social science research. And while they have devoted somewhat more attention to the question of ethical review of medical research, the comments correctly note that existing reports on the efficacy of the IRB system are thin and sporadic. If OHRP were to take these comments seriously, it would have to rethink its entire policymaking process.
I have posted all the comments, as well as those submitted in response to the October 2007 announcement about expedited review, on a new page: http://www.schrag.info/irb/ohrpcomments.html