[Laura A. Petersen, Kate Simpson, Richard SoRelle, Tracy Urech, and Supicha Sookanan Chitwood, "How Variability in the Institutional Review Board Review Process Affects Minimal-Risk Multisite Health Services Research," Annals of Internal Medicine 156, no. 10 (May 15, 2012): 728–735. h/t Human Subject News.]
Like the Green, Lowery, Kowalski, and Wyszewianski article cited in the ANPRM, this article describes the fate of a health services research study. That is, rather than directly studying patients, the researchers wanted to learn about the behavior of physicians and other clinicians, specifically whether financial incentives would affect their adherence to guidelines for hypertension care.
Like Green et al., they hoped to use Veterans Affairs (VA) medical centers as a way of setting up a controlled trial. This required getting IRB approval from each site, so the investigators submitted proposals to 17 facilities.
The responses varied widely. Two sites allowed expedited review, though one of those rejected the study, as did two others. Fourteen sites approved the study and classified it as minimal risk.
The authors report:

The total time spent in the IRB approval process before the study could be implemented, from initial submission to the first site to approval of the protocol modification at the final site, was 827 days, or more than 27 months. This is 21 months longer than we had proposed and 23 months longer than the time for which we had received a budget. Staff spent an estimated 6729 hours working on IRB- and R&D-related tasks, costing approximately $168,229 in salaries. This estimate does not include the salary for the PI or site PIs.
The delay meant that some physicians left their posts before the study could begin, and the study was skewed toward "more highly affiliated, urban sites that were treating more complex patients, potentially affecting the external validity (generalizability) of the study findings."
The authors concede that "Some variation in review may be appropriate because of local values in assessing human subjects’ risks and benefits." But they note that
many of the revisions requested by local IRBs, when compared with what was approved by the IRB of a multisite study’s coordinating center, have been shown to add little in terms of local context or essential protections and usually make few, if any, substantive changes to the study protocol. Our experience confirms this finding. One underlying issue responsible for the type of local variation we had is that IRBs do not seem to agree on the limits of their sphere of human research protections and do not confine themselves to reviewing the ethical issues related to them. For example, 1 IRB required that we provide documentation of union approval and then asked whether we were providing any incentives to the institution itself.
The authors find that "the time and costs involved in the review process seem incongruous," and they conclude that "an overall review of the standards for research as planned by the Department of Health and Human Services is welcome."
In a comment on the article, Adam Rose of the Bedford VA Medical Center describes a similar experience: "In the course of our one-year, $100,000 study grant, we spent over half of our funds on the process of securing approvals . . . The utility of all this extra work, in terms of protecting human research subjects, was questionable at best."
A second comment, by Jeffrey Silverstein of Mount Sinai School of Medicine, wishes that the authors had distinguished between those changes (whether mandated by the IRBs or others) that they found justified and those they found misguided. Though the article leaves a strong impression that the researchers did not think the delay and expense were appropriate, it would indeed be interesting to know whether they found anything of value in the review process.