[Laura A. Petersen, Kate Simpson, Richard SoRelle, Tracy Urech, and Supicha Sookanan Chitwood, "How Variability in the Institutional Review Board Review Process Affects Minimal-Risk Multisite Health Services Research," Annals of Internal Medicine 156, no. 10 (May 15, 2012): 728–735. h/t Human Subject News.]
Like the Green, Lowery, Kowalski, and Wyszewianski article cited in the ANPRM, this article describes the fate of a health services research study. That is, rather than directly studying patients, the researchers wanted to learn about the behavior of physicians and other clinicians, specifically whether financial incentives would affect their adherence to guidelines for hypertension care.
Like Green et al., they hoped to use Veterans Affairs (VA) medical centers as sites for a controlled trial. This required getting IRB approval from each site, so the investigators submitted proposals to 17 facilities.
The responses varied widely. Two sites allowed expedited review, though one of those ultimately rejected the study, as did two others; fourteen sites approved the study and classified it as minimal risk.
Nevertheless,
The total time spent in the IRB approval process before the study could be implemented, from initial submission to the first site to approval of the protocol modification at the final site, was 827 days, or more than 27 months. This is 21 months longer than we had proposed and 23 months longer than the time for which we had received a budget. Staff spent an estimated 6729 hours working on IRB- and R&D-related tasks, costing approximately $168,229 in salaries. This estimate does not include the salary for the PI or site PIs.
The delay meant that some physicians left their posts before the study could begin, and the study was skewed toward "more highly affiliated, urban sites that were treating more complex patients, potentially affecting the external validity (generalizability) of the study findings."
The authors concede that "Some variation in review may be appropriate because of local values in assessing human subjects’ risks and benefits." But they note that
many of the revisions requested by local IRBs, when compared with what was approved by the IRB of a multisite study’s coordinating center, have been shown to add little in terms of local context or essential protections and usually make few, if any, substantive changes to the study protocol. Our experience confirms this finding. One underlying issue responsible for the type of local variation we had is that IRBs do not seem to agree on the limits of their sphere of human research protections and do not confine themselves to reviewing the ethical issues related to them. For example, 1 IRB required that we provide documentation of union approval and then asked whether we were providing any incentives to the institution itself.
The authors find that "the time and costs involved in the review process seem incongruous," and they conclude that "An overall review of the standards for research as planned by the Department of Health and Human Services is welcome."
In a comment on the article, Adam Rose of the Bedford VA Medical Center describes a similar experience: "In the course of our one-year, $100,000 study grant, we spent over half of our funds on the process of securing approvals . . . The utility of all this extra work, in terms of protecting human research subjects, was questionable at best."
A second comment, by Jeffrey Silverstein of Mount Sinai School of Medicine, wishes that the authors had distinguished between those changes (whether mandated by the IRBs or others) that they found justified and those they found misguided. Though the article leaves a strong impression that the researchers did not think the delay and expense were appropriate, it would indeed be interesting to know if they found anything of value in the review process.
4 comments:
Part of the IRB review process is the back and forth between the study team and the IRB. In this summary, there is no discussion of the time the PI took to respond to IRB issues. Often PIs will complain that approval took 6 months when they sat on IRB comments for 5 months. Perhaps the original article has data on study team turnaround times to IRB comments? If not, their conclusions are suspect...
Thanks for this comment. Dr. Silverstein also notes the desirability of such data.
That said, the authors do offer some data on the variability of approval times. Three sites approved the study in less than 50 days. If we take that as a baseline, we need to explain the variability among the other sites.
Even if hard data did exist about how much time the ball was in the "PI's court" as opposed to the "IRB's court," would that significantly reduce the perceived severity of this case, or eliminate it as an excellent example of why the IRB system needs major improvement? I'm not convinced it would.
I believe the time it takes PIs to respond to IRB requests is directly related to the reasonableness and consistency of the IRB's demands. Of course, I have no evidence to back that up -- only my experience as an IRB staffer communicating such demands to researchers.
Thanks for this comment. My guess is that if the researchers were to provide a careful analysis of the causes of the delay, we'd learn something. For example, it would be interesting to know if all the IRBs involved met only on a monthly schedule, so that every modification took at least a month to be considered and sent back, or if some reviewers were willing to engage in a constant back-and-forth with the researchers. Maybe a follow-up article for Petersen et al.?