Unlike symposium participants Tony Porter, Dvora Yanow and Peregrine Schwartz-Shea, Levine and Skedsvold do not question the premise that IRBs help promote ethical research. Instead, they assert that there is no fundamental conflict between IRBs and social science researchers: "federal regulations, professional ethics codes, and research practice may have shared goals but tend to speak with different languages—creating frustration and skepticism in a system that could potentially work quite well if transformations are made." (502) Based on that assertion, they suggest four such transformations, ranging from the bold to the timid.
Decentralizing the IRB
The first suggestion is the boldest: establishing IRBs at the level of the individual department or research unit. These departmental IRBs would still meet the requirements of 45 CFR 46, but they would include--and presumably be led by--researchers familiar with the methods under review. As I've written before, for those projects that can benefit from prospective review, I do like the idea of putting the responsibility for review in the hands of scholars who know something about the proposals they encounter.
But Levine and Skedsvold seem naive when they suggest that "the regulations still provide latitude for institutions interested in developing new models for a local human research protection system to do so." (502) Reading the regulations in the absence of OHRP actions and policy statements doesn't tell you much about the real requirements for IRBs. For example, the regulations do not require that a quorum be documented for every IRB action item, but OHRP does. [Norman Fost and Robert J. Levine, "The Dysregulation of Human Subjects Research," JAMA 298 (14 November 2007), 2196.] That kind of picky demand can only be met by expert staff, which tends to move power away from IRB members and toward the administrators who have the time to keep up with all the rules. It also makes it harder to establish multiple IRBs.
Particularly painful is the article's pseudo-historical claim that "in contrast to when IRBs were first established, colleges and universities today are larger and more complex organizations with many more human subjects protocols to review," and therefore decentralization is more needed today than in the past. (503) Nonsense. When the Department of Health, Education, and Welfare first imposed IRB requirements for a wide range of research in the early 1970s, the fields of anthropology, political science, psychology, sociology, and the like were as varied as they are today, with differing ethical codes and methodological practices. To accommodate this diversity, some universities sought just the kind of department-level review that Levine and Skedsvold now propose. But OPRR (OHRP's predecessor) crushed that effort, insisting that all these fields be lumped into a single "behavioral" category. As Donald Chalkley, OPRR's director, put it in 1976,
There were several questions with regard to the use of sub-committees by Institutional Review Boards. We have encouraged it, we have discouraged it. We have discouraged it when the tendency was to put a sub-committee in every department. We beat Ohio State out in that. And we have encouraged it when it was obvious to us that a board that had begun primarily as a medical board, was not capable of dealing with behavioral research and things of this sort. In fact, our current listing in general assurances distinguishes between those institutions which are capable of dealing with medical subjects, and those that are capable of dealing only with the behavioral. [National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, Transcript, Meeting #24, November 1976, 175].
Would today's OHRP act differently? Here's hoping.
Simplifying and Expediting Expedited Review
The second suggestion is modest, but still important. Levine and Skedsvold suggest that IRBs be required to report on their efficiency at handling expedited reviews:
Through the FWA, institutions could describe the expected timeframe for completing expedited review and provide an annual report regarding, for example, the number of applications submitted and approved for expedited review, the number that were ultimately taken to full review, and the time span between filing for expedited review and action by the expedited review official. (503)
While it seems a pity to add another layer of paperwork to the IRB process, quantifying IRB performance might be a start toward IRB accountability. As Atul Gawande has advised, "if you count something you find interesting, you will learn something interesting."
It's important, however, to find ways to keep IRBs from gaming the statistics. IRBs have killed a lot of projects--by delaying approval until the researcher gives up, or by forcing the researcher to modify the proposal beyond recognition--without outright rejecting them. The statistics proposed here, then, should be only the beginning. If we are serious about IRB accountability, we need several measures of effectiveness; to date, we have none.
Limiting Review of Public Use Data Files
The third suggestion is that IRBs stop reviewing studies that use "public use data files," which contain "only data collected in anonymous form or that has been stripped of direct and indirect identifiers." The authors suggest a certification scheme so that "local IRBs would no longer need to determine that the use of the file by an investigator meets the criteria for exemption from IRB review." (503)
This sounds sensible enough, though I'd like more detail about what kinds of files would need such certification. As I reported earlier, UCLA believes that its researchers need IRB permission to read blogs (UCLA's Policy 42 includes specific instructions for how to get permission to read a blog or a letter to the editor). I'd like Levine and Skedsvold to elaborate their definitions in ways that would avoid this kind of hyper-regulation.
Enhancing the Educative Function of IRBs
The final suggestion is the most timid, and the least helpful. Levine and Skedsvold suggest education as a panacea:
Local IRBs could sponsor a monthly open meeting to answer questions about federal regulations, local policies, and/or issues relating to specific protocols. Developing opportunities for education and advisement could assist institutions in creating a positive climate for improving human research protections. While educating investigators about the overall system, IRB members and investigators can discuss ideas about, for example, reducing risk or enhancing confidentiality protections as researchers are developing protocols. (503)
This reminds me of earlier calls for simple cooperation between IRBs and researchers, for example, Robert Cleary's 1987 suggestion:
Fundamentally, [IRBs’] work aims at the heightening of sensitivity among researchers to the need to protect human subjects. Thus the main problems involving the work of IRBs in political science seem to be perceptual and informational, rather than regulatory. Fortunately, these are the kinds of problems that people of good will can solve. And in my opinion the positive results for the protection of human subjects make it worthwhile to try! [Robert E. Cleary, "The Impact of IRBs on Political Science Research," IRB: Ethics and Human Research 9 (May-June 1987), 10.]
But such calls for communication and discussion ignore the profound differences that divide IRBs and social researchers. Pick any horror story you want--Tony Porter's, Tara Star Johnson's, Scott Atran's--and you'll find a researcher who knows what she's doing confronting an IRB that does not. IRBs cannot educate investigators about the overall system because they don't know very much about the overall system--if the "overall system" is taken to include the ethics and methods of various branches of the social sciences and humanities. Nor do they have much incentive to learn anything.
I refer Levine and Skedsvold to the ethnographic studies of IRBs by Laura Stark and Maureen Fitzgerald. Both found that IRBs' public policies had little bearing on their actual decision processes. An IRB that holds a monthly open meeting to expound on fundamental ethical principles, then retreats behind closed doors to reject proposals based on spelling errors, is not going to instill the kind of respect among researchers that Levine and Skedsvold want.
An alternative would be to require that IRBs document their reasons for decisions on each proposal, creating what Jack Katz has called a system of "legality." Katz explains:
Legality changes the interaction environment of decisionmaking by creating a series of processes in which the reviewed become capable of examining and publicly criticizing the review to which they are subjected, both on a retail, case-by-case basis, and on a wholesale, policymaking level. [Jack Katz, "Toward a Natural History of Ethical Censorship," Law & Society Review 41 (December 2007), 805.]
Katz presents legality primarily as a way to protect researchers against IRB censorship, but it would also be an excellent way to provide the kind of education Levine and Skedsvold want. For example, in the early 1970s, the Berkeley IRB assembled a handbook based on 2500 cases it had decided. By showing in concrete, not hypothetical, terms what the IRB considered to require review and what qualities it looked for in a proposal, the handbook both empowered and educated Berkeley researchers. But the handbook was part of a broader effort that excused some researchers from IRB review, and Chalkley's OPRR killed that too.
OHRP's Role
The authors acknowledge that IRB overregulation emanates from Bethesda, and they conclude that the ball is in OHRP's court.
By acknowledging that there are multiple ways to protect human research participants within the parameters of the federal regulations, OHRP would provide needed reassurance for institutions and investigators. Furthermore, to promote new ideas in this area, OHRP could develop a call for reform models and thereby signal to institutions its support for change. (504)
I agree that OHRP is crucial here; one cannot expect local IRBs--vulnerable as they are to OHRP's wrath--to stick their necks out with new initiatives. But how likely is OHRP to lead the cause of reform?
As Levine and Skedsvold concede, OHRP has ignored earlier calls--going back to 2002--for some of the very reforms they advocate. The authors are silent on why that should be, or why we should expect better treatment now. OHRP's record of ignoring social scientists' calls for reform undercuts the article's underlying assumption that OHRP is interested in helping social scientists with their research. Here is a case where the authors could have used some of Tony Porter's more skeptical perspective.
Let's face it: the regulation of medical research is a lot more important, in lives and dollars, than the kinds of work that Levine, Skedsvold, and I care about. Whatever the regulations say, whatever "flexibility" they offer, we can expect OHRP to keep its eye on medical research, and to issue rules solely with medical research in mind. Serious reform may take much bolder restructuring than Levine and Skedsvold admit.
2 comments:
We appreciate your taking seriously, and commenting on, the recommendations we offer for change. Unfortunately, short of significant regulatory reform (which is not likely) or other creative solutions, social scientists will find themselves at this same place years from now. Even Congressional action (e.g., at least one bill is being redrafted now) is not likely to help matters. Our approach was to attempt to identify areas in which institutional change could occur now without regulatory change. We are not insensitive to (or naïve about) the troubled relationships between social science researchers and IRBs. These troubled relationships have been present from the creation of the human research protections system in this country and have been the subject of much discussion over recent years. And historians, for one, have been particularly effective in raising their concerns. Despite these power dynamics, however, there is room for change. Our article is directed at trying to focus on some feasible changes and to move beyond problem specification to problem solving. Also, we do not see “education as a panacea,” but we do think that providing opportunities for the exchange of ideas around methodological or regulatory issues could lay the groundwork – at least at some institutions – for positive change. Our illustrations were not just empty calls for education, but steps that could change how IRBs come to understand their role (e.g., providing advice as protocols are developed). We cite research showing the benefits of a more transparent, open, and legitimate process. Indeed, a panel of social scientists assembled by the National Academies has also called for researchers and IRBs to seek a better understanding of the functions and constraints on each other as a way to improve the process.
Levine and Skedsvold
Thank you for your essay and for this helpful comment.
I am glad your comment makes explicit your assumption that we are unlikely to achieve positive regulatory or legislative change any time soon. Perhaps I am the naive one, but I think such change is actually more likely than the kind of enlightened leadership you seek from OHRP. At today's meeting of the Secretary’s Advisory Committee on Human Research Protections, several committee members expressed their frustration with the current regulations, while also noting that OHRP's limited resources prevent it from leading reform efforts. Nor do OPRR/OHRP's actions in the past give me much hope. So I don't think I'm alone in thinking that we need at least to consider strategies for changing the regulations, and perhaps the statute which they claim to implement.
Of course, efforts at incremental and radical reform can proceed simultaneously, which is why I was glad to read your article. But I would like to know why you think your proposed reforms are any more likely to be adopted than those of the 2003 National Academies panel. As you note in your article, "good ideas . . . are yet to be tested or implemented on a wide scale." (502) That may be an understatement; in five years, has OHRP made any effort to implement any of the National Academies panel's recommendations?
In sum, you two and I share some basic hopes for a reformed system. I think we would all like to see a greater role for review at the department level, ethical training for researchers tailored to their own methods and topics, and some way of holding IRBs to standards of procedural justice. And we may be equally pessimistic about achieving such outcomes nationwide. But while you direct your pessimism at Congress and the Common Rule signatories, I'll target mine at OHRP and the local IRBs.
Zach