A group of 11 researchers and IRB professionals, most of them affiliated with the University of California, San Diego, report on a brainstorming session from early 2015. They argue that readable consent forms, expert review, a less punitive system, and more exemptions would better serve researchers and participants. While they present their ideas as “a human research protections system that is responsive to 21st century science,” the measures they propose are equally valid for research as it has been practiced for decades.
[Cinnamon Bloss et al., “Reimagining Human Research Protections for 21st Century Science,” Journal of Medical Internet Research 18, no. 12 (2016): e329, doi:10.2196/jmir.6634.]
The team presents its proposals under five headings, but I see the second and fifth as similar in intent.
- Redesigning the Consent Form and Process
There’s broad consensus that written consent forms often fail to give prospective research participants the information they need to make a good decision. The UCSD team proposes consent forms based on Creative Commons licenses (like the one used by this blog). “Research studies,” they explain, “could create three consent forms: one that contains all the legalese and scientific exposition; one in plain English that presents the facts; and a third that is simplified even further and presents risks in bullet point format.”
- Empowering Researchers to Protect Participants
Though the authors call for “empowering researchers,” really they are calling for expert peer review:
Researchers intending to engage in human-participant research could produce a document that lays out plans and risks of the research. They could then offer those documents, along with an outline of the proposed consent process, for review by their peers. Peers would be researchers in the field of relevance for the research. These documents could be posted on the Web in the same way clinical trials are registered; not to get approval but to create a public record of the research …
Using [a] Web-based resource, within a few hours, researchers posing questions such as “How do I ensure that I won’t cause harm by asking this interview or survey question?” would receive answers from researchers who have been rated in terms of experience and expertise in human research protections. Elements of the plans could ultimately become like “protection modules” that could be swapped in and out of consent forms and research protocols, drawing attention to highly ranked modules.
- Reinforcement and Learning From Experience
Like Greg Koski, the San Diego team sees the aviation safety system as a promising model. To encourage the sharing of important information, that system relies on greater transparency and less punishment for non-compliance.
Pilots who have a “bad” landing or make another safety-related error who self-report their mistake are spared from punishment, but those who do not report it themselves are penalized if someone elects to report. Analogously, as an alternative to an IRB, in this system, researchers who create a protocol they believe to be safe, who then observe a harm during the research and who report that harm to their university or institution, present an opportunity for the research institution and community to learn how to prevent future harm.
- Increasing Efficiency of the Institutional Review Board
The authors call for several measures to track and ultimately reduce the costs of IRB review. Of particular concern to this blog is their suggestion that IRBs “use the ‘exempt’ category to a greater degree, as it was intended. The exempt category is frequently appropriate for the vast majority of social and behavioral science studies, yet it is underused, which leads to delays in review and approval and, thus, wasted resources.”
- Review of Research That Leverages Technological Advances
The authors claim that new technologies, such as “mobile, visual imaging, pervasive sensing, and geolocation tracking technologies[,] present new ethical and regulatory challenges,” and suggest that “[a] virtual network composed of researchers, technologists, and bioinformatics experts may prove to be a workable solution to augment or replace the traditional IRB review process resulting in an informed and meaningful human protections review of 21st century science.” This sounds a lot like their second recommendation.
These aren’t new problems
Like Metcalf and Crawford, the San Diego authors implausibly suggest that today’s IRB structure once made sense, but has been overtaken by events.
While IRBs have helped address this critical need, the IRB system has not kept pace with the evolution of research methods and practices or current and emerging trends in science and technology. The fact that the system has become antiquated calls into question whether the IRB continues to foster the protection of human research participants per the principles originally put forth in the Belmont Report. New forms of research enabled by technological advances in information technology and data science appear to be particularly challenging to IRBs, yet clear standards to guide best practices are not well established.
In fact, most of the problems they raise are not unique to “21st century science.” IRBs have imposed wasteful and inappropriate regulation on social science research since the 1960s. In the 1970s, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research discussed participants’ struggle to understand consent forms and the need for expert judgment of the risks of specific methods. The exemptions the authors want to see applied date to 1981. Punishing researchers, rather than encouraging them to learn from mistakes, has been a bad idea from the start.
The authors are thus wrong to suggest that these problems arose with cellphones and social media. It is not that the IRB system has become antiquated; rather, it never scaled up well from its small start in the 1960s. The reforms suggested here would have been as useful half a century ago.