Monday, July 21, 2014

Most IRB Chairs Can't Recognize Exempt Research or Non-Research

A study of criminal justice researchers' knowledge of IRB rules has found that IRB chairs can't agree on what makes a project exempt from review and believe that IRB review is needed for research using public records. The authors of the study, one of whom is an IRB chair, seem not to realize the significance of these findings.

[Tartaro, Christine, and Marissa P. Levy. "Criminal Justice Professionals' Knowledge of Institutional Review Boards (IRBs) and Compliance with IRB Protocol." Journal of Criminal Justice Education 25, no. 3 (2014): 321–41. doi:10.1080/10511253.2014.902982.]

"Correct" Answers in Scare Quotes


Christine Tartaro and Marissa P. Levy, both professors of criminal justice at the Richard Stockton College of New Jersey, sought to learn what their fellow criminologists knew about IRB rules and procedures.

To do this, they devised seven hypothetical research scenarios and posed them in survey form to two groups: IRB chairs, from whom they got 164 responses, and U.S.-based academic members of the Academy of Criminal Justice Sciences (ACJS). Of the 1,174 potential respondents in the latter group, they received 323 responses from researchers who work at institutions with IRBs.

For reasons not well explained, the authors labeled the consensus view of the IRB chairs as the "correct" answer to a given scenario.

The correct answers for the scenario responses were gained by reviewing each scenario with the Chair [i.e., Levy] and comparing her answers to those of a national survey of IRB Chairs which produced responses from 164 IRB Chairs. While we acknowledge that many of these answers are subject to interpretation, it is the IRB Chairs who ultimately decide the level of review required for each protocol based on his or her interpretation of the federal guidelines. As such, IRB Chairs' interpretations of federal guidelines as applied to the research scenarios is the closest one can get to identifying "correct" answers. All IRB Chairs were given the same amount of information from which to draw their conclusions and determine "correct" answers. IRB Chairs were divided on the correct course of action for two of the seven research scenarios, so those scenarios were excluded for the current analysis.

The scare-quotes around "correct" and the exclusion of two scenarios show that the authors are at least somewhat aware that the disagreements among IRB chairs pose serious questions about the clarity of the scenarios and of the regulations themselves. Rather than label any response "correct," it might have been better to describe the chairs' view as just that: the chairs' view.

Unfortunately, the study authors do not indicate how the IRB chairs split on these cases. What constituted enough of a consensus for the authors to label an answer "correct"? And, more importantly, what can the IRB chairs' responses tell us about the consistency of IRB rulings?

IRB Chairs Can't Recognize Non-Research


Assuming that a "correct" label represents a fairly broad consensus among Levy and the IRB chairs surveyed, it is striking that members of this group can't recognize non-human subjects research when they see it.

Here are Tartaro and Levy's descriptions of scenarios four and five:

For scenario four, "You want to give students a non-graded quiz at the beginning of the semester. At the end, you plan to give students the same quiz to see how much they learned. You aren't going to publish the results. How will you handle this?" Ninety-five percent correctly chose "proceed without IRB approval," 3% believed it was necessary to apply for an exempt or expedited review and 2% selected full IRB review. For scenario five, the instructor from the previous scenario planned to use the assessment data for a conference presentation or publication. The correct answer is to apply for an IRB exempt/expedited review, and 65% of respondents chose that option. Sixteen percent believed that no IRB review was necessary, and 19% were in favor of a full IRB review.

Let's compare that "correct answer" to OHRP's 2010 guidance on quality improvement:

The intent to publish is an insufficient criterion for determining whether a quality improvement activity involves research. The regulatory definition under 45 CFR 46.102(d) is "Research means a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge." Planning to publish an account of a quality improvement project does not necessarily mean that the project fits the definition of research; people seek to publish descriptions of nonresearch activities for a variety of reasons, if they believe others may be interested in learning about those activities.

So while OHRP does not think that publication triggers IRB review, Tartaro and Levy, the consensus of IRB chairs, and something like 84 percent of the researcher respondents (the 65 percent who chose exempt/expedited review plus the 19 percent who favored full review) all think that it does.

A starker case is scenario two:

For scenario two, respondents were asked to imagine that, "You are seeking information from a local police department about the date and location of each report of a stolen car over a period of a year. You are not requesting any identifying information on the owner or car. How would you handle this study?" The appropriate course of action, according to IRB rules, is to apply for exempt or expedited status from the IRB, and 66% answered this correctly. Twenty-seven percent responded that proceeding without IRB approval was appropriate, and 8% thought that a full IRB review was necessary.

Folks, while laws about police records vary by state, police blotters (including reports of stolen cars) are public records in most or all states. And even if they weren't, if you are collecting data that does not include identifiable private information about a living individual, you are not conducting human subjects research as defined by the Common Rule.

It's appalling that 74 percent of the researchers surveyed (the 66 percent who chose exempt/expedited review plus the 8 percent who wanted full review) didn't know this, and more appalling that Levy and her fellow IRB chairs did not.

IRB Chairs Can't Recognize Exempt Research


It's also distressing that the IRB chairs could not agree on what constitutes exempt research.

The description of the first scenario states that "IRB Chairs were widely in agreement that some level of IRB review was necessary, but they were split on the extent of that review." Tartaro and Levy do not report how the split broke down among those favoring exemption (which Tartaro and Levy regard as "some level of IRB review"), expedited review, or full review. Thus, Tartaro and Levy present as wide agreement what may in fact have been substantial disagreement.

For scenario two (the public records of stolen cars), Tartaro and Levy report that "The appropriate course of action, according to IRB rules, is to apply for exempt or expedited status from the IRB." So not only did the IRB chairs fail to spot the non-human subjects research, but, having mistaken it for human subjects research, they couldn't agree on whether it was exempt.

The same goes for scenario five, the quality-improvement scenario ("The correct answer is to apply for an IRB exempt/expedited review"). In these three scenarios, the IRB chairs could not come to consensus on whether a given activity was exempt, forcing Tartaro and Levy to accept both exempt and expedited as "correct" answers.

Then there were the two scenarios excluded from the study because "IRB Chairs were divided on the correct course of action." Tartaro and Levy do not state whether those scenarios involved interpreting IRB regulations.

So of the four to six IRB-related scenarios Tartaro and Levy began with (four of the five reported scenarios concern IRB procedure, and the two excluded scenarios may have as well), the IRB chairs were able to agree on only one: no IRB involvement is needed to quiz your students on how much they have learned in a semester. For the remaining three to five, they could not come to consensus on whether the project as described is exempt under the Common Rule.

Tartaro and Levy do not remark on this finding. Yet their results stand as a potential counterpoint to the claims of IRB apologists like Suzanne Rivera, who thinks that researchers are incompetent to determine exemption. Are researchers poorer interpreters of 45 CFR 46.101 than the IRB chairs in this study?

Criminologists Disagree on Ethics


For reasons not explained, one of the five scenarios in the article presents a question of ethics, not of regulatory procedures.

Scenario three involved the ethics of identifying research participants. Participants were asked to consider the following: "As you prepare to present findings at a conference, your colleague presents you with slides that she wants to use. On one of the slides, your colleague has a picture of an offender that she took during field observations ... How do you respond?" Ninety-three percent chose, "tell your colleague that including the photo would be unethical," while 7% would have advised the colleague to include the photo in the presentation.

Tartaro and Levy present this finding without comment, not claiming that it involves the IRBs or offering a "correct" response from the IRB chairs. For my part, I would have to say that the scenario offers insufficient information to give an answer.

The ACJS code of ethics makes clear that "Confidential information provided by research participants should be treated as such by members of the Academy, even when this information enjoys no legal protection or privilege and legal force is applied." But it is not clear that the photograph in the scenario is confidential. Are we talking about a photograph taken in someone's living room? Or on a public sidewalk? If the latter, I would note that newspapers regularly feature photographs of people committing offenses, such as the apparently illegal chokehold used by NYPD Officer Daniel Pantaleo against Eric Garner last week. (See also the Géricault-esque photographs of unauthorized migration by Meridith Kohut for the New York Times.)

Criminologists Change Wording, but Otherwise Follow Rules


In addition to asking about the hypothetical scenarios, Tartaro and Levy surveyed the researchers about their own practices. They found that 41 percent had, in the past three years, undertaken "activities against IRB rules," but they acknowledge that their survey mistakenly listed researching "public records or other data not involving human participants" as such an infraction. The 41 percent is therefore an overstatement.

A better figure may be the 27 percent who "made a minor change to the wording of a survey or consent form after IRB approval without reporting back to the IRB." This jibes with the finding that "Twenty-one percent indicated that researchers should be able to make a minor wording change to a previously approved survey or consent form without reporting it to the IRB."

Much smaller numbers (less than 5 percent) made substantial changes without consulting the IRB, conducted human subjects research before receiving approval, or "purposefully left out information or was vague about an aspect of the study that you anticipated that the IRB would challenge." This last group (3.3 percent, or nine of the 245 respondents to that question) included a respondent who explained, "My IRB doesn't understand qualitative research that uses grounded theory." Unfortunately, while the analysis compares the responses given by researchers according to various attributes (e.g., whether the researcher has served on the IRB), it does not break down the replies by quantitative versus qualitative research.

Why Blame the Researchers?


Tartaro and Levy conclude that "the results of this study indicate that academic researchers in criminal justice lack a uniform understanding of IRB rules." They could just as well have concluded that IRB chairs lack a uniform understanding of IRB rules.

But really, the fault lies not with the researchers or the chairs, but with a set of poorly written rules, layered with contradictory guidance from federal officials, and desperately in need of revision. It has been nearly 20 years since the Office for Protection from Research Risks decided that exempt projects were not really exempt. Until the regulations are rewritten, there may be no "correct" answers on what they mean.


