The background is this: in an effort to discourage smoking, California law prohibits the sale of single cigarettes, known as "loosies." Nevertheless, store owners in predominantly poor, African American neighborhoods in San Francisco sell them. In 2002, a group of University of California, San Francisco (UCSF) researchers teamed up with a public health group and residents of the neighborhoods to study the problem, creating the Protecting the 'Hood Against Tobacco (PHAT) project.
At first, they proposed merely to observe the sale of loosies, and they got the UCSF IRB's approval. But then the researchers realized that this was impractical; it would require observers to loiter for a long time in the hopes of seeing a spontaneous sale. So they returned to the IRB, this time asking that members of the community be allowed to request cigarettes and record the result. The IRB refused to allow this under UCSF auspices, though it could not stop community members from proceeding on their own.
In 2006, four of the researchers--R. E. Malone, V. B. Yerger, C. McGruder, and E. Froelicher--complained in print about their treatment at the hands of the IRB. ["'It's like Tuskegee in reverse': A case study of ethical tensions in institutional review board review of community-based participatory research," American Journal of Public Health 96 (2006): 1914–1919.] While conceding that some readings of federal regulations could justify the IRB's actions, they suspected that the IRB was not simply protecting the human subjects of research:
The early IRB referral to the university's risk management department, whence we were referred to the legal department, suggests that the project was regarded in some way as a legal risk and a financial threat to the university. The subsequent legal analysis—which opined that community research partners might be hurt (and thereby possibly put the university at an economic or legal risk because it would be considered a university project)—supports this interpretation. This raises the question about whether such concerns represent an institutional conflict of interest, because the decision about whether the study was ethical appears to be associated with institutional self-protection. (1918)
In her article, Wolf states, "I do not intend to provide a rebuttal to Malone et al. or a defense of the REC decision." She does, however, seek to explain the IRB's decision:
In this particular case, there were a relatively small number of stores in a limited geographic area. As a result, identification of stores might be possible, even if their identities were withheld from publication, at least within the community, which could lead to adverse consequences for them. In addition, the information sought pertained to illegal activity. The researchers had obtained an agreement from the district attorney that the office would not prosecute store owners or clerks for illegal activity uncovered by the study. While this agreement is helpful in minimizing the risk of prosecution, it did not fully eliminate it; another district attorney might not honor the agreement or the information from the study could trigger monitoring by law enforcement after the study. In light of these circumstances, the UCSF REC [research ethics committee, i.e., IRB] felt that the store personnel and owners must be afforded protection under the federal regulations.
She continues,
Some of the problems between the UCSF REC and the PHAT researchers may have stemmed from confusion regarding the definition of "community." For those involved in the PHAT study, the community comprised those residents of Bayview–Hunters Point who had participated in the research collaboration through their engagement in deciding on a research question, and developing and carrying out the research protocol. The REC, on the other hand, had a broader view of what constituted the community. In addition to the Bayview–Hunters Point residents who had collaborated with the academic researchers, the REC felt it had to consider the interests and well-being of those who owned, operated, and worked for the stores from whom data were obtained. Even if they were not human subjects as defined by the federal regulations, they were members of the Bayview–Hunters Point community whose interests and trust in research could be jeopardized if the REC approved the researchers' amendment regarding illegal sales of loose cigarettes. Thus, the REC felt an ethical obligation to consider the interests of the broader community in addition to the interests of the community members participating directly in the study conduct. (79)
This is not a credible explanation of UCSF's actions. If the IRB was worried that documenting the sale of loosies by identifiable stores would lead to consequences too adverse to be accepted, it would have blocked the original proposal, which hoped to provide that documentation solely through observation. That the IRB would allow the damaging information to be collected through observation but not through the actual purchase of cigarettes suggests that the denial was based either on a determination that the second version turned the store employees into human subjects under federal definitions, or on a more general form of institutional ass-covering.
Wolf presents the whole affair as a misunderstanding. "Most of these challenges can be met if we engage in an open dialogue among RECs, academic researchers, and community partners, both formally and informally," she writes. "If the parties engage each other openly and respectfully, their collaboration will enable important CBPR research to go forward with appropriate review and oversight." (82) In other words, what we've got here is a failure to communicate.
But the researchers understood that they faced not a failure of communication, but an ethical debate. "From a biomedical ethics perspective that is based on principlism and proceduralism, the IRB's decision appears reasonable, even necessary," they wrote. (1917) The problem is that applying biomedical ethics to social questions led to a "decision [that] protected the interests of the tobacco industry and other industries whose representatives wink at illegal cigarette sales." (1918)
On the other hand, the researchers' 2006 article does fail to articulate the core ethical problem. When the researchers seek to justify work that might harm store owners and employees, they defend their research proposal in terms not far removed from those of a medical researcher.
First, they emphasize "the guaranteed immunity from prosecution" based on the study. (Individuals won't be hurt.) Second, "Ethicists already consider it reasonable that concern for individuals may become secondary to public health priorities during public health emergencies." (OK, individuals may be hurt, but people are dying!)
Finally, "the object of our study was to assess institutional practices within a community, not the responses of individuals within those institutions—a distinction the IRB dismissed as irrelevant . . . By their very nature, institutions have distinct legal and social identities that are something other than a collection of individual legal and social identities, and institutional practices transcend and do not necessarily equate with individual beliefs or behaviors." This puts more distance between the ethics of medical research and that of social research, but it is a hard distinction to maintain when the businesses involved are neighborhood convenience and liquor stores, whose institutional practices are in fact quite likely to equate with individual beliefs or behaviors.
What is needed is a justification for harming individuals, even deliberately doing so. In 1967, Lee Rainwater and David J. Pittman offered one ["Ethical Problems in Studying a Politically Sensitive and Deviant Community," Social Problems 14 (Spring 1967), 363]:
sociologists have the right (and perhaps also the obligation) to study publicly accountable behavior. By publicly accountable behavior we do not simply mean the behavior of public officials (though there the case is clearest) but also the behavior of any individual as he goes about performing public or secondary roles for which he is socially accountable—this would include businessmen, college teachers, physicians, etc.; in short, all people as they carry out jobs for which they are in some sense publicly accountable. One of the functions of our discipline, along with those of political science, history, economics, journalism, and intellectual pursuits generally, is to further public accountability in a society whose complexity makes it easier for people to avoid their responsibilities.
In the absence of such clear statements in favor of doing harm, we get articles like Wolf's, suggesting no limits on an IRB's ability to restrict research that could cause someone harm. Particularly chilling is Wolf's response to the charge that it was silly for the IRB to forbid UCSF researchers from participating while allowing community partners to proceed. (As the researchers noted, this had the effect of depriving the community partners of expertise while depriving store owners of the protections worked out between university researchers and prosecutors.)
Wolf concedes this point, writing that she wishes for "a consistent set of ethical standards for all research" and that she "join[s] the call of others to extend the scope of the federal regulations." (82) In other words, this law professor wants a world in which IRBs can forbid citizens from taking notes about the crimes committed in their neighborhoods, lest it "lead to adverse consequences" for the criminals.
I would prefer the world of Rainwater and Pittman, in which those business owners who break the law and poison their communities face some risk of exposure. To achieve such a world, researchers must, at times, intend harm.
2 comments:
I feel I must correct a misconception that forms the basis for these "First, Do Some Harm" posts: namely, that "do no harm" is a standard by which the IRB makes decisions regarding whether to approve research. This is not the case. Nowhere in the Belmont Report or the federal regulations does it say that research should be risk-free.
The Belmont Report does not state "do no harm" as a principle. The Belmont principle is beneficence, which is more than non-maleficence or "do no harm." In its discussion of the assessment of risks and benefits, which is based on the principle of beneficence, the Belmont Report clearly indicates that research that increases risk to subjects can be justified by the benefits of such research to the individual subject or to society.
With regard to the regulations, the Belmont principles are implemented in Section 111 of the regulations, "Criteria for IRB Approval of Research." It is these criteria which serve as the standards for IRB review. The first two criteria in Section 111 are about risk. The first criterion is that risks to subjects are minimized. Note that it does not say that there should be no risk. Rather, it says that there should be the least possible risk needed to obtain sound results. There is no ethical basis for conducting research that exposes subjects to more risk than is necessary to obtain valid results.

The second criterion is that risks to subjects are reasonable in relation to anticipated benefits, if any, to subjects, and the importance of the knowledge that may reasonably be expected to result. This means that research should not expose subjects to risk without a sufficiently good reason. There is no ethical justification for subjecting subjects to potential harm without sufficient benefit. The purpose of research is not to harm people but to further human knowledge. We have the court system to punish people for their misdeeds. Research must be able to justify that there is sufficient benefit to be derived from the research to warrant the risks involved.
So, holding up "do no harm" as an IRB standard is a misunderstanding of both the Belmont Report and the federal regulations.
Thank you for these comments.
I quite agree that the Common Rule does not include the admonition to "do no harm," but rather calls on researchers to minimize risks "by using procedures which are consistent with sound research design and which do not unnecessarily expose subjects to risk." Indeed, Wolf seems to recognize that, by denying researchers a sound research design, her IRB acted contrary to the guidance of the regulations. "The regulations provide a framework for decision-making and provide the minimum requirements for ethical conduct of human subjects research," she writes. "An REC can impose more stringent requirements on a study than are specified in the regulations if it feels that doing so is necessary to protect human subjects." (79) In other words, never mind the Common Rule; we'll do as we please.
The Belmont Report is another matter. Its section on beneficence uses the phrases "do not harm" and "do no harm," both with apparent approval. That section sketches only two narrow exceptions. First, "even avoiding harm requires learning what is harmful," i.e., clinical equipoise. Second, it offers the example of risky research on children "without immediate prospect of direct benefit to the children involved," if such research promises "great benefit to children in the future."
Neither exception applies to research on criminal shopkeepers. No one argued that "what is harmful" was unknown; both sides in the debate understood that having one's crimes aired was bad for the criminals. Nor did anyone argue that the research on criminal shopkeepers would be of great benefit to criminal shopkeepers in the future. Rather, the proposal was to imperil one group--the shopkeepers--for the benefit of another: those neighborhood residents not wishing to suffer tobacco-related diseases. The Belmont Report ignores the ethics of such "muckraking sociology."
Thus, I cannot agree with your claim that "the Belmont Report clearly indicates that research that increases risk to subjects can be justified by the benefits of such research to the individual subject or to society." Perhaps some IRBs read it that way, but if so, they are reading between the lines. More commonly, "do no harm" appears as a bullet point on IRB websites around the country; try a search for "IRB" and "do no harm." This leads intelligent people like Bethe Hagens to conclude that "'Do no harm' is an IRB principle."
I don't know if the obscure and somewhat contradictory language of the Belmont Report was responsible for the decision by the UCSF IRB, whose principle seems to have been, "first, do no harm to the University of California, San Francisco." But a better ethics document, such as the Canadian Tri-Council Policy Statement, would at least have given researchers and reviewers a common vocabulary for discussing a project involving critical inquiry.