Now that I’ve had my rant about the Belmont Report’s year of publication, I can turn to the more substantive arguments of Barron Lerner and Arthur Caplan’s recent essay, “Judging the Past: How History Should Inform Bioethics.” These scholars wisely argue against simplistic condemnations of past behavior, yet they also reject the other extreme of attributing all past misbehavior to the age rather than the individual. By understanding what choices were open to actors in the past, we can better assess the morality of their actions and the choices that we ourselves face.
[Barron H. Lerner and Arthur L. Caplan, “Judging the Past: How History Should Inform Bioethics,” Annals of Internal Medicine 164, no. 8 (April 19, 2016): 553–57, doi:10.7326/M15-2642.]
Lerner (who holds a PhD in history as well as an MD) and Caplan recite several of the usual human subjects horror stories: Nazis, Tuskegee, Willowbrook, Jewish Chronic Disease Hospital. (Sanjay Srivastava will be glad to know that poor Stanley Milgram gets a pass this time.)
They offer a twist, however, in insisting that not all the perpetrators of these misdeeds were “monsters from an alien past.” Rather, they draw on recent scholarship, including works by such eminent historians as Allan Brandt and Susan Reverby, to demand that we understand the context in which people acted immorally, including the crucial question of who supported or challenged the decisions now judged unethical.
They note, for instance, that “many students of bioethics may be surprised to learn that even though the Macon County [Alabama] Medical Society was predominantly African American by the late 1960s, it continued to approve the Tuskegee study. This reality, which raises issues of class, complicates explanations of the study that focus only on racism.”
But complication is not exoneration.
As Lerner and Caplan write:

In promoting the historicizing of the past, we do not advocate moral relativism, in which past behaviors are merely excused because they occurred in an era with different values. For example, one might be tempted to explain away the behavior of the Tuskegee study investigators by noting that the study took place in the Jim Crow South, where racism was institutionalized. Historians would be unlikely to make such an argument, but they might ask the following contextual question: Why, in the case of the Tuskegee study, did otherwise progressive persons remain so backward when it came to the issue of race? Indeed, it is precisely the context that holds the key lesson for modern researchers and clinicians studying the moral failings of their predecessors.
One way to help assess moral blame is to ask whether there were contemporaneous criticisms that should have alerted physicians to ethical problems. For example, even though the inventor of the lobotomy, Egas Moniz, won the Nobel Prize for his achievement, there were detractors. In 1947, for example, a Swedish psychiatrist labeled the operation “crude” and “hazardous.” The neurologist Walter Freeman achieved renown in the mid-1940s for developing a less-invasive lobotomy, but he continued to do the operation into the 1960s on children with psychosis and healthy adolescents with “anxiety”—patients who were widely believed to be inappropriate candidates. This persistence, despite criticisms and other available options, surely deserves particular historical censure. In the case of the radiation experiments, some commentators at the time remarked that the experiments violated standards established by the Nuremberg Code, belying claims that the concept of informed consent was largely unknown. “It’s not very long since we got through trying Germans for the exact same thing,” wrote an official about a proposed study, adding that it had “a little of the Buchenwald touch.” Researchers ignored these and other warnings, which arguably makes them more culpable than those whose behaviors were not challenged at the time.
Nicely put. Let’s just also recall that certain regulators and bioethicists have ignored timely warnings as well.