
Sunday, June 5, 2016

Bioethicists learn from history

Now that I’ve had my rant about the Belmont Report’s year of publication, I can turn to the more substantive arguments of Barron Lerner and Arthur Caplan’s recent essay, “Judging the Past: How History Should Inform Bioethics." These scholars wisely argue against simplistic condemnations of past behavior, yet they also reject the other extreme of attributing all past misbehavior to the age rather than the individual. By understanding what choices were open to actors in the past, we can better assess the morality of their actions and the choices that we ourselves face.


[Barron H. Lerner and Arthur L. Caplan, “Judging the Past: How History Should Inform Bioethics,” Annals of Internal Medicine 164, no. 8 (April 19, 2016): 553–57, doi:10.7326/M15-2642.]

Saturday, June 4, 2016

The Belmont Report was published in 1978, goddammit!

Bioethicists Barron Lerner and Arthur Caplan have published a nice essay about using history to make better decisions today. I will comment on their main points in a separate post, but I want to address separately the authors’ repetition of a common error: the suggestion that the Belmont Report was published in 1979.

The Belmont Report was published in 1978, goddammit!

[Barron H. Lerner and Arthur L. Caplan, “Judging the Past: How History Should Inform Bioethics,” Annals of Internal Medicine 164, no. 8 (April 19, 2016): 553–57, doi:10.7326/M15-2642.]

Wednesday, May 11, 2016

NSF Officer Misstates Belmont and Common Rule Standards

In the final contribution to the PS symposium, Lee Demetrius Walker, currently serving as program officer for the Political Science Program at the National Science Foundation, acknowledges the problems of applying a biomedical review system to social science. But he misstates the Belmont and Common Rule standards for assessing research.


[Lee Demetrius Walker, “National Science Foundation, Institutional Review Boards, and Political and Social Science,” PS: Political Science & Politics 49, no. 2 (April 2016): 309–12, doi:10.1017/S1049096516000263.]

Friday, January 1, 2016

My NPRM Comments

Perhaps 2016 will be the year when OHRP makes good on its 2007 promise to “give more guidance on how to make the decision on what is research and what is not,” in the form of a promulgated revision to the Common Rule. If so, Happy New Year, OHRP!


With these hopes, I have submitted my own comments on the NPRM. I have posted a copy of the PDF I submitted, and below is a web version with links.


Friday, September 4, 2015

NPRM: Freedom for Historians, If They Can Keep It

The notice of proposed rulemaking (NPRM) promises long-sought relief for historians, journalists, and biographers. For these groups, the goal will be to ensure that the proposed rules are enacted as currently written.

[This post has been cross-posted to the Petrie-Flom Center's Bill of Health, which is conducting an online NPRM Symposium.]

11 September 2015: See update at the bottom of this post.

Thursday, June 4, 2015

Earnest Member Reads the Belmont Report

My favorite portion of The Censor’s Hand is Schneider’s invention of Earnest Member, a conscientious gentleman with good intentions but no ethical training until he is appointed to the IRB and handed copies of what Schneider terms the Sacred Texts.

Wednesday, March 19, 2014

McCarthy's Mysterious Mythmaking

PRIM&R has launched "People & Perspectives (P&P)," described as a "digital story-telling library." The site features a blurb by Joan Rachlin, PRIM&R's soon-to-retire executive director, who calls it "an enduring and dynamic record of our historical antecedents, how and when we come together."

But is anyone going to vet the accuracy of stories posted on the site?

That question is raised by a 4-minute clip (taken from a much longer November 2013 interview) with Charlie McCarthy, director of the Office for Protection from Research Risks from 1978 to 1992.

I have not watched the full interview (not transcribed, and therefore a chore). But the four minutes and 12 seconds on "social-behavior research" is by itself a disturbing stew of faulty memory and misinformation.



Here are some of the key inaccuracies.

Sunday, January 12, 2014

NRC Report: Assess Risk Empirically

One theme running throughout the NRC report is the need to replace the worthless gut reactions decried by Ezekiel Emanuel with a system that would base its judgments on the latest empirical evidence. But the report does not present a clear set of reforms that would effect this change without scrapping the current system of local IRB review.

Wednesday, June 12, 2013

Robert Levine: We Should Have Done a Careful Study of Social and Behavioral Research

The June issue of the Journal of Clinical Research Best Practices features an interview with Robert Levine about his service as consultant to the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Levine concedes that the commission did not sufficiently explore "sociology, anthropology, education and other vast areas of research."

[Mark Barnes, "Bob Levine on the Making of the Belmont Report," Journal of Clinical Research Best Practices 9, no. 6 (June 2013). h/t Michelle Meyer]

Wednesday, April 17, 2013

What Can One University Do?

A few weeks ago, a correspondent asked me what reforms individual universities can implement while awaiting systemic, regulatory reform. It's an excellent question, so here's a roundup from material previously covered on the blog.

No university has adopted all of these measures, and at least one of these measures has not been adopted by any. But most of them are in place already, and there's no reason they can't spread.

Sunday, February 10, 2013

Faden et al. Question Research-Treatment Distinction

Writing in a special report of the Hastings Center Report, a team of prominent ethicists and researchers "argue that conceptual, moral, and empirical problems surround the received view that we can and should draw sharp distinctions between clinical research and clinical practice." Yet they decline to detail the implications of any regulatory change for IRB review of medical research, much less research in the social sciences and humanities.

[Kass, Nancy E., Ruth R. Faden, Steven N. Goodman, Peter Pronovost, Sean Tunis, and Tom L. Beauchamp. "The Research-Treatment Distinction: A Problematic Approach for Determining Which Activities Should Have Ethical Oversight." Hastings Center Report 43, no. s1 (2013): S4–S15. doi:10.1002/hast.133. h/t Yashar Saghai]

Monday, January 14, 2013

Bell and Salmon Warn of Dangerous Assumptions

Kirsten Bell and Amy Salmon, both of the University of British Columbia, warn that in trying to protect people they consider vulnerable, ethics committees ignore empirical evidence that some measures are counterproductive.

[Bell, Kirsten, and Amy Salmon. “Good Intentions and Dangerous Assumptions: Research Ethics Committees and Illicit Drug Use Research.” Research Ethics 8, no. 4 (December 2012): 191–199. doi:10.1177/1747016112461731.]

Sunday, September 16, 2012

Could Guidance and Feedback Replace Rote Compliance?

Murray Dyck and Gary Allen, both of Griffith University in Australia, argue that "the review process should be an advisory and collegial one—not one that focuses on compliance, enforcement and gatekeeping."

[Murray Dyck and Gary Allen, “Is Mandatory Research Ethics Reviewing Ethical?” Journal of Medical Ethics (August 3, 2012), doi:10.1136/medethics-2011-100274.]

Monday, February 13, 2012

Talk at NIH

On March 5 I will speak to the Bioethics Interest Group at the National Institutes of Health on the topic "Blunder at Belmont: The 1970s Origins of IRB Mission Creep."

Those familiar with my first book may appreciate the irony of my speaking in the William H. Natcher Building.

Tuesday, August 30, 2011

The Years Spin By

In his presentation to the Presidential Commission for the Study of Bioethical Issues today, Ezekiel Emanuel showed some textual slides that were visible during the live webcast but do not show up--at least on my browsers--when I replay the recording.

One of these slides featured a timeline of events leading to the current system of human subjects protections. That timeline featured two common errors.

Monday, June 20, 2011

New FWA Terms Allow Alternatives to Belmont

In September 2010, OHRP posted drafts of a new FWA form and new FWA Terms of Assurance. In my comments, I asked that the drafts be revised to make clear that under 45 CFR 46.103(b)(1), institutions have the right to choose any statement of principles they wish, including statements they formulate themselves. Other comments also made this point.

I also asked that the FWA terms allow institutions to conform to the current version of Canada's Tri-Council Policy Statement, rather than the 2005 version mentioned in the OHRP draft.

OHRP has now released the new version of the FWA terms. I am happy to report that both of my suggestions have been adopted.

Tuesday, January 11, 2011

Sociologists Find IRBs Serve Organizational Interests, Not Professional Ethics

Sociologists Carol Heimer (Northwestern) and JuLeigh Petty (Vanderbilt) find that IRBs "substitute bureaucratic ethics for professional ethics."

[Carol A. Heimer and JuLeigh Petty, "Bureaucratic Ethics: IRBs and the Legal Regulation of Human Subjects Research," Annual Review of Law and Social Science 6 (2010): 601-626.]

Much of the article consists of concise, accurate summaries of many of the complaints lodged against IRBs, including some by your humble blogger. (The bibliography lists well over 100 works on IRBs and research ethics.) Heimer and Petty categorize these complaints as "critiques of IRB law as law, critiques of IRBs as regulation, and critiques of IRBs as a system of norm making." Critics have charged that IRBs act lawlessly, do more harm than good, and deny researchers the opportunity to shape the norms that govern them. "IRBs seem to have lost sight of their original objective," Heimer and Petty state, summarizing some of this work. "No longer collective bodies of researchers deliberating together about the ethical dilemmas they encounter, IRBs are instead agents of the university (or research center). Rather than protecting research subjects from harm, they now seem especially focused on protecting universities and research centers."

To these complaints (which they mostly seem to endorse), Heimer and Petty add three of their own.

First, they employ the "lens of inequality," finding that "the regulations fail in part because the research process does not go as the regulators imagine and because the regulations do not address the social sources of the big inequalities. Furthermore, the regulations support inequality when they prevent research on powerful groups who harm others." IRBs fret over the details of consent forms, ignoring evidence that "potential research subjects actually pay little attention to consent forms and later do not even remember the details in them." At the same time, IRBs ignore "structural inequalities" (most notably, "the big inequality that the majority of the global research funds address the health problems of the wealthy few") while perpetuating inequality by preventing social researchers from studying powerful groups, including sellers of loose cigarettes.

I find the section on "the big inequality" the least persuasive part of this article. Heimer and Petty too readily accept the claims of Jill Fisher and Adriana Petryna that (in the words of Heimer and Petty) "the focus on abstract, universal principles in the Belmont Report deflects attention from the structural conditions and inequalities under which the unethical treatment of research subjects has taken place." Fisher and Petryna mischaracterize both the Belmont Report and the Common Rule by claiming (in Petryna's words) that "so long as an investigator [can] document that his or her subjects could deliberate about personal goals and act 'under the direction of such deliberation,' it [is] ultimately up to the subjects themselves to judge the acceptability of the risks they [take]."

In fact, the Belmont Report specifically warns against imposing the burdens of research "upon poor ward patients, while the benefits of improved medical care [flow] primarily to private patients," and the Common Rule requires IRBs to determine that "risks to subjects are reasonable in relation to anticipated benefits" and that "selection of subjects is equitable," independently of ensuring informed consent.

Whether IRBs are able to do this, and to do this without inappropriately restricting a great deal of ethical research, is another question. But it's unfair to charge the National Commission or the authors of the regulations with ignoring the problem of structural inequality and the challenge it poses to a consent-based model.

After the section on the "lens of inequality," Heimer and Petty "look at IRBs through the lens of professions." Noting that "the regulation of human subject research is a growth industry," they warn that rather than cede the power to declare a project exempt (as suggested by accommodationist reformers like Levine and Skedsvold), "IRB professionals [may] defend and perhaps seek to expand their jurisdiction." In doing so, they are protecting "their livelihood, a secure niche on the edges of the research and scholarly world." And they will have help: "OHRP’s focus on documentation helps explain why IRB professionals and not bioethicists are the growth sector in human subjects regulation."

Finally, Heimer and Petty "examine IRBs and research enterprises as organizations." Here they find that "a complex mixture of coercion by the government, fear of loss of funding, individual professional self-interest . . . and a desire not to be seen to be on the wrong side of a key cultural divide" do more to explain the growth of IRBs than do the Nuremberg Trials and other documented cases of unethical research.

They conclude with a grim assessment:

As the regulation of human subjects research has been institutionalized, professional competition and the protection of organizational interests seem to have carried the day. A bureaucratized research ethics is essentially an ethics of documentation. The task of translating the principles of autonomy, beneficence, and justice was never going to be easy. But translations that ignore structural inequalities, delay or reduce valuable research, and substitute bureaucratic ethics for professional ethics may not bring as much progress as we hoped.

Thursday, December 23, 2010

First, Do Some Harm, Part II: The AAA Ethics Task Force

In mid-October, the Ethics Task Force of the American Anthropological Association solicited comments on the following text, a section of a draft Code of Ethics now being written:


Do No Harm

Anthropologists share a primary ethical obligation to avoid doing harm to the lives, communities or environments they study or that may be impacted by their work. This includes not only the avoidance of direct and immediate harm but implies an obligation to weigh carefully the future consequences and impacts of an anthropologist’s work on others. This primary obligation can supersede the goal of seeking new knowledge and can lead to decisions not to undertake or to discontinue a project. Avoidance of harm is a primary ethical obligation, but determining harms and their avoidance in any given situation may be complex.

While anthropologists welcome work benefiting others or increasing the well-being of individuals or communities, determinations regarding what is in the best interests of others or what kinds of efforts are appropriate to increase well-being are complex and value-laden and should reflect sustained discussion with those concerned. Such work should reflect deliberate and thoughtful consideration of both potential unintended consequences and long-term impacts on individuals, communities, identities, tangible and intangible heritage and environments.


As of December 13, 33 people (presumably all anthropologists, but I'm not sure) had posted comments. The comments are often nuanced, making it hard to say whether they endorse the language or not. But they broke down roughly as follows:

Do No Harm



Significantly, the most wholehearted supporters of the "do no harm" proposal are those who uncritically embrace the Belmont Report and the Common Rule. "'Do no harm' is an IRB principle, and so it should be in our code," writes Bethe Hagens. Four other responses, from Chip Colwell-Chanthaphonh, mkline, Robert T Trotter II, and Simon Craddock Lee, all seem to suggest that the AAA code should conform to those documents, without asking much about their origins or their fit to the practices and beliefs of anthropologists.

Four other responses--from Barbara Rose Johnston, Seamus Decker, socect, and Vicki Ina F. Gloer--endorse Hagens's idea that anthropologists should "intend no harm." Despite the Belmont Report's description of "the Hippocratic maxim 'do no harm' [as] a fundamental principle of medical ethics," this formulation is more faithful to the report's overall section on beneficence.

Do Some Harm



Eight responses--almost as many--appear to reject the "do no harm" idea on the grounds that neutrality is impossible, and anthropologists should not hesitate to harm those who deserve it. "A blanket edict to 'Do No Harm' could easily lead to a professional paralysis when one considers that a few steps away from the person giving you this interview is someone who will not like, will want or need to fight, or will suffer consequences for what is said much further down the line," writes Benjamin Wintersteen. Murray Leaf concurs. "Do no harm is fine as principle of medical practice," he writes, "where you are working with a single individual. It is nearly meaningless when you (we) work with human communities, in which what is good and what is harm is usually in contention. As some of these posts suggests, what we do is often a matter of helping some while undermining the position of others. No harm at all, in such a context, would almost always be also no help at all–and no effect at all."

Bryan Bruns offers an example. "I work, in conjunction with communities and a government agency, to design and support a process in which communities are likely to, in a reasonably democratic way, act to restrain the behavior and thereby (harm) reduce the benefits of a few people (upstream irrigators, large landowners) who currently take advantage of others, it’s not clear how a principle of 'do no harm' would allow any practical engagement."

I would say that the responses by Dimitra Doukas, Joan P Mencher, Moish, Noelle Sullivan, and Ray Scupin all fall in this general category of respecting critical inquiry. Margaret Trawick's comment is harder to categorize. "I have been teaching 'Do no harm' to my students as the first ethical principle for anthropological fieldwork, for many years," she writes. "It is a difficult principle to follow, precisely because you never know what might cause harm, and therefore you have to THINK about what you are doing in the field more carefully than you might in everyday life. Good intentions are not enough. Additionally, 'harm to whom' is a good question . . . Sometimes to protect and advocate for one party (e.g., Untouchables in India) is to, at the least, offend some other party – e.g. high caste Hindus." Given her understanding of this problem, I'm not sure why she teaches "do no harm" rather than something like "think about whom you are harming."

It's the Wrong Question



An even greater number of responses suggest that, in the words of Carl Kendall, "This principle is way too vague and self-directed to be practically useful." Kendall hints, perhaps cynically, that anthropologists need one set of ethical principles to "pass IRB muster" and a second set "to protect communities and fieldworkers." Carolyn Fluehr-Lobban argues that "'Harm' should be problematized—are there agreed upon universal standards of harm, and where is there discussion of reasonable disagreement."

James Dow rejects the medical language of IRBs: "'Do no harm' is a good ethical principle to be applied to individual social relationships, which we hope that we understand; however, there is a problem when applying it to larger societies and cultures." Likewise, David Samuels writes that "The place where you need to get informed consent is at the point at which you have turned people into characters in your story. The medicalized pre-framing of the IRB process doesn’t cover that at all."

Taken as a whole, the responses suggest that only a minority of those commenting embrace the Belmont Report and the IRB process as enthusiastically as the AAA did in its 2004 statement that presents the active involvement of IRBs as a positive good. I hope the Task Force recognizes this, and takes the opportunity to reconsider the AAA's overall position in regard to IRB review.

[Hat tip to Alice Dreger. For a historical perspective on another discipline's efforts to craft a research ethics code, see Laura Stark, "The Science of Ethics: Deception, the Resilient Self, and the APA Code of Ethics, 1966–1973," Journal of the History of the Behavioral Sciences 46 (Fall 2010): 337–370.]

Tuesday, November 30, 2010

Belmont's Ethical Malpractice

I complain about the Belmont Report in an essay published today in Bioethics Forum.

Wednesday, November 10, 2010

Comments: FWA Forms Should Reflect Common Rule

On October 4, I reported that OHRP was inviting comments on drafts of a new FWA form and new FWA Terms of Assurance.

Prior to the October 25 deadline, OHRP received comments from only five individuals and two professional organizations, all of which are posted at regulations.gov.

Of these seven comments, three (including mine, of course) complained that the draft Terms of Assurance, like the existing ones, violate the Common Rule's pledge that an institution's statement of principles "may include an appropriate existing code, declaration, or statement of ethical principles, or a statement formulated by the institution itself."

No one made a case for retaining the discrepancy between the regulations and the forms.