Showing posts with label SACHRP. Show all posts

Sunday, April 17, 2016

The Cost of Ethical Review, Part III: Hindering HIV Prevention

Risk-averse IRBs are hindering potentially life-saving research, write Brian Mustanski and Celia Fisher. “Critical advances in HIV prevention among AMSM [adolescent men who have sex with men],” they note, “have been impeded by the failure of IRBs to apply federal regulations permitting adolescents to self-consent to research without parental involvement.”


[Mustanski, B., & Fisher, C. B. (2016). HIV Rates Are Increasing in Gay/Bisexual Teens: IRB Barriers to Research Must Be Resolved to Bend the Curve. American Journal of Preventive Medicine, In Press. doi:10.1016/j.amepre.2016.02.026.]

Thursday, December 31, 2015

NPRM: How to Exclude Journalism?

Few if any argue that journalists should be required to submit their work to IRB review. Some IRB apologists think journalism is too important to bear restriction, while others consider it so full of “blatant bias and even hyperbole” that it doesn’t deserve the dignity of review. But all participants in the debate, at least in the United States, seem uncomfortable with the idea of subjecting journalists to prior restraint.


The question, as always, is how to draw the line between journalism and regulated forms of conversation. The NPRM’s proposed rule attempts to do so with a specific exclusion for “Oral history, journalism, biography, and historical scholarship activities that focus directly on the specific individuals about whom the information is collected.” Will that suffice?

Neither the NPRM's language nor SACHRP's proposed replacement is quite right, so let me suggest an alternative.

PRIM&R and SACHRP Attack Social Science Exclusion

In their comments on the NPRM, SACHRP and PRIM&R oppose the proposed exclusion of “Research, not including interventions, that involves the use of educational tests (cognitive, diagnostic, aptitude, achievement), survey procedures, interview procedures, or observation of public behavior (including visual or auditory recording) uninfluenced by the investigators” (§ ll.101(b)(2)(i)); they want such research to be moved from the excluded to the exempt category. But they differ in what they think the consequences of such a move would be; SACHRP thinks that researchers would face low barriers, while PRIM&R sees a chance for its members to continue to exert more control than is authorized by regulations. Both groups fail to represent the researchers most likely to conduct these kinds of studies.

Thursday, July 30, 2015

Botkin Expects NPRM by October

At the tail end of the July SACHRP meeting (5 hours, 35 minutes into the video) chair Jeffrey Botkin stated, “We’re all anticipating the NPRM will be out before October. What that means is our business in October is likely to be the NPRM.”


Clear your calendars, folks.

Friday, July 12, 2013

Two More Biomedical Members for SACHRP

From the OHRP List:

The Office for Human Research Protections (OHRP) would like to announce two new members who have been invited to join the Secretary's Advisory Committee on Human Research Protections (SACHRP). The invited members are:

• James R. Anderson, Ph.D., Professor of Biostatistics and the Associate Dean for Research at the University of Nebraska Medical Center College of Public Health; and

• Stephen J. Rosenfeld, M.D., M.B.A., Chairperson of Quorum Review IRB

OHRP would like to thank all applicants, and notes that the next SACHRP member solicitation will be published in the Federal Register in approximately six months.

Tuesday, June 11, 2013

SACHRP: Exempt Research May "Be Subject to IRB Review"

As reported by Erika Check Hayden in Nature, at its March meeting, SACHRP endorsed "Considerations and Recommendations Concerning Internet Research and Human Subjects Research Regulations, with Revisions," prepared by Elizabeth Buchanan and Dean Gallant. The guidance offers some common sense, but it struggles with the legacy of the poorly drafted Common Rule. And it threatens to make matters worse by suggesting that some exempt research may "be subject to IRB review."

Saturday, March 30, 2013

Rivera: Faculty Researchers Are Notoriously Poor Judges of Risks

Suzanne Rivera, Associate Vice President for Research at Case Western Reserve University and member of the Secretary's Advisory Committee on Human Research Protections, responds to the AAUP's IRB report by asserting that faculty are inept at making determinations of exemption. I question this claim.

[Rivera, Suzanne A. “Academic Freedom and Responsibility.” Bill of Health. Accessed March 28, 2013. http://blogs.law.harvard.edu/billofhealth/2013/03/24/academic-freedom-and-responsibility/. h/t Michelle Meyer]

Saturday, September 29, 2012

SACHRP Still Lacks Social Scientists

OHRP has announced new members of the Secretary's Advisory Committee on Human Research Protections (SACHRP):

  • Chair Designate: Jeffrey R. Botkin, M.D., M.P.H., Professor of Pediatrics and Medical Ethics, Associate Vice President for Research, University of Utah. Term: October 15, 2012 - October 15, 2016
  • Thomas Eissenberg, Ph.D., Professor, Department of Psychology and Institute for Drug and Alcohol Studies; Director, Clinical Behavioral Pharmacology Laboratory, Virginia Commonwealth University. Term: October 9, 2012 - October 9, 2016
  • Owen Garrick, M.D., M.B.A., President and CEO, Bridge Clinical Research, Inc. Term: October 15, 2012 - October 15, 2016
  • Pilar Ossorio, J.D., Ph.D., Associate Professor of Law and Bioethics, University of Wisconsin-Madison. Term: October 15, 2012 - October 15, 2016

Eissenberg's PhD is in Experimental Psychology, and his "primary area of research is the behavioral pharmacology of drugs of abuse, focusing primarily on nicotine/tobacco." Ossorio's PhD is in Microbiology and Immunology. So that's four new members with background in biomedical research of one kind or another, none whose primary interests are in non-medical research.

Sunday, September 9, 2012

OHRP Calls for 2013 SACHRP Nominations Without Announcing 2012 Appointments

OHRP is calling for nominations to fill two SACHRP positions that will open in 2013.

OHRP has not, to my knowledge, announced the new members for 2012, including replacements for two members whose terms expired in July.

How is the public to suggest appropriate names for 2013 when we do not know what qualifications the 2012 appointments will bring to the committee?

Friday, July 1, 2011

SACHRP to Hear from Presidential Commission

The Secretary’s Advisory Committee on Human Research Protections (SACHRP) has posted the agenda for its July meeting. The committee will receive a briefing on the Presidential Commission for the Study of Bioethical Issues from the commission's executive director.

Any chance the phrases "nitpicking monster" or "mortifyingly stupid" will be used?

The committee will also get a report from its own Subcommittee on Harmonization.

Tuesday, June 28, 2011

SACHRP Now Lacks Social Scientists

Anthropologist Patricia A. Marshall, the sole social scientist on the Secretary's Advisory Committee on Human Research Protections (SACHRP), completed her term on the committee in March, along with three other members.

Thursday, November 18, 2010

Is Facebook Data Mining Human Subjects Research?

Recent law-school graduate Lauren Solberg finds that "data mining on Facebook likely does not constitute research with human subjects, and therefore does not require IRB review, because a researcher who collects data from Facebook pages does not 'interact' with the individual users, and the information on Facebook that researchers mine from individual users' pages is not 'private information.'"

[Lauren Solberg, "Data Mining on Facebook: A Free Space for Researchers or an IRB Nightmare?" article under review, University of Illinois Journal of Law, Technology & Policy 2010 (2). The article has been accepted for publication, but the journal is still soliciting comments.]

Solberg challenges policies now in place at Indiana University and the University of Massachusetts Boston, where researchers must get Facebook's written permission or the written permission of every individual who is studied. These policies, she argues, impose unnecessary burdens on researchers and IRBs alike. (The two policies are identical, but it's not clear which university borrowed from the other.)

She argues that most data mining projects do not meet the regulatory definition of human subjects research. Reading existing profiles is not interaction with an individual. Nor is a Facebook profile that is open to strangers private information, i.e., "information which has been provided for specific purposes by an individual and which the individual can reasonably expect will not be made public (for example, a medical record)." If a college admissions officer or a potential employer can read your profile, you've lost little by having an anthropologist read it as well.

This analysis seems sound, but it's not clear to me that anyone disagrees. In particular, the third university Solberg mentions, Washington University in St. Louis, applies its policy only to "[a]ny activity meeting the definition of 'human subject research' which is designed to recruit participants or collect data via the Internet." It then lists several examples, most of which involve interaction with living individuals. Thus, I doubt Solberg's claim that "researchers at Washington University need only inform Facebook users that they are recording information that is posted on their pages." Rather, if the project does not meet the definition of human subject research, then Wash U. researchers need not do even that much.

Solberg's article skirts some interesting questions. One concerns the boundaries of a reasonable expectation of privacy. Thus, Michael Zimmer gives the example of a study by Harvard graduate students of the Facebook profiles of Harvard undergraduates. If an undergraduate had made some information visible only to other Harvard students (a choice Facebook's software allows), and a Harvard student-researcher sees it, does that change Solberg's analysis?

A second question concerns the authority of university research offices and IRBs to insist that researchers abide by website terms of service. Notably, the Indiana and UMASS policies do not cite federal human subjects regulations as their authority. Rather, they claim that Facebook and Myspace "explicitly state that their sites are not intended for research but for social networking only."

Solberg writes that evaluating such claims is "outside the scope of this article," but they are interesting in three ways. First, they may be factually false; I could find no such explicit statements in the Facebook or Myspace terms of service. Second, they are divorced from federal regulation. For example, the Facebook terms of service do not distinguish between living and dead Facebook members, whereas federal human subjects protections apply only to the living. Finally, they are internally inconsistent. If Facebook and Myspace did prohibit the use of their sites for research, would not researchers still be violating the terms of service even if they got signed consent from individual members, as allowed by the policies? Just who are these two universities trying to protect?

Solberg concludes that "Unfortunately, and somewhat surprisingly, the OHRP has issued no guidance pertaining to Internet research in general, let alone guidance specifically relating to the issue of data mining on the Internet." To give the feds some credit, in summer 2010 (after Solberg wrote her article), SACHRP did sponsor a panel on the Internet in Human Subjects Research. It can take a long time from a SACHRP presentation to OHRP guidance, but the wheels may be moving on this one.

---

Note, 19 November 2010: The original version of this post identified Ms. Solberg as a law student. She has in fact graduated. I have also changed the link about Michael Zimmer's work from his SACHRP presentation to his article, "'But the data is already public': on the ethics of research in Facebook," Ethics and Information Technology 12 (2010): 313-325.

Monday, July 19, 2010

SACHRP to Discuss Internet Research

The July 21 meeting of the Secretary's Advisory Committee on Human Research Protections will sponsor a panel entitled "The Internet in Human Subjects Research," featuring Elizabeth Buchanan, Montana Miller, Michael Zimmer, and John Palfrey. It should be an interesting session. Unfortunately (and ironically), SACHRP has stopped posting transcripts of its meetings, so it's not clear how much of the content will be available to Internet researchers. I am told the meeting minutes will be posted at some point.

Monday, October 27, 2008

Report from SACHRP, October 2008

Today I attended the public meeting of the Secretary's Advisory Committee on Human Research Protections (SACHRP). Most of the day's discussion concerned the slow pace at which SACHRP's recommendations have been implemented; some have been sitting for years without action. OHRP officials offered detailed explanations of the complexity of the policy-making process and OHRP's lack of resources. While these issues help explain OHRP's neglect of questions important to the social sciences and humanities, today's discussion had little of direct importance to scholars in those fields.

Here are a few tidbits of potential significance.

What is the difference between guidance and regulation?



Christian Mahler, a lawyer with HHS's Office of General Counsel, explained that only regulations can be enforced; as a legal matter, institutions are free to ignore OHRP guidance documents. But he and other officials acknowledged that in practice, the difference may not be great. As acting OHRP director Ivor Pritchard conceded, “when we issue guidance . . . people look at every phrase, clause, use of punctuation to see what was meant by OHRP.” He noted that institutional officials may believe the safest course is to comply with all OHRP guidance.

Pritchard said that OHRP tries to differentiate, when drafting guidance, between "must" statements (that indicate OHRP's interpretations of the regulations) and "should" statements that can be ignored if an institution has good reason. He also stated that the best course might be to issue guidance documents that offer multiple ways to comply with the regulations, though it's not clear that OHRP has ever issued such guidance.

Finally, Mahler pointed to the Office of Management and Budget's 2007 “Final Bulletin for Agency Good Guidance Practices.” That bulletin notes that


The courts, Congress, and other authorities have emphasized that rules which do not merely interpret existing law or announce tentative policy positions but which establish new policy positions that the agency treats as binding must comply with the [Administrative Procedure Act]’s notice-and-comment requirements, regardless of how they initially are labeled. More general concerns also have been raised that agency guidance practices should be better informed and more transparent, fair and accountable. Poorly designed or misused guidance documents can impose significant costs or limit the freedom of the public.


Despite that last sentence, the bulletin seems designed more to limit regulation of economic affairs than to safeguard the freedom of the public. Still, if its approach were followed, OHRP might not be able to get away with so many arbitrary decisions.

What is research?



In February 2007, then-OHRP director Bernard Schwetz told the New York Times that OHRP would, by the end of 2007, issue guidance on what is and is not research under the regulations. OHRP is almost a year overdue in keeping that promise, but apparently it is still at work. Pritchard noted that “We have been working on a guidance document on the definition of ‘research’ for several years now.” Pritchard did not indicate how far along that process is, or whether OHRP will solicit public comment before promulgating that document.

What is an ideal consent process?



In her presentation, Elizabeth Bankert, co-chair of SACHRP's Subpart A Subcommittee, addressed the problem of IRB insistence on long, complex consent forms. She noted that while the regulations requiring consent forms have not changed since 1991, IRBs have been demanding ever more detailed forms, and have gained a reputation for "wordsmithing" and "nit-picking." Not only does this erode investigator trust and respect for IRBs, but it also "diminishe[s] the consent process for subjects."

Bankert--drawing on work by subcommittee member Gary Chadwick--challenged the presumption that "the form must contain every piece of information and in the same detail as required in the consent process," which leads to such long forms. She called upon OHRP and FDA to endorse the use of shortened consent forms: about 3-4 pages for a clinical trial, and just one for "surveys, etc."

Bankert offered these suggestions as a way to decrease some burdens on IRBs and investigators, but she then suggested that IRBs would still need to review the consent process. She did not explain how the same incentives for nit-picking wouldn't lead to endless fretting about the consent process, rather than the consent form. As SACHRP member Jeffrey Botkin pointed out, IRBs and investigators turn to long forms out of self-preservation. And as David Forster noted, FDA inspectors and AAHRPP accreditors have demanded ever longer forms. Before Bankert and Chadwick can address the problem of silly forms, they need to understand the system that produces them.

Predictably, almost all the discussion of consent forms centered on clinical trials. For example, Bankert offered sample executive summaries for a drug trial and a request to store blood or tissue for future research, but no comparable document for a social science project. And one committee member, expressing her concerns, forgot to speak of "human subjects" and instead started talking about protecting "patients." Given the complexity of the issue, and the dominance of SACHRP by medical researchers and bioethicists, I would expect that any change will be designed for clinical trials, and then imposed on social researchers.

Saturday, July 26, 2008

Report from SACHRP, Part 3: When Consent Means Censorship

A third item of interest from this month's SACHRP meeting concerns rules about research on Indian reservations.

According to a handout provided at the meeting, in March 2008, Dr. Francine Romero--an epidemiologist and former member of SACHRP--proposed that the Common Rule be amended to specify that


For human subject research to be conducted within the jurisdiction(s) of federally recognized American Indian or Alaska native (AIAN) Tribal government(s), the IRB shall require documentation of explicit Tribal approval for the research. This approval shall come from the Tribal Council or other agency of the Tribal government to whom such authority has been delegated by the Council.


The Subpart A Subcommittee decided that amending the Common Rule was neither "efficacious, expeditious, nor appropriate," but it apparently thought the overall idea a good one, and it recommended that OHRP develop guidance to assure that researchers get permission from Tribal governments to do research within their jurisdiction. In the general discussion, various SACHRP members and other federal officials debated whether OHRP was the right office to handle the task, and they modified the recommendation to include other HHS agencies.

As I pointed out during the public comment period, similar rules in Canada have deterred historians from including First Nations Canadians in their research, and give Band Councils veto power over who in their communities gets to talk with a university researcher. And in California, a Tribal government used an IRB to suppress research on labor conditions in casinos. But at no point during the SACHRP discussion did anyone consider the effect the recommendation would have on social science research.

Since 1966, IRB policies have been determined by bodies dominated by medical researchers, and SACHRP is just the latest in a long list. However much medical researchers and administrators may want the trust and respect of social researchers, they simply cannot keep in mind the rights and responsibilities of social scientists when something like this comes up. For medical researchers, it seems, more consent is always better, and they forget that one person's consent is another's censorship.

In related news, today's New York Times reports that the U.S. military has suppressed photographs of American casualties in Iraq by insisting that photojournalists obtain written consent from the troops they photograph:


New embed rules were adopted in the spring of 2007 that required written permission from wounded soldiers before their image could be used, a near impossibility in the case of badly wounded soldiers, journalists say . . . Two New York Times journalists were disembedded in January 2007 after the paper published a photo of a mortally wounded soldier. Though the soldier was shot through the head and died hours after the photo was taken, Lt. Gen. Raymond T. Odierno argued that The Times had broken embed rules by not getting written permission from the soldier.

[Michael Kamber and Tim Arango, "4,000 U.S. Deaths, and Just a Handful of Images," New York Times, 26 July 2008]

Friday, July 25, 2008

Report from SACHRP, Part 2: The Calcified Common Rule

Part of the SACHRP discussion last week concerned a provision of the Common Rule to which I had not paid much attention. As the Subpart A subcommittee noted, 45 CFR 46.117(c)(1) provides that


An IRB may waive the requirement for the investigator to obtain a signed consent form for some or all subjects if it finds . . . that the only record linking the subject and the research would be the consent document and the principal risk would be potential harm resulting from a breach of confidentiality. Each subject will be asked whether the subject wants documentation linking the subject with the research, and the subject's wishes will govern . . .


Several committee members noted that this last bit--about asking the subject if she wants the documentation that an IRB has determined will put her at risk--is pretty stupid. David Forster noted that offering a signed document can create unnecessary distrust. Neil Powe and Daniel Nelson suggested that it would be a significant burden for a researcher to devise and gain approval for a consent form on the off chance that a subject will demand one. Everyone seemed to agree that this provision is never enforced, and that it would be a bad idea if it were.

But what to do about it? As members of an official body, the committee members were clearly uncomfortable recommending that IRBs ignore a provision of the Common Rule. Yet they all seemed to think that amending the Common Rule was impossible.

This kind of defeatism distresses me. Since the Common Rule was promulgated in 1991, we've amended the Constitution, added an executive department to the cabinet, and brought professional baseball back to Washington, D.C. I'm sure it's a pain in the neck to bring together all the Common Rule signatories, but can't it be done every seven years, or ten? Or are we to endure these kinds of errors for a century?

I have not yet figured out who put in the provision that subjects be offered documentation even when it threatens them. The National Commission recommended no such requirement, yet it appeared in the draft regulations of August 1979. Someone in the Department of Health, Education, and Welfare made a mistake thirty years ago, and now we're stuck with it.

Wednesday, July 23, 2008

Report from SACHRP, Part 1: A Systems Level Discussion

On July 16 I attended the second day of the open meeting of the Secretary's Advisory Committee on Human Research Protections (SACHRP, pronounced sack-harp) in my home town of Arlington, Virginia. This was the first time I have observed such a meeting, and I am sure there is much I missed for want of context. But in this and following posts, I will record a few impressions.

The most interesting part of the meeting came at the end, when the committee's chair, Samuel Tilden, invited committee members to participate in "a systems level discussion" of today's human subjects protection regime. Not all committee members offered comments, and I was disappointed that anthropologist Patricia Marshall, the sole social scientist on the committee, did not do so. But the members who did speak displayed a range of viewpoints.

The most enthusiastic advocates of the status quo were Jeffrey Botkin and Daniel Nelson. Botkin described himself as an "unabashed advocate of current system." He noted that IRBs rose in response to documented abuses in medical research, such as those detailed by Henry Beecher in 1966 ["Ethics and Clinical Research," New England Journal of Medicine 274 (16 June 1966): 1354-1360]. Today, he noted, most researchers know the rules. While the system may let an occasional unethical project slip through, there is no "hidden underbelly of unethical research."

This is an important point, and I remain agnostic about whether IRBs are appropriate for medical research. But I am also sure that Dr. Botkin understands that even beneficial drugs can have nasty side effects, and that he would not prescribe the same drug to treat all ailments. I would be interested to know what he considers the social science analogue to Beecher's article. For if we are to judge today's system by its ability to avoid documented problems of the past, we need to know what we are trying to avoid for every type of research we regulate.

Nelson declared that the "Subpart A Subcommittee" he co-chairs decided early in its existence that "there is general consensus that the Common Rule is not 'broken.'" Yet in his system-level talk, he conceded that the power granted by the Common Rule to local IRBs results in arbitrary decisions (he called this "variability") and "well-intended overreaching." He noted that the only sure way to eliminate all risky research is to eliminate all research.

Other committee members, while not calling for changed regulations, were more explicit about current problems. Lisa Leiden, an administrator at the University of Texas, has heard from a lot of upset faculty, and she is looking for ways to relax oversight. This would include "unchecking the box," that is, declining to promise to apply federal standards to research not directly sponsored by a Common Rule agency. Without going into specifics, she suggested that the federal standards are too stringent, and that the University of Texas system, if freed from them, would craft exemptions beyond those now offered by the Common Rule. Overall, she is looking for ways to move from a "culture of compliance to one of conscience."

Liz Bankert, Nelson's co-chair of the subcommittee, also showed her awareness of the overregulation of social research, and her frustration with IRBs' emphasis on regulatory compliance. "I've gone to IRBs all over the country," she reported. "They are thoughtful, sincere, really intelligent groups. To have all this brainpower sucked into the vortex of minimal risk research is not efficient." It also contributes to what Bankert sees as a lack of mutual respect between IRBs and researchers. She blamed the problems on a "fear factor which has been developing over the past several years."

Both Leiden and Bankert implied that it was the interpretation of the regulations, not the regulations themselves, that caused the problems they have identified. Without saying so explicitly, they seemed to blame the OPRR of the late 1990s for scaring IRBs all over the country into letter-perfect regulatory compliance, at the expense of research ethics.

In contrast, two committee members seemed willing to reconsider the regulations themselves. David Strauss hoped for a system that was "clinically and empirically informed," terms that no one could apply to the regulation of social research. And he recognized that the regulations are not divine revelation. "We shouldn't be reviewing research that we don't think needs to be reviewed because some folks 30 years ago, at the end of a long, hot day, decided to use the word 'generalizable,'" he explained. "We have to have language that makes sense to us."

Finally, Tilden himself described the Common Rule as largely broken. He noted that the 1981 regulations--which have changed only slightly since--were accompanied by the promise that most social research would not have to undergo IRB review. The fact that so few social science projects escape review, he concluded, showed that the exemption system has collapsed. Rather than try to shore it up again, he suggested that concerns about confidentiality be separated from other risks, and that projects whose only risks involved breaches of confidentiality be evaluated only for the adequacy of their protections in that area.

This last proposal interests me, because when scholars talk seriously about the wrongs committed by social science researchers, they almost always come back to questions of confidentiality. If IRBs were restrained from making up other dangers--like interview trauma--and instead limited to more realistic concerns, they could potentially do some good.

In sum, I did not get the impression that, in Nelson's words, "there is general consensus that the Common Rule is not 'broken.'" Strauss and Tilden, in particular, seem to understand that the present system has wandered far from the stated intentions of the authors of the regulations, and from any empirical assessment of the risks of research or the effectiveness of IRBs. I hope they will continue to think about alternative schemes that would keep controls on medical experimentation without allowing federal and campus officials free rein to act on their fears.

Thursday, April 5, 2007

Anthropologist Patricia Marshall Appointed to SACHRP

The Department of Health and Human Services has announced the appointment of a new chair and three new members to the Secretary’s Advisory Committee on Human Research Protections. Among the three new members is Professor Patricia A. Marshall of Case Western Reserve University. Like Lorna Rhodes, who left the committee in 2006, Marshall is a medical anthropologist, making me wonder if the department has unofficially reserved a seat for such a scholar, who is then supposed to represent all social scientists.

Marshall brings a somewhat critical perspective, having complained about her own treatment by an IRB. In the early 1990s, she wanted to interview patients in a waiting room, and--in a classic example of IRB formalism--her IRB insisted that because she was doing research in a medical setting, she had to warn her interview subjects that “emergency medical treatment for physical injuries resulting from participation would be provided.” (Patricia A. Marshall, “Research Ethics in Applied Anthropology,” IRB: Ethics and Human Research 14 [Nov.-Dec. 1992]: 1-5).

Perhaps as a result of this experience, she has maintained some skepticism about IRB review of anthropology, as expressed in her essay, “Human Subjects Protections, Institutional Review Boards, and Cultural Anthropological Research,” Anthropological Quarterly 76 (Spring 2003): 269-285. That essay shows Marshall’s familiarity with much of the critical literature on IRBs, and she repeats some of that criticism herself:


  • “IRBs may be overly zealous in their interpretation and application of federal guidelines, exacerbating the challenges faced by anthropologists and other professionals in seeking approval for studies.” (270)
  • “Although committees must include representatives from diverse scientific fields and the community, IRBs have a strong orientation to biomedical and experimental research. In fact, a significant flaw in the development of the federal guidelines for ethical research is that social scientists were not included in the process. The result is a conflation of two related problems for anthropologists: first, the Common Rule emphasizes concerns for biomedical researchers; and second, most IRBs do not have members with expertise in anthropological methods.” (272)
  • “Misapplications of the Common Rule and inappropriate requests for revisions from IRBs can have a paralyzing effect on anthropological research. Moreover, it reinforces a cynical view of institutional requirements for protection of human subjects, and it uses scarce resources that would be better spent on studies involving greater risks for participants.” (273)



Given her understanding of these problems, one might expect her to advocate, or at least consider, the exclusion of anthropological research from IRB review. Instead, she concludes, “regulatory oversight by IRBs is a fact of life for scientific researchers. Anthropologists are not and should not be exempt.” (280)

Huh?

This conclusion is so contrary to the rest of the essay that I can only guess at how it got in there. Perhaps it represents a resigned surrender after years of failed efforts to exclude some review. Perhaps it is a failure of imagination. Perhaps Marshall believes that only by embracing IRB review will anthropologists be taken seriously by the biomedical researchers she works with.

Or perhaps the key issue is that Marshall fits the pattern I mentioned earlier of some anthropologists’ embrace of the Belmont Report principles. In “Research Ethics in Applied Anthropology,” Marshall cites not the Code of Ethics of the American Anthropological Association, but the comparable Ethical Guidelines of the National Association for the Practice of Anthropology, which state that “Our primary responsibility is to respect and consider the welfare and human rights of all categories of people affected by decisions, programs or research in which we take part.”

I have no complaint with applying those guidelines to their intended subject: “a professionally trained anthropologist who is employed or retained to apply his or her specialized knowledge to problem solving related to human welfare and human activities.” But they are inappropriate restrictions for scholars whose primary role is academic inquiry, not problem solving.

Thus, like Stuart Plattner, Marshall uncritically assumes that one field’s ethics can be imposed on another. She writes, “ethical principles governing applied anthropological research are not unique to this discipline. Respect for persons, beneficence, and justice are fundamental concerns for any scientist.” (“Research Ethics in Applied Anthropology,” 4) While that sounds lovely, the latter two terms, as defined by the Belmont Report, are foreign to the ethical codes of most academic research. Until she recognizes the distinction between problem-solvers whose primary goal is to do no harm and researchers whose primary goal is to seek the truth, she will be a poor advocate for most scholars in the social sciences and humanities.

Yet in previous work, Marshall herself has argued against the idea that humans share a single set of ethics, recognizing instead that “ethics and values cannot be separated from social, cultural, and historical determinants that regulate both the definition and resolution of moral quandaries.” (“Anthropology and Bioethics,” Medical Anthropology Quarterly, New Series, 6 [Mar., 1992]: 62) If she brings that insight to the committee, perhaps she will recognize the basic wrongness of forcing Belmont’s biomedical ethics on non-biomedical fields.