Tuesday, July 29, 2008

The Dormant Right to Ethical Self-Determination

The Common Rule, at 45 CFR 46.103(b)(1), requires that each institution receiving funding from a Common Rule agency submit an assurance that includes


A statement of principles governing the institution in the discharge of its responsibilities for protecting the rights and welfare of human subjects of research conducted at or sponsored by the institution, regardless of whether the research is subject to Federal regulation. This may include an appropriate existing code, declaration, or statement of ethical principles, or a statement formulated by the institution itself.


In contrast, OHRP's Federalwide Assurance requires U.S. institutions to pledge that


All of the Institution's human subjects research activities, regardless of whether the research is subject to federal regulations, will be guided by the ethical principles in: (a) The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, or (b) other appropriate ethical standards recognized by federal departments and agencies that have adopted the Federal Policy for the Protection of Human Subjects, known as the Common Rule.

Saturday, July 26, 2008

Report from SACHRP, Part 3: When Consent Means Censorship

A third item of interest from this month's SACHRP meeting concerns rules about research on Indian reservations.

According to a handout provided at the meeting, in March 2008, Dr. Francine Romero--an epidemiologist and former member of SACHRP--proposed that the Common Rule be amended to specify that


For human subject research to be conducted within the jurisdiction(s) of federally recognized American Indian or Alaska Native (AIAN) Tribal government(s), the IRB shall require documentation of explicit Tribal approval for the research. This approval shall come from the Tribal Council or other agency of the Tribal government to whom such authority has been delegated by the Council.


The Subpart A Subcommittee decided that amending the Common Rule would be neither "efficacious, expeditious, nor appropriate," but it apparently thought the overall idea a good one, and it recommended that OHRP develop guidance to ensure that researchers get permission from Tribal governments before doing research within their jurisdictions. In the general discussion, various SACHRP members and other federal officials debated whether OHRP was the right office to handle the task, and they modified the recommendation to include other HHS agencies.

As I pointed out during the public comment period, similar rules in Canada have deterred historians from including First Nations Canadians in their research, and give Band Councils veto power over who in their communities gets to talk with a university researcher. And in California, a Tribal government used an IRB to suppress research on labor conditions in casinos. But at no point during the SACHRP discussion did anyone consider the effect the recommendation would have on social science research.

Since 1966, IRB policies have been determined by bodies dominated by medical researchers, and SACHRP is just the latest in a long list. However much medical researchers and administrators may want the trust and respect of social researchers, they simply cannot keep in mind the rights and responsibilities of social scientists when something like this comes up. For medical researchers, it seems, more consent is always better, and they forget that one person's consent is another's censorship.

In related news, today's New York Times reports that the U.S. military has suppressed photographs of American casualties in Iraq by insisting that photojournalists obtain written consent from the troops they photograph:


New embed rules were adopted in the spring of 2007 that required written permission from wounded soldiers before their image could be used, a near impossibility in the case of badly wounded soldiers, journalists say . . . Two New York Times journalists were disembedded in January 2007 after the paper published a photo of a mortally wounded soldier. Though the soldier was shot through the head and died hours after the photo was taken, Lt. Gen. Raymond T. Odierno argued that The Times had broken embed rules by not getting written permission from the soldier.

[Michael Kamber and Tim Arango, "4,000 U.S. Deaths, and Just a Handful of Images," New York Times, 26 July 2008]

Friday, July 25, 2008

Report from SACHRP, Part 2: The Calcified Common Rule

Part of the SACHRP discussion last week concerned a provision of the Common Rule to which I had not paid much attention. As the Subpart A subcommittee noted, 45 CFR 46.117(c)(1) provides that


An IRB may waive the requirement for the investigator to obtain a signed consent form for some or all subjects if it finds . . . that the only record linking the subject and the research would be the consent document and the principal risk would be potential harm resulting from a breach of confidentiality. Each subject will be asked whether the subject wants documentation linking the subject with the research, and the subject's wishes will govern . . .


Several committee members noted that this last bit--about asking the subject if she wants the documentation that an IRB has determined will put her at risk--is pretty stupid. David Forster noted that offering a signed document can create unnecessary distrust. Neil Powe and Daniel Nelson suggested that it would be a significant burden for a researcher to devise and gain approval for a consent form on the off chance that a subject will demand one. Everyone seemed to agree that this provision is never enforced, and that it would be a bad idea if it were.

But what to do about it? As members of an official body, the committee members were clearly uncomfortable recommending that IRBs ignore a provision of the Common Rule. Yet they all seemed to think that amending the Common Rule was impossible.

This kind of defeatism distresses me. Since the Common Rule was promulgated in 1991, we've amended the Constitution, added an executive department to the cabinet, and brought professional baseball back to Washington, D.C. I'm sure it's a pain in the neck to bring together all the Common Rule signatories, but can't it be done every seven years, or ten? Or are we to endure these kinds of errors for a century?

I have not yet figured out who put in the provision that subjects be offered documentation even when it threatens them. The National Commission recommended no such requirement, yet it appeared in the draft regulations of August 1979. Someone in the Department of Health, Education, and Welfare made a mistake thirty years ago, and now we're stuck with it.

Wednesday, July 23, 2008

Report from SACHRP, Part 1: A Systems Level Discussion

On July 16 I attended the second day of the open meeting of the Secretary's Advisory Committee on Human Research Protections (SACHRP, pronounced sack-harp) in my home town of Arlington, Virginia. This was the first time I have observed such a meeting, and I am sure there is much I missed for want of context. But in this and following posts, I will record a few impressions.

The most interesting part of the meeting came at the end, when the committee's chair, Samuel Tilden, invited committee members to participate in "a systems level discussion" of today's human subjects protection regime. Not all committee members offered comments, and I was disappointed that anthropologist Patricia Marshall, the sole social scientist on the committee, did not do so. But the members who did speak displayed a range of viewpoints.

The most enthusiastic advocates of the status quo were Jeffrey Botkin and Daniel Nelson. Botkin described himself as an "unabashed advocate of current system." He noted that IRBs rose in response to documented abuses in medical research, such as those detailed by Henry Beecher in 1966 ["Ethics and Clinical Research," New England Journal of Medicine 274 (16 June 1966): 1354-1360]. Today, he noted, most researchers know the rules. While the system may let an occasional unethical project slip through, there is no "hidden underbelly of unethical research."

This is an important point, and I remain agnostic about whether IRBs are appropriate for medical research. But I am also sure that Dr. Botkin understands that even beneficial drugs can have nasty side effects, and that he would not prescribe the same drug to treat all ailments. I would be interested to know what he considers the social science analogue to Beecher's article. For if we are to judge today's system by its ability to avoid documented problems of the past, we need to know what we are trying to avoid for every type of research we regulate.

Nelson declared that the "Subpart A Subcommittee" he co-chairs decided early in its existence that "there is general consensus that the Common Rule is not 'broken.'" Yet in his system-level talk, he conceded that the power granted by the Common Rule to local IRBs results in arbitrary decisions (he called this "variability") and "well-intended overreaching." He noted that the only sure way to eliminate all risky research is to eliminate all research.

Other committee members, while not calling for changed regulations, were more explicit about current problems. Lisa Leiden, an administrator at the University of Texas, has heard from a lot of upset faculty, and she is looking for ways to relax oversight. This would include "unchecking the box," that is, declining to promise to apply federal standards to research not directly sponsored by a Common Rule agency. Without going into specifics, she suggested that the federal standards are too stringent, and that the University of Texas system, if freed from them, would craft exemptions beyond those now offered by the Common Rule. Overall, she is looking for ways to move from a "culture of compliance to one of conscience."

Liz Bankert, Nelson's co-chair of the subcommittee, also showed her awareness of the overregulation of social research, and her frustration with IRBs' emphasis on regulatory compliance. "I've gone to IRBs all over the country," she reported. "They are thoughtful, sincere, really intelligent groups. To have all this brainpower sucked into the vortex of minimal risk research is not efficient." It also contributes to what Bankert sees as a lack of mutual respect between IRBs and researchers. She blamed the problems on a "fear factor which has been developing over the past several years."

Both Leiden and Bankert implied that it was the interpretation of the regulations, not the regulations themselves, that caused the problems they have identified. Without saying so explicitly, they seemed to blame the OPRR of the late 1990s for scaring IRBs all over the country into letter-perfect regulatory compliance, at the expense of research ethics.

In contrast, two committee members seemed willing to reconsider the regulations themselves. David Strauss hoped for a system that was "clinically and empirically informed," terms that no one could apply to the regulation of social research. And he recognized that the regulations are not divine revelation. "We shouldn't be reviewing research that we don't think needs to be reviewed because some folks 30 years ago, at the end of a long, hot day, decided to use the word 'generalizable,'" he explained. "We have to have language that makes sense to us."

Finally, Tilden himself described the Common Rule as largely broken. He noted that the 1981 regulations--which have changed only slightly since--were accompanied by the promise that most social research would not have to undergo IRB review. The fact that so few social science projects escape review, he concluded, showed that the exemption system has collapsed. Rather than try to shore it up again, he suggested that concerns about confidentiality be separated from other risks, and that projects whose only risks involved breaches of confidentiality be evaluated only for the adequacy of their protections in that area.

This last proposal interests me, because when scholars talk seriously about the wrongs committed by social science researchers, they almost always come back to questions of confidentiality. If IRBs were restrained from making up other dangers--like interview trauma--and instead limited to more realistic concerns, they could potentially do some good.

In sum, I did not get the impression that, in Nelson's words, "there is general consensus that the Common Rule is not 'broken.'" Strauss and Tilden, in particular, seem to understand that the present system has wandered far from the stated intentions of the authors of the regulations, and from any empirical assessment of the risks of research or the effectiveness of IRBs. I hope they will continue to think about alternative schemes that would keep controls on medical experimentation without allowing federal and campus officials free rein to act on their fears.

Thursday, July 17, 2008

Political Scientists to the Rescue?

The final essay in the PS symposium is Sue Tolleson-Rinehart, "A Collision of Noble Goals: Protecting Human Subjects, Improving Health Care, and a Research Agenda for Political Science."

Tolleson-Rinehart addresses the question of quality improvement in health care, such as the checklist publicized by Atul Gawande. As she notes, her "essay is not about the influence of IRBs on political science research" and is therefore largely outside the scope of this blog. That said, she makes some observations relevant to the regulation of social science research.

While sympathetic to individual IRB members, Tolleson-Rinehart takes a dim view of the system as it now operates:


IRBs are understandably, and necessarily, driven by their limiting case: the possibility of invasive or dangerous procedures performed with vulnerable individuals or populations, without adequate regard for the hallmarks of protection, respect for persons, beneficence and nonmaleficence, and justice elucidated in what is widely known as the Belmont Report, and adopted from the time of the Belmont Report’s release as our fundamental ethical principles. It is not surprising, given IRBs’ role as the protector of the vulnerable, that the general IRB perspective on “minimal risk” and risk-benefit comparisons is a conservative one.

IRBs are also terrified. All IRB professionals I know work in real fear that their IRBs could be the next ones to cause an entire university’s research to be shut down. Shutdowns in recent years at Harvard, Duke, and Johns Hopkins give IRBs every reason to be fearful of making what OHRP considers to be an error. The reasonable suspicion that researchers regard IRBs as obstacles to, rather than facilitators of, research, must further IRB professionals’ sense of being embattled.

Reviewers on IRB committees are our very hardworking colleagues who are usually not given adequate release time to meet their committee responsibilities, and who are not able to benefit from truly extensive and nationally standardized training, nor do they have anything like a consensus metric for evaluating the spectrum of risk in different research contexts and for different populations.

All these sources of strain might determine the conservative approach to human subject protections. When social science research (including quality-improvement research) occurs in a biomedical context, or when health care and health policy require evaluation, the conservative stance can become dysfunctional. IRB assessments of my own students’ work provide a clear example of one of the ironic and unintended consequences of the absence of agreed upon and broadly understood metrics for assigning risk in different research contexts. IRBs have a comparative lack of familiarity with how social science methods— such as those used in quality-improvement research—may differ from some other methods of clinical research in the risks they pose to subjects. (508)


She adds that while her own students submit "substantially similar" protocols, their


IRB determinations range from declarations that the project is “not human subjects research” at all to “research that requires full consent,” with every intermediate determination. I frequently have students working simultaneously on very similar projects, one of whom must go through a tedious consenting process taking as much as four to five minutes of the beginning of telephone interviews (with the busy elites at the other end chafing at these long and unnecessary prefaces to the first question), while another student researcher is not required to secure any kind of consent at all. The single source of variation across these cases is not the research question or the method, but the IRB reviewers and their familiarity (or lack thereof) with in-depth interviewing and the unique protections already available, via one’s position, to the “powerful research subject.” (509)


This is damning enough, but Tolleson-Rinehart insists that the "point of this vignette is not to criticize IRBs." (509) Rather, she argues that


political science is well prepared to analyze and make normative (but evidence-based) recommendations about the politics of human subjects research. We can help define what it is, and the circumstances under which it is generalizable knowledge, even though it may not necessitate a conservative approach to protections. We can construct frameworks to achieve a more precise understanding of how to balance risks and benefits. Those frameworks might even lead us to formulate what would amount to a multidimensional risk scale. Finally, political science can contribute to the construction of theoretical and methodological underpinnings for the content of truly national standards for IRB training curricula. These would improve IRB reviewers’ understanding of different research methods to go beyond mere compliance with federal regulations and become real resources and decision aids for hard-pressed reviewers who may have to evaluate research they aren’t familiar with. (509)


Finally, Tolleson-Rinehart notes that while the Association for the Accreditation of Human Research Protection Programs (AAHRPP) and Public Responsibility in Medicine and Research (PRIM&R) mean well, both emphasize regulatory compliance over actual research ethics. She argues that political scientists can go beyond compliance questions to work on "a common epistemology of the philosophical, ethical, and political foundations of human subjects research." (510)

All of this sounds fine, and I hope that Tolleson-Rinehart and her colleagues get to work on her agenda. But as my recent exchange with Levine and Skedsvold suggests, the most immediate question for political scientists may be to figure out how to make the regulatory system more responsive to developments in research. We seem stuck with a 1974 law and 1991 regulations that cannot be changed, even when everyone agrees they need updating.

Monday, July 14, 2008

Can We Patch This Flat Tire?

The fourth article in the PS symposium is Felice J. Levine and Paula R. Skedsvold, “Where the Rubber Meets the Road: Aligning IRBs and Research Practice.” Both authors have been involved in IRB debates for several years, and this article reflects their sophisticated understanding of some of the issues involved. But for an article published in a political science journal, it is disappointingly insensitive to the power dynamics that govern IRB-researcher relationships.

Unlike symposium participants Tony Porter, Dvora Yanow and Peregrine Schwartz-Shea, Levine and Skedsvold do not question the premise that IRBs help promote ethical research. Instead, they assert that there is no fundamental conflict between IRBs and social science researchers: "federal regulations, professional ethics codes, and research practice may have shared goals but tend to speak with different languages—creating frustration and skepticism in a system that could potentially work quite well if transformations are made." (502) Based on that assertion, they suggest four such transformations, ranging from the bold to the timid.

Friday, July 11, 2008

The Biomedical Ethics Juggernaut

The third contribution to the PS symposium is Tony Porter, "Research Ethics Governance and Political Science in Canada."

Porter laments that "the history of research ethics governance in Canada reveals recurrent concerns expressed by political scientists and other SSH [social sciences and humanities] researchers that indicate the inappropriateness of the [ethics] regime for SSH research, and that also create the impression that the regime is a juggernaut that continues on its trajectory, relatively impervious to criticism." (495)

Porter then offers a helpful capsule history of the debates leading up to Canada's present policy statements. From an American perspective, they look pretty good. In contrast to the Belmont Report, which calls for informed consent and harms-benefit assessment without specifying the types of research to which it applies, Canada's Tri-Council Policy Statement declares:

certain types of research— particularly biographies, artistic criticism or public policy research—may legitimately have a negative effect on organizations or on public figures in, for example, politics, the arts or business. Such research does not require the consent of the subject, and the research should not be blocked merely on the grounds of harms-benefits analysis because of the potentially negative nature of the findings. (496)


Unfortunately, Porter finds that in practice, research ethics boards ignore such guidance. For his own article, he was asked to specify questions in advance, destroy data, and write long explanations of his research plans. And he warns of even stricter regulation ahead.

Porter attributes the imposition of biomedical ethics and regulation on non-biomedical research to the clout that biomedical researchers have in government and universities. There are more of them, they have more money, and they care more about ethics--since they face more serious ethical challenges. As a result, "the growth of a biomedically oriented but unified research ethics regime has appeared as a seemingly unstoppable trend in Canada." (498) Rather dismally, Porter suggests that the only thing that will stop that trend is its own ability to alienate researchers until "opposition on the part of SSH researchers will increase and the legitimacy of the arrangements will be damaged, as will the ability of the regime to elicit the degree of voluntarism and acceptance that is needed to sustain it." (498)

Perhaps for lack of space, Porter does not consider another possibility: that the social sciences will internalize the medical ethics implicit in the "unified research ethics regime." The American Anthropological Association took a big step in this direction in 1998, with the adoption of a code of ethics that comes close to rejecting the idea that research "may legitimately have a negative effect on organizations or on public figures." If the ethics regime grows stronger in Canada and elsewhere, and more social scientists follow the AAA's line, it may be that young people interested in "critical research," as Porter puts it (496), will seek careers in journalism, rather than in university scholarship. To use a Canadian example, if Russel Ogden were writing for a newspaper, no one would be blocking his research.

Monday, July 7, 2008

Ideas on Fieldwork Are Oldies but Goodies

The second article in the PS symposium on IRBs is Dvora Yanow and Peregrine Schwartz-Shea, "Reforming Institutional Review Board Policy: Issues in Implementation and Field Research."

The authors argue that "the character of its implicit research design model, embedded in its historical development, . . . renders IRB policy problematic for ethnographic and other field researchers." (483) Specifically, they contend that ethnographers are likely to have trouble meeting IRB demands that their protocols spell out procedures for selecting subjects, obtaining informed consent, disguising the identity of participants, balancing risks and benefits, and protecting the data they collect. (489)

Fieldwork, they argue, is just too unpredictable to be planned out so thoroughly in advance. They note,

Field researchers must enter others’ worlds, and are expected to do so with care and respect, and these worlds can be complex, unbounded, and in flux. Instead of rigidly delimited, predesigned protocols laying out research steps that are invariable with respect to persons and time, which subjects can be handed as they step into the world of the medical researcher, field research often requires flexing the research design to accommodate unanticipated persons and personalities and unforeseen conditions.


And, they find,

extending [the Belmont] principles to other, non-experimental research settings without making the underlying mode of science and its methodology explicit and without exploring their suitability to non-experimental scientific modes and methodologies has resulted in a hodgepodge of ethical guidance that is confused and confusing. Those guidelines do not give the many serious ethical problems of field research design and methodologies the sustained attention they deserve. (491)


All of this sounds perfectly sensible. What surprises me a bit is the authors' belief that they are the first to make these arguments:

The proposals that we have seen to date for reforming IRB policy (e.g., Carpenter 2007) all tinker with the existing system. None of them, to the best of our knowledge, has yet identified and engaged the underlying methodological frame—experimental research design—shaping that policy and its implementation. Policy reforms that address resource, organizational, and other features of the existing policy leave that framing and its prosecution in place. The impact of these policies on field research is, however, serious, extending IRB policy to these other forms of research in the absence of systematic evidence of their having harmed research participants. If we are to have policies to ensure the protection of human participants in all areas of research, those policies need to be suited to other than just experimental research designs in ways that are commensurate with their own potential for harms. It is vital that recognition of the misfit between existing experimentally based policy and field research design and methodologies also be on the table in discussions of IRB policy reform. (491)


In fact, ethnographers have been complaining about the imposition of experimental research ethics on non-experimental research for thirty or forty years. Anthropologist Murray Wax, in particular, eloquently distinguished experimental research from fieldwork in just the way that Yanow and Schwartz-Shea do. See, for example, his essay, "On Fieldworkers and Those Exposed to Fieldwork: Federal Regulations and Moral Issues," Human Organization 36 (Fall 1977): 321-28. Indeed, despite a long bibliography, Yanow and Schwartz-Shea cite none of the many IRB critiques written in 1978-1980, when the IRB regulations were being overhauled.

I don't fault Yanow and Schwartz-Shea too much for not knowing this history. It is one of the tasks of the historian to save others from having to reinvent the wheel, and I hope my book, when finished, will make such a contribution.

Yanow and Schwartz-Shea end their article with "A Call for Action," most of which is fairly vague. IRB critics are split between those who seek to "tinker with the existing system" and those who seek to exclude large categories of research from any IRB jurisdiction. Yet it's not even clear on which side of this divide these authors fall. For example, they want APSA to "Issue a statement calling for reform of IRB policy in a substantive way that protects the interests of APSA members." (492) Lovely, but what should such a statement say? They demand reform without defining it.

More promising is their call for more research. They note,

There is much that we do not know about the kind(s) of field research political scientists are doing today . . . We need more systematic, policy-oriented research about members’ field research practices, and we call on APSA to take the lead in conducting or facilitating it . . . (491)


They mention the possibility of an APSA handbook on ethical issues and current regulations.

This sounds a bit like the effort undertaken by the American Psychological Association in the preparation of its 1973 Ethical Principles in the Conduct of Research with Human Participants. As described in the first chapter of that book, rather than sit together and lay down some rules, the drafting committee surveyed the APA membership and assembled thousands of descriptions of real research projects that had raised ethical issues. The descriptions became the basis for an ethical guide directly relevant to the needs and values of the APA's members.

Around the same time, APSA itself undertook a similar effort, on a smaller scale, by conducting a study of actual cases in which researchers faced problems with confidentiality. Unfortunately, the full study seems not to have been published. A brief summary was published as James D. Carroll and Charles R. Knerr, "The APSA Confidentiality in Social Science Research Project: A Final Report," PS 9 (Autumn 1976): 416-419.

Whether or not a detailed ethical study would help ethnographic political scientists with their IRBs, it would be a great resource for scholars who want to do right by the people they study. I hope APSA--and other scholarly societies--will consider such a project.

Saturday, July 5, 2008

Human Subject of Biomedical Research Angry!

Peter Klein at Organizations and Markets notes a brief dialogue concerning medical research ethics in The Incredible Hulk. Interestingly, the scientist involved suggests not the weighing of autonomy, beneficence, and justice demanded by the Belmont Report, but rather a prioritization of autonomy, allowing the subject, rather than an ethics committee, to decide whether the potential benefits justify the risks. Some ethicists of the 1970s proposed such a prioritization, but the National Commission rejected it.

The only movie I can think of offhand in which a comparable scene depicts an ethical debate in the social sciences and humanities is Songcatcher. I haven't seen the movie, but even in the trailer they're arguing about when research becomes exploitation. Maybe I should watch the whole thing.

Friday, July 4, 2008

When Seligson is Non-Seligson

The first article in the July 2008 PS symposium is Mitchell A. Seligson's “Human Subjects Protection and Large-N Research: When Exempt is Non-Exempt and Research is Non-Research." While it's great to have someone interested in the contradictions of IRB regulations, the absurdity of the present regime seems to have left Seligson hopelessly confused, and his incoherent essay calls for both expansion and contraction of IRB authority.

Rather than trying to outline his argument, let me just list some of the questions to which he poses contradictory answers.

1. Should social science and humanities research follow the Belmont Report?



Early in his essay, Seligson attacks the Belmont Report as irrelevant to social science research, especially survey research. He particularly dislikes its call for an assessment of risks and benefits, noting


the problem of assessing risk is especially vexing for all of those who rely on large-N studies, typically in the field of survey research. Ironically, when only a handful of subjects are used in a campus laboratory-based experiment, the IRB is likely to approve the project with no objection. But survey research, which invariably relies on large-N studies, is viewed with suspicion by many IRBs simply because the risk, however small, is seen as being replicated 1,000 or more times, since most samples strive for confidence intervals of ±3% or better. Protocol analysts, who are used to seeing laboratory experiments and focus groups with samples of fewer than 100, are often taken aback when they confront the large sample sizes inherent in most survey research. And when they do, they question why such a large sample is needed. As a result, it is not at all uncommon to have IRB protocol analysts ask survey researchers to cut down their sample sizes. (479)
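
(An aside: Seligson's numbers check out. For a simple random sample at the 95 percent confidence level, with the worst-case response proportion of one-half, the standard margin-of-error formula yields roughly ±3 percentage points once the sample reaches about 1,000, which is why survey samples cluster around that size. The arithmetic below is my own back-of-the-envelope illustration, not anything from the article.)

$$\text{MOE}_{95\%} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{1000}} \approx 0.031, \text{ or about } \pm 3 \text{ points}$$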


He is also skeptical of the Common Rule, especially its protections for pregnant women, which he considers irrelevant and damaging to survey research. And he quotes--seemingly with approval--the AAUP's 2006 recommendation "that research whose methodology consists entirely of collecting data by surveys, conducting interviews, or observing behavior in public places be exempt from the requirement of IRB review.”

But then Seligson turns around, lamenting that "historians are not only exempt from IRB control, they have no requirement or even need to take human subjects protection training and pass tests on their knowledge of the principles and rules. Literature faculties often have no knowledge at all of human subjects protection." (480) He wants "faculty members in a broad range of institutions to familiarize themselves with the IRB regulations and to take the tests to demonstrate their knowledge of same," including "the Belmont principles." (482)

Why? Why should faculty members be required to familiarize themselves with guidelines that Seligson has told us are inapplicable to their work? Does he just want company in his misery?

2. Can researchers be trusted?



Seligson thinks that IRB regulations did not help survey research, because


Long before human subjects regulations and the invention of IRBs, survey researchers in all fields instinctually knew that by guaranteeing anonymity they would encourage frankness on the part of respondents. . . . Political scientists who carry out surveys have been aware for decades of the importance of guaranteeing anonymity to their subjects. (480)


If this track record weren't enough, he notes that governments and universities trust political scientists to behave ethically in other aspects of their work.


Even though political scientists conducting educational tests and surveys are exempt from federal regulation, they are not, after all, exempt because the federal government believes we cannot be trusted. What is so strange here is that in countless other important ways, we are trusted by that same federal government. When we grade tests taken by our students, we are not allowed to discriminate on the basis of race, creed, national origin, sexual preference, etc. Yet we are not asked to sign a statement saying that we will not discriminate before (or indeed after) we grade each exam or before we determine final grades. We hold office hours, but are not asked to submit an application prior to each office hour, not even prior to the start of each term, to the affirmative action offices on our campuses that we will not sexually harass students. We submit articles to conferences but are not asked to submit signed statements saying that we did not plagiarize the material. (481)


Since political scientists have proven more or less trustworthy in these areas, Seligson wants IRBs "to stop assuming . . . that we are all guilty of violations of human subjects rights unless we can prove otherwise." (482)

That's all very nice, but he's unwilling to extend the trust to researchers in other fields. He writes,


some humanists may be naive about the risks involved in disclosing names of subjects. One can imagine many kinds of risk to respondents. One such risk is dismissal of employment from an employer who either might not like the views expressed in the oral history or testimonio or deems them harmful to the company’s welfare. Potential employers might look at the oral history information and deny a position based on the statements contained therein. Another risk could be ostracism at work or in one’s neighborhood for expressing politically unpopular views. One can even imagine law enforcement officials using oral histories to prosecute individuals for revelations that suggest criminal behavior. (480)


In other words, Seligson does not trust interview researchers to have the same instinctual knowledge of ethics he ascribes to survey researchers, he ignores oral historians' sixty-year record in favor of hypothetical abuses, and he assumes historians are guilty of violations of human subjects rights unless we can prove otherwise. Perhaps he wants us to get approval before grading tests as well.

3. Can IRBs be trusted?



Overall, Seligson takes a dim view of those in charge of human subjects regulations, whom he terms "overzealous bureaucrats, both federal and on campuses," and whom he wants retrained. (482) He even relays the following anecdote:

A very senior IRB official at one university, in order to impress upon a political science faculty member his omnipotence, asked, “Do you ever use the library to read books about President Bush?” When the response was affirmative, he said, “Unless you file for IRB approval before opening those books, you will be held in violation, since Bush is a human, is living, and the books almost certainly contain personal information.” (480)


I'm willing to believe a lot of bad things about IRBs, but even I can't swallow a story like this without names and dates attached.

Yet while portraying IRB officials as power-mad bureaucrats, Seligson wants to expand their jurisdiction "to cover all studies of any kind that obtain data on living humans." (482) Wouldn't that include a book about President Bush?

Seligson concludes that "the roadmap to the future should be clear." Maybe it should be, but this article isn't helping. Fortunately, the other essays in the symposium are better researched and reasoned.

Thursday, July 3, 2008

Research Restrictions Not Confined to IRBs

John Mueller alerts me to Douglas Todd's article, "Academics Fight for B.C. Prof's Right to View Assisted Suicides," Vancouver Sun, 2 July 2008.

The article concerns the case of sociologist Russel Ogden, who studies assisted suicide. His employer, Kwantlen University College, has prohibited him from witnessing assisted suicides, according to the Canadian Association of University Teachers. The association wants scholars to have the opportunity to "understand politically unpopular behaviour."

The twist here is that the research ethics board (the Canadian term for an IRB) is not to blame; it approved Ogden's research three years ago.




Update, July 7. Professor Mueller alerts me to further coverage by the National Post and Inside Higher Ed.

Wednesday, July 2, 2008

OHRP Seeks Comment on Training and Education Programs

Rob Townsend kindly alerted me to the July 1 announcement in the Federal Register that OHRP is seeking comments on its requirements for human subjects training for investigators and IRB members. The summary follows; the full text of the announcement is online at http://edocket.access.gpo.gov/2008/E8-14917.htm. The deadline for comments is September 29.


[Federal Register: July 1, 2008 (Volume 73, Number 127)]
[Notices]
[Page 37460-37463]

DEPARTMENT OF HEALTH AND HUMAN SERVICES

Request for Information and Comments on the Implementation of Human Subjects Protection Training and Education Programs

AGENCY: Department of Health and Human Services, Office of the Secretary, Office of Public Health and Science, Office for Human Research Protections.

ACTION: Notice.

SUMMARY: The Office for Human Research Protections (OHRP), Office of Public Health and Science is seeking information and comments from affected entities and individuals about (a) Whether OHRP should issue additional guidance recommending that institutions engaged in human subjects research conducted or supported by the Department of Health and Human Services (HHS) implement training and education programs for certain individuals involved in the conduct, review, or oversight of human subjects research, or (b) whether HHS should develop a regulation requiring the implementation of such training and education programs. This request for information and comment stems from the 1998 report from the HHS Office of Inspector General (OIG) recommending that Federal requirements be enacted to help ensure that investigators and institutional review board (IRB) members be adequately educated about, and sensitized to, human subjects protections. More recently, the Secretary's Advisory Committee on Human Research Protections (SACHRP) recommended that OHRP require institutions to ensure that initial and continuing training is provided for IRB members and staff, investigators, and certain institutional officials. The implementation of such training and education programs might help to ensure that individuals involved in the conduct or review of human subjects research at institutions holding OHRP-approved Federalwide Assurances (FWAs) understand and meet their regulatory responsibilities for protecting human subjects.

DATES: Submit written or electronic comments by September 29, 2008.

ADDRESSES: You may submit comments by any of the following methods: E-mail: humansubjectstraining@hhs.gov. Include "Human Subjects Protection Training and Education" in the subject line. Fax: 301-402-2071. Mail/Hand delivery/Courier [For paper, disk, or CD-ROM submissions]: Michael A. Carome, M.D., Captain, U.S. Public Health Service, OHRP, 1101 Wootton Parkway, Suite 200, Rockville, MD 20852. Comments received within the public comment period, including any personal information, will be made available to the public upon request.

FOR FURTHER INFORMATION CONTACT: Michael A. Carome, M.D., Captain, U.S. Public Health Service, OHRP, 1101 Wootton Parkway, Suite 200, Rockville, MD 20852, 240-453-6900; e-mail Michael.Carome@hhs.gov.

Tuesday, July 1, 2008

Political Science Perspectives on IRBs

The July 2008 issue of PS, the journal of the American Political Science Association, offers a five-part symposium: "Protecting Human Research Participants, IRBs, and Political Science Redux." Over the next few days I plan to comment on each article, but for now it's safe to say that while each author offers a different diagnosis and prescription, none thinks that the current system is working well.