Thursday, December 30, 2010

NIH Bioethicist Grady Questions IRB Effectiveness

JAMA has published an interesting exchange concerning the lack of data about IRB effectiveness.

[Christine Grady, "Do IRBs protect human research participants?," JAMA 304 (2010): 1122-1123; James Feldman, "Institutional Review Boards and Protecting Human Research Participants," and Christine Grady, "Institutional Review Boards and Protecting Human Research Participants—Reply," JAMA 304 (2010): 2591-2592.]

In the September 8 issue, Christine Grady of the Department of Bioethics, National Institutes of Health Clinical Center, quotes David Hyman's charge that "Despite their prevalence, there is no empirical evidence IRB oversight has any benefit whatsoever—let alone benefit that exceeds the cost." Grady is less blunt, but her message is the same:


Without evaluative data, it is unclear to what extent IRBs achieve their goal of enhancing participant protection and whether they unnecessarily impede or create barriers to valuable and ethically appropriate clinical research. This lack of data is complicated by the reality of no agreed-on metrics or outcome measures for evaluating IRB effectiveness. Although available data suggest a need for more efficiency and less variation in IRB review, neither efficiency nor consistency directly gauges effectiveness in protecting research participants. Protection from unnecessary or excessive risk of harm is an important measure of IRB effectiveness, yet no systematic collection of data on research risks, no system for aggregating risks across studies, and no reliable denominator of annual research participants exist. Even if aggregate risk data were easily available, it may be difficult to quantify the specific contribution of IRB review to reducing risk because protection of research participants is not limited to the IRB.

Serious efforts are needed to address these concerns and provide evidence of IRB effectiveness.


The December 15 issue features a reply by James Feldman of the Boston University School of Medicine. Feldman makes two points.

First, he doubts that IRBs cause that much trouble:


The critique of IRBs by Bledsoe et al, which was cited as evidence that they stifle research without protecting participants, is based on a single-site report of the results of an e-mail survey mailed to 3 social science departments with a total of 27 respondents. The evidence that IRBs have "disrupted student careers [and] set back tenure clocks" should also meet a reasonable standard of evidence.


OK, but what is that standard of evidence? In the absence of federal funding to study systematically a problem created by federal regulations, how much are frustrated researchers expected to do to demonstrate the problem? In other words, how many horror stories would Feldman need to change his views?

Having insisted that evidence is necessary to show the costs of IRB review, Feldman then asserts that no evidence is needed to show its benefit:


I believe that the effectiveness of IRBs in protecting human participants from research risks is analogous to preventive medicine. It is difficult to derive evidence that can quantify the effectiveness of a specific preventive intervention (new cases of HIV prevented? new injuries prevented?). However, evidence of preventable injury or illness makes a case for the need for effective prevention. Similarly, the tragic and prevalent cases of research abuse and injury make a compelling case for more rather than less review by IRBs that are independent, experienced, and knowledgeable.


As Grady points out in her reply to the letter, even if we accept the analogy, the IRB system does not meet the standards we impose on preventive medicine. She writes, "clinicians and public health officials do rely on evidence of the risks, benefits, and effectiveness of an intervention in preventing HIV or injuries or other conditions to justify adopting one particular preventive intervention rather than another and to defend the necessary investment of resources."

Exactly. As it stands, IRBs are the Avandia of ethics.

Sunday, December 26, 2010

First, Do Some Harm, Part III: Loosies in San Francisco

The third recent document illustrating the problem of applying the Hippocratic maxim to non-medical research is Leslie E. Wolf, "The Research Ethics Committee Is Not the Enemy: Oversight of Community-Based Participatory Research," Journal of Empirical Research on Human Research Ethics 5, no. 4 (December 2010): 77–86. It offers a clear example of the kind of valuable research that is impeded by simplistic medical ethics.

The background is this: in an effort to discourage smoking, California law prohibits the sale of single cigarettes, known as "loosies." Nevertheless, store owners in predominantly poor, African American neighborhoods in San Francisco sell them. In 2002, a group of University of California, San Francisco researchers teamed up with a public health group and residents of the neighborhoods to study the problem, creating the Protecting the 'Hood Against Tobacco (PHAT) project.

At first, they proposed merely to observe the sale of loosies, and they got the UCSF IRB's approval. But then the researchers realized that this was impractical; it would require observers to loiter for a long time in the hopes of seeing a spontaneous sale. So they returned to the IRB, this time asking that members of the community be allowed to request cigarettes and record the result. The IRB refused to allow this under UCSF auspices, though it could not stop community members from proceeding on their own.

In 2006, four of the researchers--R. E. Malone, V. B. Yerger, C. McGruder, and E. Froelicher--complained in print about their treatment at the hands of the IRB. ["'It's like Tuskegee in reverse': A case study of ethical tensions in institutional review board review of community-based participatory research," American Journal of Public Health 96 (2006): 1914-1919.] While conceding that some readings of federal regulations could justify the IRB's actions, they suspected that the IRB was not simply protecting the human subjects of research:

The early IRB referral to the university's risk management department whence we were referred to the legal department, suggests that the project was regarded in some way as a legal risk and a financial threat to the university. The subsequent legal analysis—which opined that community research partners might be hurt (and thereby possibly put the university at an economic or legal risk because it would be considered a university project)—supports this interpretation. This raises the question about whether such concerns represent an institutional conflict of interest, because the decision about whether the study was ethical appears to be associated with institutional self-protection. (1918)


In her article, Wolf states, "I do not intend to provide a rebuttal to Malone et al. or a defense of the REC decision." She does, however, seek to explain the IRB's decision:

In this particular case, there were a relatively small number of stores in a limited geographic area. As a result, identification of stores might be possible, even if their identities were withheld from publication, at least within the community, which could lead to adverse consequences for them. In addition, the information sought pertained to illegal activity. The researchers had obtained an agreement from the district attorney that the office would not prosecute store owners or clerks for illegal activity uncovered by the study. While this agreement is helpful in minimizing the risk of prosecution, it did not fully eliminate it; another district attorney might not honor the agreement or the information from the study could trigger monitoring by law enforcement after the study. In light of these circumstances, the UCSF REC [research ethics committee, i.e., IRB] felt that the store personnel and owners must be afforded protection under the federal regulations.


She continues,


Some of the problems between the UCSF REC and the PHAT researchers may have stemmed from confusion regarding the definition of "community." For those involved in the PHAT study, the community comprised those residents of Bayview–Hunters Point who had participated in the research collaboration through their engagement in deciding on a research question, and developing and carrying out the research protocol. The REC, on the other hand, had a broader view of what constituted the community. In addition to the Bayview–Hunters Point residents who had collaborated with the academic researchers, the REC felt it had to consider the interests and well-being of those who owned, operated, and worked for the stores from whom data were obtained. Even if they were not human subjects as defined by the federal regulations, they were members of the Bayview–Hunters Point community whose interests and trust in research could be jeopardized if the REC approved the researchers' amendment regarding illegal sales of loose cigarettes. Thus, the REC felt an ethical obligation to consider the interests of the broader community in addition to the interests of the community members participating directly in the study conduct. (79)


This is not a credible explanation of UCSF's actions. If the IRB was worried that documenting the sale of loosies by identifiable stores would lead to consequences too adverse to be accepted, it would have blocked the original proposal, in which the researchers hoped to provide that documentation solely through observation. That the IRB would allow the damaging information to be collected through observation but not through the actual purchase of cigarettes suggests that the denial was based either on a determination that the second version turned the store employees into human subjects under the federal definitions, or on a more general form of institutional ass-covering.

Wolf presents the whole affair as a misunderstanding. "Most of these challenges can be met if we engage in an open dialogue among RECs, academic researchers, and community partners, both formally and informally," she writes. "If the parties engage each other openly and respectfully, their collaboration will enable important CBPR research to go forward with appropriate review and oversight." (82) In other words, "What we've got here is a failure to communicate."

But the researchers understood that they faced not a failure of communication, but an ethical debate. "From a biomedical ethics perspective that is based on principlism and proceduralism, the IRB's decision appears reasonable, even necessary," they wrote. (1917) The problem is that applying biomedical ethics to social questions led to a "decision [that] protected the interests of the tobacco industry and other industries whose representatives wink at illegal cigarette sales." (1918)

On the other hand, the researchers' 2006 article does fail to articulate the core ethical problem. When the researchers seek to justify work that might harm store owners and employees, they defend their research proposal in terms not far removed from those of a medical researcher.

First, they emphasize "the guaranteed immunity from prosecution" based on the study. (Individuals won't be hurt.) Second, "Ethicists already consider it reasonable that concern for individuals may become secondary to public health priorities during public health emergencies." (OK, individuals may be hurt, but people are dying!)

Finally, "the object of our study was to assess institutional practices within a community, not the responses of individuals within those institutions—a distinction the IRB dismissed as irrelevant . . . By their very nature, institutions have distinct legal and social identities that are something other than a collection of individual legal and social identities, and institutional practices transcend and do not necessarily equate with individual beliefs or behaviors." This puts more distance between the ethics of medical research and that of social research, but it is a hard distinction to maintain when the businesses involved are neighborhood convenience and liquor stores, whose institutional practices are in fact quite likely to equate with individual beliefs or behaviors.

What is needed is a justification for harming individuals, even deliberately doing so. In 1967, Lee Rainwater and David J. Pittman offered one ["Ethical Problems in Studying a Politically Sensitive and Deviant Community," Social Problems 14 (Spring 1967), 363]:


sociologists have the right (and perhaps also the obligation) to study publicly accountable behavior. By publicly accountable behavior we do not simply mean the behavior of public officials (though there the case is clearest) but also the behavior of any individual as he goes about performing public or secondary roles for which he is socially accountable—this would include businessmen, college teachers, physicians, etc.; in short, all people as they carry out jobs for which they are in some sense publicly accountable. One of the functions of our discipline, along with those of political science, history, economics, journalism, and intellectual pursuits generally, is to further public accountability in a society whose complexity makes it easier for people to avoid their responsibilities.


In the absence of such clear statements in favor of doing harm, we get articles like Wolf's, suggesting no limits on an IRB's ability to restrict research that could cause someone harm. Particularly chilling is Wolf's response to the charge that it was silly for the IRB to forbid UCSF researchers from participating while allowing community partners to proceed. (As the researchers noted, this had the effect of depriving the community partners of expertise while depriving store owners of the protections worked out between university researchers and prosecutors.)

Wolf concedes this point and wishes for "a consistent set of ethical standards for all research"; she "join[s] the call of others to extend the scope of the federal regulations." (82) In other words, this law professor wants a world in which IRBs can forbid citizens from taking notes about the crimes committed in their neighborhoods, lest their notes "lead to adverse consequences" for the criminals.

I would prefer the world of Rainwater and Pittman, in which those business owners who break the law and poison their communities face some risk of exposure. To achieve such a world, researchers must, at times, intend harm.

Thursday, December 23, 2010

First, Do Some Harm, Part II: The AAA Ethics Task Force

In mid-October, the Ethics Task Force of the American Anthropological Association solicited comments on the following text, a section of a draft Code of Ethics now being written:


Do No Harm

Anthropologists share a primary ethical obligation to avoid doing harm to the lives, communities or environments they study or that may be impacted by their work. This includes not only the avoidance of direct and immediate harm but implies an obligation to weigh carefully the future consequences and impacts of an anthropologist’s work on others. This primary obligation can supersede the goal of seeking new knowledge and can lead to decisions not to undertake or to discontinue a project. Avoidance of harm is a primary ethical obligation, but determining harms and their avoidance in any given situation may be complex.

While anthropologists welcome work benefiting others or increasing the well-being of individuals or communities, determinations regarding what is in the best interests of others or what kinds of efforts are appropriate to increase well-being are complex and value-laden and should reflect sustained discussion with those concerned. Such work should reflect deliberate and thoughtful consideration of both potential unintended consequences and long-term impacts on individuals, communities, identities, tangible and intangible heritage and environments.


As of December 13, 33 people (presumably all anthropologists, but I'm not sure) had posted comments. The comments are often nuanced, making it hard to say whether they endorse the language or not. But they broke down roughly as follows:

Do No Harm



Significantly, the most wholehearted supporters of the "do no harm" proposal are those who uncritically embrace the Belmont Report and the Common Rule. "'Do no harm' is an IRB principle, and so it should be in our code," writes Bethe Hagens. Four other responses, from Chip Colwell-Chanthaphonh, mkline, Robert T Trotter II, and Simon Craddock Lee, all seem to suggest that the AAA code should conform to those documents, without asking much about their origins or their fit to the practices and beliefs of anthropologists.

Four other responses--from Barbara Rose Johnston, Seamus Decker, socect, and Vicki Ina F. Gloer--endorse Hagens's idea that anthropologists should "intend no harm." Despite the Belmont Report's description of "the Hippocratic maxim 'do no harm' [as] a fundamental principle of medical ethics," this form is more faithful to the report's overall section on beneficence.

Do Some Harm



Eight responses--almost as many--appear to reject the "do no harm" idea on the grounds that neutrality is impossible and that anthropologists should not hesitate to harm those who deserve it. "A blanket edict to 'Do No Harm' could easily lead to a professional paralysis when one considers that a few steps away from the person giving you this interview is someone who will not like, will want or need to fight, or will suffer consequences for what is said much further down the line," writes Benjamin Wintersteen. Murray Leaf concurs. "Do no harm is fine as principle of medical practice," he writes, "where you are working with a single individual. It is nearly meaningless when you (we) work with human communities, in which what is good and what is harm is usually in contention. As some of these posts suggests, what we do is often a matter of helping some while undermining the position of others. No harm at all, in such a context, would almost always be also no help at all–and no effect at all."

Bryan Bruns offers an example. "I work, in conjunction with communities and a government agency, to design and support a process in which communities are likely to, in a reasonably democratic way, act to restrain the behavior and thereby (harm) reduce the benefits of a few people (upstream irrigators, large landowners) who currently take advantage of others, it’s not clear how a principle of 'do no harm' would allow any practical engagement."

I would say that the responses by Dimitra Doukas, Joan P Mencher, Moish, Noelle Sullivan, and Ray Scupin all fall in this general category of respecting critical inquiry. Margaret Trawick's comment is harder to categorize. "I have been teaching 'Do no harm' to my students as the first ethical principle for anthropological fieldwork, for many years," she writes. "It is a difficult principle to follow, precisely because you never know what might cause harm, and therefore you have to THINK about what you are doing in the field more carefully than you might in everyday life. Good intentions are not enough. Additionally, 'harm to whom' is a good question . . . Sometimes to protect and advocate for one party (e.g., Untouchables in India) is to, at the least, offend some other party – e.g. high caste Hindus." Given her understanding of this problem, I'm not sure why she teaches "do no harm" rather than something like "think about whom you are harming."

It's the Wrong Question



An even greater number of responses suggest that, in the words of Carl Kendall, "This principle is way too vague and self-directed to be practically useful." Kendall hints, perhaps cynically, that anthropologists need one set of ethical principles to "pass IRB muster" and a second set "to protect communities and fieldworkers." Carolyn Fluehr-Lobban argues that "'Harm' should be problematized—are there agreed upon universal standards of harm, and where is there discussion of reasonable disagreement."

James Dow rejects the medical language of IRBs: "'Do no harm' is an good ethical principle to be applied to individual social relationships, which we hope that we understand; however, there is a problem when applying it to larger societies and cultures." Likewise, David Samuels writes that "The place where you need to get informed consent is at the point at which you have turned people into characters in your story. The medicalized pre-framing of the IRB process doesn’t cover that at all."

Taken as a whole, the responses suggest that only a minority of those commenting embrace the Belmont Report and the IRB process as enthusiastically as the AAA did in its 2004 statement, which presents the active involvement of IRBs as a positive good. I hope the Task Force recognizes this and takes the opportunity to reconsider the AAA's overall position on IRB review.

[Hat tip to Alice Dreger. For a historical perspective on another discipline's efforts to craft a research ethics code, see Laura Stark, "The Science of Ethics: Deception, the Resilient Self, and the APA Code of Ethics, 1966–1973," Journal of the History of the Behavioral Sciences 46 (Fall 2010): 337–370.]

Wednesday, December 22, 2010

First, Do Some Harm, Part I: Denzin's Qualitative Manifesto

Three recent documents demonstrate the confusion that arises when people try to apply medical ethics to non-medical fields. I will describe them in individual entries.

In June 2010, Norman Denzin, Research Professor of Communications at the University of Illinois at Urbana-Champaign, published The Qualitative Manifesto: A Call to Arms (Left Coast Press). Chapter five seeks


to outline a code of ethics, a set of ethical principles for the global community of qualitative researchers. I want a large tent, one that extends across disciplines and professions, from anthropologists to archeologists, sociologists to social workers, health care to education, communications to history, performance studies to queer and disability studies.


Part of the impetus for this effort is Denzin's recognition that IRB guidelines may not match "guidelines grounded in human rights, social justice considerations" or disciplinary codes. He is familiar with the debate concerning IRBs, having read the Illinois White Paper, the AAUP reports, and "even a humanities and IRB blog where complaints are aired."

Denzin is also familiar with oral historians' concerns that IRBs impose inappropriate requirements, as well as with statements of ethics from other qualitative researchers. He seeks to synthesize what he has learned in a footnoted dialogue, part of a "one-act play" entitled "Ethical Practices":


SCENE FOUR: Oral Historians

. . .

Speaker Two: We do not want IRBs constraining critical inquiry, or our ethical conduct. Our commitment to professional integrity requires awareness of one's own biases and a readiness to follow a story, wherever it may lead. We are committed to telling the truth, even when it may harm people (Shopes, 2007a, p. 4).

Speaker One: When publishing about other people, my ethics require that I subject my writing to a fine-mesh filter: do no harm (Richardson, 2007, p. 170).

Speaker Two: So there we have it. A set of methodological guidelines. (83)


No. What we have is a debate between Linda Shopes, a historian, and Laurel Richardson, a sociologist, about the ethical responsibility of an interviewer to a narrator. Their perspectives reflect important differences between their professions. They also reflect the particulars of the book in which Richardson's statement appears, an account of the last months of a dying friend--hardly the typical oral history or sociological study.

Denzin turns a blind eye to this debate, instead seeming to endorse both sides. In the play, Speaker Two states that "Beneficience, do no harm, is challenged in the oral history interview, for interviews may discuss painful topics, and they [sic] have the right to walk away at any time." That seems to endorse Shopes's position. But the book closes with a proposed ethical code that leans toward Richardson, calling on all qualitative researchers to "strive to never do harm." (122)

How can Denzin read and reprint historians' arguments, then reject them without even realizing he is doing so? Is the historians' position so hard to understand? Or is the lure of innocuity so powerful?

Sunday, December 12, 2010

Van den Hoonaard Reviews Ethical Imperialism

Will van den Hoonaard reviews Ethical Imperialism for the Canadian Journal of Sociology. The CJS blog asks that readers not quote or cite the advance version now online, but I suppose linking is OK.

Happy Fourth Birthday, Institutional Review Blog!

Friday, December 10, 2010

George Mason University Posts Consultant Report

The George Mason University Office of Research & Economic Development has posted the following:


The Huron Consulting Group's final report on the Office of Research Subject Protections and the Human Subjects Review Board is now available. The results are available in two formats: A comprehensive Final Report (PDF) and a Faculty Report Presentation (PPT)


As a Mason faculty member, I am involved in the continuing discussions of human research protections at the university. I will therefore refrain from comment except to applaud the university administration for posting these documents where they can inform decisions here and at other universities.

For comparable reports, see Institutional Review Boards at UC [University of California]: IRB Operations and the Researcher’s Experience, Report of the University Committee on Research Policy (UCORP), endorsed by the Academic Council, April 25, 2007, and University of Cincinnati, Final Report: Task Force on IRB and Compliance Roles and Responsibilities, May 2009.

Tuesday, December 7, 2010

Menikoff Passes the Buck

Joseph Millum, bioethicist at the National Institutes of Health, and Jerry Menikoff, director of the Office for Human Research Protections, acknowledge the widespread dissatisfaction with present human subjects regulations and argue that "ethics review could be streamlined under the current regulations if institutions, IRBs, and researchers adhered strictly to the definition of human subjects research and used the available options for exemptions, expedited review, and centralized review—options that remain underused in biomedical research." But they put too much of the blame for this overregulation on IRBs and research institutions rather than on their own agencies.

[Joseph Millum and Jerry Menikoff, "Streamlining Ethical Review," Annals of Internal Medicine 153, no. 10 (November 15, 2010): 655-657.]

Millum and Menikoff call for institutions to offer exemptions and expedited review whenever possible, not just to placate researchers but to protect research participants:


Following these measures is unlikely to reduce human subject protections. The categories of research that are exempt or eligible for expedited review are unlikely to include highly unethical studies. For example, studies in these categories almost always pose no more than minimal risk to participants, which should ameliorate concerns about participant harm. Thus, the absolute probability of increased use of these measures leading to more unethical research is low.

In addition, IRBs always have constraints on their time and resources, and any time they spend reviewing one protocol takes away time from reviewing others. Institutional review boards should prioritize their time to focus on protocols that are more likely to generate ethical issues but need a way to determine whether a study will raise ethical issues without actually reviewing the full protocol. The regulatory measures we have detailed identify categories of research that are unlikely to be ethically problematic. Using them therefore frees up resources for reviewing riskier research.


So far so good. But the article overlooks four ways in which the federal government, and OHRP in particular, encourages institutions to overregulate.

1. Federal Agencies Don't Gather Data



To address the problem of overregulation, it would be nice to know how big a problem it is. The best that Millum and Menikoff can say is that "A 1998 report found that for each category of exempt or expedited research, 25% to 77% of U.S. IRBs 'practice some form of review that was more rigorous than specified by the regulations.' There appear to be no data that contradict this picture today."

While that's true enough, should OHRP be satisfied with a single, 12-year-old report sponsored by NIH? Might not a more regular system of data collection help us understand why IRBs act the way they do? And who is to sponsor this, if not OHRP and NIH?

2. OHRP Presents Exemption Determinations as Difficult, But Does Nothing to Clarify Them



OHRP's guidance on "Exempt Research Determination" claims that "an institutional policy that allowed investigators to make their own exemption determinations, without additional protections, would likely risk inaccurate determinations." So far as I can tell, this claim is based on no empirical data. But it serves to present exemption determination as a risky business, one apt to go wrong, thus discouraging institutions from applying the exemptions.

It's true that the exemptions are poorly written. For example, 45 CFR 46.101(b)(2) exempts many studies unless "information obtained is recorded in such a manner that human subjects can be identified," but 46.101(b)(4) exempts projects when "information is recorded by the investigator in such a manner that subjects cannot be identified." Why does "by the investigator" appear in (b)(4) but not (b)(2)? If a research participant takes notes during an interview, or writes about the event in his diary, is the researcher now subject to IRB review?

If OHRP were serious about getting projects exempted, it would tell everyone what the exemptions are supposed to mean. Then it would study--not guess--whether researchers are capable of applying those standards.

3. Federal Officials Model Overregulation



Millum and Menikoff ignore the terrible guidance issued by the federal government over the years. For example, in June 2008 OHRP and NIH contributed to a report on Expedited Review of Social and Behavioral Research Activities. That report offered thirteen hypothetical projects it termed "eligible for expedited review," overlooking the fact that eight of the thirteen were in fact exempt from any review.

Ironically, Millum and Menikoff provide the latest example of this problem. They write that "the work of a researcher who interviewed patients with HIV/AIDS about their medications and recorded their names would not be exempt under category 2, because disclosure of the patients’ HIV/AIDS status could harm them. However, if the researcher conducted the interviews anonymously and never recorded the patients’ names or other identifying information, the study would probably be exempt."

No: if the researcher conducted the interviews anonymously and never recorded the patients' names or other identifying information, then the study would be exempt, not merely "probably" exempt. If Menikoff, free to invent whatever hypothetical study he wishes, and immune from OHRP sanction, still can't manage to declare this project unambiguously exempt, then how can he expect real institutions to exempt real projects?

4. OHRP Second-Guesses Exemption Determinations



Millum and Menikoff write that "regulatory bodies need to reassure the research community that their primary concerns lie not with meeting bureaucratic requirements but with genuinely protecting human participants."

Yes, that would be lovely. But in the most high-profile case concerning exemptions in recent years, OHRP rejected Johns Hopkins University's determination that Peter Pronovost's effort to reduce the incidence of catheter-borne infections in ICUs was exempt from IRB regulations. After getting slammed in the press, OHRP retreated and decided that "regulations that govern human subjects research no longer apply." Still, the lesson for IRBs was that if they err on the side of exemption, they risk the whip.

All that happened before Menikoff's arrival. But in the two years under Menikoff, OHRP has kept issuing determination letters warning not of failures of genuine protection but instead of failures to meet bureaucratic requirements:


HHS regulations at 45 CFR 46.109 require that continuing review of research be conducted by the institutional review board (IRB) at intervals appropriate to the degree of risk, but not less than once per year. The regulations make no provision for any grace period extending the conduct of the research beyond the expiration date of IRB approval. We determine that during the period from 1991-1999 when the above-referenced research was conducted at UI, continuing review for the above-referenced research did not always occur at least once per year. For example, the first continuing review occurred on February 27, 1992 and subsequent continuing reviews apparently occurred on April 1, 1993, May 12, 1994, May 10, 1995, June 27, 1996, June 26, 1997, July 9, 1998, and July 15, 1999.


The article ends with the disclaimer that "The views expressed in this commentary are those of the authors and are not necessarily those of the U.S. Department of Health and Human Services or its operating divisions, the National Institutes of Health and the Office of the Assistant Secretary for Health." No, an article designed "to extol the virtues of [streamlining] measures" is certainly not the view of HHS and its operating divisions.