Jerry Menikoff, MD, JD
Office for Human Research Protections
1101 Wootton Parkway, Suite 200
Rockville, MD 20852
Dear Dr. Menikoff:
Thank you for the opportunity to comment on the advance notice of proposed rulemaking, “Human Subjects Research Protections: Enhancing Protections for Research Subjects and Reducing Burden, Delay, and Ambiguity for Investigators” (docket ID number HHS–OPHS–2011–0005). I am grateful to all of the ANPRM’s creators for taking this first step toward a much needed reform of the present system of research regulation.
I write these comments as the author of “How Talking Became Human Subjects Research: The Federal Regulation of the Social Sciences, 1965-1991,” Journal of Policy History 21 (2009): 3-37, and Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965-2009 (Baltimore: Johns Hopkins University Press, 2010), both of which were graciously cited in the ANPRM. I also edited the 2011 special issue of the Journal of Policy History on human subjects research, which featured Susan Reverby’s influential article on the Public Health Service experiments in Guatemala. And since 2006, I have edited the Institutional Review Blog, http://www.institutionalreviewblog.com.
I contributed to the response to the ANPRM submitted earlier by the American Association of University Professors, and I was consulted by the authors of the response submitted by the American Historical Association. I endorse those two responses wholeheartedly.
In addition to the comments in those documents, I wish to offer the attached observations, which reflect only my views and may not represent those of the AAUP, AHA, George Mason University, or any other institution.
Zachary M. Schrag, PhD
Associate Professor of History
Zachary M. Schrag
Department of History and Art History
George Mason University
Comments on advance notice of proposed rulemaking,
“Human Subjects Research Protections: Enhancing Protections for Research Subjects and Reducing Burden, Delay, and Ambiguity for Investigators”
25 October 2011
Question 4. IRBs Need Help Assessing Risk
Question 4: Should the regulations be changed to indicate that IRBs should only consider “reasonably foreseeable risks or discomforts”?
A. I support this change and suggest adding the word “significant” to describe the risks that may be considered.
The ANPRM acknowledges that “it is not clear that . . . [IRB] members have appropriate expertise regarding data protections.” That is true, but it is equally unclear that IRB members have appropriate expertise regarding physical risk, psychological risk, the benefits of particular restrictions, or any of the other factors they would need to weigh to perform their appointed tasks.
The ANPRM notes that when identical proposals are submitted to multiple IRBs, researchers can expect “widely differing outcomes regarding the level of review required.” But that is just a small part of the problem. Given identical proposals, IRBs will disagree about a great many things. This is not just a problem for multi-site studies; it is also an indicator that IRBs are making many or most of their decisions based on guesswork. That is, if the same proposal given to three IRBs comes back with three wildly different demands for changes, at some level it means that two of the three have offered bad advice. In extreme cases, a committee may applaud part of an application as “eloquent and well-grounded in the literature,” only to fault the same section when the same application is reviewed after revisions.
Jay Katz pointed to the basic problem back in 1973:
The review committees work in isolation from one another, and no mechanisms have been established for disseminating whatever knowledge is gained from their individual experiences. Thus, each committee is condemned to repeat the process of finding its own answers. This is not only an overwhelming, unnecessary and unproductive assignment, but also one which most review committees are neither prepared nor willing to assume.
This statement is as true today as it was then.
To remedy this problem, I call for regulations and guidance that require IRBs to use available empirical evidence when making decisions. I concur with the 1999 recommendation of the Working Group of the Human Subjects Research Subcommittee of the National Science and Technology Council: “In determining whether there might be a reasonable risk or damage related to divulging the sensitive information, etc., it is not enough that there be merely some hypothetical possible risk that can be construed. Rather, the risks resulting from disclosure must be readily appreciable and significant.”
Regardless of the specific adjectives and adverbs used, any regulation should be accompanied by guidance recommending that IRBs base their decisions on empirical evidence. If a researcher can show that a given method is in regular use, and an IRB cannot show that the method regularly abuses research participants, the research should proceed.
IRBs should also document the reasons for their decisions, something they now appear to do only rarely.
The University of Texas’s 2009 report, “Trust, Integrity, and Responsibility in the Conduct of Human Subjects Research,” encourages IRBs to act based on “evidence‐based research” and “empirical studies.” The federal government should do the same.
Finally, I recommend the establishment of a national clearinghouse to disseminate the empirical findings of researchers and IRBs.
Question 25. The Common Rule Should Cover Only Biomedical and Behavioral Research
Question 25. Are there certain fields of study whose usual methods of inquiry were not intended to or should not be covered by the Common Rule (such as classics, history, languages, literature, and journalism) because they do not create generalizable knowledge and may be more appropriately covered by ethical codes that differ from the ethical principles embodied in the Common Rule? If so, what are those fields, and how should those methods of inquiry be identified? Should the Common Rule be revised to explicitly state that those activities are not subject to its requirements?
Answer: The Common Rule currently claims to regulate all “research” even though it has no statutory authority to do so. It is this over-reaching that diminishes protections for research subjects while imposing burden, delay, and ambiguity on investigators. The Common Rule should be rewritten to emphasize its applicability only to biomedical and behavioral research.
I therefore endorse the proposal made by the American Anthropological Association, in its comments on the ANPRM, to limit the Common Rule to the oversight of two kinds of work:
1. Biomedical and other study procedures involving risks of physical harm to human participants: that is, more specifically, harm defined in 76 FR 44515 II(A) as “characterized by short term or long term damage to the body such as pain, bruising, infection, worsening current disease states, long-term symptoms, or even death.”
2. Human experimentation and other methodologies whose results depend for their validity on limiting or controlling the information available to research subjects: that is, study designs reliant either on the passive withholding of information concerning what the study is about or on the active provision of misinformation: e.g., the use of placebos in biomedical clinical trials; the use of confederates in behavioral research concerning competition, conformity and the like; and the deceptive presentation of fictional narratives as actual news reports in social research concerning public opinion.
This definition would achieve many of the objectives of the ANPRM and bring the regulations into compliance with the underlying statute and the intent of Congress.
Statutory authority covers only biomedical and behavioral research
As the ANPRM notes, the Common Rule draws its statutory authority primarily from 42 USC 289, which calls for the establishment of IRBs “to review biomedical and behavioral research involving human subjects.”
The ANPRM also cites 42 USC 300v, which established the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research. Section 300v(a)(1) of that title calls for membership in the commission to be split among three groups: (A) “individuals who are distinguished in biomedical or behavioral research,” (B) “individuals who are distinguished in the practice of medicine or otherwise distinguished in the provision of health care,” and (C) “individuals who are distinguished in one or more of the fields of ethics, theology, law, the natural sciences (other than a biomedical or behavioral science), the social sciences, the humanities, health administration, government, and public affairs.”
Thus, federal law distinguishes between “biomedical or behavioral research” on the one hand and “the social sciences, the humanities, health administration, government, and public affairs” on the other, and it covers only the former categories. As the deputy general counsel of the Department of Health, Education, and Welfare put it in 1979, “if Congress had wished . . . to cover all human subjects research, rather than just biomedical and behavioral, it could have done so.”
The Common Rule should reflect the underlying statutes and apply only to biomedical and behavioral research.
The ANPRM notes that “While physical risks generally are the greatest concern in biomedical research, social and behavioral studies rarely pose physical risk but may pose psychological or informational risks. Some have argued that, particularly given the paucity of information suggesting significant risks to subjects in certain types of survey and interview-based research, the current system over-regulates such research.” I agree with the latter assessment.
Both the statute and the regulations were designed to address concerns raised in the 1973 Senate hearings. As the secretary of HEW explained in 1976,
The types of risk situations against which the regulations were designed to protect are suggested by the areas of concern which were addressed in the legislative hearings held in conjunction with the enactment of section 474 of the Public Health Service Act, 42 USC 289 1-3 (added by Pub. L. 93-348) . . .
The subjects addressed included the use of FDA-approved drugs for any unapproved purpose; psycho-surgery and other techniques for behavior control currently being developed in research centers across the nation; use of experimental intrauterine devices; biomedical research in prison systems and the effect of that research on the prison social structure; the Tuskegee Syphilis Study; the development of special procedures for the use of incompetents or prisoners in biomedical research; and experimentation with fetuses, pregnant women, and human in vitro fertilization . . .
The hearings did not address the risks of survey, observation, and interview-based research. Nor has the experience of subsequent decades shown that this kind of research is particularly risky. One can find exceptions, but these are rare. Stuart Plattner put it well in 2006: “In all the years I was responsible for human-subjects issues at NSF, I never learned of one case in which a respondent was actually harmed from participation in anthropological research.” He concluded, “although the possibility of harm to participants in ethnographic research is real, the probability of harm is very low.”
As the ANPRM notes, “Over-regulating social and behavioral research in general may serve to distract attention from attempts to identify those social and behavioral research studies that do pose threats to the welfare of subjects and thus do merit significant oversight.”
Different scholarly disciplines adhere to different ethical codes
The U.S. IRB system was designed by experts in the ethics of medical and psychological experimentation. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research wrote the Belmont Report without any regard to the ethical codes developed by journalists or scholars in the social sciences and the humanities, and it is a poor fit for their work. Its insistence on the equitable selection of subjects is simply irrelevant when a researcher chooses people based on their unique characteristics. More significantly, instructions to “do no harm” cannot apply to investigative journalism and other forms of critical inquiry. IRBs have consistently proven themselves unable to make this distinction.
The new “excused” category may not work for research in which real names are the norm
When institutions do impose IRB authority on oral history and other research in which participants are typically identified, they can generally rule it exempt under the current Common Rule. But this category may disappear under the present proposal.
If that happens, this kind of research would be an awkward fit for the new “excused” category, which emphasizes privacy. While the new category rules “allow subjects to authorize researchers to disclose the subjects’ identities, in circumstances where investigators wish to publicly recognize their subjects in published reports, and the subjects appreciate that recognition,” its clear emphasis is on preserving the confidentiality of information. The present Common Rule does not require anonymity, but its emphasis on confidentiality has led IRBs to impose inappropriate demands on researchers, such as requiring oral historians to anonymize their narrators or destroy recordings and transcripts. If, as is likely, institutions determined that real-name research was not eligible for the new excused category, IRBs would find themselves reviewing research the ANPRM considers only a distraction.
Alternatively, institutions might allow real-name research to proceed under the excused category. But if that were the case, then historians, journalists, and folklorists would find themselves submitting forms that said only that they did not intend to follow most of the provisions of the excused category. This would be nothing but a waste of time and paper.
Non-generalizability has proven an unreliable tool
ANPRM’s Question 25 hints that regulators are considering letting journalists, historians and other humanists off the hook by declaring their work to be non-generalizable, and therefore not subject to regulation under the Common Rule.
Yet non-generalizability has proven an unreliable tool. OHRP has muddied the waters, apparently contending—for example—that “Open ended interviews are conducted with surviving Negro League Baseball players in order to create an archive for future research” would constitute generalizable research because “the creation of such an archive would constitute research under 45 CFR part 46 since the intent is to collect data for future research.”
This conflicts with OHRP’s earlier determination that “oral history interviews, in general, are not designed to contribute to generalizable knowledge.” If generalizable means that some future researcher might conceivably use the information, then nothing is non-generalizable. Do not daily newspapers; criminal, civil and congressional investigations; and disease monitoring all create an archive for future research?
Moreover, as Robert Townsend of the American Historical Association has noted,
The argument [that oral history is not generalizable] prompted some derision from outside the field, from academics who interpreted the phrase to say simply “history is not research.” (As a case in point, the vice president for research at my own university, after a fairly contentious meeting on the subject, wished me well on my “non-research dissertation.”)
We also received a number of complaints from within the discipline. Some historians argue that history does contribute generalizable knowledge, even if it bears little resemblance to the scientific definition of the word. And faculty members at history of medicine departments and in the social science side of history warned that this position undermined both their institutional standing and their ability to obtain grants. They made it clear that however finely worded, stating that history did not constitute research in even the most bureaucratic terms could have some real financial costs to the discipline.
More fundamentally, no one can be sure what generalizable means. It is left undefined in the Common Rule. The Belmont Report version is longer, but hardly more helpful:
The term “research” designates an activity designed to test an hypothesis, permit conclusions to be drawn, and thereby to develop or contribute to generalizable knowledge (expressed, for example, in theories, principles, and statements of relationships). Research is usually described in a formal protocol that sets forth an objective and a set of procedures designed to reach that objective.
This goes some way to distinguish research from diagnosis of an individual patient—the main goal of that section of the Belmont Report—but I am not even sure of that; I would hope that an MRI operator diagnosing a patient has an objective and a set of procedures designed to reach that objective.
Nor does it distinguish science from journalism, which regularly permits conclusions to be drawn and expresses statements of relationships.
Conversely, qualitative social scientists debate whether their work is generalizable. So “generalizable” covers research that the National Commission did not want covered and leaves uncovered research that the Commission did seek to regulate.
Serious observers have noted the problem. Tom Beauchamp recently complained that “generalizable knowledge,” like other terms, “can be understood in several ways.” Rena Lederman has found that “the regulatory definition did little to resolve the very ambiguities within medical practice for which it was designed. Heroic efforts of clarification can be found in works that interpret the Common Rule for IRBs. Nevertheless, to this day it continues to be a frequent topic of debate in IRB circles.” And in 2008, David Strauss of the Secretary’s Advisory Committee on Human Research Protections complained that “we shouldn’t be reviewing research that we don’t think needs to be reviewed because some folks 30 years ago, at the end of a long, hot day, decided to use the word ‘generalizable’ . . . We have to have language that makes sense to us.”
The American Anthropological Association’s proposal makes sense.
Questions 68 and 69. The federal government should monitor both the costs and benefits of IRB review
Question 68: With regard to data reported to the Federal government:
a. Should the number of research participants in Federally funded human subjects research be reported (either to funding agencies or to a central authority)? If so, how?
Answer. I agree with the ANPRM’s intent “not to expand the information that has to be reported.” Qualitative researchers should not be expected to produce quantitative data. For a researcher whose results already depend on careful counts, reporting figures to federal regulators is relatively easy. But an ethnographer who attends mass events should not be expected to guess how many people he or she observed. Nor should an interviewer have to tabulate how many people he or she interviewed in a given year.
b. What additional data, not currently being collected, about participants in human subjects research should be systematically collected in order to provide an empirically-based assessment of the risks of particular areas of research or of human subjects research more globally?
Answer. These questions seem to envision collecting only data on adverse events and unanticipated problems that come about as a result of research. What about adverse events and unanticipated problems that result from IRB review? In order to know if the system is working well, we must measure its costs as well as its benefits. Thus, the government should establish a formal mechanism for registering researcher and participant complaints about inappropriate restrictions and requirements.
Question 69: There are a variety of possible ways to support an empiric approach to optimizing human subjects protections. Toward that end, is it desirable to have all data on adverse events and unanticipated problems collected in a central database accessible by all pertinent Federal agencies?
Answer. The most recent comprehensive data on the IRB system as a whole come from the mid-1970s survey conducted for the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. If new data are collected, they should be analyzed and presented in a form accessible to the general public as well as federal agencies.
Questions 9 and 73: Both the Belmont Report and the Common Rule require constant interpretation and periodic revision
Question 9: How frequently should a mandatory review and update of the list of research activities that can qualify for expedited review take place? Should the list be revised once a year, every two years, or less frequently?
A. Not only should the expedited-review list be reviewed at least every three or four years, but so should the entire regulation. And the regulation should be overhauled every twelve years in the light of experience.
Question 73: To what extent do the existing differences in guidance on research protections from different agencies either facilitate or inhibit the conduct of research domestically and internationally? What are the most important such differences influencing the conduct of research?
A. The National Science Foundation and the Agency for International Development have posted interpretations that discourage overregulation of classroom activities, oral history, journalism, and other endeavors. This shows the value of reducing OHRP’s role as the sole lead agency for the interpretation of the Common Rule. Instead, agencies that sponsor non-health research should have a greater voice.
The ANPRM contemplates creating “a standing Federal panel” to review and update the list of research activities that can qualify for expedited review.
This is far too modest a proposal. In fact, a standing federal panel should be empowered to offer guidance on all elements of the regulations and to revise the regulations themselves periodically. Moreover, the Belmont Report should be retired and replaced with a statement on research ethics that can be updated to reflect current thinking and experience.
No ethical or legal statement can address all future cases, so a sound regulatory system will provide for interpretation and revision. Congress understood this need when it called for a National Advisory Council for the Protection of Subjects of Biomedical and Behavioral Research that would “review periodically changes in the scope, purpose, and types of biomedical and behavioral research being conducted and the impact such changes have on the policies, regulations, and other requirements of the Secretary for the protection of human subjects of such research.” Similarly, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research did its work expecting that its reports would be constantly revised and interpreted. As commissioner Albert Jonsen has written:
my colleagues and I fully anticipated that an Ethical Advisory Board (EAB) would be established as a standing agency within the Department of Health and Human Services. We had so recommended in almost all of our reports. We expected that such a Board could be the living oracle of Belmont’s principles. Just as our Constitution requires a Supreme Court to interpret its majestically open-ended phrases, and, if I may allude to my own Catholic tradition, as the Bible requires a living Magisterium to interpret its mystic and metaphoric message, so does Belmont, a much more modest document than Constitution or Bible, require a constantly moving and creative interpretation and application.
Congress abolished the National Advisory Council in 1978. Since then, there have been various federal bodies charged with reviewing the protection of human subjects. But these have been largely ineffectual. For one thing, they have lacked the power to issue guidance. Under the current system, SACHRP can make recommendations to OHRP, but only OHRP can issue the guidance. This creates a bottleneck.
In some cases, federal regulators have explicitly refused to offer clear decisions about murky regulatory language. In 2003, for example, Dr. Carome of OHRP issued guidance about the applicability of the Common Rule to oral history that left both historians and university administrators unsure how to proceed. Pressed to clarify his stance, he stated that OHRP was too busy to do so.
What little guidance the federal government has provided often takes the form of quasi-official statements that are not binding on institutions and therefore have little effect. For example, in 1999, a Working Group of the Human Subjects Research Subcommittee of the National Science and Technology Council offered some sound advice on interpreting the Common Rule. But the guidance came with the warning that it had been prepared by “a working group of individuals who attend the Human Subjects Research Subcommittee, Committee on Science, National Science and Technology Council. The document does not necessarily represent the position of any of their respective agencies.” Similarly, the National Science Foundation website presents some sensible interpretations in its “Frequently Asked Questions and Vignettes: Interpreting the Common Rule for the Protection of Human Subjects for Behavioral and Social Science Research.” However, their force is undercut by the disclaimer that “These notes represent the personal opinion of the Human Subjects Research Officer and do not supersede the official documents referred to.”
What we need is a permanent federal body with the power to issue prompt, clear, official guidance. It must be more representative than the current bodies such as SACHRP or the Presidential Commission for the Study of Bioethical Issues. In 2003, the National Research Council’s Panel on Institutional Review Boards, Surveys, and Social Science Research concluded that “Any committee or commission established to provide advice to the federal government on human research participant protection policy should represent the full spectrum of disciplines that conduct research involving human participants.” Neither the presidential commission nor SACHRP approach this standard.
The Canadian Panel on Research Ethics offers a promising model. Because its members are appointed by the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council (NSERC), and the Social Sciences and Humanities Research Council (SSHRC), the panel is more representative than its U.S. counterparts. And it has shown itself to be responsive to the concerns of researchers. The second edition of its Tri-Council Policy Statement, released in 2010, offers sensible guidance on organizational research, Internet research, and qualitative research—topics that U.S. bodies have scarcely addressed. Better still, it has promised ongoing interpretations of the regulations, the first of which appeared in August 2011. Australia likewise expects its guidelines to be “reviewed at least every five years.”
I therefore recommend that the United States emulate the best features of the Canadian system of regulation: representation by all parties, regular updates to both regulations and ethical standards (every three to five years), and full-scale reconsideration at intervals of no more than twelve years. To this end, I endorse the American Anthropological Association’s call for
the creation of a commission constituted specifically of social scientists (e.g., sociologists and the like), humanistic social researchers (e.g., cultural anthropologists and the like), and humanists (e.g., historians, legal scholars, and the like). Rather than adapting strategies developed to protect biomedical information—which are fundamentally incompatible with core intellectual and ethical commitments of humanistic social studies—this commission would be tasked with developing alternative guidance appropriate for their fields.
The ANPRM notes that “although the regulations have been amended over the years, they have not kept pace with the evolving human research enterprise.” I expect the next decades to witness equally dramatic changes, so I suggest a mechanism for periodic revision of the regulations. While it is true that the ANPRM “offers a rare opportunity for needed modernization,” there is no reason for opportunities to remain rare.
 Linda C. Thornton, “The Role of IRBs in Music Education Research,” in Linda K. Thompson and Mark Robin Campbell, eds., Diverse Methodologies in the Study of Music Teaching and Learning (Charlotte, North Carolina: Information Age, 2008), 201-214; Jim Vander Putten, “Wanted: Consistency in Social and Behavioral Science Institutional Review Board Practices,” Teachers College Record, 14 September 2009; Alexander Halavais, “Rethinking the Human Subjects Process,” DMLcentral, June 14, 2010, http://dmlcentral.net/blog/alexander-halavais/rethinking-human-subjects-process.
 Brian Mustanski, “Ethical and Regulatory Issues with Conducting Sexuality Research with LGBT Adolescents: A Call to Action for a Scientifically Informed Approach,” Archives of Sexual Behavior 40 (April 29, 2011): 673-686.
 Jay Katz, testimony, U.S. Senate, Quality of Health Care—Human Experimentation, 1973: Hearings before the Subcommittee on Health of the Committee on Labor and Public Welfare, Part 3 (93d Cong., 1st sess., 1973), 1050.
 James D. Shelton, “How to Interpret the Federal Policy for the Protection of Human Subjects or “Common Rule” (Part A),” IRB: Ethics and Human Research 21 (November-December 1999), 7.
 Mustanski, “Ethical and Regulatory Issues.”
 Survey Research Center, Institute for Social Research, University of Michigan, “2009 Follow-Up Survey of Investigator Experiences in Human Research,” December 2010, table 15.
 American Anthropological Association, “Comments on Proposed Changes to the Common Rule (76 FR 44512),” 19 October 2011, http://www.regulations.gov/#!documentDetail;D=HHS-OPHS-2011-0005-0431
 Zachary M. Schrag, Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965-2009 (Baltimore: Johns Hopkins University Press, 2010), 100.
 Department of Health, Education, and Welfare, “Secretary’s Interpretation of ‘Subject at Risk,’” Federal Register 41 (28 June 1976), 26572.
 Stuart Plattner, “Comment on IRB Regulation of Ethnographic Research,” American Ethnologist 33 (2006), 526.
 Jonathan T. Church, Linda Shopes, and Margaret A. Blanchard, “Should All Disciplines Be Subject to the Common Rule?,” Academe, May-June 2002, 62-69; Ruth E. Malone, Valerie B. Yerger, Carol McGruder, and Erika Froelicher, “‘It’s Like Tuskegee in Reverse’: A Case Study of Ethical Tensions in Institutional Review Board Review of Community-Based Participatory Research,” American Journal of Public Health 96, no. 11 (November 2006): 1914-1919.
 “Michael Carome’s Email”, n.d., http://www.nyu.edu/research/resources-and-support-offices/getting-started-withyourresearch/human-subjects-research/forms-guidance/clarification-on-oral-history/michael-caromes-email.html.
 Robert B. Townsend, “AHA Today: Getting Free of the IRB: A Call to Action for Oral History”, August 1, 2011, http://blog.historians.org/news/1382/getting-free-of-the-irb-a-call-to-action.
 T. L. Beauchamp, “Viewpoint: Why our conceptions of research and practice may not serve the best interest of patients and subjects,” Journal of Internal Medicine 269 (April 2011): 383-387.
 Rena Lederman, “Comparative ‘Research’: A Modest Proposal concerning the Object of Ethics Regulation,” PoLAR: Political and Legal Anthropology Review 30, no. 2 (November 1, 2007): 305-327.
 Secretary’s Advisory Committee on Human Research Protections, Transcript, Sixteenth Meeting, 16 July 2008, 264.
 National Science Foundation, “Frequently Asked Questions and Vignettes: Interpreting the Common Rule for the Protection of Human Subjects for Behavioral and Social Science Research,” http://www.nsf.gov/bfa/dias/policy/hsfaqs.jsp; “Protection of Human Subjects in Research Supported by USAID,” 26 December 2006 (http://www.usaid.gov/policy/ads/200/200mbe.pdf).
 Public Law 93-348.
 Albert R. Jonsen, “On the Origins and Future of the Belmont Report,” in James F. Childress, Eric M. Meslin, and Harold T. Shapiro, eds., Belmont Revisited: Ethical Principles for Research with Human Subjects (Washington: Georgetown University Press, 2005), 10.
 Schrag, Ethical Imperialism, 157.
 Shelton, “How to Interpret the Federal Policy,” 6-9.
 National Science Foundation, “Frequently Asked Questions and Vignettes.”
 National Science Foundation, “Human Subjects,” http://www.nsf.gov/bfa/dias/policy/human.jsp
 Constance F. Citro, Daniel R. Ilgen, and Cora B. Marrett, eds., Protecting Participants and Facilitating Social and Behavioral Sciences Research (Washington: National Academies Press, 2003), 8.
 Panel on Research Ethics, TCPS 2 Interpretations, http://www.pre.ethics.gc.ca/eng/policy-politique/interpretations/Default/.
 Australian National Health and Medical Research Council, Australian Research Council, and Australian Vice-Chancellors’ Committee, National Statement on Ethical Conduct in Human Research (Canberra: Australian Government, 2007), 97.
 Ezekiel J. Emanuel and Jerry Menikoff, “Reforming the Regulations Governing Research with Human Subjects,” New England Journal of Medicine, 25 July 2011.