[Anthony J. Langlois, "Political Research and Human Research Ethics Committees," Australian Journal of Political Science 46 (2011): 139-154, DOI: 10.1080/10361146.2010.544287. Also available as a preliminary preprint. Thanks to Professor Langlois for mentioning the essay on socialsciencespace.com.]
Langlois credits the drafters of the 2007 statement, who "took seriously the concerns of those working in the humanities and social sciences, and engaged directly with many of the issues and dilemmas that had been evident for some time." Most helpfully, they added a chapter on qualitative research. As a Human Research Ethics Committee (HREC) chair, he found that "Many of the more egregious offenses of the medical model of research ethics had been eliminated or clearly demarcated as not applicable."
But Langlois notes two major areas in which medical assumptions continue to vex non-medical researchers.
Methodological Misfit
The first is methodological. The medical model implicit in the National Statement assumes that researchers will work on their own schedule, so they can plan their proposals to be ready for the next monthly meeting of the HREC. Political researchers, by contrast, may react to breaking news, such as the April 2010 plane crash that killed much of Poland's political elite. These researchers have no time to get HREC clearance, especially when, as Langlois has been told, ethics committees "have flatly refused to consider urgent requests for expedited review of research proposals."
(Langlois does not mention the flip side of this issue. Just as political research may move too fast for ethics review, it may also move too slowly. As Christopher Leo has written, "Many researchers concerned with politics and policy stay in regular touch with politicians and public servants and, in the process, ask them questions the answers to which may well be used in future publications." Thus, rather than starting suddenly in response to a plane crash, research may begin so gradually that the researcher does not notice.)
Ethical Misfit
Langlois's second concern is ethical. He argues that the National Statement constructs research participants "as vulnerable private individuals toward whom researchers have especial responsibilities which are derived from the power they hold in their role as researchers. More than this, human research participants are conceptualised primarily in the role that they play within society as patients or clients or as some other form of unequal (meaning, that is, less powerful), dependent, vulnerable, private individual."
But for political researchers, "a research participant may be a voter—an ordinary private person. But they may also be an electoral representative, a Minister of the Crown, a judge, a public broadcaster, a private broadcaster with much political influence, a Chief Executive Officer of a company which employs a major percentage of the population, a terrorist, an enemy combatant, an economic adviser, a novelist, a Vice Chancellor . . . the list could go on interminably. For each of these, the relationship which the person has to society is different, not just in degree, but in kind, to the relationship which a medical patient or a therapist's client has to society."
Studying powerful people challenges the implied meanings of two of the National Statement's four principles: beneficence and respect.
What does it mean to require researchers to "minimise the risks of harm or discomfort to participants" in such research? As Langlois notes, "For a series of types of political research (and indeed for other activities which today are increasingly counted as research outputs when they are engaged in by academics, such as journalism), causing harm (or at least discomfort) may be the whole point of the exercise. If one is engaged in research about political corruption, human rights abuses, the influence of unions or industry barons over policy, branch stacking, political intrigue and so on, one may have an eminently justifiable intention to cause harm to one's research participant."
As for respect, Langlois writes, "Rather than the relationship between researcher and research participant being one of 'trust, mutual responsibility and ethical quality', as envisaged by the National Statement, it is far more likely to be one of suspicion, dissimulation, or even—. . . in relation to freedom of information laws—coercion."
In practice, Langlois thinks that HRECs already understand these problems, and they deal with these problems informally. But, he contends, "the National Statement is supposed to operate as a national guideline for HRECs, not as a document which spawns a host of under-the-counter practices and procedures which are undocumented, inconsistent with one another, and philosophically contradictory."
Solutions
Langlois notes that Australia intends to revise its National Statement "at least every five years." (Take that, you thirty-year-old 45 CFR 46!) In preparation for the 2012 revision, he offers two suggestions.
To fix the methodological problem, he suggests that alongside the current scheme of project accreditation, the statement create a system of researcher accreditation. "Rather than submitting an ethics clearance application before each 'project', researchers would be required to maintain a running log of research activity, and to submit an annual report of research activity," which could be scrutinized by an HREC.
And to fix the ethical problem, he suggests "providing guidelines for interpreting the Statement's research ethics review principles" as they apply to "research participants who were public, political, social, corporate and powerful agents of the body politik."
Both of these sound like significant improvements. I would, however, offer two caveats.
First, I am not sure that it is possible to close the ethical gap by merely "interpreting the Statement's research ethics review principles" in new ways. Beneficence is beneficence, and it is a pillar of medical research. It is not a principle of all research, and it might be better to exempt critical inquiry from this principle than to twist words so that exposing someone's misdeeds becomes a form of beneficence. (Canada's TCPS2 more or less takes this approach, stating, in effect, that a researcher should not harm a participant, except in those cases where a researcher should harm a participant.)
Second, I am not sure it is right to categorize the appropriate targets of such critical inquiry as "public, political, social, corporate and powerful agents of the body politik," that is, distinguishing the person rather than the action. A tobacco baron on vacation might deserve more privacy than a humble storekeeper selling illegal cigarettes.
I think I prefer the formula suggested in 1967 by Lee Rainwater and David Pittman, who called for sociologists to "study publicly accountable behavior. By publicly accountable behavior we do not simply mean the behavior of public officials (though there the case is clearest) but also the behavior of any individual as he goes about performing public or secondary roles for which he is socially accountable—this would include businessmen, college teachers, physicians, etc.; in short, all people as they carry out jobs for which they are in some sense publicly accountable." [Lee Rainwater and David J. Pittman, "Ethical Problems in Studying a Politically Sensitive and Deviant Community," Social Problems 14 (Spring 1967): 365.]
Overall, Langlois's essay is an elegant statement of how hard it is to adapt an ethics-review process designed for medical research to other disciplines. In other words, ethical imperialism runs strong.
2 comments:
Once again you have dragged out the straw man of "do no harm" to make the case for exempting social science research. The principle of beneficence does not mean that research cannot harm subjects. It means that the risks of the research are reasonable in relation to the benefits of the research. The benefits of the research can be directly to the subjects or the importance of the knowledge to be obtained. If you are studying malfeasance in a public official, then the social benefit of the knowledge to be obtained can justify the risk to the subject. The requirement to minimize risks means that the research uses the least risk possible to obtain valid resulting. The purpose of research is not to punish people, it is to find out important information. Inflicting more harm than is necessary is just cruelty.
With regard to his methodological concern, any review committee that refuses to review research on breaking events because it does not meet its schedule is not fulfilling its obligations. This is not because of regulations, it is because of institutional bureaucracy. There are lots of institutions that have developed procedures for reviewing breaking events.
Thank you for these comments.
"Do no harm" is not a straw man but a principle embedded in both the Belmont Report and the National Statement.
The Belmont Report offers a two-part definition of beneficence: "In this document, beneficence is understood in a stronger sense, as an obligation. Two general rules have been formulated as complementary expressions of beneficent actions in this sense: (1) do not harm and (2) maximize possible benefits and minimize possible harms."
The National Statement also offers a two-part definition:
" 1.6 The likely benefit of the research must justify any risks of harm or discomfort to participants. The likely benefit may be to the participants, to the wider community, or to both.
"1.7 Researchers are responsible for:
a. designing the research to minimise the risks of harm or discomfort to participants;
b. clarifying for participants the potential benefits and risks of the research; and
c. the welfare of the participants in the research context."
As Robert Veatch has noted, such formulas aren't really expressions of beneficence alone; they present two distinct principles: beneficence and nonmaleficence. ["Ranking, Balancing, or Simultaneity: Resolving Conflicts among the Belmont Principles," in Belmont Revisited: Ethical Principles for Research with Human Subjects, Childress, Meslin, and Shapiro, eds., 186-87.] And it is the nonmaleficence clauses that pose problems for social researchers.
What you describe is something else still. You write, "The principle of beneficence does not mean that research cannot harm subjects. It means that the risks of the research are reasonable in relation to the benefits of the research." But as Veatch explains, "if one believes that doing good and avoiding harm are merely two poles of a utility calculation, then the misleading term beneficence should be replaced with a term that avoids implying that only the positive dimension is being considered. The principle of utility is the obvious choice . . ." Thus, you are really describing utility, not beneficence.
I also suspect you are thinking not of the Belmont Report or the National Statement, but of the Common Rule, which does not use the term "beneficence." When you write of "the least risk possible to obtain valid result[s]," you may be thinking of 45 CFR 46.111(a)(1)(i), which indeed calls for risk minimization to be weighed against "sound research design."
Neither Belmont nor the National Statement offers such a qualifier. Can we agree that both documents would be improved by the insertion of a "sound research design" qualifier that would replace beneficence and nonmaleficence with utility?
I agree with you that the HRECs that failed to respond to urgent requests from researchers were not fulfilling their obligations under the National Statement (particularly section 5.1.28(d), which requires that "review processes and procedures are expeditious"). Chapter 5.6 of the National Statement does require "procedures for receiving, handling and seeking to resolve complaints about the conduct of review bodies in reviewing research proposals." It would be interesting to hear from Professor Langlois whether researchers whose work was derailed by dilatory ethics committees took advantage of these procedures.