Sunday, September 20, 2015

Alarmist Claims about Public Administration Research

Two scholars from the University of South Africa claim that more than one in four articles they sampled in two journals of public administration involved “research of a more than minimal risk level.” This claim appears to be based on a misunderstanding of U.S. regulations.

[Jacobus S. Wessels and Retha G. Visagie, “The Eligibility of Public Administration Research for Ethics Review: A Case Study of Two International Peer-Reviewed Journals,” International Review of Administrative Sciences, September 3, 2015, 0020852315585949, doi:10.1177/0020852315585949.]

The error appears in Table 2, on p. 11 of the article. The authors list the following categories of “potential benefit and risks of the data-collection methods or techniques used”:

  • Individual interviews as a data-collection method (greater than minimal risk)
  • Group interviews as a data-collection method (greater than minimal risk)
  • Observation as a data-collection method (no risk to greater than minimal risk)
  • Conceptual research (no risk)

Apparently, Wessels and Visagie believe that under U.S. definitions, all individual and group interviews should be regarded as greater than minimal risk. This is not correct.

Though the authors claim to be following the categories established by the U.S. regulations (p. 12), they have not consulted OPRR’s 1998 Categories of Research That May Be Reviewed by the Institutional Review Board (IRB) through an Expedited Review Procedure, a list of the most common forms of minimal risk research. This list specifically includes “Research on individual or group characteristics or behavior (including, but not limited to, research on perception, cognition, motivation, identity, language, communication, cultural beliefs or practices, and social behavior) or research employing survey, interview, oral history, focus group, program evaluation, human factors evaluation, or quality assurance methodologies.”

Wessels and Visagie sampled 70 journal articles and report, “nearly 18 (26% of the total sample) of the articles reported on research of a more than minimal risk level.” This figure, repeated four times, is undoubtedly an overestimate, since it includes all interview projects, regardless of risk.

To be sure, public administration researchers may occasionally wander into challenging ethical territory. For instance, Wessels and Visagie cite an article by Iranian scholar Behzad Mashali, who asked government officials about their beliefs about corruption. In the United States, such a project would qualify as both exempt and minimal risk, but Iran jails journalists without due process, so what might its government do to employees who speak, even abstractly, about corruption?

I don’t know, but neither do Wessels and Visagie. Instead of doing careful analysis of the case studies they’ve identified, they have deployed a simple and erroneous criterion to produce a misleading claim.

1 comment:

Unknown said...

The purpose of our article was not to analyse or interpret the US regulatory language at all, but to “report on research aimed at assessing why Public Administration research is eligible for research ethics review at all”. Our conceptual framework was never intended to be a replication of the US regulatory language, but a deduction from our reading of the Belmont Report and several scholarly works related to the report and research ethics review, including our evolving institutional policy on research ethics. In conclusion, we do not agree that the “article relies on a misunderstanding of U.S. regulatory language”. We developed three risk-associated categories (no involvement, indirect involvement and direct involvement) that are context-specific. We acknowledged as a limitation of this study that “a simplistic picture of research ethics risk is presented in the article” (see page 17). We also stated: “we are in the process of developing more refined risk categories for research in Public Administration in current research” (see page 17).
It seems that your main concern is our equation of “more than minimal risk” to “direct involvement”, as you perceive it as a misrepresentation of the meaning of “more than minimal risk” as used in US regulatory practice. We deduced the risk categorisation used in this article from various discussions and interpretations of the principles as stated in the Belmont Report. In fact, we developed for the purpose of this article only three risk categories founded on the notion of ‘human participant involvement’. On p. 12 of our article we explicitly state that for “the purpose of this study, ‘minimal risk’ refers to the probability that harm or inconvenience anticipated in the research is not greater in itself than that encountered in ordinary daily life (OHRP, 1993); ‘more than minimal risk’ refers to research with the potential to harm or create inconvenience for human subjects”. We stated explicitly on page 12 of the article that the risk category ‘more than minimal risk’ reflects a potential to harm or to create inconvenience to human subjects. For the purpose of this research we regarded direct involvement of human participants through qualitative or quantitative methods as having that potential. Consequently, the conceptual framework presented in Table 2 has been used for the quantitative content analysis. We deliberately decided to use for the purpose of this study a broad risk categorisation based on the nature of involvement of human participants. In fact, our current research focuses specifically on a refined risk assessment framework, as stated previously. We take notice of the practice in countries such as the US and Canada. However, we do not regard these countries’ official risk categories and the definitions of these categories as the final word in this regard.

As we categorised the use of secondary data (indirect involvement) as of minimal risk, it makes logical sense that direct involvement through interview- or survey-based research be classified as having the potential for more than minimal risk. It depends on the classification system.

Considering the definitions in our risk classification system, the 26 per cent figure was not an overstatement; it indicates a ‘potential’ for more than minimal risk. In our discussion (on page 17) we acknowledged as a limitation of the study “the lack of information typically needed to determine the extent to which research procedures have met ethical principles …” We further acknowledged “our inability to assess the actual behaviour of researchers through the methods used”.
We never claimed that the findings indicated that real harm occurred during the research. The findings of case studies are contextual and a limitation of this type of research is that the findings are not generalizable.

Kobus Wessels and Retha Visagie
University of South Africa