Wednesday, January 18, 2012

Survey: 28% of Linguists Report Inappropriate IRB Demands

The response to the ANPRM from the Linguistic Society of America led me to an interesting article describing the results of an international survey of linguists about their encounters with ethics review. Though the author of the article claims that "in general, ethics regulation appears to be working," her data suggest that IRB review exacts a heavy cost in time and knowledge with little clear benefit.

[Claire Bowern, "Fieldwork and the IRB: A Snapshot," Language 86 (December 2010): 897-905 | DOI: 10.1353/lan.2010.0048]


Bowern surveyed "approximately 100 linguistic fieldworkers" in several countries (about half in the U.S. and Canada) about their responses to human subjects review.

She finds:

for the most part, the regulation of field linguistic research is working, and the problems are concentrated in just a few (though complex) areas. These primarily involve informed consent and its documentation, and provisions for anonymity. A rare but worrying problem is that some ethics boards are requiring the destruction of primary research materials.

More specifically, when asked whether they had changed their research as a result of IRB review, 71 percent of those responding had not, and many of those who had changed it made only minor modifications. Yet some number (Bowern does not say how many) had curtailed or abandoned a project because of IRB review.

Moreover, though 57 of 79 respondents reported no methodological conflicts,

Of the remaining twenty-two responses, nine mentioned problems with written consent forms. Some were required to use forms that in their view were too technical, or that exaggerated the risks to which participants would be exposed. Others were required to gain ‘informed consent’ in writing even when working with nonliterate research participants, and as a result both researcher and research participants felt that the consent process created an atmosphere of intimidation. One researcher mentioned having been reprimanded for submitting a consent form signed with an ‘X’.

The other most common problem involved the use of standardized questions. As mentioned above, several respondents reported that their IRB required them to clear all research questions in advance, which was incompatible with the emergent research method the researcher wished to use.

Others mentioned problems with IRBs requiring the destruction of primary data on an endangered language, and one mentioned an issue involving the secondary use of data. Two mentioned that their protocols had initially been rejected because of their IRB’s incorrect assumptions about the cultural background of research participants (for example, one person reported that their IRB had assumed that all speakers of nonstandard US English are African American, and therefore that the research was targeting a particular ethnic group). A few mentioned the area of payment (that an IRB required payment to research participants in cash (and recorded by receipt), which offended local customs). Another respondent gave the example of an ethics board requiring responses to be anonymous in language description where the consultants had expressed a wish to be identified and acknowledged for their work on their language.

Finally, a few people stated that the IRB process had made them more conscious of their ethical responsibilities toward research participants and their communities.

Moreover, several respondents had shied away from interviewing children and teenagers for fear of additional IRB requirements, though doing so could have helped document language shift among speakers of endangered languages. And some had destroyed materials to meet IRB demands, despite an ethical duty to preserve such materials for future researchers.

Bowern puts her findings in rosy terms:

In general, the review process appears to be working, in that more than two thirds of the respondents were seeking approval, gaining it with a minimum of protocol revision, conducting their research, and not reporting problems even when given the opportunity to do so anonymously. The majority of respondents were not required to alter their protocols; a few were asked to make minor changes, which did not affect the results and probably led to a better experience for the participants. Problems are confined to a few areas. This suggests that the ‘social science victim narrative’, as Stark (2007:785) has called the idea that social scientists are ill-served by IRBs, is not as prevalent in linguistics as we might have imagined from anecdotal reports.

Bowern apparently did not ask researchers if IRB changes protected participants, so she is guessing when she writes that those changes "probably led to a better experience for the participants." To the contrary, only "a few" respondents said that "the IRB process had made them more conscious of their ethical responsibilities toward research participants and their communities." Bowern does not compare the number of researchers with this response to the number who reported that participants felt intimidated by the consent process.

More significantly, saying that "more than two thirds" faced only minor problems is the same as saying that "more than a quarter" (22 of 79 respondents, or about 28 percent) had their work significantly disrupted by IRB meddling. I hope that not all linguists share Bowern's tolerance of a system that so commonly intimidates participants and diminishes knowledge.

2 comments:

Claire Bowern said...

Your summary of my article is quite misleading. The point is that the vast majority of the problems were clearly concentrated in two or three quite specific areas: 1) mandated data destruction; 2) inappropriately legalistic consent forms; and 3) written consent forms for working with participants who do not read and write. Part of the purpose of the article was to give linguists (particularly fieldworkers) resources to quote when preparing IRB applications so that it's clear that (1) is a totally inappropriate way to treat linguistic data; and that (2) and (3) are harmful to research participants.

The number of people who curtailed or abandoned projects because of IRB review was 3/94 - and two of those were curtailed because they assumed that it would be too much trouble to fill out the paperwork for working with children. But given that their protocols were already classed as non-exempt, there's no indication that this assumption was correct.

Many of the revisions were protocol clarification questions. A number of researchers also reported that their IRB did not require further modification to protocols once procedures had been clarified (I cannot recall how many mentioned this but I can check the original responses if you really want to know).

If I was tolerant of a system that intimidates participants and diminishes knowledge, I wouldn't be taking time out of my other research activities to help linguists deal with their IRBs. Showing that the clear majority of linguists following standard protocols for linguistic field research get their protocols approved is very useful: it provides linguists working with IRBs who don't do that with a way to show who's out of step, along with information about why field linguists work the way they do and how it protects research participants.

Zachary M. Schrag said...

I agree that your findings are very useful, and I hope that linguists will use them to show that IRBs are out of step. Our disagreement is whether a system that fails 28% of the time is "working."