Having agreed that IRB review sometimes produces unnecessary delays, particularly when multiple IRBs must sign off on a collaborative project, the workshop participants found that while some issues may be fairly intractable, as they are for any established institution, many of the difficulties that IRBs and investigators encounter stem from reinventing the wheel locally and from a general lack of transparency in the process of approving human subjects research. The elements required to make good decisions about planned research tend to be obscure and unevenly distributed across IRBs. From shared vocabularies between IRBs and investigators, to knowledge of social computing contexts, to a clear understanding of the regulations and of empirical evidence of risk, many of the factors that delay the approval of protocols and frustrate researchers and IRBs alike could be addressed if the necessary information were more widely accessible and easily discoverable.
Rather than encourage the creation of national or other centralized IRBs, greater awareness and transparency would allow local solutions to be shared widely. Essentially, this is a problem of networked learning: how can investigators, IRB members, and administrators quickly come to terms with best practices in DML research? Not surprisingly, we think digital media in some form can help in that process of learning.
That is not an implausible idea. Plans for IRBs to share problems and solutions date back to the early 1970s, and they resulted in such institutions as PRIM&R and the journals IRB: Ethics & Human Research and, more recently, the Journal of Empirical Research on Human Research Ethics. But these are fairly low-bandwidth channels: infrequent conferences and journal issues, with a few dozen sessions or articles per year, devoted primarily to biomedical research. That is hardly enough to sustain a discussion of an issue like social computing.
Alternatively, there are online exchanges, like the IRB Forum. But these may lack the rigor of the journals. Rather than offering the "empirical evidence of risk" that Halavais wants, they can amplify unrealistic fears. As Norman Bradburn testified before the National Bioethics Advisory Commission in 2000:
What is bothersome to me is that -- and the trend that I see in IRB's -- is that they are becoming more and more conservative, that is there is a kind of network at least in the ones that -- there is a kind of -- I do not know what you call it -- ListServ kind of network that administrators of IRB's communicate with one another and they sort of say here is a new problem, how do you handle that, and then everybody sort of responds. And what happens is the most conservative view wins out because people see, oh, gee, they interpret it that way so maybe we better do it too. So over time I have seen things getting more and more restrictive . . .
May I suggest, then, that without proper supervision, digital media can be a liability rather than an asset? The challenge for Halavais and his colleagues is to build a conversation that combines immediacy with scholarly care.