By Dr Elizabeth Pearson
Terrorist propaganda videos, extremist narratives, child sexual exploitation images: these are amongst the materials that content moderators across social media platforms deal with on a daily basis. They are also materials that academic researchers engage with in order to better understand particular forms of online crime. Over the past decade, the emotional and psychological harms of working on risky topics online have been scrutinised in the case of, for instance, those working for social media platforms or the police. Meanwhile, far less attention has been paid to the harms experienced by academic researchers, even when they are dealing with the same types of materials.
Some two years ago, the REASSURE (Researcher Security, Safety and Resilience) project team set out to document the experiences of academic researchers of online extremism and terrorism. We knew, as a community, that some colleagues were increasingly keen to talk about mental health issues that they attributed, in part, to the online extremist and terrorist content they had worked with. (The in-depth REASSURE report documenting experiences within the terrorism and extremism online research community is forthcoming.)
But we were also aware that other sectors were managing these issues better. Outside of academia we could see an ever-growing awareness of the potential harms of repeated and/or long-term exposure to risky online material – not just among content moderators and others in tech, but also among journalists, police, humanitarian aid workers, and a variety of other professionals exposed to traumatic material or stories. In some cases, those improved responses were prompted by legal action from employees. More importantly, all of these sectors had developed more formalised responses to such harms than currently exist within research institutions – whether universities or think tanks.
On 6 September 2022, REASSURE held a workshop with a small group of stakeholders professionally tasked with addressing online extremism and terrorism or similar content, including representatives from government, policing, journalism, the charitable sector, think tanks, and academia. The task was to discuss the harms experienced, efforts to mitigate them, and how those efforts have evolved over time. The aim was to initiate a discussion to guide further in-depth research that will lead not just to documenting harms, but to providing good practice guidelines to ensure online extremism and terrorism researchers’ future safety and security.
Learning from outside academia
Given the sensitive nature of some of the issues discussed, the workshop was conducted under the Chatham House Rule. But there are some key ideas we can share:
The institutional box-ticking and reliance on ad hoc support via informal networks that emerged from our interviews with university- and think tank-based online extremism and terrorism researchers were unfamiliar to the other professionals invited to the workshop. The representatives from the police, journalism, and charities had all been engaging with the issue of harms for some time and had instituted formal mechanisms to prevent them.
Finding the right language to describe those harms mattered. REASSURE interviewees told us about nightmares, poor boundaries, social withdrawal and, in some cases, depression. But few used terms such as vicarious or secondary trauma, or PTSD to describe their symptoms – even when those terms were a good fit. Nor did REASSURE interviewees discuss ‘trauma-informed responses’ within their institutions or help from their institutions to avoid ‘moral injury’. The language used to describe harms, risks, and responses to them had evolved amongst professionals in other sectors over the course of the past decade. REASSURE, however, found few academic institutions that framed the possible risks to researchers in such terms.
Locating good practice is clearly at least partly about trial and error. Representatives of other sectors at the workshop had tried different ways of helping employees. Responses ranged from mandatory counselling, to break-out rooms, to Slack chats, to enforced weeks-long breaks from particular types of content collection, consumption, and analysis. Not all of these responses had proved initially helpful. In the best cases, employers listened to staff, and adjusted their responses accordingly. Workshop attendees emphasized that not all teams are the same and so may respond differently to the same interventions.
Protecting people isn’t all about practices; it’s also about cultures. REASSURE found that wider unhealthy working cultures in academia compounded the harms caused by researching hateful and violent content. Poor work-life balance, bullying, impact- and visibility-related pressures, and precarity for junior researchers – all part of the working conditions across much of academia – increased the risk of harm from research on online extremism and terrorism. Take the issue of working hours, for instance. A recent study found the threshold for harm from engaging with extreme content was ten hours per week. It is not unusual for academic researchers, particularly early career researchers, to do this amount of research in a day.
Everybody at the workshop, and indeed everyone interviewed for our forthcoming REASSURE report, agreed that they enjoyed and valued their work, however difficult. Knowing that you are contributing to knowledge and having a positive ‘real world’ impact is a huge motivation for doing research that is potentially harmful or traumatic.
Next steps – get involved
The publication of the forthcoming REASSURE report is a first step in reducing the harms to online extremism and terrorism researchers arising from their work. It documents the experiences of academics working in our sub-field, good and bad. It shows a resilient community, supporting one another, often with much kindness and good humour, but with little institutional help. That community wants to work with our institutions to improve formal support.
The workshop was an important part of the next stage of REASSURE, which will focus on identifying good practices from other sectors and tailoring them to academic settings. We are particularly interested in talking to those professionally tasked with addressing online extremism and terrorism, and adjacent harmful content types, about what has worked for them in terms of harm prevention. If you would like to contribute, please drop us a line at firstname.lastname@example.org.
Dr Elizabeth Pearson, Lecturer in Criminology at Royal Holloway, University of London, is writing on behalf of the REASSURE team.
Image credit: Pexels