Invisible Risks: content moderators and the trauma of child sexual abuse materials 

Middlesex University

In September of 2021, the End Violence Partnership and the Technology Coalition awarded grants to five organisations to tackle online child sexual exploitation and abuse (CSEA). Through these grants, organisations across the world will undertake groundbreaking research to inform both policy and practice in addressing CSEA. One of those organisations is Middlesex University, which is launching the project Invisible risks: Combating secondary trauma to safeguard children, running from November 2021 to June 2023.

Through this project, researchers at Middlesex University, in collaboration with INHOPE and other sector-specific organisations, will explore and quantify the issues facing content moderators, specifically as they relate to moderators’ exposure to traumatic child sexual abuse material. They will also identify coping strategies currently used by content moderators, and highlight what works – and what does not work – for individuals and organisations charged with reviewing this content. Results of this study will be used to develop a pilot intervention to support and protect the mental health of content moderators.

End Violence spoke to principal investigators Elena Martellozzo and Jeffrey DeMarco to learn more about this project and the impact it will have on content moderators in the United Kingdom and around the world.

Middlesex University’s project focuses on a critical, but often overlooked, group of individuals affected by online child sexual abuse: content moderators. What is the context in which content moderators, specifically those reviewing child sexual abuse material, live and work?

In recent years, there has been a steep rise in the number of content moderators. More than 100,000 people moderate online child abuse content, with many working in developing nations. Because most of the moderating work is outsourced to third-party vendors, working conditions can differ substantially from one employer to the next.

More than 100,000 people moderate online child abuse content, with many working in developing nations. 

Elena Martellozzo and Jeffrey DeMarco, Middlesex University

For example, in America, moderators may work from an office, whereas in the Philippines, they can work in malls. Similarly, some employees are told they can take as many breaks as they need, whereas others are closely monitored and have limited access to time away from their work. Nevertheless, one constant is that the job tends to be badly paid.

Typically, content moderators are assigned to queues of online content that has been flagged, and they must decide whether each flag should be ignored or upheld. They can be faced with videos that range from reasonably banal to extremely graphic, and they cannot predict which they are about to view. Those moderating child sexual abuse material (CSAM) will have to view videos of children being abused or raped, and because these images can be uploaded repeatedly, moderators often have to remove multiple copies of the same content.

Some moderators are given quotas requiring them to screen hundreds of items a day, leaving little time per item to decide how to mark the content. They are also given targets for the number of errors they are allowed to make and, in some cases, future work depends on maintaining high accuracy (i.e. an auditor would agree with their decisions).

Exposure to traumatic content, the pressure to hit high accuracy and throughput targets, low wages, insufficient counselling, and confidentiality clauses all exacerbate psychological distress and can cause burnout.

At a high level, what do you know about how moderators are affected by consuming videos and photos of online child sexual abuse? What does current support for this role look like?

Content moderation is not an easy job. Employee turnover is high and most leave the job within a year. Repeated, prolonged exposure to disturbing content, coupled with limited workplace support and counselling, can significantly impair the psychological well-being of human moderators. 

Secondary traumatic stress (STS) occurs in response to working with traumatic material; thus, content moderators would be expected to be at increased risk of developing STS. Additionally, there is work suggesting that STS is often comorbid with other mental health problems including anxiety and depression. Therefore, it is likely that repeated exposure to online child sexual abuse will increase moderators’ risk of developing anxiety, depression, and stress disorders.  

It is likely that repeated exposure to online child sexual abuse will increase moderators’ risk of developing anxiety, depression, and stress disorders. 

Elena Martellozzo and Jeffrey DeMarco, Middlesex University

Qualitative reports suggest a significant number of moderators may develop PTSD, and there are numerous lawsuits relating to the development of PTSD while working as a content moderator. Anecdotally, the work is also associated with heart disease, interpersonal conflict, and substance abuse. If left untreated, this can lead to absenteeism, lower quality of life, burnout, and work dissatisfaction. In addition, the reliance on outsourcing means the level of support for moderators varies greatly. Some moderators report being able to take multiple breaks from viewing material or having a maximum shift length, whilst others describe being micromanaged and unable to leave their desk for more than a few minutes during a shift. Some employees are given access to qualified "wellness coaches" or onsite counsellors and a phone support line, whereas others are offered nothing. The provision of support is likely affected in part by where in the world the worker is situated.

This project will be done alongside INHOPE and the Internet Watch Foundation. What roles will each partner take, and why are these collaborations important?  

Given their expertise, reach and network of relevant stakeholders involved in the industry, INHOPE and the Internet Watch Foundation (IWF) are integral to the success of this research. The IWF will be used as a case study of good practice in supporting its content moderators and providing the mental health care they need. Working alongside the IWF will help the team gain a better understanding of moderators’ experiences, resilience, adversity, and vulnerability; their experience of secondary trauma and compassion fatigue; and, therefore, how to develop a uniform resource for support moving forward, for organisations both small and large.

Both organisations will serve in advisory and gatekeeping roles and will advise on best practice when it comes to recruitment, barriers to participation, and dissemination. Further to this, during stage one of this research we will develop a theory of change that will underpin the direction of the research project and the development of the intervention. Stakeholders will be invited to help us refine assumptions around content moderation, mental health and trauma, and to understand which outcomes are seen as important to moderators’ wellbeing. Thus, the voice and expertise of our partners will shape the design and focus of the study.

The voice and expertise of our partners will shape the design and focus of the study. 

Elena Martellozzo and Jeffrey DeMarco, Middlesex University

Furthermore, these partnerships will be key to recruitment and dissemination of the intervention. Both agencies can signpost our research to their extensive networks of partners in order to enhance recruitment across all stages of the project and to maximise uptake of the intervention. They can also help overcome potential barriers to recruitment or to use of the intervention by co-hosting information events about the research.

As a result of this project, Middlesex University hopes to develop a pilot intervention to protect the mental health of moderators. What will this intervention look like? 

The intervention will be a toolkit comprising different online modules, each tailored to tackle or strengthen a particular aspect of content moderation as indicated by the project. Each module will be informed by psychological theory and research and will be developed with input from mental health professionals.

An example module is “Changing Thoughts,” as negative thinking is linked to several mental health problems. Cognitive behavioural therapy techniques, such as practising gratitude, offer an evidence-based way to tackle this problem.

Moderators can pick and choose the modules they believe are most suitable for them and work through them in their own time and in any order; they can repeat any module as many times as they want. Use of the toolkit will also be confidential and anonymous to protect users’ privacy.

Although the intervention will be in English, it will be as user-friendly as possible to encourage uptake. It will also be hosted online and will therefore have the potential to reach moderators regardless of where they are in the world, including those who have left the profession.

We will share the intervention with multiple agencies through agency websites, as well as the INHOPE network and advisory board members to reach as many individuals as possible. A conceptual map of a wider hub to work across organisations will also be developed with the recommendation to further extend the intervention toolkit and provide a resource for content moderators.  

How will this research translate to impacts in the tech industry? What impact will it have on other sectors?

Our research will deliver multiple actionable insights and outputs on the psychological impact of working with CSAM. It will also reveal how well-being and work performance are affected by reviewing this content.

These insights will be usable across agencies and by staff working at different levels, such as those working directly with the content as well as those responsible for their management.

Our project will identify and set out the needs of content moderators, propose actions required for change that can guide other organisations in implementing support packages, and suggest suitable assessment indicators. Additionally, our study will deliver one of the first comprehensive surveys of need around work-task stress and its psychological effects, including identifying areas of resilience.

The findings will also inform multi-disciplinary recommendations for how best to support content moderators and other workers who engage with harmful and difficult material. We will co-create an intervention designed to strengthen areas of resilience and tackle specific stressors, which will be the first intervention of its kind and an invaluable resource for workers and agencies.

The findings of this research may lay the foundation for the development of an accredited Continued Professional Development (CPD) course for those looking to work in content moderation across sectors. Although it will not be a direct output of this proposed research, structuring a ‘work needs assessment’ profile and a standardised course with learning outcomes can assist those interested in a career in content moderation or, more broadly, in child protection. The training would provide an understanding of what will be expected and what tools they may need to fulfil their role, while also assisting senior management across organisations by providing a shared framework of expectations and abilities in the hiring and deployment process. This, in turn, may help reduce staff turnover and support the cumbersome yet necessary hiring and training process.

What do you think needs to happen to end online child sexual exploitation and abuse for good?  

Developing better automated systems using artificial intelligence (AI) and machine learning, alongside better communication and pooling of resources across organisations, could help to reduce the online presence of child sexual exploitation and abuse. For example, a single shared database could index banned items so that an item only needs to be allocated an ID once, with all future uploads of that item automatically blocked.
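To make that shared-database idea concrete, here is a minimal sketch in Python. The names (such as BannedContentIndex) are our own illustrations, not part of any real system, and the exact cryptographic hash used here stands in for the robust perceptual hashing that platforms actually rely on so that re-encoded or slightly altered copies of an item still match.

```python
import hashlib


class BannedContentIndex:
    """Illustrative sketch of a shared index of banned items, keyed by content hash.

    Real deployments use perceptual hashes (so near-duplicates still match) and a
    shared, cross-organisation database; this toy version only matches exact bytes.
    """

    def __init__(self) -> None:
        self._ids_by_hash: dict[str, int] = {}
        self._next_id = 1

    def register_banned(self, content: bytes) -> int:
        """Allocate an ID for a confirmed banned item; the ID is assigned only once."""
        digest = hashlib.sha256(content).hexdigest()
        if digest not in self._ids_by_hash:
            self._ids_by_hash[digest] = self._next_id
            self._next_id += 1
        return self._ids_by_hash[digest]

    def should_block_upload(self, content: bytes) -> bool:
        """Return True if the upload matches an item already in the index."""
        return hashlib.sha256(content).hexdigest() in self._ids_by_hash


# Usage: once an item is registered, every future upload of the same item is refused
# automatically, so no moderator needs to review that copy again.
index = BannedContentIndex()
index.register_banned(b"<confirmed abusive content>")
assert index.should_block_upload(b"<confirmed abusive content>")
assert not index.should_block_upload(b"<unrelated content>")
```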

However, we are cognisant that AI cannot capture what it is to be human, and it does not eradicate the exploitation and abuse of children. Therefore, it is imperative that human moderation continues; but for it to be successful, the experts working in this challenging area must receive the support they need and be guided through their time as content moderators, so that they can ultimately operate both safely and effectively.