The Global Trade Union Alliance of Content Moderators has called on tech companies – including TikTok, Meta, Alphabet and OpenAI – to adopt mental health protections throughout their supply chains for the employees who shield internet users from violent, disturbing and harmful material.
The call is the first act of the alliance, which is supported by UNI Global Union, and it comes alongside a report released this week.
Based on worker surveys and in-depth interviews with content moderators in six countries, as well as insights from crisis journalism and emergency response fields, The People Behind the Screens: Why Tech Companies Need New Protocols for Safe Content Moderation lays out the traumatic, high-pressure conditions workers endure and concrete steps to reduce unnecessary psychological harm. These include measures to address PTSD, depression, burnout and even suicidality among moderation workers.
Protocols for sustainable content moderation:
- Exposure limits: Reduce direct exposure through time constraints and image moderation techniques.
- Realistic productivity quotas: Eliminate all quotas for egregious content and enact reasonable quotas, with a human-in-command principle, for other types of content.
- Mandatory trauma training: Regular, trauma-informed training for moderators and supervisors.
- Accessible counselling: 24/7 employee assistance, continuing post-contract.
- Stable employment and living wages: Formalized employment with permanent contracts, living wages and full employment benefits.
- Joint occupational health and safety committees: Regular oversight and audits by democratically elected safety committees.
- Enhanced migrant worker protections: Policies that prevent exploitation, provide job security, protect against labour abuses and support vulnerable workers.
- Right to organize and collectively bargain: Commitment to workers’ right to organize a union without interference and to collectively bargain with recognized unions.
“Exposure to distressing content may be inherent to moderation, but trauma does not have to be,” said Christy Hoffman, UNI Global Union’s general secretary. “Other frontline sectors have long implemented proven mental health protections, and there is no justification for tech companies not similarly safeguarding workers in their supply chains.”
Even though content moderators are employed through contractors, tech companies like TikTok, Meta, Alphabet and OpenAI control the core aspects of content moderation – such as training procedures, review policies, moderation tools and performance metrics – and are ultimately responsible for the conditions of subcontracted workers. Over 80 per cent of workers surveyed said their employer needs to do more to support their mental health.
A content moderator from Türkiye said: “Beheadings, child abuse, people making love – how many times can you watch these things before you stop feeling anything at all? . . . We lose our ability to be human. We lose our emotions. We can’t be happy. We can’t be sad. We become robots.”
Trauma is intensified by high-pressure productivity demands, with algorithm-driven quotas pushing moderators to process disturbing content at unreasonable speeds. Moderators report that productivity bonuses often make up as much as half of their low wages, incentivizing workers to process large volumes of egregious content rapidly and heightening both the risk of burnout and exposure to trauma.
“In just one year, our daily video targets more than doubled. We have to watch videos running at double or triple speed, just to keep up. There’s no time to think. No time to process. The only way to hit the numbers is to skip toilet breaks, meals and rest,” said one moderator in Tunisia.
Human labour
Automation is frequently cited as the future of content moderation, yet AI remains deeply reliant on human labour.
Moderators train AI systems, review their errors and must identify the most disturbing content – often under increased strain from algorithmic management systems. Rather than replacing human oversight, AI exacerbates the burden on workers, making robust labour protections more critical than ever.
“Content moderators are exposed daily to levels of psychological harm that would be unacceptable in any other industry,” said Dr Annie Sparrow, associate professor of Global Health at the Icahn School of Medicine at Mount Sinai in New York City. “These new safety protocols are not just timely, they are essential. If implemented and enforced, they will save lives, prevent long-term mental health damage, and finally bring accountability to one of the most invisible, exploited sectors of the digital economy.”
Last month, UNI supported the launch of the Global Trade Union Alliance of Content Moderators in Nairobi, Kenya, where unions from nine countries ratified and helped shape the protocols.
UNI has long fought for the rights of tech supply chain workers, including content moderators. It represents service workers – including in tech and outsourcing sectors – in 150 countries.