Moderators Protect Us From The Worst Of The Internet. That Comes At Huge Personal Cost
Unless you’re a moderator for a local community group discussing garbage collections or dog park etiquette, you are unlikely to fully understand the sheer volume and scale of abuse directed at people online.
But when social media moderation and community management is part and parcel of your daily work, the toll on people and their loved ones can be enormous. Journalists, often early in their careers, can be on the receiving end of torrents of abuse.
Many are reluctant to report that abuse. For those from culturally or linguistically diverse backgrounds, the reluctance to report can be even higher than for other colleagues.
There’s growing employer concern about how moderating confronting content can affect people’s wellbeing. Employers also have a duty to keep their staff safe at work, including online.
The ABC wanted to understand what this looked like in practice. Its internal survey data shows just how bad the problem has become for moderators who are employed to keep audience members safe when contributing to online discussions.
What did the ABC find?
In 2022, the ABC asked 111 staff who were engaged in online moderation as part of their jobs to self-report the frequency of exposure to potentially harmful experiences.
First it was important to understand just how long people were spending moderating content online. Of those who had to moderate content every day, 63% said they did it for less than an hour and a half, and 88% moderated for less than three hours.
The majority of staff surveyed saw potentially harmful content every week.
71% of moderators reported seeing denigration of their work weekly, with 25% seeing this daily.
Half reported seeing misogynistic content weekly, while more than half said they saw racist content weekly.
Around a third reported seeing homophobic content every week.
In the case of abusive language, 20% said they encountered it weekly.
It’s a confronting picture on its own, and many moderators see more than one type of this content at a time, which compounds the harm.
It is important to note the survey did not define specifically what was meant by racist, homophobic or misogynistic content, so that was open to interpretation from the moderators.
A global issue
We’ve known for a few years about the mental health problems faced by moderators in other countries.
Some people employed by Facebook to filter out the most toxic material have gone on to take the company to court.
In one case in the United States, Facebook reached a settlement with more than 10,000 content moderators that included US$52 million (A$77.8 million) for mental health treatment.
In Kenya, 184 moderators contracted by Facebook are suing the company over poor working conditions, including a lack of mental health support. They’re seeking US$1.6 billion (A$2.3 billion) in compensation.
The case is ongoing and so too are other separate cases against Meta in Kenya.
In Australia, moderators during the height of the COVID pandemic reported how confronting it could be to deal with social media users’ misinformation and threats.
A 2023 report by Australian Community Managers, the peak body for online moderators, found 50% of people surveyed said a key challenge of their job was maintaining good mental health.
What’s being done?
Although it is not without its own issues, the ABC is leading the way in protecting its moderators from harm.
It has long worked to protect its staff from trauma exposure with a variety of programs, including a peer support program for journalists. The program was supported by the Dart Centre for Journalism and Trauma Asia Pacific.
But as the level of abuse directed at staff increased in tone and intensity, the national broadcaster appointed a full-time Social Media Wellbeing Advisor. Nicolle White manages the workplace health and safety risk generated by social media. She’s believed to be the first in the world in such a role.
As part of the survey, the ABC’s moderators were asked about ways they could be better supported.
Turning off comments was unsurprisingly rated as the most helpful technique to promote wellbeing, followed by support from management, peer support, and preparing responses to anticipated audience reactions.
Turning off the comments, however, often leads to complaints from at least some people that their views are being censored. This is despite the fact media publishers are legally liable for comments on their content, following a 2021 High Court decision.
Educating staff about why people comment on news content has been an important part of harm reduction.
Some of the other changes implemented after the survey included encouraging staff not to moderate comments when it related to their own lived experience or identity, unless they feel empowered in doing so.
The peer support program also links staff with others who have moderation experience.
Managers were urged to ensure staff completed self-care plans to prepare for high-risk moderation days (such as the Voice referendum). These include documenting positive coping mechanisms, setting boundaries at the end of a news shift, debriefing, and asking staff to reflect on the value of their work.
Research shows one of the most protective factors for journalists is being reminded that the work is important.
But overwhelmingly, the single most significant piece of advice for all working on moderation is to ensure they have clear guidance on what to do if their wellbeing is affected, and that seeking support is normalised in the workplace.
Lessons for others
While these data are specific to the public broadcaster, the experiences of the ABC are almost certainly reflected across the news industry and other forums where people are responsible for moderating communities.
It’s not just paid employees. Volunteer moderators at youth radio stations or Facebook group admins are among the many people who face online hostility.
What’s clear is that any business or volunteer organisation building a social media audience needs to consider the health and safety ramifications for those tasked with maintaining those platforms, and to build in support strategies from the outset.
Australia’s eSafety commissioner has developed a range of publicly available resources to help.
The author would like to acknowledge the work of Nicolle White in writing this article and the research it reports.
Alexandra Wake is a member of Dart Asia Pacific, having previously served as a director of its Board. She is currently a joint recipient of an Australian Research Council Discovery Grant, Australian Journalism, Trauma and Community.