With fewer moderators, the internet could change considerably for the millions of people now reliant on social media as their primary mode of communication with the outside world. The automated systems Facebook, YouTube, Twitter, and other sites use vary, but they often work by detecting things like keywords, automatically scanning images, and looking for other signals that a post violates the rules. They are not capable of catching everything, says Kate Klonick, a professor at St. John’s University Law School and fellow at Yale’s Information Society Project, where she studies Facebook. The tech giants will likely need to be overly broad in their moderation efforts, to reduce the likelihood that an automated system misses important violations.
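The trade-off Klonick describes shows up even in a toy version of such a system. The sketch below is a hypothetical keyword flagger in Python; the keyword list, scoring rule, and threshold are invented for illustration and reflect no platform's actual rules. Set the threshold low enough to catch most violations and the same rule also removes posts that merely quote or debunk the offending phrases.

```python
# Illustrative sketch only: the keyword list, scoring rule, and threshold are
# assumptions for demonstration, not how any platform's real systems work.

BLOCKED_KEYWORDS = {"miracle cure", "fake pandemic", "drink bleach"}  # hypothetical

def keyword_score(post: str) -> float:
    """Return the fraction of blocked keywords that appear in the post."""
    text = post.lower()
    hits = sum(1 for kw in BLOCKED_KEYWORDS if kw in text)
    return hits / len(BLOCKED_KEYWORDS)

def should_remove(post: str, threshold: float = 0.1) -> bool:
    """Flag a post for removal when its keyword score meets the threshold.

    A low threshold is "overly broad": it catches more real violations, but it
    also removes posts that merely mention a keyword, e.g. to debunk it.
    """
    return keyword_score(post) >= threshold

if __name__ == "__main__":
    posts = [
        "Doctors warn there is no miracle cure for the virus.",    # debunking, still flagged
        "Buy my miracle cure today, the fake pandemic is a lie!",  # actual violation
        "Stay home and wash your hands.",                          # clearly fine
    ]
    for p in posts:
        print(should_remove(p), "-", p)
```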
“I don’t even know how they are going to do this. [Facebook’s] human reviewers don’t get it right a lot of the time. They are amazingly bad still,” says Klonick. But the automatic takedown systems are even worse. “There is going to be a lot of content that comes down incorrectly. It’s really kind of crazy.”
That could have a chilling effect on free speech and the flow of information during a critical time. In a blog post announcing the change, YouTube noted that “users and creators may see increased video removals, including some videos that may not violate policies.” The site’s automated systems are so imprecise that YouTube said it would not be issuing strikes for uploading videos that violate its rules, “except in cases where we have high confidence that it’s violative.”
As part of her research into Facebook’s planned Oversight Board, an independent panel that will review contentious content moderation decisions, Klonick has reviewed the company’s enforcement reports, which detail how well it polices content on Facebook and Instagram. Klonick says what struck her about the most recent report, from November, was that the majority of takedown decisions Facebook reversed came from its automated flagging tools and technologies. “There’s just high margins of error; they are so prone to over-censoring and [potentially] dangerous,” she says.
Facebook, at least in that November report, didn't exactly seem to disagree.
Zuckerberg said Wednesday that many of the contract workers who make up those teams would be unable to do their jobs from home. While some content moderators around the world do work remotely, many are required to work from an office due to the nature of their roles. Moderators are tasked with reviewing extremely sensitive and graphic posts about child exploitation, terrorism, self-harm, and more. To prevent any of that material from leaking to the public, “these facilities are treated with high degrees of security,” says Roberts. For example, workers are often required to keep their cell phones in lockers and can’t bring them to their desks.
Zuckerberg also told reporters that the offices where content moderators work have mental health services that can’t be accessed from home. They often have therapists and counselors on staff, resiliency training, and safeguards in place that force people to take breaks. (Facebook added some of these programs last year after The Verge reported on the bleak working conditions at some of the contractors’ offices.) As many Americans are discovering this week, the isolation of working from home can bring its own stresses. “There’s a level of mutual support that goes on by being in the shared workspace,” says Roberts. “When that becomes fractured, I’m worried about to what extent the workers will have an outlet to lean on each other or to lean on staff.”
There are no easy choices to make. Sending moderators back to work would be an inexcusable public health risk, but making them work from home raises privacy and legal concerns. Leaving the task of moderation largely up to the machines means accepting more mistakes and a reduced ability to rectify them at a time when there is little room for error.
Tech companies are left between a rock and a hard place, says Klonick. During a pandemic, accurate and reliable moderation is more important than ever, but the resources to do it are strained. “Take down the wrong information or ban the wrong account and it ends up having repercussions for how people can speak—full stop—because they can’t go to a literal public square,” she says. “They have to go somewhere on the internet.”