The tech backlash converged on Silicon Valley’s favorite legal protection Wednesday morning, as Republican and Democratic lawmakers agreed that the internet is dark and full of terrors (cyberbullying, scams, deepfakes, election interference, and terrorist content, to name a few) and that tech companies should fix it.
As for exactly how, neither the members of the three House committees that called the hearing nor the panel of witnesses, which included representatives from Google and Reddit and four additional experts, could say.
“Let me just start by asking all of you, just by a show of hands, who thinks that online platforms could do a better job of moderating the content on their websites,” US Representative Mike Doyle (D-Pennsylvania) asked the witnesses.
Sheepishly, each one raised a hand. “I agree,” Doyle responded, noting a consensus that the state of online content moderation is lacking, to put it kindly.
He warned there were only two paths forward: Either tech companies come up with a fix—and fast—or lawmakers will. “And if you put that on our shoulders, you may see a law that you don’t like very much and that has a lot of unintended consequences for the internet,” he cautioned.
That was one of many thinly veiled threats lobbed by lawmakers, as House members aired their frustration with the tech industry’s failures, including live-streamed mass shootings, online extremism, coordinated influence operations, and moderation scandals. Unlike previous congressional hearings on the fate of big tech, this one didn’t devolve into unrelated grandstanding, embarrassing questions about how the internet works, or tangents about the social media habits of lawmakers’ grandchildren. Instead, it was something even more unsettling (for techies, at least): a genuinely substantive debate over the future of tech platforms big and small.
Front and center was Section 230 of the Communications Decency Act, the law that protects tech platforms from liability for what users post and allows them to police user content. It’s the reason that companies like Facebook can allow politicians to use the platform to spread lies, and YouTube can ban channels for supremacist content, without being dragged to court.
Lawmakers debated whether to amend the law to force tech companies to purge their platforms of a host of wrongs, arguing that platforms have the power to stop, among other things, online child sexual abuse, romance scams, illicit e-pharmacies, and terrorist content.
“When was the last time anybody here saw a dick pic on Facebook?” Gretchen Peters, executive director of the Alliance to Counter Crime Online, asked lawmakers in a strange mid-testimony non sequitur. “If they can keep genitalia off of these platforms, they can keep drugs off of these platforms, they can keep child sexual abuse off of these platforms. The technology exists.”
Lawmakers largely appeared to agree, despite the fact that the accuracy of such technology, and its applicability beyond genitalia, is questionable at best, especially given Americans’ aversion to censorship. Other countries’ efforts to force tech companies to purge objectionable content from their platforms by changing similar safe harbor protections have faced criticism for chilling free speech. India’s proposal to revamp its own version of Section 230 earlier this year would require platforms to deploy automated tools to ensure unlawful content never appears online. But critics said the rule could push companies to censor more content than necessary to avoid accidentally letting through something illegal.
Electronic Frontier Foundation legal director Corynne McSherry testified that over-moderation has had similar unintended effects in the US. She cited Tumblr’s use of automated flagging technology to ban adult content from the platform last year, which led to the removal of a patent application drawing and a cartoon scorpion wearing a hard hat. Plus, McSherry said, new legal burdens are likely to stifle competition, as only the largest tech platforms will have the resources to comply with stringent content moderation requirements.