Beyond convenience, there is a certain logic to banning deepfakes while relying on fact-checking for old-school video manipulation. “With a shallow fake, you can release or link to the authentic content,” said Renée DiResta, a disinformation researcher (and a WIRED contributor). “With a deepfake, because it’s made of whole cloth, there is no counter or clarifying content that you can point someone to.” But, DiResta added, relying on fact-checking has well-documented shortcomings. “One of the key complaints has been that it’s too slow—that the thing has already gone viral long before it’s been fact-checked; most people don’t see the fact check; and most people, at this point, if it’s political, will disbelieve the fact check because the fact-checking organization will be accused of being partisan.” The infamous Pelosi video, for instance, was viewed millions of times before Facebook got around to labeling it false.
“I think it’s a good policy,” said Sam Gregory, program director at the human rights nonprofit WITNESS, one of several groups that gave Facebook feedback on how to handle deepfakes. “I think it’s really important that the platforms state clearly how they’re going to handle deepfakes before they’re a widespread problem.” But, he added, “this policy is not applying to the vast majority of existing visual misinformation and disinformation.” Especially outside the US, he explained, that means material that is either doctored or intentionally mislabeled—as when years-old videos from around the world are passed off on WhatsApp (a Facebook subsidiary) in India as footage of anti-Hindu violence, to stoke hatred against Muslims. Dealing with that kind of disinformation, Gregory said, is more complicated and will require both better policies and better tools, like reverse video searches, to help users and journalists debunk hoaxes more quickly.
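To make “reverse video search” concrete: one common approach is to sample frames from a suspect clip, compute a perceptual fingerprint of each frame, and compare those fingerprints against a database of known archival footage. The sketch below is a minimal, hypothetical illustration of that idea, not a description of any platform’s actual tooling; the library choices (OpenCV, imagehash) and every function name here are assumptions made for the example.

```python
# A minimal sketch of a reverse video search: sample frames from a clip,
# compute perceptual hashes, and compare them against hashes of known
# archival footage. All names and thresholds here are illustrative
# assumptions, not any platform's real pipeline.
import cv2
import imagehash
from PIL import Image

def frame_hashes(video_path, every_n_frames=30):
    """Sample every Nth frame of a video and return its perceptual hash."""
    hashes = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            # OpenCV decodes frames in BGR order; convert before handing to PIL.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        index += 1
    cap.release()
    return hashes

def likely_recycled(query_hashes, archive_hashes, max_distance=8):
    """Flag a clip if any sampled frame is close to a known archival frame.

    Subtracting two imagehash values yields their Hamming distance, so a
    small distance means the frames are near-duplicates.
    """
    return any(
        q - a <= max_distance
        for q in query_hashes
        for a in archive_hashes
    )
```

A production system would have to index vast numbers of frames and tolerate crops, re-encodes, and added captions, but the core idea, matching fingerprints of new footage against known footage, is the same one that makes recycled-video hoaxes debunkable at all.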
When it comes to deepfakes, it remains to be seen whether Facebook’s new ban will be up to the challenge. It’s a question not just of whether the platform can reliably detect AI-generated fake videos—the company is currently running a contest, along with Microsoft and academic institutions, to encourage researchers to come up with better detection methods—but also of whether users will trust its explanations of why certain content gets removed. Facebook’s enforcement of content restrictions already draws accusations of censorship and “shadowbans,” particularly from conservatives who allege liberal bias in Silicon Valley, despite the company’s denials and the lack of evidence that politics, rather than rule-violating behavior, plays any role. There’s little reason to expect a deepfake ban to play out any differently, especially when politics is involved.
And politics is ultimately the rub, at least when it comes to misleading video in the US. It’s probably not a coincidence that Bickert’s blog post was published right before she was set to appear at a congressional hearing on online manipulation and deception. With the 2020 election only 10 probably interminable months away, it’s hard to find anyone across the political spectrum who is satisfied that Facebook will play a benign role in the democratic process. With its announcement, the company seems to be trying to convince Washington, and the country, that it’s up to the task ahead. But while Facebook deserves credit for coming up with a plan to address tomorrow’s disinformation threats, the country is still waiting for evidence that the company can solve the problems that have already arrived.