Last weekend, the president of the United States urged Americans to vote twice in the upcoming election. This brazen—and illegal—suggestion spread quickly across social media and once again underscored the unprecedented risks of this election season: the Covid-19 pandemic, an onslaught of disinformation, and online echo chambers stoking vitriol that could turn to violence. With more Americans than ever working, going to school, and gathering online, social media platforms have an urgent responsibility to step up to ensure the integrity of this election. So far, they haven’t done nearly enough.
As a former chair of the Federal Election Commission, I hold this issue near and dear to my heart. During my tenure I made the changing role of technology in our elections a major focus, and I know there is a road map to protect them. Unfortunately, the FEC currently lacks even a quorum and therefore cannot act. Quorum or no quorum, the agency has spent five years discussing online advertising and has failed to regulate the industry in any way. Protecting the 2020 election requires social media companies to act now.
Ann Ravel is the former chair of the Federal Election Commission and the Digital Deception project director at MapLight. She is a Democratic candidate for the California State Senate.
To be sure, companies across Silicon Valley have taken some important steps. Facebook’s Voting Information Center, Twitter’s expansion of its civic integrity policy, and YouTube’s crackdown on videos using hacked materials are a strong start. But, as we saw in the wake of Trump’s “double voting” comments, their current actions don’t go nearly far enough. Unless platforms take additional, proactive steps soon, the United States will be caught flat-footed against disinformation and distrust—whether those seeds are planted by online trolls or the sitting president.
A new Election Integrity Roadmap released by the nonprofit group Accountable Tech shows that a different path is possible. Created in conjunction with leading technologists, civil rights leaders, and disinformation experts, the Roadmap outlines tangible steps that platforms can take to defend the integrity of the November elections. Because the recommendations are grounded in platforms’ existing policies and technologies, they can immediately be implemented at scale to help social media companies responsibly navigate everything from early voting through the official certification of results.
During the early voting period, as the Roadmap lays out, platforms should implement an “election integrity strike system” to limit the reach of repeat disinformation super-spreaders. Research shows that a disproportionate amount of harmful misinformation on social media platforms can be tied back to a relatively small number of accounts, groups, and websites. By imposing a series of escalating limitations with each new infraction, platforms can crack down on these actors before the most volatile period of election season.
As Election Day nears, platforms should ramp up these efforts: expanding their capacity to monitor electoral content, temporarily turning off harmful algorithms, such as Facebook Group recommendations that push users toward divisive content, and creating a Platform Poll Watchers program to serve as a first line of defense against disinformation. Just as election observers are deployed at the polls, social media companies would give specialized verification labels to state election directors and nonpartisan civil society groups, allowing them to promote credible information, flag specific pieces of misleading content, and counter false narratives in real time.
Platforms’ responsibilities don’t end when the polls close. The spread of disinformation as ballots are being counted has the potential to cause chaos and even incite violence. The Roadmap offers a powerful idea: Just as the Voting Rights Act required certain states to pre-clear new voting laws, social media platforms should require highly influential accounts, including President Trump’s, to pre-clear election-related posts. These posts would be subject to proactive detection and rapid human review, blocking content that violates incitement-to-violence or civic integrity policies.
With less than two months until the election, the social media giants have shown that self-regulation is not enough. Ultimately, we need Congress to step in and pass clear laws to prevent the spread of online misinformation, but that is not going to happen before November. That’s why it’s more important than ever for social media platforms to act responsibly.