It was 2010 and techno-optimism was surging. A whopping 75 percent of American adults were online—a big jump from the 46 percent who were logging on a decade prior—and, for the first time en masse, they were cruising through the information age largely from the comfort of their own homes. Social media was relatively new and gaining traction—especially among young people—as the world’s attention appeared to shift from the browser-based web to apps.
The Pew Research Center marked the new decade by asking 895 leading technologists, researchers, and critics for predictions of what the internet-connected world of 2020 would look like. On one subject, there was an overwhelming consensus: 85 percent of respondents agreed that the “social benefits of internet use will far outweigh the negatives over the next decade,” noting that the internet by and large “improves social relations and will continue to do so through 2020.” They pointed to the ease of communication and wealth of knowledge granted by the information age as reasons to be optimistic about the future.
What could possibly go wrong?
A lot, as it turns out. An early sign of the coming infopocalypse came in the form of A Gay Girl in Damascus. The blog chronicled the life of its author, Amina Arraf, a 35-year-old gay Syrian woman participating in an uprising against President Bashar al-Assad. It quickly found a global audience, which became enraptured with Arraf’s moving prose and vivid descriptions of queer life in the Middle East. The Guardian described her as “an unlikely hero of revolt in a conservative country.”
Until June 6, 2011, when a different kind of post appeared on the blog. It was a panicked update from Arraf’s cousin explaining that she had been thrown into the back of a red minivan by three mysterious men in downtown Damascus. News of the kidnapping quickly spread around the globe, resulting in reports from The Guardian, The New York Times, Fox News, CNN, and more. A “Free Amina” campaign led to the creation of posters and other websites. The State Department even reportedly started an investigation into her disappearance.
Six days after the so-called kidnapping, the truth emerged: The gay girl from Damascus was a straight 40-year-old American man from Georgia named Tom.
The blog, social media accounts, and nearly six years of forum postings under the name Amina Arraf were all fake. The hoax rocked the blogosphere and marked a turning point in public awareness of digital deception. The Washington Post said it illustrated the “ease of fudging authenticity online.”
The internet has been awash with deception since its earliest days. A 1998 paper by Judith Donath, a researcher and adviser at Harvard’s Berkman Klein Center, detailed the effects of trolling, misinformation, and disinformation on Usenet groups. The troubles she described sound familiar today.
Even as the web blossomed in the following decade, and more people gained access, these concerns largely stayed below the surface. But the last decade has made the extent—and the consequences—of online falsehoods all the more clear.
Flaws emerged in the web’s key measuring sticks—likes, clicks, follower counts, views, and so on. In July 2012, a startup made headlines by reporting that only one in every five clicks on its Facebook ads appeared to come from humans. The rest, the company alleged, were from bots. The assertion seems almost quaint now. But at the time, it was viewed as “an explosive claim that could give pause to brands trying to figure out if advertising works on Facebook.”
It marked a new era of doubt online. The following month, in August 2012—on a Friday before a holiday weekend, in typical tech company fashion—Facebook said it had identified and removed fake Likes used by a number of pages to make them seem more popular than they were.
“Facebook says the crackdown ‘will be a positive change for anyone using Facebook.’ But that’s not true,” Ryan Tate wrote for WIRED at the time. “Fraudsters are clearly using Facebook, too, hence all the fake ‘likes.’ And they’ll be racing to thwart Facebook’s filters. Summer ends this weekend with a victory for Facebook’s ‘like’ engineers. But the arms race has just begun.”
In 2013, YouTube faced its own uncomfortable reality. The volume of fake traffic from bots pretending to be real viewers rivaled the traffic from actual humans. Some employees worried the imbalance could bring about what they called “the Inversion,” where YouTube’s manipulation detection systems would get confused and interpret fake views as real, and flag those made by humans as suspicious.
That scenario never came to pass, but the scourge of fake engagement plagues social media giants to this day. The practice has become so profitable and popular that entire sub-industries have formed to both produce fake likes, followers, and views, and catch those who purchase false engagement.