How ISIS and Russia Won Friends and Manufactured Crowds

The Islamic State built a global brand using the power of social media. Now, Russia is following a similar playbook—and it’s all too easy.

Since November 2016, a national battle has raged over the role of social media in politics. People have bemoaned the viciousness of trolls, the impact of incendiary fake news, the frog memes and Twitter bots and YouTube conspiracy videos. The accumulating stories of manipulation and unintended consequences ignited angry debates and prompted a long-overdue conversation: What is the proper role of social networks in public discourse?

This important question stems from a paradigm shift that began roughly a decade ago. That’s when social media turned everyone into a content creator, giving people the tools not only to say their piece but to amplify it, to grow an audience with little to no budget. Citizen journalists, bloggers, and grassroots activists bypassed the editorial old guard, gaining so much influence that they were elevated to an estate of the realm: the Fifth Estate.

The social networks enabled this new guard, simultaneously providing a captive user base, an infrastructure engineered for virality, no editorial oversight, and fairly limited rules. Unsurprisingly, the emergence of a relatively lawless, federated system for reaching mass audiences attracted the attention of a bad actor.

Not Russia. ISIS.

The online battle against ISIS was the first skirmish in the Information War, and the earliest indication that the tools for growing and reaching an audience could be gamed to manufacture a crowd. Starting in 2014, ISIS systematically leveraged technology, operating much like a top-tier digital marketing team. Vanity Fair called them “The World’s Deadliest Tech Startup,” cataloging the way they used almost every social app imaginable to communicate and share propaganda: large social networks such as Facebook; encrypted chat apps such as Telegram; messaging platforms including Kik and WhatsApp. They posted videos of beheadings on YouTube and spoke to their followers on Internet radio stations. Perhaps most visibly, they were on Twitter, which they used for recruiting and for reach. Each time ISIS successfully executed an attack, they used Twitter to claim responsibility, and tens of thousands of followers were ready to cheer them on with favorites and retweets. And in one of the pioneering instances of automated, manufactured crowds, thousands of bots were deployed for amplification and share of voice.

ISIS built a brand on social media. They had recognizable iconography—the flag, the colors, the high-production video openers—and by deftly using social media platforms, they built a virtual caliphate. They did it boldly and transparently, using the platforms in the way that they were meant to be used: to build an audience and connect with followers.

Social networks are designed to profit from enabling advertisers to grow, reach, or corral an audience. Growing an audience typically involves producing compelling content, aiming for social engagement and amplification, and paying for boosted posts or ads (most of which are labeled in some way). Companies do it, grassroots organizers do it, and politicians do it. ISIS did it. And what they couldn’t achieve through organic growth, they simply manufactured.

Manufacturing a crowd is a bit different from growing an audience. It means purchasing likes, ratings, followers, or bots; relying on automation to artificially amplify a message; gaming algorithms to get something trending or highly rated by a recommender system; and using sockpuppets to leave comments and shape narratives. It’s mass deception: hard to detect, and societally corrosive.

Even in the presence of an overt terrorist organization manipulating their products, the platforms were slow to react. Twitter in particular was initially paralyzed by its commitment to being “the free speech wing of the free speech party,” and struggled to address the growing problem. As ISIS’ presence grew, articles were written throughout 2014 and 2015 about the “tough choice” the platforms faced as ISIS exploited them. Whither free speech? One man’s terrorist is another man’s freedom fighter. Et cetera, et cetera. And as the government began to plead with the platforms to take action, the EFF weighed in with a January 2016 statement titled “Companies Should Resist Government Pressure and Stand Up for Free Speech,” arguing that “tech companies are not created to investigate terrorism.”

There was no systemic solution available for mitigating systemic manipulation. Twitter made half-hearted whack-a-mole attempts to shut down ISIS accounts. YouTube and Facebook tried to stay on top of taking down the videos. But it didn’t matter much; ISIS’ prolific content production and cross-platform visibility made it highly likely that mainstream media would see their content and amplify it even as they condemned it.

The first major skirmish of the information war demonstrated to anyone watching that no one was in charge, either in government or in the private sector.

And it seems that others were watching. Closely. As the latest Mueller indictments indicate, Russia’s information warfare began around the same time as ISIS’, but only a handful of folks who worked in national security had any inkling that it was happening. The first story to reach the mainstream was a piece Adrian Chen wrote in June 2015 for the New York Times Magazine, “The Agency,” about the Internet Research Agency, a Kremlin-linked troll farm that specializes in creating fake personas who conduct influence operations and manipulate conversations online. The article came out in the thick of discussions of what to do about ISIS’ social propaganda. One of the things Chen uncovered was that the IRA was behind the previously unexplained 2014 social media hoax claiming that a Louisiana chemical plant had exploded. The cross-platform strategy may sound familiar now: Someone had created a fake Facebook page for a non-existent media outlet called ‘Louisiana News’ and seeded it with content to appear legitimate and active. They edited Wikipedia entries, posted a video to YouTube of ISIS claiming responsibility for the non-existent explosion, broke the “news” on Twitter, and even sent text messages to local residents. At the time, researchers called it “media hacking,” since the goal seemed to be to break through into mainstream coverage; the term “fake news” didn’t yet exist.

Russia’s information operations strategy was slightly different from ISIS’. Although they, too, took a systemic approach to leveraging audiences on all social platforms, they didn’t want to be discovered; they wanted to blend in. They liberally manufactured crowds, running tens of thousands of Twitter bot accounts and hundreds of thousands of Facebook sockpuppets. Fake people pushed an agenda in social media comments, got topics trending on Twitter, and reached mainstream audiences. A handful of voices responded to intervention attempts and content takedowns with the same dire warnings about censorship, but public opinion seems to be shifting. This time, perhaps because of widespread fury at the prospect that a disinformation campaign may have influenced an election, government and platforms alike are beginning to take action.

Misinformation, disinformation, and propaganda are not new; they are centuries old, although we seem to have lost our awareness of their potential for societal disruption in the decades after the Cold War. These old modes of persuasion have had profound political and social repercussions on a global level, and they’re far more effective when combined with the precision delivery of social networks.

It’s impossible to address cross-platform manipulation when no one is responsible for even monitoring it. As uncomfortable as it may be for the platforms, gaining control of the problem means analyzing the narrative, the voices spreading the message, and dissemination patterns across the social ecosystem. We are never going to be able to fight an information war—or even manage the very minor skirmishes caused by conspiracy theorists or people coming together for the lulz—if we aren’t more aware of manipulation and dubious emerging narratives at a systems level. It’s time for the platforms to come together to make it more difficult to manufacture a crowd.

