Social networks have been exposed. No one can pretend that they are simply neutral platforms—mere tubes and pathways, like phone lines, that allow us to share snippets of our lives. That fiction was laid bare on November 8.
Over the next year the mainstream culture will grapple, for real, with the civic and political effects of our lives online. Plenty of intellectuals, with eyebrows cocked, have warned that this reckoning was coming. But it took the US election—and the ascent of Donald Trump, the insult-hurling, falsehood-circulating tweeter-in-chief—to shine a blinding arc light onto the role of technology on the political stage.
We are thus heading into a very McLuhan-esque year. Marshall McLuhan—the Patron Saint of WIRED—made his name in the 1960s, studying how pivotal technologies produced widespread, nonobvious changes. The Gutenberg press, he argued, created a spirit of “detachment” that propelled science while giving a new sense of agency to individuals. Electricity had a “tactile” effect, keeping us in constant contact with the world via the telegraph, telephone, and TV. The photocopier imposed a “reign of terror” on publishers by letting everyday folks copy documents.
People assume McLuhan was always a cheerleader for these shifts. But his thinking could vibrate with anxiety at the coming impact of electronic media. He suspected we could have too much contact with each other—that incessant exposure to the world at large would leave us fearful and angry. He might have looked at the rise of Donald Trump on Twitter and nodded in recognition; a young McLuhan had watched charismatic European fascists in the 1940s use radio to inject hypernationalism directly into the souls of their supporters.
When Trump won last year, to widespread shock, liberal critics attacked the major social networks for enabling several unsettling trends. Platforms like Facebook and Twitter were viral hotbeds for conspiracy theories and disinformation. Memes that roared to life on image boards and fringe political sites—jittery with misogyny and white nationalism and hatred of Hillary Clinton—made the leap to the mainstream on social networks. Dangerous falsehoods, like the idea that Clinton ran a child-trafficking ring out of a pizzeria, spread widely; indeed, on Facebook the top 20 fabricated stories netted more engagement than real stories from news sources that actually did factual reporting, as BuzzFeed found. (This isn’t a problem only in the US: Anti-Muslim conspiracy stories are avidly circulated on Facebook in Myanmar, while Germans trade Facebook posts claiming Angela Merkel is Adolf Hitler’s daughter.) The same was true on Twitter, which turned out to be an efficient tool for small numbers of people to widely propagate abuse and hate speech.
Meanwhile, the “filter-bubble” effect, which writer Eli Pariser had pinpointed years before, arrived in full force. As my friend Zeynep Tufekci, a sociologist at the University of North Carolina and author of an upcoming book about political organizing in the digital age, tells me, “I’m Facebook friends with some people who support Trump, but I don’t recall seeing their Facebook updates—it appears the algorithms assumed I wouldn’t be interested.”
We can’t indict social media alone, or even primarily, for the rise of disinformation and politically abusive behavior. Traditional media—cable TV, radio, newspapers—recklessly amplified nonsense this political season (and were shamelessly played by Russia’s email hacks). They need their own reckoning. But social networks increasingly influence how people learn about the world. According to the Pew Research Center, about 44 percent of Americans cite Facebook as a source of news. It is a crucial part of “where we put the cursor of our attention all day long,” says Tim Wu, author of The Attention Merchants.
So here’s the question lingering in the air: How should social networks grapple with their civic impact? As we will discover, these issues will be devilishly hard to resolve.
The optimistic view is that there’s good precedent for fighting crap online. Back in the ’00s and early ’10s, internet giants waged a war against spam and content farms. To cut down on spam entreaties from Nigerian princes and the like, email providers used machine learning to detect spammish content; they also created shared blacklists. To quash content farms—low-quality insta-websites designed to game Google’s number one slot—the search engine rolled out an ambitious new ranking scheme called Panda, which down-ranked sites that used tricks like keyword stuffing (putting oodles of invisible, unrelated phrases on a page). Remarkably, it worked: Content farms vanished, and bulk spam is now a marginal problem.
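To see the spirit of those systems, here is a toy version of the kind of text classifier the spam wars produced: a naive Bayes model trained on labeled messages. It’s a minimal sketch, assuming scikit-learn and invented training data, not any provider’s actual system, which layered in sender reputation, shared blacklists, and far richer signals.

```python
# A toy spam filter in the spirit of the '00s-era systems: a naive Bayes
# classifier trained on labeled messages. Real providers used far richer
# features; the training data below is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Dear friend, I am a prince with $40 million to transfer",  # spam
    "URGENT: wire fees required to release your inheritance",   # spam
    "Claim your free prize now, limited time offer",            # spam
    "Lunch tomorrow to go over the budget?",                    # ham
    "Here are the meeting notes from Tuesday",                  # ham
    "Can you review my draft before Friday?",                   # ham
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

# Bag-of-words features feeding a multinomial naive Bayes model.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["transfer the prize fees to my prince"]))  # ['spam']
print(model.predict(["see you at the budget meeting"]))         # ['ham']
```

Panda applied the same basic logic at the ranking layer: score the content, demote what scores badly, using page-level signals rather than the words of an email.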
Social networks could use similar strategies to solve their current civic dilemmas. Consider fake news, an area where, as scholars have shown, algorithmic analysis could help identify crap. Software created by Kate Starbird, a professor of design and engineering at the University of Washington, analyzed chatter about a 2014 hostage crisis in Sydney and was able to distinguish, with 88 percent accuracy, whether a tweet was spreading a rumor or correcting it. And Filippo Menczer, a professor of informatics at Indiana University, has found that Twitter accounts posting political fakery have a heat signature: They tweet relentlessly and rarely reply to others.
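That heat signature suggests how simple a first-pass screen can be. Here’s a sketch of the heuristic with invented thresholds and account fields; Menczer’s group’s real classifiers draw on many more behavioral features.

```python
# Flag accounts with the "heat signature" Menczer describes: relentless
# posting, almost no replies to other users. The thresholds and the
# Account fields are invented for illustration; this is a heuristic only.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float   # average posting rate
    reply_fraction: float   # share of tweets that reply to someone

def looks_like_fakery_amplifier(acct: Account,
                                max_rate: float = 100.0,
                                min_replies: float = 0.02) -> bool:
    """Relentless broadcasting plus near-zero conversation."""
    return (acct.tweets_per_day > max_rate
            and acct.reply_fraction < min_replies)

accounts = [
    Account("@newsbot4721", tweets_per_day=480, reply_fraction=0.0),
    Account("@ordinary_user", tweets_per_day=6, reply_fraction=0.35),
]
for a in accounts:
    print(a.handle, "suspicious:", looks_like_fakery_amplifier(a))
```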
Social networks sit atop piles of data that can help identify bogus memes—and they can rely on their users’ eagerness to help too. Sure enough, Facebook has already begun to develop tools along these lines. In December it unveiled a system that makes it easier for anyone to flag a post if it seems like deliberate misinformation. If a link that purports to be a news story gets flagged by lots of users, it’s sent to a human Facebook team. That team posts it to a queue, where a group of external fact-checking organizations, including Snopes and PolitiFact, can review the story. If they judge it suspect, Facebook slaps a warning on it (“Disputed by 3rd-Party Fact Checkers”) and offers links to rebuttals by Snopes or the other checking sites. If a user tries to share that story later on, Facebook warns them before they post that it’s disputed. The goal isn’t to catch all falsehoods; the system targets the most blatant and viral posts.
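Stripped to its logic, that system is a thresholded escalation pipeline: flags accumulate, a story crosses a threshold, humans triage it, outside checkers dispute it, would-be sharers get warned. A bare sketch of the flow follows; the threshold and the data structures are assumptions, since Facebook hasn’t published its internals.

```python
# Sketch of the flag -> review -> label flow the article describes.
# FLAG_THRESHOLD and all structures here are invented; Facebook's
# real pipeline is not public.
from collections import Counter
from typing import Dict, List, Optional

FLAG_THRESHOLD = 50                    # assumed: flags needed before review
flag_counts: Counter = Counter()       # link -> number of user flags
review_queue: List[str] = []           # links awaiting the human team
disputed: Dict[str, List[str]] = {}    # link -> rebuttal URLs from checkers

def flag(link: str) -> None:
    """A user reports a link as deliberate misinformation."""
    flag_counts[link] += 1
    if flag_counts[link] == FLAG_THRESHOLD:
        review_queue.append(link)      # escalate to the human team's queue

def record_dispute(link: str, rebuttal_url: str) -> None:
    """An external checker (Snopes, PolitiFact, etc.) disputes the story."""
    disputed.setdefault(link, []).append(rebuttal_url)

def share_warning(link: str) -> Optional[str]:
    """Warning to show if a user tries to share a disputed story."""
    if link in disputed:
        return f"Disputed by 3rd-party fact-checkers. See: {disputed[link][0]}"
    return None
```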
There are plenty of other tweaks platforms could make. Craig Silverman, a BuzzFeed editor who has closely studied fake news, argues that Facebook and Twitter ought to make it easier to see the provenance of a link; right now, those from carefully reported sources like The Wall Street Journal look the same as ones from conspiracy sites. The platforms could instead emphasize logos and names so a user might realize, “Wait a minute, this domain name is HillaryClintonStartedAids.com,” Silverman says.
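Part of Silverman’s point is that the fix is mechanically trivial. A sketch using only Python’s standard library: pull the domain out of the link and pin it beside the headline. (A real platform would also resolve link shorteners and catch lookalike domains.)

```python
# Make a link's provenance visible by showing its domain beside the
# headline. The example URLs below are illustrative.
from urllib.parse import urlparse

def labeled_headline(headline: str, url: str) -> str:
    """Pin the link's domain next to its headline."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    return f"{headline}  [{domain}]"

print(labeled_headline("Markets rally on jobs report",
                       "https://www.wsj.com/articles/example"))
print(labeled_headline("Shocking claim goes viral",
                       "http://hillaryclintonstartedaids.com/story"))
```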
Now let’s look at the filter-bubble phenomenon. If they wanted to, Facebook or Twitter could design algorithms that expose us to people, ideas, or posts that aren’t in such lockstep with our views. As when Facebook suggests related content, “you could use these mechanisms to surface ideas that are ideologically challenging,” Pariser notes. Or as Tufekci argues: “Show more crosscutting stuff! I’m not saying drown the users in it. But the default shouldn’t be we’re just gonna feed you candy.”
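Tufekci’s caveat maps naturally onto a quota: rank the feed as usual, but reserve a slice of it for crosscutting posts. A sketch under invented assumptions; the engagement scores and the crosscutting label are hypothetical, and real feed ranking is vastly more elaborate.

```python
# Reserve a fixed share of the feed for "crosscutting" posts: items from
# outside the user's usual ideological cluster. All fields are invented.
from typing import List, Tuple

# (text, engagement_score, is_crosscutting)
Post = Tuple[str, float, bool]

def rank_feed(posts: List[Post], feed_size: int = 10,
              crosscutting_share: float = 0.2) -> List[Post]:
    """Rank by engagement, but hold a quota open for crosscutting posts."""
    by_score = sorted(posts, key=lambda p: p[1], reverse=True)
    quota = max(1, int(feed_size * crosscutting_share))
    cross = [p for p in by_score if p[2]][:quota]
    chosen = set(cross)
    rest = [p for p in by_score if p not in chosen][:feed_size - len(cross)]
    # Merge so the feed still reads in engagement order.
    return sorted(cross + rest, key=lambda p: p[1], reverse=True)
```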
Let your imagination go wild and you can concoct even more aggressive, more ambitious reforms. Imagine if you got rid of all the markers of virality: no counts of likes on Facebook, retweets on Twitter, or upvotes on Reddit! Artist Ben Grosser created a playful browser plug-in called the Facebook Demetricator that does precisely this. It’s fascinating to try: Suddenly social media stops being a popularity contest. You start assessing posts based on what they say, not on whether they’ve racked up 23,000 reposts.
Some scholars argue Facebook should go whole hog on editing and hire human teams to more comprehensively review trending stories, deleting ones built on lies. In fact, Facebook did just that last year until a conservative outcry ended the practice.
The biggest impediment to all this change, though, is economic. Traditional media organizations publish and broadcast nonsense because it attracts eyeballs for ads. New media have inherited this problem in spades: Facebook and Twitter and YouTube know—in vivid, quantitative detail—just how much their users prefer to see posts they agree with ideologically, seductive falsehoods included. Spam got on people’s nerves, so companies were eager to stamp it out; on some level, any attempts by social platforms to fight fake news and confirmation bias will come into conflict with their users’ appetite for them.
Nonetheless, public pressure did, in fact, prod Facebook to action after the election. Imagine if still greater pressure were to impel social networks to make even stronger moves against falsehoods and filter bubbles. Would we like the result?
It’s unclear. Waging war on disinformation isn’t easy, because not everyone agrees on what disinformation is. It’s unambiguous that “the pope endorses Donald Trump” isn’t true. But how about “Hillary Clinton lied about having pneumonia, so she’s a lying snake”? The most effective disinformation usually begins with an actual fact, then amplifies, distorts, or elides it; ban the distortion and you risk looking like you’re banning the nugget of truth too. Online interactions are conversation, and conversation has always been filled with bluster, jokes, and canards. “The idea that only truth should be allowed on social networks is antithetical to how people socially interact,” says Karen North, a professor of digital social media at the University of Southern California.
Or consider this example raised by New York University media theorist Clay Shirky: Last winter, activists who supported the Dakota Access Pipeline protests were encouraged to use Facebook to “check in” at the Standing Rock protest site, as a show of support and to confuse police. Those false check-ins “are fake news,” Shirky notes. Any policy aimed at enforcing truth on Facebook could easily be used to quash that activity.
“Look, fake news is a real problem,” he says. “But do liberals really want to hand the decisions over to a single large corporation?” Asking the platforms to be granular arbiters of truth would endow them with even more power than they already wield.
Whatever one can say about Donald Trump, he understands—and masterfully plays—the media, old and new. He uses Twitter to perform an end run around journalism, to utter falsehoods that are repeated by his followers and circulated further by mainstream news. When he attacks someone in a tweet, his supporters harass the target. Like other merchants of disinformation online, Trump exhales such a cloud of half-baked assertions that it leaves people mistrustful of everything. If you can do that, hey: What does it matter if social networks slap a “Disputed” label on your post? As Jon Favreau, one of President Obama’s former speechwriters, puts it: “Trump doesn’t care if we think he’s telling the truth—he just wants his supporters to doubt that anyone’s telling the truth.”
And yet Trump has millions of followers, eager ones. This is what gives pause to Jay Rosen, a professor of journalism at New York University. “You have to think about the demand side,” he says. It’s not enough to ask why people spread political disinformation, he says. You also have to ask, “Why do people want to consume this stuff so much?”
Ponder that and you begin to realize: There are limits to what technological fixes can achieve in civic life. Though social networks amplify American partisanship and distrust of institutions, those problems have been rising for years. There are plenty of drivers: say, two decades of right-wing messaging about how mainstream institutions—media, universities, scientists—cannot be trusted (a “retreat from empiricism,” as Rosen notes). And as my friend danah boyd, head of the Data & Society think tank, points out, we’ve lost many mechanisms that used to bridge cultural gaps between Americans from different walks of life: widespread military service, affordable colleges, mixed neighborhoods.
The old order was flawed and elitist and locked out too many voices; it produced seeming consensus by preventing many from being heard. We’re still fumbling around for new mechanisms that can replace that order and improve upon it, Pariser tells me. “It reminds me of how the secular world hasn’t found a replacement for some of the uses and tools that religions served. And the new media world hasn’t found a replacement for some of the ways that consensus was manufactured in the old world.” This is the year we need to begin rebuilding those connections—on our platforms and in ourselves.
Contributing editor Clive Thompson (@pomeranian99) is the author of Smarter Than You Think.
This article appears in the February issue.