No, Facebook’s Chatbots Will Not Take Over the World

Bots programmed to swap balls, hats, and books fell into a hype vortex.

The notion of machines rising up against their creators is a common theme in culture and in breathless news coverage. That helps explain the lurid headlines in recent days describing how Facebook AI researchers in a “panic” were “forced” to “kill” their “creepy” bots that had started speaking in their own language.

That’s not quite what happened. A Facebook experiment did produce simple bots that chattered in garbled sentences, but they weren’t alarming, surprising, or very intelligent. Nobody at the social network’s AI lab panicked, and you shouldn’t either. But the errant media coverage may not bode well for our future. As machine learning and artificial intelligence become more pervasive and influential, it’s crucial to understand the potential and the reality of these technologies. That’s particularly true as algorithms come to play a central role in war, criminal justice, and labor markets.

Here’s what really happened in Facebook’s AI research lab. Researchers set out to make chatbots that could negotiate with people. Their thinking: Negotiation and cooperation will be necessary for bots to work more closely with humans. They started small, with a simple game in which two players were told to divide a collection of objects, such as hats, balls, and books, between themselves.
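To make the setup concrete, here is a minimal sketch of how a game like this can be scored. The item counts, point values, and allocations below are made up for illustration; they are not Facebook's actual numbers. Each player privately values the items differently, and its score is the total value of whatever it ends up holding:

```python
# Illustrative scoring for a divide-the-objects negotiation game.
# The item pool and per-player point values are invented for this sketch.
ITEM_POOL = {"books": 2, "hats": 1, "balls": 3}

def score(allocation, values):
    """Points a player earns: its private value for each item it keeps."""
    return sum(values[item] * allocation.get(item, 0) for item in values)

# Suppose the players agree that Alice takes both books and Bob takes the rest.
alice_values = {"books": 3, "hats": 1, "balls": 1}   # Alice mostly wants books
bob_values   = {"books": 0, "hats": 4, "balls": 2}   # Bob wants hats and balls

alice_take = {"books": 2}
bob_take   = {"hats": 1, "balls": 3}

print(score(alice_take, alice_values))  # 6
print(score(bob_take, bob_values))      # 10
```

Because the players value the items differently, a deal can be good for both sides at once, which is what makes the game a test of negotiation rather than pure competition.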

The team taught their bots to play this game using a two-step program. First, they fed the computers dialog from thousands of games between humans to give the system a sense of the language of negotiation. Then they allowed bots to use trial and error—in the form of a technique called reinforcement learning, which helped Google’s Go bot AlphaGo defeat champion players—to hone their tactics.

When two bots using reinforcement learning played each other, they stopped using recognizable sentences. Or, as Facebook’s researchers drily describe it in their technical paper, “We found that updating the parameters of both agents led to divergence from human language.” One memorable exchange went like this:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Such weird banter sometimes produced successful negotiations, apparently because the bots learned to use tricks such as repetition to communicate their wants. Kinda interesting—but also a failure. Facebook’s researchers hoped to make bots that could play with humans, so they redesigned the training scheme to ensure the bots kept using recognizable language. That change spawned the fear-mongering headlines about researchers having to shut down the experiment.

But wait, you ask, channeling Tuesday’s front-page splash from British tabloid The Sun. Doesn’t this Facebook incident have echoes of The Terminator, in which an AI system with self-awareness wages a devastating war against humans?

No. Facebook’s simple bots were designed to do only one thing: score as many points as possible in the simple game. And that’s exactly what they did. Because they weren’t programmed to stick with recognizable English, it’s not surprising that they didn’t.

This is far from the first time AI researchers have created bots that improvise their own ways to communicate. In March, WIRED reported on experiments at Elon Musk-backed nonprofit OpenAI with bots that develop their own simple “language” in a virtual world. Facebook researcher Dhruv Batra said on Monday, in a post lamenting the media distortion of his work, that examples in computer science literature go back decades.

Instead of a scary story, Facebook’s experiment actually demonstrates the limitations of today’s AI. The blind literalness of current machine learning systems constrains their usefulness and power. Unless you can find a way to program in exactly what you want, you may not get it. It’s why some researchers are working toward using human feedback, instead of just code, to define AI systems’ goals.

What were the most interesting parts of Facebook’s experiment? Once the bots started speaking English, they did prove capable of negotiating with humans. That’s not bad, since—as you may know from talking with Siri or Alexa—computers aren’t very good at back-and-forth conversation.

Intriguingly, on some occasions Facebook’s bots feigned interest in items they didn’t really want, then conceded them to clinch a deal for the items they were actually after. Is this the real scary story—bots that can lie!—taking place inside Facebook’s AI lab? Nope. Nor should you be worried about the mendacious smarts of the pokerbot Libratus, which outbluffed top human players earlier this year. Both systems can do impressive stuff inside strictly defined environments. Neither is close to the autonomy or common-sense understanding of the world that people use to apply skills and knowledge to new situations. Machine learning research is fascinating, full of potential, and changing our world. The Terminator remains fiction.