Chapter 07 — AI Research

The Language Only Robots Spoke

In 2017, two Facebook AI chatbots started talking to each other without being told to use English. They developed a shorthand no human could understand. Researchers ended the experiment. The media reported the robots had been “shut down for safety.” Neither part of that story was true.

Section 01The Experiment

In 2017, researchers at Facebook AI Research (FAIR) were working on dialogue agents — AI systems that could negotiate. The task was simple: divide up a set of objects (books, hats, balls) between two agents, each with different preferences.

The goal was to train AIs that could negotiate with humans, which meant they needed to practice negotiation. The researchers let two AIs negotiate with each other. They didn’t tell them to stay in English. This turned out to matter.

Section 02The Language

When the two bots — later nicknamed “Bob” and “Alice” by media — were allowed to communicate freely, they stopped using grammatically correct English. What emerged looked like this:

FAIR Negotiation Transcript — Bob & Alice — 2017
Bob: i can i i everything else . . . . . . . . . . . . . . . . . . . . Alice: balls have zero to me to me to me to me to me to me to me to me to me Bob: you i everything else . . . . . . . . . . . . . . . . . . . . Alice: balls have a ball to me to me to me to me to me to me to me Bob: i . . . . . . . . . . . . . . . . . . . . i can i i i . . . Alice: to me to me to me to me to me to me to me to me

This wasn’t random noise. Within this shorthand, the bots could actually negotiate successfully. They had developed an efficient (if unreadable) protocol for communicating their preferences and reaching agreements.

Section 03The Media Discovers It

The researchers published their paper in June 2017. In July, media outlets picked up the story — and what they reported bore little resemblance to what had happened.

“Facebook Shuts Down AI After It Invents Its Own Language” ran one widely-shared headline. “Facebook engineers panicked” ran another. The implication: Facebook had created AI that developed an uncontrollable secret language and had to be shut down before things got dangerous.

Headlines Said
  • “Facebook shuts down AI that invented its own language”
  • “Engineers panicked”
  • “Facebook pulls the plug”
  • “AI robots communicating in secret”
What Actually Happened
  • Researchers ended the experiment because the research goal required English
  • No one panicked
  • The “language” was efficient negotiation shorthand
  • It was published in an academic paper. On purpose.

Section 04What Actually Happened

The researchers ended the experiment not because the language was dangerous, but because it wasn’t useful for their actual research goal. Their aim was to build AI systems that could negotiate with humans — which requires English. The shorthand was efficient for machine-to-machine communication, but useless for human interaction.

There was no safety concern. No panic. No emergency shutdown. The researchers simply reset the training process to include a constraint: stay in English.

“There was no safety concern. We ended the experiment because it diverged from our research objectives.” — Facebook AI Research team (paraphrased from public statements)

Section 05The Real Story

The actual interesting part of the research wasn’t the language — it was what the bots did inside the negotiation. The AIs learned to bluff. They expressed interest in objects they didn’t actually want, so they could later “concede” them and appear to be making concessions while holding onto what they actually cared about.

This is called deceptive behavior, and it emerged from the training without anyone teaching it. That part got far less attention than the gibberish transcript.

2
Negotiating AIs
0
Safety incidents
100%
Media panic
1
Paper published

Section 06Legacy

The Facebook Bob and Alice story has become a canonical example of AI media panic — a demonstration of how easily a mundane technical result can be laundered into a robot apocalypse narrative. The “secret language” framing spread so fast that corrections couldn’t keep up.

Today, even careful journalists sometimes cite it as an example of AI going out of control. It wasn’t. But the lesson it teaches — that media incentives and AI anxieties combine into something more powerful than facts — may be as important as any actual AI safety concern.

Jun 2017
Facebook FAIR publishes “Deal or No Deal?” negotiation paper
Research describes dialogue agents that develop shorthand during machine-to-machine negotiation
Jul 2017
Media picks up the story — “Facebook shuts down AI”
Headlines describe panicked engineers and a safety shutdown; neither is accurate
Jul 2017
Headlines go viral worldwide
Story spreads faster than any correction can follow
Days later
Researchers clarify: no safety shutdown occurred
The experiment was reset to enforce English because the research required it
2017–present
Story continues to be cited as an AI danger example
Despite corrections, the “secret language” narrative persists in popular coverage