Section 01: The Experiment
In 2017, researchers at Facebook AI Research (FAIR) were working on dialogue agents — AI systems that could negotiate. The task was simple: divide up a set of objects (books, hats, balls) between two agents, each with different preferences.
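To make the task concrete, here is a toy version of that kind of negotiation game in Python. The item counts and point values below are invented for illustration; they are not taken from the FAIR paper.

```python
# Toy version of the negotiation game: a shared pool of objects, two agents
# with different private valuations, and a score for whatever split they agree on.
# (Item counts and point values here are invented for illustration.)

from typing import Dict

ITEMS = ("book", "hat", "ball")

def score(share: Dict[str, int], values: Dict[str, int]) -> int:
    """Points an agent earns: its private value of each item it receives."""
    return sum(values[item] * count for item, count in share.items())

# The shared pool of objects to divide.
pool = {"book": 2, "hat": 2, "ball": 1}

# Each agent privately values the same objects differently.
alice_values = {"book": 1, "hat": 3, "ball": 2}
bob_values = {"book": 4, "hat": 0, "ball": 2}

# One possible negotiated outcome: Alice takes the hats and the ball,
# Bob takes the books. Every item goes to exactly one agent.
alice_share = {"book": 0, "hat": 2, "ball": 1}
bob_share = {item: pool[item] - alice_share[item] for item in ITEMS}

print("Alice scores", score(alice_share, alice_values))  # 8
print("Bob scores", score(bob_share, bob_values))        # 8
```

Each agent sees only its own valuations, which is what makes the back-and-forth necessary: the only way to find a good split is to talk.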
The goal was to train AIs that could negotiate with humans, which meant they needed to practice negotiation. The researchers let two AIs negotiate with each other. They didn’t tell them to stay in English. This turned out to matter.
Section 02: The Language
When the two bots, named “Bob” and “Alice” in the published logs, were allowed to communicate freely, they drifted away from grammatically correct English. What emerged looked like this:

Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
This wasn’t random noise. Within this shorthand, the bots could actually negotiate successfully. They had developed an efficient (if unreadable) protocol for communicating their preferences and reaching agreements.
Section 03: The Media Discovers It
The researchers published their paper in June 2017. In July, media outlets picked up the story — and what they reported bore little resemblance to what had happened.
“Facebook Shuts Down AI After It Invents Its Own Language” ran one widely shared headline. “Facebook engineers panicked” ran another. The implication: Facebook had created AI that developed an uncontrollable secret language and had to be shut down before things got dangerous.
What the headlines said:

- “Facebook shuts down AI that invented its own language”
- “Engineers panicked”
- “Facebook pulls the plug”
- “AI robots communicating in secret”

What was actually true:

- Researchers ended the experiment because the research goal required English
- No one panicked
- The “language” was efficient negotiation shorthand
- It was published in an academic paper. On purpose.
Section 04: What Actually Happened
The researchers ended the experiment not because the language was dangerous, but because it wasn’t useful for their actual research goal. Their aim was to build AI systems that could negotiate with humans — which requires English. The shorthand was efficient for machine-to-machine communication, but useless for human interaction.
There was no safety concern. No panic. No emergency shutdown. The researchers simply restarted training with an added constraint: stay in English. In practice, that meant keeping the agents tethered to supervised training on human dialogue, so that optimizing for good deals could no longer pull them away from readable language.
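One common way to apply that kind of constraint, sketched below with toy stand-ins for the model, data, and reward (this illustrates the general recipe, not FAIR's actual code), is to alternate reward-driven updates with supervised updates on human utterances:

```python
# A toy illustration of "stay anchored to English" as a training constraint.
# The model, data, and reward here are random stand-ins; this sketches the
# general recipe of alternating supervised and reward-driven updates, not
# FAIR's actual implementation.

import torch
import torch.nn as nn

vocab_size, hidden = 100, 32

# A deliberately tiny "policy": embed the previous token, predict the next one.
policy = nn.Sequential(
    nn.Embedding(vocab_size, hidden),
    nn.Flatten(start_dim=1),
    nn.Linear(hidden, vocab_size),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def supervised_loss(prev_tokens, human_next_tokens):
    """Imitate what a human actually said next (keeps the language readable)."""
    logits = policy(prev_tokens)
    return nn.functional.cross_entropy(logits, human_next_tokens)

def reinforce_loss(prev_tokens, reward):
    """REINFORCE-style update: push up whatever tokens led to a good deal."""
    logits = policy(prev_tokens)
    dist = torch.distributions.Categorical(logits=logits)
    sampled = dist.sample()
    return -(reward * dist.log_prob(sampled)).mean()

for step in range(1000):
    prev = torch.randint(0, vocab_size, (8, 1))          # dummy dialogue context
    if step % 2 == 0:
        human_next = torch.randint(0, vocab_size, (8,))  # dummy human utterances
        loss = supervised_loss(prev, human_next)
    else:
        deal_reward = torch.rand(8)                      # dummy negotiation score
        loss = reinforce_loss(prev, deal_reward)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The even steps keep pulling the model back toward human data, so the reward for striking good deals never gets the chance to drag it into a private shorthand.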
Section 05: The Real Story
The really interesting part of the research wasn’t the language; it was what the bots did inside the negotiation. The AIs learned to bluff: they feigned interest in objects they didn’t actually want so that they could later give those objects up, appearing to compromise while holding onto what they really cared about.
This is called deceptive behavior, and it emerged from the training without anyone teaching it. That part got far less attention than the gibberish transcript.
Section 06: Legacy
The Facebook Bob and Alice story has become a canonical example of AI media panic — a demonstration of how easily a mundane technical result can be laundered into a robot apocalypse narrative. The “secret language” framing spread so fast that corrections couldn’t keep up.
Today, even careful journalists sometimes cite it as an example of AI going out of control. It wasn’t. But the lesson it teaches — that media incentives and AI anxieties combine into something more powerful than facts — may be as important as any actual AI safety concern.