On March 10, 2016, an AI played a move in a game of Go that no human had ever considered. The world's best player left the room. When he came back, the history of artificial intelligence had quietly changed forever.
For decades, Go was held up as the game that computers would never master. Chess had fallen to Deep Blue in 1997 — but chess, with its constrained piece movements and roughly 10⁴⁴ possible positions, was considered tractable, eventually. Go was different.
On a 19×19 board, with black and white stones and simple rules about capture, Go generates more possible positions than there are atoms in the observable universe. The number is approximately 10¹⁷⁰. It cannot be solved by brute force. Top players describe it as requiring intuition — a sense of the whole board, a feel for position, a grasp of spatial beauty that seems impossible to reduce to calculation.
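The scale of that number is easy to sanity-check. Each of the board's 361 points can be empty, black, or white, which gives 3³⁶¹ raw configurations as an upper bound (the exact count of legal positions, about 2.1 × 10¹⁷⁰, was computed by John Tromp in 2016 and is somewhat smaller, but the order of magnitude is the same):

```python
# Upper bound on Go board configurations: each of the 361 points
# is empty, black, or white. Many of these are illegal positions,
# so the true legal count (~2.1e170) is smaller, but comparable.
positions_upper_bound = 3 ** 361
atoms_in_universe = 10 ** 80   # common order-of-magnitude estimate

print(len(str(positions_upper_bound)) - 1)              # → 172
print(positions_upper_bound > atoms_in_universe ** 2)   # → True
```

Not merely more positions than atoms in the universe — more than the square of that number.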
In 2016, the dominant view among AI researchers was that a computer beating a top professional Go player was still at least a decade away. DeepMind had other plans.
Lee Sedol was not just a good Go player. He was, by most accounts, the greatest of his generation — a nine-dan professional who had won 18 world championship titles and was known for his aggressive, creative, and unpredictable style. He was, at the time, the closest thing Go had to a singular genius.
Before the match, Lee was relaxed. Confident. He predicted a 5–0 sweep.
"I have heard that AlphaGo is remarkably strong and getting stronger, but I am confident that I can win, at least this time."
— Lee Sedol, before the match · March 2016

DeepMind's AlphaGo had beaten Fan Hui — the European Go champion — 5–0 a few months earlier. But Fan Hui was not Lee Sedol. The world waited to see whether the gap between a top European professional and the world's best player would be enough to stop the machine.
It was not.
The match was held in Seoul, South Korea between March 9th and 15th, 2016. It was broadcast live worldwide. Go commentators — professional players themselves — provided real-time analysis of each game. Millions watched.
Game 2. Move 37. AlphaGo was playing black. The position was complex — a typical mid-game tangle of territorial battles. Both sides had reasonable claims on different parts of the board.
AlphaGo placed a stone on the fifth line, near the upper right area of the board. A shoulder hit — an approach move angled against White's position.
The commentators went quiet. Then, slowly, they began to discuss whether it was a mistake.
In Go, a shoulder hit on the fifth line is almost unheard of. Conventional wisdom — built across thousands of years of play — holds that such a move belongs on the third or fourth line. Playing it on the fifth line looked loose. Overextended. Wrong.
Lee Sedol stared at the board. He stood up. He walked out of the room.
He was gone for fifteen minutes.
"It's a move I've never seen before in my entire career. It was incredibly creative and beautiful."
— Lee Sedol, reflecting on Move 37

When he returned, he played his next stone. AlphaGo eventually won Game 2. The move had worked — not immediately, but as part of a long sequence that Lee would only fully understand in hindsight. The AI had seen something humans couldn't.
Move 37 wasn't just a clever play in a board game. It was a demonstration of something new: AI not merely outperforming humans within their framework, but operating outside it entirely.
AlphaGo did not win the way Deep Blue won at chess, by brute-force search through vast numbers of future positions. It used deep neural networks trained on millions of human games — then improved by playing against itself millions more times — and used those networks to guide a far more selective search. Move 37 emerged from that self-play, not from any human instruction.
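The core idea of self-play, a program becoming stronger by playing copies of itself with no outside instruction, can be shown at toy scale. AlphaGo's real system pairs deep networks with Monte Carlo tree search; the sketch below is only a stand-in for the principle, using tabular Q-learning on the game of 1-2-3 Nim (take 1 to 3 stones from a pile; whoever takes the last stone wins). All names and parameters here are illustrative, not DeepMind's:

```python
import random

# Self-play at toy scale: one shared value table plays both sides
# of 1-2-3 Nim and improves from the outcomes of its own games.
PILE, ACTIONS = 10, (1, 2, 3)
Q = {(n, a): 0.0 for n in range(1, PILE + 1) for a in ACTIONS if a <= n}
random.seed(0)

def legal(n):
    return [a for a in ACTIONS if a <= n]

def best(n):
    return max(legal(n), key=lambda a: Q[(n, a)])

for episode in range(20000):
    n = random.randint(1, PILE)
    while n > 0:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        a = random.choice(legal(n)) if random.random() < 0.3 else best(n)
        if n - a == 0:
            target = 1.0   # taking the last stone wins outright
        else:
            # negamax target: our value is minus the opponent's best reply
            target = -max(Q[(n - a, b)] for b in legal(n - a))
        Q[(n, a)] += 0.5 * (target - Q[(n, a)])
        n -= a

# With no human examples, the policy rediscovers the known strategy
# for this game: always leave your opponent a multiple of 4 stones.
print([best(n) for n in (5, 6, 7, 9)])   # → [1, 2, 3, 1]
```

Nothing in the training loop encodes the "multiples of 4" rule; it emerges from self-play alone, which is the same structural point the paragraph above makes about Move 37, scaled down by many orders of magnitude.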
Go professionals describe their game using words like beauty, elegance, and harmony. Move 37 was called "alien" and "inhuman" — but also, repeatedly, "beautiful." AlphaGo had developed not just a different strategy, but a different aesthetic. Something that looked wrong by human standards turned out to embody a deeper rightness.
After the match, professional Go players began studying AlphaGo's games as teaching material — the same way students study master games. AI had not just matched human knowledge; it had expanded it. Go theory has genuinely changed because of what AlphaGo revealed.
AlphaGo's approach — deep learning, self-play, reinforcement learning — became the template for AI breakthroughs across science: protein folding, drug discovery, weather forecasting. Move 37 wasn't just a Go move. It was a proof that AI could generate genuinely new knowledge in domains previously thought uniquely human.
The story doesn't end with AlphaGo winning. Down 3–0, with the match decided, Lee Sedol came back for Game 4. And in that game, facing an undefeated machine, he played what became known as the "Hand of God" — Move 78.
Lee found a wedge play that, it later emerged, AlphaGo had not adequately prepared for — a sequence deep enough that the AI's probability calculations became unreliable. AlphaGo started making errors. Lee won Game 4. It remains the only win by a human against a full-strength AlphaGo.
"I never thought AlphaGo could play so perfectly... but today I think I've found the Achilles heel. From now on AlphaGo will be different."
— Lee Sedol, after winning Game 4

The match's consequences rippled outward — through the Go world, through AI research, and through Lee Sedol's own life.
Across the professional Go world, opening theory that had been stable for centuries was rewritten. Moves once dismissed as amateurish — the early 3-3 invasion among them — were revealed as profound.
In 2017, DeepMind released AlphaGo Zero — trained with no human games at all, only self-play. It defeated the original AlphaGo 100–0. It independently rediscovered known Go principles, then surpassed them entirely.
The same deep learning techniques used in AlphaGo were applied to protein structure prediction. AlphaFold cracked a 50-year grand challenge in biology, predicting the 3D structure of proteins with accuracy approaching laboratory experiments — accelerating drug discovery across the world.
In 2019, Lee Sedol announced his retirement from professional Go. His reason: "There is an entity that cannot be defeated." He was 36 years old. He is the only person who has ever beaten AlphaGo in a formal match.
"Even if I become the number one, there is an entity that cannot be defeated."
— Lee Sedol, announcing his retirement from professional Go · November 2019

Move 37 came 37 stones into a single game of a five-game match played over the course of a week. It took roughly a second to place. It is now one of the most studied and discussed moves in the history of the game — not because it won the match, but because it came from somewhere humans hadn't been. It was the first time most people watching understood, viscerally, that AI was not just learning to beat us at our own games. It was learning things we hadn't thought to learn.