01 — The Game: The Last Frontier
For decades, Go was held up as the game that computers would never master. Chess had fallen to Deep Blue in 1997 — but chess, with its finite piece movements and roughly 10⁴⁴ possible positions, was considered tractable, eventually. Go was different.
On a 19×19 board, with black and white stones and simple rules about capture, Go generates more possible positions than there are atoms in the observable universe. The number is approximately 10¹⁷⁰. It cannot be solved by brute force. Top players describe it as requiring intuition — a sense of the whole board, a feel for position, a grasp of spatial beauty that seems impossible to reduce to calculation.
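That number is easy to sanity-check. Each of the board's 361 intersections is empty, black, or white, giving 3³⁶¹ ≈ 1.7 × 10¹⁷² raw configurations; discarding the illegal ones (stones with no liberties) leaves roughly 10¹⁷⁰ legal positions, a count John Tromp computed exactly in 2016. A back-of-envelope sketch in Python:

```python
# Back-of-envelope scale of Go's state space (a sketch; the exact
# legal-position count, ~2.08e170, is due to John Tromp, 2016).
board_points = 19 * 19                 # 361 intersections
raw_configs = 3 ** board_points        # each point: empty, black, or white

atoms_in_universe = 10 ** 80           # commonly cited order of magnitude

print(len(str(raw_configs)) - 1)       # exponent of the raw count: 172
print(raw_configs > atoms_in_universe ** 2)  # more than atoms *squared*
```

Even this crude upper bound makes the point: no conceivable hardware enumerates the tree, which is why evaluation had to be learned rather than searched exhaustively.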
In 2016, the dominant view among AI researchers was that a computer beating a top professional Go player was still at least a decade away. DeepMind had other plans.
02 — The Champion: Lee Sedol
Lee Sedol was not just a good Go player. He was, by most accounts, the greatest of his generation — a nine-dan professional who had won 18 world championship titles and was known for his aggressive, creative, and unpredictable style. He was, at the time, the closest thing Go had to a singular genius.
Before the match, Lee was relaxed. Confident. He predicted a 5–0 sweep.
DeepMind's AlphaGo had beaten Fan Hui — the European Go champion — 5–0 a few months earlier. But Fan Hui was not Lee Sedol. The world waited to see whether the gap between a top European player and the world's best would be enough.
It was not.
03 — The Match: Five Games in Seoul
The match was held in Seoul, South Korea, from March 9 to 15, 2016. It was broadcast live worldwide. Go commentators — professional players themselves — provided real-time analysis of each game. Millions watched.
Fifth Line — Human moves cluster predictably. Then one stone lands where no one has played before. For fifteen minutes, the room goes quiet.
04 — The Move: What Happened on Move 37
Game 2. Move 37. AlphaGo was playing black. The position was complex — a typical mid-game tangle of territorial battles. Both sides had reasonable claims on different parts of the board.
AlphaGo placed a stone on the fifth line, near the upper right area of the board. A shoulder hit — an approach move angled against White's position.
The commentators went quiet. Then, slowly, they began to discuss whether it was a mistake.
In Go, the fifth line is unusual for this type of move. Conventional wisdom — built across thousands of years of play — dictates that such positions call for moves on the third or fourth line. Playing on the fifth line here looked loose. Overextended. Wrong.
Lee Sedol stared at the board. He stood up. He walked out of the room.
He was gone for fifteen minutes.
When he returned, he played his next stone. AlphaGo eventually won Game 2. The move had worked — not immediately, but as part of a long sequence that Lee would only fully understand in hindsight. The AI had seen something humans couldn't.
05 — Why It Mattered: More Than a Game
Move 37 wasn't just a clever play in a board game. It was a demonstration of something new: AI not merely outperforming humans within their framework, but operating outside it entirely.
Not Brute Force
Unlike Deep Blue's approach to chess, AlphaGo didn't win by calculating every possible future position. It used deep neural networks trained on millions of human games — then improved by playing against itself millions more times. Move 37 emerged from that self-play, not from any human instruction.
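To make "improved by playing against itself" concrete, here is a deliberately tiny relative of that idea: tabular self-play value learning on tic-tac-toe. Everything below is invented for illustration — AlphaGo's real pipeline used deep policy and value networks plus Monte Carlo tree search, not a lookup table — but the loop has the same shape: play yourself, then nudge your evaluations toward the observed outcome.

```python
import random

# Illustrative self-play sketch (not AlphaGo's actual method):
# V[state] estimates the game outcome for the player to move.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
V = {}  # board string -> learned value in [-1, 1]

def winner(b):
    for i, j, k in LINES:
        if b[i] != "." and b[i] == b[j] == b[k]:
            return b[i]
    return None

def moves(b):
    return [i for i, c in enumerate(b) if c == "."]

def place(b, m, p):
    return b[:m] + p + b[m + 1:]

def choose(b, player, eps):
    ms = moves(b)
    if random.random() < eps:          # explore occasionally
        return random.choice(ms)
    # Greedy: pick the move that minimizes the opponent's value of
    # the resulting position (negamax over the learned table).
    return max(ms, key=lambda m: -V.get(place(b, m, player), 0.0))

def self_play_episode(alpha=0.2, eps=0.1):
    b, player, history = "." * 9, "X", ["." * 9]
    while winner(b) is None and moves(b):
        b = place(b, choose(b, player, eps), player)
        history.append(b)
        player = "O" if player == "X" else "X"
    # Terminal value for the player to move: -1 if the previous player
    # just won, 0 for a draw; the sign alternates walking backward.
    z = -1.0 if winner(b) else 0.0
    for s in reversed(history):
        V[s] = V.get(s, 0.0) + alpha * (z - V.get(s, 0.0))
        z = -z

random.seed(0)
for _ in range(5000):
    self_play_episode()
```

Replace the table with a deep network, and the raw game outcome with statistics from a guided tree search, and you have — in spirit — the loop that AlphaGo Zero later ran with no human data at all. No human instruction tells the agent which moves are good; the evaluations emerge entirely from its own games.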
A New Aesthetic
Go professionals describe their game using words like beauty, elegance, and harmony. Move 37 was called "alien" and "inhuman" — but also, repeatedly, "beautiful." AlphaGo had developed not just a different strategy, but a different aesthetic. Something that looked wrong by human standards turned out to embody a deeper rightness.
Knowledge Beyond Human Knowledge
After the match, professional Go players began studying AlphaGo's games as teaching material — the same way students study master games. AI had not just matched human knowledge; it had expanded it. Go theory has genuinely changed because of what AlphaGo revealed.
The Proof of Concept
AlphaGo's approach — deep learning, self-play, reinforcement learning — became the template for AI breakthroughs across science: protein folding, drug discovery, weather forecasting. Move 37 wasn't just a Go move. It was a proof that AI could generate genuinely new knowledge in domains previously thought uniquely human.
06 — The Human Strikes Back: Game 4 and the Hand of God
The story doesn't end with AlphaGo winning. Down 3–0, with the match decided, Lee Sedol came back for Game 4. And in that game, facing an undefeated machine, he played what became known as the "Hand of God" — Move 78.
Lee found a wedge play that, it later emerged, AlphaGo had not adequately prepared for — a sequence deep enough that the AI's probability calculations became unreliable. AlphaGo started making errors. Lee won Game 4. It remains the only win by a human against a full-strength AlphaGo.
07 — Aftermath: What Happened Next
The match's consequences rippled outward — through the Go world, through AI research, and through Lee Sedol's own life.
Go Theory Transformed
Professional Go players worldwide began studying AlphaGo's games as canonical teaching material. Opening theory that had been stable for centuries was rewritten. Moves once dismissed as amateurish were revealed as profound.
AlphaGo Zero
In 2017, DeepMind released AlphaGo Zero — trained with no human games at all, only self-play. It defeated the original AlphaGo 100–0. It independently rediscovered known Go principles, then surpassed them entirely.
AlphaFold Changes Biology
The same deep learning techniques used in AlphaGo were applied to protein structure prediction. AlphaFold solved a 50-year grand challenge in biology, predicting the 3D structure of proteins with accuracy rivaling experimental methods — accelerating drug discovery across the world.
Lee Sedol Retires
In 2019, Lee Sedol announced his retirement from professional Go. His reason: "There is an entity that cannot be defeated." He was 36 years old. He is the only person who has ever beaten AlphaGo in a formal match.
Move 37 came 37 moves into a single game, in a five-game match played over a week. It took roughly a second to place. It is now one of the most studied and discussed moves in the history of the game — not because it won the match, but because it came from somewhere humans hadn't been. It was the first time most people watching understood, viscerally, that AI was not just learning to beat us at our own games. It was learning things we hadn't thought to learn.
What if Move 37 was only the beginning — if the same capacity for alien insight that stunned a Go master is already at work in protein folding, materials science, and mathematical proof, generating solutions that no human would have found alone? The alien move was a gift. But we are not building the institutions, the peer review, or the interpretive frameworks fast enough to recognize the next Move 37 when it arrives — not on a game board, but in a proposal that could cure a disease, solve an energy crisis, or reshape a field.