01 — The Launch
Meta's Big Bet on Science
On November 15, 2022, Meta AI unveiled Galactica — a large language model trained on 48 million scientific papers, textbooks, encyclopedias, and reference material. It was designed to help researchers summarize literature, explain concepts, and assist with scientific writing.
Meta encouraged the public to try the demo. They were proud of it.
The public tried the demo. Within hours, they were sharing screenshots on Twitter.
02 — The Problem
Confident, Wrong, and Impossible to Tell Apart
Galactica's output looked like science. It used the correct style, cited things in the right format, deployed technical vocabulary fluently. The problem was that the content was often invented — and it had no mechanism for flagging when it was making things up. A hallucination wrapped in a proper citation format is harder to spot than obvious nonsense.
False Positive — Data points rise with absolute confidence. Each one glows. Each one is wrong. Confidence and correctness are not the same thing.
03 — The 72 Hours
A Timeline of Scientific Embarrassment
Galactica Launches
Meta publishes the Galactica paper and opens a public demo. Press coverage is initially positive.
First Screenshots Surface
Researchers begin sharing Galactica outputs on Twitter. A prompt asking about the history of bears in space returns a plausible-sounding but entirely invented narrative. Other prompts generate fake citations to papers that don't exist.
Scientists Go Public
Michael Black, director of the Max Planck Institute for Intelligent Systems, posts a detailed critique. Others follow. The hashtag #Galactica fills with examples of confident scientific nonsense.
The Race Content Emerges
Users demonstrate that Galactica generates pseudoscientific content about race and intelligence that reads like it came from a peer-reviewed paper. The criticism intensifies sharply.
Yann LeCun Tries to Defend It
Meta's chief AI scientist Yann LeCun pushes back on critics on Twitter. The defense does not go well. Researchers point out that defending a model generating confident scientific misinformation is not a great look for one of the world's most prominent AI scientists.
Meta Pulls the Demo
Meta quietly takes down the Galactica public demo. No announcement. No explanation. The paper and model weights remain available, but the "try it yourself" interface disappears. Three days after launch, it is gone.
04 — The Roasting
Scientists Were Not Kind
05 — The Timing
Two Weeks Before ChatGPT
Galactica was pulled on November 17, 2022. Thirteen days later, OpenAI launched ChatGPT — the fastest-growing consumer application in history, reaching 100 million users in two months. One week, a scientific AI so embarrassing it lasted three days; the next, a general AI that captivated the world.
The Worst Two Weeks
Meta's Galactica became the cautionary tale that preceded the biggest AI launch in history — the footnote before the ChatGPT chapter.
What Galactica Got Right
The underlying idea wasn't wrong. Scientific AI assistants are genuinely useful — they just need to know what they don't know. Galactica's failure was overconfidence, not ambition.
The Hallucination Problem
Galactica didn't invent AI hallucination — it demonstrated the problem in a domain where hallucination was immediately verifiable and unforgivable. The confidence calibration problem it exposed still affects every LLM today.
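The calibration gap can be made concrete with a toy expected calibration error (ECE) computation: bucket a model's stated confidences, then compare average confidence to empirical accuracy in each bucket. This is a minimal sketch of the standard metric; the sample numbers are illustrative, not measurements of Galactica or any real model.

```python
# Toy expected calibration error (ECE): bucket predictions by stated
# confidence and compare average confidence to empirical accuracy.
# The sample data below is illustrative, not from any real model.

def expected_calibration_error(confidences, correct, n_bins=5):
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece

# A model that sounds ~96% sure of everything but is right half the time:
confs = [0.95, 0.97, 0.96, 0.94, 0.98, 0.95]
right = [1, 0, 1, 0, 0, 1]
print(round(expected_calibration_error(confs, right), 3))  # ~0.458
```

A perfectly calibrated model scores 0.0; the gap here is exactly the "confident and wrong" failure mode the demo exposed.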
Cite Your Sources
After Galactica, citation accuracy became a first-order concern in AI research. The case — alongside the AI lawyer story — is now a standard reference for why hallucinated citations are not just embarrassing, but harmful.
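One mitigation that followed is mechanical citation verification: accept a generated reference only if it resolves against a trusted index. Below is a minimal sketch; the `KNOWN_TITLES` set is a stand-in for a real bibliographic database (a production system would query an index such as Crossref or Semantic Scholar instead).

```python
# Minimal hallucinated-citation filter: a generated reference is kept only
# if its normalized title appears in a trusted index. KNOWN_TITLES is a
# hypothetical stand-in for a real bibliographic database.
import re

KNOWN_TITLES = {
    "attention is all you need",
    "deep residual learning for image recognition",
}

def normalize(title: str) -> str:
    # Lowercase and strip punctuation so formatting differences don't matter.
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def is_verifiable(generated_title: str) -> bool:
    return normalize(generated_title) in KNOWN_TITLES

print(is_verifiable("Attention Is All You Need"))    # a real paper
print(is_verifiable("A History of Bears in Space"))  # an invented one
```

Exact-title lookup is deliberately strict: a fabricated citation in perfect format sails past a human reviewer, but it cannot sail past a set membership test.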
What if Galactica had never been taken down — if it had been deployed quietly into clinical decision support or medical literature databases, and its confident wrong answers had spent years compounding before anyone ran the tests that broke it publicly?