Meta launched a scientific AI trained on 48 million research papers. Within hours it was generating confident, completely wrong science. Fake citations. Invented history. Pseudoscientific nonsense presented as peer-reviewed fact. Scientists publicly destroyed it. Meta pulled it three days later.
On November 15, 2022, Meta AI unveiled Galactica. The pitch was ambitious: a large language model trained specifically on scientific knowledge — 48 million papers, textbooks, encyclopedias, and reference material. It was designed to help researchers summarize literature, explain concepts, and assist with scientific writing.
Meta encouraged the public to try the demo. They were proud of it.
The public tried the demo. Within hours, they were sharing screenshots on Twitter.
Galactica's output looked like science. It used the correct style, cited things in the right format, deployed technical vocabulary fluently. The problem was that the content was often invented — and it had no mechanism for indicating uncertainty or flagging when it was making things up.
This combination — authoritative presentation plus unreliable content — made it actively dangerous for scientific use. A hallucination wrapped in a proper citation format is harder to spot than obvious nonsense.
Meta publishes the Galactica paper and opens a public demo. The announcement positions it as a landmark in scientific AI. Press coverage is initially positive.
Researchers begin sharing Galactica outputs on Twitter. A prompt asking about the history of bears in space returns a plausible-sounding but entirely invented narrative. Other prompts generate fake citations to papers that don't exist.
Michael Black, director of the Max Planck Institute for Intelligent Systems, posts a detailed critique. Others follow. The hashtag #Galactica fills with examples of confident scientific nonsense.
Users demonstrate that Galactica generates pseudoscientific content about race and intelligence that reads like it came from a peer-reviewed paper. The criticism intensifies sharply.
Meta's chief AI scientist Yann LeCun pushes back on critics on Twitter. The defense does not go well. Researchers point out that defending a model generating confident scientific misinformation is not a great look for one of the world's most prominent AI scientists.
Meta quietly takes down the Galactica public demo. No announcement. No explanation. The paper and model weights remain available, but the "try it yourself" interface disappears. Three days after launch, it is gone.
"Galactica is not able to distinguish between good and bad science. It has no sense of truth. It will confidently tell you something that is completely wrong in the same voice it uses to tell you something accurate."
— Michael Black, Director, Max Planck Institute for Intelligent Systems"This could usher in an era of deep scientific fakes."
— Michael Black"Little more than statistical nonsense at scale."
— Grady Booch, IBM Fellow and software engineerGalactica was pulled on November 17, 2022. On November 30, 2022 — 13 days later — OpenAI launched ChatGPT. ChatGPT became the fastest-growing consumer application in history, reaching 100 million users in two months.
The contrast was total. In mid-November: a scientific AI so embarrassing it had to be pulled after three days. Two weeks later: a general AI that captivated the entire world and launched the era of consumer AI.
Meta's Galactica became the cautionary tale that immediately preceded the biggest AI launch in history. In the tech industry's collective memory, it survives mostly as a footnote to the ChatGPT story.
The underlying idea wasn't wrong. Scientific AI assistants are genuinely useful — they just need to know what they don't know. Galactica's failure was overconfidence, not ambition. The research community eventually built better systems with uncertainty quantification.
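What "knowing what you don't know" looks like in practice varies by system, but one common pattern is selective answering: score the model's confidence in its own generation and abstain below a threshold. A minimal sketch, assuming an API that exposes per-token log-probabilities; the function names and the threshold value are illustrative, not from any Galactica code:

```python
import math

def sequence_confidence(token_logprobs: list[float]) -> float:
    """Geometric-mean probability of the tokens the model generated.

    token_logprobs: the log-probability the model assigned to each
    token it actually emitted (many LLM APIs can return these).
    """
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

def answer_or_abstain(answer: str, token_logprobs: list[float],
                      threshold: float = 0.8) -> str:
    """Return the answer only when confidence clears the bar; else abstain."""
    if sequence_confidence(token_logprobs) < threshold:
        return "I'm not certain enough to answer this reliably."
    return answer
```

Token probability is a crude proxy for truth, which is part of why this remains an open problem; but even this crude gate is more than Galactica's demo offered.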
Galactica didn't invent AI hallucination — it just demonstrated it in a domain where hallucination was immediately verifiable (and unforgivable). It forced a public reckoning with the confidence calibration problem that still affects every LLM today.
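The calibration problem has a standard measurement: a model is well calibrated if, among answers it gives with confidence p, roughly a fraction p are correct. A minimal expected calibration error (ECE) sketch over (confidence, correct) pairs, independent of any particular model:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: accuracy-weighted gap between stated confidence and reality.

    confidences: model's confidence in each answer, in [0, 1]
    correct:     whether each answer was actually right (bool)
    """
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    ece, total = 0.0, len(confidences)
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece

# A confidently wrong model scores badly: ~0.59 here
print(expected_calibration_error([0.95, 0.90, 0.92], [False, False, True]))
```

Galactica-style failure is exactly the high-ECE corner: high stated confidence, low accuracy.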
The AI research community began taking citation accuracy far more seriously after Galactica. The case — alongside the AI lawyer story — became a standard reference for why hallucinated citations are not merely embarrassing, but potentially harmful.
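Hallucinated citations are also the easiest failure to check mechanically, because DOIs resolve against public registries. A minimal sketch using Crossref's public REST API, which returns 200 for registered DOIs and 404 otherwise (the function name is mine, and error handling is pared down for brevity):

```python
import urllib.error
import urllib.parse
import urllib.request

def doi_exists(doi: str) -> bool:
    """Check whether a DOI is registered with Crossref.

    Note the limit of this check: a 200 only proves the paper exists,
    not that it says what the model claims it says.
    """
    url = f"https://api.crossref.org/works/{urllib.parse.quote(doi)}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.URLError:  # covers HTTP 404 and network failures
        return False

print(doi_exists("10.1038/nature14539"))  # real Nature deep-learning review: True
print(doi_exists("10.9999/does.not.exist"))  # fabricated DOI: False
```

A check like this would have caught Galactica's invented references instantly, which is precisely why the episode made citation grounding a baseline expectation.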
Galactica was killed not because it was stupid, but because it was confidently wrong in a domain where confidence without accuracy is worse than saying nothing at all. Science built over centuries gets its authority from being verifiable. An AI that sounds like science but can't be trusted is not a scientific tool — it's a very well-dressed misinformation machine.