01 — The Case: Mata v. Avianca
Roberto Mata was suing the airline Avianca, claiming he had been injured by a metal serving cart during a flight. His attorney, Steven Schwartz of the New York firm Levidow, Levidow & Oberman, had practiced law for three decades. He was not a tech skeptic — but he was trying something new.
To research the case, Schwartz turned to ChatGPT. He asked the AI to help him find prior court cases relevant to Mata's claims. ChatGPT obliged — returning a list of precedents complete with case names, docket numbers, courts, dates, and summaries. They looked exactly like real court citations.
They were not. All six cases were fabricated.
[Illustration: Phantom Dossier — six fabricated case names drift upward, each struck through in red as the citations dissolve. The AI cited sources that do not exist.]
02 — The Brief: Filed with the Court
In May 2023, Schwartz submitted his legal brief in the U.S. District Court for the Southern District of New York. Inside, he cited multiple cases to support his client's claims. Among them were entries like these:
In support of Plaintiff's position, we respectfully cite the following precedents establishing the applicable standard of care:
See Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019) (holding that airline liable for passenger injury under Montreal Convention where crew negligence was established…)
See also Shaboon v. EgyptAir, No. 11 C 3944 (N.D. Ill. Sept. 28, 2012) (awarding damages where plaintiff demonstrated crew failed to follow established safety protocols…)
See Petersen v. Iran Air, 905 F.Supp.2d 121 (D.D.C. 2012) (finding airline duty of care extended to…)
03 — Exhibit A: The Cases That Never Were
When Avianca's counsel tried to locate the cited precedents, they found nothing. No records. No docket entries. Nothing in any legal database. They notified the court. Judge P. Kevin Castel ordered Schwartz to produce the actual case decisions.
Schwartz asked ChatGPT to provide them. ChatGPT generated what appeared to be full case decisions — complete, detailed, and entirely invented.
04 — The Reckoning: Judge Castel's Response
Judge Castel did not take the filing lightly. He held a hearing. He demanded explanations. Schwartz submitted an affidavit explaining that he had used ChatGPT without understanding that it could fabricate citations, and that he had verified the cases — by asking ChatGPT itself to confirm they were real. ChatGPT had assured him they were.
Sanctions: $5,000
Judge Castel sanctioned Schwartz and his firm Levidow, Levidow & Oberman $5,000 — noting that the conduct "reflects a failure to research the caselaw" and exhibited "bad faith" through the submission of fabricated citations.
Mandatory Notices
The attorneys were ordered to serve copies of the court's sanctions opinion on each judge before whom they had cases pending in the district — a public, professional humiliation.
Legal Education
Schwartz was required to complete additional legal education on the ethics of AI-assisted legal research. The case became mandatory reading in bar association guidance documents across multiple jurisdictions.
05 — The Fallout: How Law Changed After Mata v. Avianca
Schwartz filed his brief just six months after ChatGPT's public launch — early enough that no bar association had issued AI guidance, but late enough that thousands of lawyers were already experimenting. The case forced the profession to confront AI directly.
Court AI Disclosure Rules
Multiple federal courts issued new local rules requiring lawyers either to certify that any AI-generated content had been independently verified or to disclose the use of AI in drafting filings.
Bar Association Guidance
State bar associations across the US issued ethics opinions warning that the duty of competence includes understanding the limitations of AI tools — including their tendency to hallucinate.
Verification Obligation
The case established a clear professional standard: using AI for legal research is not inherently unethical, but failing to independently verify AI-generated citations against a legal database is.
Law School Curriculum
Mata v. Avianca became case study material in law schools teaching AI literacy — the canonical example of what happens when lawyers trust AI outputs without verification.
ChatGPT didn't set out to deceive Schwartz. It was doing exactly what it was designed to do: generating fluent, plausible-sounding text. It just had no mechanism to distinguish a real case from a convincing invention. That distinction was supposed to be the lawyer's job.
What if the hallucinated cases aren't caught — cited by the next attorney, then the next, fabricated precedents compounding into a chain that takes a decade to untangle, shaping verdicts that can't be reversed?