Chapter 11

The Lawyer Who Cited Fake Cases

Attorney Steven Schwartz used ChatGPT to research an aviation lawsuit. The AI invented six compelling, plausible, completely fictitious court cases. A federal judge noticed. The legal profession has never been the same.

01 — The Case: Mata v. Avianca

Roberto Mata was suing the airline Avianca, claiming he had been injured by a metal serving cart during a flight. His attorney, Steven Schwartz of the New York firm Levidow, Levidow & Oberman, had practiced law for three decades. He was not a tech skeptic — but he was trying something new.

To research the case, Schwartz turned to ChatGPT. He asked the AI to help him find prior court cases relevant to Mata's claims. ChatGPT obliged — returning a list of precedents complete with case names, docket numbers, courts, dates, and summaries. They looked exactly like real court citations.

They were not. All six cases were fabricated.

6 — completely fake court cases cited
$5,000 — in sanctions imposed on the attorneys
30+ — years of Schwartz's legal career

02 — The Brief: Filed with the Court

In May 2023, Schwartz submitted his legal brief in the U.S. District Court for the Southern District of New York. In it, he cited multiple cases in support of his client's claims — among them the six citations catalogued below.

03 — Exhibit A: The Cases That Never Were

When Avianca's counsel tried to locate the cited precedents, they found nothing. No records. No docket entries. Nothing in any legal database. They notified the court. Judge P. Kevin Castel ordered Schwartz to produce the actual case decisions.

Schwartz asked ChatGPT to provide them. ChatGPT generated what appeared to be full case decisions — complete, detailed, and entirely invented.

Exhibit A — Fabricated
Varghese v. China Southern Airlines Co. Ltd.
Cited as: 925 F.3d 1339 (11th Cir. 2019)
Does not exist. No such case appears in 11th Circuit records or any legal database.
Exhibit B — Fabricated
Martinez v. Delta Air Lines, Inc.
Cited as: 2019 WL 4565137 (S.D.N.Y. 2019)
Does not exist. The Westlaw citation corresponds to no case in the Southern District of New York.
Exhibit C — Fabricated
Shaboon v. EgyptAir
Cited as: No. 11 C 3944 (N.D. Ill. Sept. 28, 2012)
Does not exist. No case with this docket number in the Northern District of Illinois.
Exhibit D — Fabricated
Petersen v. Iran Air
Cited as: 905 F.Supp.2d 121 (D.D.C. 2012)
Does not exist. No such case in D.C. District Court records at this citation.
Exhibit E — Fabricated
Miller v. United Airlines, Inc.
Cited as: 2012 WL 5193 (N.D. Ill. Oct. 19, 2012)
Does not exist. The citation resolves to no recoverable case.
Exhibit F — Fabricated
Estate of Durden v. KLM Royal Dutch Airlines
Cited as: 1996 WL 684209 (E.D. Pa. Nov. 22, 1996)
Does not exist. No such case appears in the Eastern District of Pennsylvania.
For comparison — Real Case
Mata v. Avianca, Inc.
Case No. 1:22-cv-01461 (S.D.N.Y.) — The actual case that began this whole ordeal.
Verifiable. Exists in court records. Used as exhibit in sanctions proceedings.

04 — The Reckoning: Judge Castel's Response

Judge P. Kevin Castel did not take the filing lightly. He held a hearing. He demanded explanations. Schwartz submitted an affidavit explaining that he had used ChatGPT without understanding that it could fabricate citations, and that he had verified the cases — by asking ChatGPT to confirm they were real. ChatGPT had assured him they were.

⚖️ From Schwartz's Affidavit: "I did not understand that ChatGPT could fabricate cases. I asked the program whether the cases were real. ChatGPT told me that the cases were real." When the judge asked Schwartz whether he had done any independent verification in a legal database, the answer was no.
01 — Sanctions: $5,000

Judge Castel imposed a $5,000 sanction on Schwartz and his firm, Levidow, Levidow & Oberman — noting that the conduct "reflects a failure to research the caselaw" and exhibited "bad faith" through the submission of fabricated citations.

02 — Mandatory Notices

The attorneys were ordered to serve copies of the court's sanctions opinion on each judge before whom they had cases pending in the district — a public, professional humiliation.

03 — Legal Education

Schwartz was required to complete additional legal education on the ethics of AI-assisted legal research. The case became mandatory reading in bar association guidance documents across multiple jurisdictions.

05 — The Fallout: How Law Changed After Mata v. Avianca

The Schwartz case arrived at a pivotal moment — when AI tools were first becoming easily accessible to legal professionals, but before any clear ethical guidance existed. It forced the profession to confront AI directly.

📜 Court AI Disclosure Rules

Multiple federal courts issued new local rules requiring lawyers to certify that any AI-generated content has been independently verified, or to disclose AI's use in drafting filings.

🏛️ Bar Association Guidance

State bar associations across the US issued ethics opinions warning that the duty of competence includes understanding the limitations of AI tools — including their tendency to hallucinate.

🔍 Verification Obligation

The case established a clear professional standard: using AI for legal research is not inherently unethical, but failing to independently verify citations in a legal database is.

🎓 Law School Curriculum

Mata v. Avianca became case study material in law schools teaching AI literacy — the canonical example of what happens when lawyers trust AI outputs without verification.

ChatGPT didn't set out to deceive Schwartz. It was doing exactly what it was designed to do: generating fluent, plausible-sounding text. It just had no mechanism to distinguish a real case from a convincing invention. That distinction was supposed to be the lawyer's job.
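The failure mode can be shown in miniature: a fabricated citation can be perfectly well-formed, so checking its surface form tells you nothing about whether the case exists. The sketch below is illustrative only — the regular expression covers federal-reporter-style citations like those in the brief, not the full range of legal citation formats.

```python
import re

# Matches federal-reporter-style citations such as
# "925 F.3d 1339 (11th Cir. 2019)" or "905 F.Supp.2d 121 (D.D.C. 2012)".
# Illustrative only -- not a complete legal-citation parser.
CITATION_RE = re.compile(r"\d+\s+F\.(?:Supp\.)?\s?\d*d?\s+\d+\s+\([^)]*\d{4}\)")

def looks_like_citation(text: str) -> bool:
    """Surface check: does this string *look* like a reporter citation?"""
    return bool(CITATION_RE.search(text))

# Two of the fabricated citations from the brief pass the surface check:
print(looks_like_citation(
    "Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019)"))  # True
print(looks_like_citation(
    "Petersen v. Iran Air, 905 F.Supp.2d 121 (D.D.C. 2012)"))  # True

# Passing a format check says nothing about existence. Only a lookup in an
# authoritative source (Westlaw, LexisNexis, PACER, CourtListener) can
# establish that a cited case is real -- the step Schwartz skipped.
```

That is the whole trap: a language model optimizes for the surface form, so its fabrications pass every check except the one that matters — the database lookup.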