Attorney Steven Schwartz used ChatGPT to research an aviation lawsuit. The AI invented six compelling, plausible, completely fictitious court cases. A federal judge noticed. The legal profession has never been the same.
Roberto Mata was suing the airline Avianca, claiming he had been injured by a metal serving cart during a flight. His attorney, Steven Schwartz of the New York firm Levidow, Levidow & Oberman, had practiced law for three decades. He was not a tech skeptic, and he was willing to try something new.
To research the case, Schwartz turned to ChatGPT. He asked the AI to help him find prior court cases relevant to Mata's claims. ChatGPT obliged — returning a list of precedents complete with case names, docket numbers, courts, dates, and summaries. They looked exactly like real court citations.
They were not. All six cases were fabricated.
In March 2023, Schwartz filed a brief in the U.S. District Court for the Southern District of New York opposing Avianca's motion to dismiss. In it, he cited multiple cases to support his client's claims. Among them were entries like these:
In support of Plaintiff's position, we respectfully cite the following precedents establishing the applicable standard of care:
See Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019) (holding airline liable for passenger injury under the Montreal Convention where crew negligence was established…)
See also Shaboon v. EgyptAir, No. 11 C 3944 (N.D. Ill. Sept. 28, 2012) (awarding damages where plaintiff demonstrated crew failed to follow established safety protocols…)
See Petersen v. Iran Air, 905 F.Supp.2d 121 (D.D.C. 2012) (finding airline duty of care extended to…)
When Avianca's counsel tried to locate the cited precedents, they found nothing. No records. No docket entries. Nothing in any legal database. They notified the court. Judge P. Kevin Castel ordered Schwartz to produce the actual case decisions.
Schwartz asked ChatGPT to provide them. ChatGPT generated what appeared to be full case decisions — complete, detailed, and entirely invented.
Judge Castel did not take the filing lightly. He held a hearing and demanded explanations. In an affidavit, Schwartz explained that he had used ChatGPT without understanding that it could fabricate citations, and that he had verified the cases by asking ChatGPT itself to confirm they were real. ChatGPT had assured him they were.
Judge Castel sanctioned Schwartz and his firm, Levidow, Levidow & Oberman, imposing a $5,000 penalty and noting that the conduct "reflects a failure to research the caselaw" and that the submission of fabricated citations exhibited "bad faith."
The attorneys were also ordered to notify their client and to send letters to each real judge who had been falsely identified as the author of one of the fabricated opinions, enclosing a copy of the court's sanctions decision: a public, professional humiliation.
Schwartz was required to complete additional legal education on the ethics of AI-assisted legal research, and the case became a standard citation in bar association guidance documents across multiple jurisdictions.
The Schwartz case arrived at a pivotal moment — when AI tools were first becoming easily accessible to legal professionals, but before any clear ethical guidance existed. It forced the profession to confront AI directly.
Multiple federal courts and individual judges issued standing orders and local rules requiring lawyers either to certify that any AI-generated content has been independently verified or to disclose when AI was used in drafting filings.
State bar associations across the U.S. issued ethics opinions warning that the duty of competence extends to understanding the limitations of AI tools, including their tendency to hallucinate.
The case established a clear professional standard: using AI for legal research is not inherently unethical, but failing to independently verify its citations against a legal database is.
Mata v. Avianca became case study material in law schools teaching AI literacy — the canonical example of what happens when lawyers trust AI outputs without verification.
ChatGPT didn't set out to deceive Schwartz. It was doing exactly what it was designed to do: generating fluent, plausible-sounding text. It just had no mechanism to distinguish a real case from a convincing invention. That distinction was supposed to be the lawyer's job.