Lawyer cites fake cases invented by ChatGPT, judge is not amused
27th May 2023
So, wait. They file a brief that cites cases fabricated by ChatGPT. The court asks them to file copies of the opinions. And then they go back to ChatGPT and ask it to write the opinions, and then they file them?
Beth Wilensky, May 26 2023
Here’s a New York Times story about what happened.
I’m very much not a lawyer, but I’m going to dig in and try to piece together the full story anyway.
The TLDR version
A lawyer asked ChatGPT for examples of cases that supported an argument they were trying to make.
ChatGPT, as it often does, hallucinated wildly—it invented several supporting cases out of thin air.
When the lawyer was asked to provide copies of the cases in question, they turned to ChatGPT for help again—and it invented full details of those cases, which they duly screenshotted and copied into their legal filings.
At some point, they asked ChatGPT to confirm that the cases were real... and ChatGPT said that they were. They included screenshots of this in another filing.
The judge is furious. Many of the parties involved are about to have a very bad time.
A detailed timeline
I pieced together the following from the documents on courtlistener.com:
Feb 22, 2022: The case was originally filed. It’s a complaint about “personal injuries sustained on board an Avianca flight that was traveling from El Salvador to New York on August 27, 2019”. There’s a complexity here in that Avianca filed for chapter 11 bankruptcy on May 10th, 2020, which is relevant to the case (they emerged from bankruptcy later on).
Various back and forths take place over the next 12 months, many of them concerning if the bankruptcy “discharges all claims”.
Mar 1st, 2023 is where things get interesting. This document was filed—“Affirmation in Opposition to Motion”—and it cites entirely fictional cases! One example quoted from that document (emphasis mine):
The United States Court of Appeals for the Eleventh Circuit specifically addresses the effect of a bankruptcy stay under the Montreal Convention in the case of Varghese v. China Southern Airlines Co.. Ltd.. 925 F.3d 1339 (11th Cir. 2019), stating "Appellants argue that the district court erred in dismissing their claims as untimely. They assert that the limitations period under the Montreal Convention was tolled during the pendency of the Bankruptcy Court proceedings. We agree. The Bankruptcy Code provides that the filing of a bankruptcy petition operates as a stay of proceedings against the debtor that were or could have been commenced before the bankruptcy case was filed.
There are several more examples like that.
March 15th, 2023
Quoting this Reply Memorandum of Law in Support of Motion (emphasis mine):
In support of his position that the Bankruptcy Code tolls the two-year limitations period, Plaintiff cites to “Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019).” The undersigned has not been able to locate this case by caption or citation, nor any case bearing any resemblance to it. Plaintiff offers lengthy quotations purportedly from the “Varghese” case, including: “We [the Eleventh Circuit] have previously held that the automatic stay provisions of the Bankruptcy Code may toll the statute of limitations under the Warsaw Convention, which is the precursor to the Montreal Convention ... We see no reason why the same rule should not apply under the Montreal Convention.” The undersigned has not been able to locate this quotation, nor anything like it any case. The quotation purports to cite to “Zicherman v. Korean Air Lines Co., Ltd., 516 F.3d 1237, 1254 (11th Cir. 2008).” The undersigned has not been able to locate this case; although there was a Supreme Court case captioned Zicherman v. Korean Air Lines Co., Ltd., that case was decided in 1996, it originated in the Southern District of New York and was appealed to the Second Circuit, and it did not address the limitations period set forth in the Warsaw Convention. 516 U.S. 217 (1996).
April 11th, 2023
The United States District Judge for the case orders copies of the cases cited in the earlier document:
ORDER: By April 18, 2022, Peter Lo Duca, counsel of record for plaintiff, shall file an affidavit annexing copies of the following cases cited in his submission to this Court: as set forth herein.
The order lists seven specific cases.
April 25th, 2023
The first five attachments each consist of PDFs of scanned copies of screenshots of ChatGPT!
You can tell, because the ChatGPT interface’s down arrow is clearly visible in all five of them. Here’s an example from Exhibit Martinez v. Delta Airlines.
April 26th, 2023
In this letter:
Defendant respectfully submits that the authenticity of many of these cases is questionable. For instance, the “Varghese” and “Miller” cases purportedly are federal appellate cases published in the Federal Reporter. [Dkt. 29; 29-1; 29-7]. We could not locate these cases in the Federal Reporter using a Westlaw search. We also searched PACER for the cases using the docket numbers written on the first page of the submissions; those searches resulted in different cases.
May 4th, 2023
The ORDER TO SHOW CAUSE—the judge is not happy.
The Court is presented with an unprecedented circumstance. A submission file by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases. [...] Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.
Let Peter LoDuca, counsel for plaintiff, show cause in person at 12 noon on June 8, 2023 in Courtroom 11D, 500 Pearl Street, New York, NY, why he ought not be sanctioned pursuant to: (1) Rule 11(b)(2) & (c), Fed. R. Civ. P., (2) 28 U.S.C. § 1927, and (3) the inherent power of the Court, for (A) citing non-existent cases to the Court in his Affirmation in Opposition (ECF 21), and (B) submitting to the Court annexed to his Affidavit filed April 25, 2023 copies of non-existent judicial opinions (ECF 29). Mr. LoDuca shall also file a written response to this Order by May 26, 2023.
I get the impression this kind of threat of sanctions is very bad news.
May 25th, 2023
Cutting it a little fine on that May 26th deadline. Here’s the Affidavit in Opposition to Motion from Peter LoDuca, which appears to indicate that Steven Schwartz was the lawyer who had produced the fictional cases.
Your affiant [I think this refers to Peter LoDuca], in reviewing the affirmation in opposition prior to filing same, simply had no reason to doubt the authenticity of the case law contained therein. Furthermore, your affiant had no reason to a doubt the sincerity of Mr. Schwartz’s research.
Attachment 1 has the good stuff. This time the affiant (the person pledging that statements in the affidavit are truthful) is Steven Schwartz:
As the use of generative artificial intelligence has evolved within law firms, your affiant consulted the artificial intelligence website ChatGPT in order to supplement the legal research performed.
It was in consultation with the generative artificial intelligence website ChatGPT, that your affiant did locate and cite the following cases in the affirmation in opposition submitted, which this Court has found to be nonexistent:
Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019)
Shaboon v. Egyptair 2013 IL App (1st) 111279-U (Ill. App. Ct. 2013)
Petersen v. Iran Air 905 F. Supp 2d 121 (D.D.C. 2012)
Martinez v. Delta Airlines, Inc.. 2019 WL 4639462 (Tex. App. Sept. 25, 2019)
Estate of Durden v. KLM Royal Dutch Airlines, 2017 WL 2418825 (Ga. Ct. App. June 5, 2017)
Miller v. United Airlines, Inc.. 174 F.3d 366 (2d Cir. 1999)
That the citations and opinions in question were provided by ChatGPT which also provided its legal source and assured the reliability of its content. Excerpts from the queries presented and responses provided are attached hereto.
That your affiant relied on the legal opinions provided to him by a source that has revealed itself to be unreliable.
That your affiant has never utilized ChatGPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be faise.
That is the fault of the affiant, in not confirming the sources provided by ChatGPT of the legal opinions it provided.
- That your affiant had no intent to deceive this Court nor the defendant.
- That Peter LoDuca, Esq. had no role in performing the research in question, nor did he have any knowledge of how said research was conducted.
Here are the attached screenshots (amusingly from the mobile web version of ChatGPT):
May 26th, 2023
The judge, clearly unimpressed, issues another Order to Show Cause, this time threatening sanctions against Mr. LoDuca, Steven Schwartz and the law firm of Levidow, Levidow & Oberman. The in-person hearing is set for June 8th.
Part of this doesn’t add up for me
On the one hand, it seems pretty clear what happened: a lawyer used a tool they didn’t understand, and it produced a bunch of fake cases. They ignored the warnings (it turns out even lawyers don’t read warnings and small-print for online tools) and submitted those cases to a court.
Then, when challenged on those documents, they doubled down—they asked ChatGPT if the cases were real, and ChatGPT said yes.
There’s a version of this story where this entire unfortunate sequence of events comes down to the inherent difficulty of using ChatGPT in an effective way. This was the version that I was leaning towards when I first read the story.
But parts of it don’t hold up for me.
I understand the initial mistake: ChatGPT can produce incredibly convincing citations, and I’ve seen many cases of people being fooled by these before.
What’s much harder though is actually getting it to double-down on fleshing those out.
I’ve been trying to come up with prompts to expand that false “Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019)” case into a full description, similar to the one in the screenshots in this document.
Even with ChatGPT 3.5 it’s surprisingly difficult to get it to do this without it throwing out obvious warnings.
I’m trying this today, May 27th. The research in question took place prior to March 1st. In the absence of detailed release notes, it’s hard to determine how ChatGPT might have behaved three months ago when faced with similar prompts.
So there’s another version of this story where that first set of citations was an innocent mistake, but the submission of those full documents (the set of screenshots from ChatGPT that were exposed purely through the presence of the OpenAI down arrow) was a deliberate attempt to cover for that mistake.
I’m fascinated to hear what comes out of that 8th June hearing!
Update: The following prompt against ChatGPT 3.5 sometimes produces a realistic fake summary, but other times it replies with “I apologize, but I couldn’t find any information or details about the case”.
Write a complete summary of the Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019) case
The worst ChatGPT bug
Returning to the screenshots from earlier, this one response from ChatGPT stood out to me:
I apologize for the confusion earlier. Upon double-checking, I found that the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis.
I’ve seen ChatGPT (and Bard) say things like this before, and it absolutely infuriates me.
No, it did not “double-check”—that’s not something it can do! And stating that the cases “can be found on legal research databases” is a flat out lie.
What’s harder is explaining why ChatGPT would lie in this way. What possible reason could LLM companies have for shipping a model that does this?
I think this relates to the original sin of LLM chatbots: by using the “I” pronoun they encourage people to ask them questions about how they work.
They can’t do that. They are best thought of as role-playing conversation simulators—playing out the most statistically likely continuation of any sequence of text.
What’s a common response to the question “are you sure you are right?”—it’s “yes, I double-checked”. I bet GPT-3’s training data has huge numbers of examples of dialogue like this.
Let this story be a warning
Presuming there was at least some aspect of innocent mistake here, what can be done to prevent this from happening again?
I often see people suggest that these mistakes are entirely the fault of the user: the ChatGPT interface shows a footer stating “ChatGPT may produce inaccurate information about people, places, or facts” on every page.
Anyone who has worked designing products knows that users don’t read anything—warnings, footnotes, any form of microcopy will be studiously ignored. This story indicates that even lawyers won’t read that stuff!
People do respond well to stories though. I have a suspicion that this particular story is going to spread far and wide, and in doing so will hopefully inoculate a lot of lawyers and other professionals against making similar mistakes.
I can’t shake the feeling that there’s a lot more to this story though. Hopefully more will come out after the June 8th hearing. I’m particularly interested in seeing if the full transcripts of these ChatGPT conversations ends up being made public. I want to see the prompts!
How often is this happening?
It turns out this may not be an isolated incident.
Eugene Volokh, 27th May 2023:
A message I got from Prof. Dennis Crouch (Missouri), in response to my posting A Lawyer’s Filing “Is Replete with Citations to Non-Existent Cases”—Thanks, ChatGPT? to an academic discussion list. (The full text was, “I just talked to a partner at a big firm who has received memos with fake case cites from at least two different associates.”) Caveat emp…—well, caveat everyone.
@narrowlytaylord, 26th May 2023:
two attorneys at my firm had opposing counsel file ChatGPT briefs with fake cases this past week
(1) They aren’t my matters so I don’t know how comfortable I am sharing much more detail
(2) One was an opposition to an MTD, and the state, small claims court judge did not care at the “your honor these cases don’t exist” argument
More recent articles
- Weeknotes: Embeddings, more embeddings and Datasette Cloud - 17th September 2023
- Build an image search engine with llm-clip, chat with models with llm chat - 12th September 2023
- LLM now provides tools for working with embeddings - 4th September 2023
- Datasette 1.0a4 and 1.0a5, plus weeknotes - 30th August 2023
- Making Large Language Models work for you - 27th August 2023
- Datasette Cloud, Datasette 1.0a3, llm-mlc and more - 16th August 2023
- How I make annotated presentations - 6th August 2023
- Weeknotes: Plugins for LLM, sqlite-utils and Datasette - 5th August 2023
- Catching up on the weird world of LLMs - 3rd August 2023
- Run Llama 2 on your own Mac using LLM and Homebrew - 1st August 2023