Sunday, April 27, 2025

AI-Generated Court Filings & the Perils of Fictitious Citations: A U.S. Case Law Overview of Hallucinations

Introduction

The rise of generative AI tools in late 2022 and 2023 (e.g. OpenAI’s ChatGPT) brought new possibilities to the legal profession. Attorneys quickly began experimenting with AI to research and draft court filings. However, these tools are prone to “hallucinations” – a phenomenon where the AI confidently fabricates plausible-sounding but false information, including bogus quotes or case citations (8).

Early on, experts and judges warned that AI chatbots can “make stuff up — even quotes and citations” (4). Despite such warnings, a series of U.S. cases since 2023 revealed attorneys filing AI-generated briefs containing nonexistent case law and fictitious facts, leading to court sanctions and professional embarrassment. This post provides a historical overview of these incidents, examines the legal and ethical ramifications (from court procedure to attorney discipline), and concludes with best-practice recommendations for the responsible use of AI in legal work.

Mata v. Avianca (2023): The First “ChatGPT Lawyer” Sanction

One of the earliest and most widely publicized incidents was Mata v. Avianca, Inc. in New York in 2023 – a cautionary tale that became a wake-up call for the legal community. In this personal injury case against Avianca Airlines, the plaintiff’s attorneys submitted a brief in opposition to a motion to dismiss that cited six seemingly relevant court decisions. The only problem: none of those cases actually existed. They had been invented by ChatGPT, which the lawyer had used for legal research (1).

Opposing counsel and the judge discovered that the cited cases were fictitious when they could not be found in any legal database (1). The court ordered the filing attorney to produce the sources, and in response the attorney filed copies of AI-generated fake opinions purporting to be the missing cases.

It was only after a further order to show cause that the attorney finally admitted that an AI tool had been used and was the source of the phony citations. By that point, the attorneys had doubled down on the falsehoods for nearly three months, even standing by the “fake opinions” after they were questioned (2).

U.S. District Judge P. Kevin Castel was not amused. In a blistering sanctions order issued in June 2023, Judge Castel found that the attorneys had acted in bad faith, engaging in “acts of conscious avoidance and false and misleading statements to the court” (1). The court emphasized that filing fictitious case law is a serious breach of an attorney’s duties: “Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. … It promotes cynicism about the legal profession and the American judicial system.” (6). In short, the lawyers violated their duty of candor and wasted judicial resources.

Sanctions were imposed. The two attorneys (and their law firm) were jointly fined $5,000 (1). The judge also ordered them to inform their client of the sanctions and to send letters enclosing the sanctions order to all the judges who had been falsely identified as authors of the fake cases – a humbling form of accountability. In his order, Judge Castel acknowledged that there is nothing “inherently improper” about using AI as a tool for assistance, but he stressed that “existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.” (1). In other words, regardless of how an attorney drafts a brief, the attorney bears ultimate responsibility for verifying that the authorities cited are real and valid.

Notably, the lawyers in Mata later claimed they had been unaware that ChatGPT could fabricate case law. In a statement, their firm described the incident as a “good faith mistake”, saying they “fail[ed] to believe that a piece of technology could be making up cases out of whole cloth.” (1). But ignorance was no excuse. The incident swiftly entered legal lore as the “ChatGPT lawyer” fiasco, serving as a stark reminder that novel technology does not absolve attorneys of traditional duties of truthfulness and due diligence.

A Growing Pattern: Subsequent Cases in 2024 and 2025

After the Avianca case, one might have expected attorneys to proceed with caution. Yet, in the following months, similar missteps occurred across the United States, suggesting that the Avianca incident was not an isolated one but rather the beginning of a troubling trend. Judges increasingly found themselves confronting filings tainted by AI-generated falsehoods. By 2025, one federal court remarked that “the epidemic of citing fake cases has continued unabated,” noting several recent examples across jurisdictions (7).

Some notable cases include:

  • Gauthier v. Goodyear Tire & Rubber Co. (E.D. Tex. 2024): In a products liability case in the Eastern District of Texas, an attorney filed a brief containing nonexistent case citations apparently supplied by an AI tool. The court sanctioned him with a $2,000 fine and even required the attorney to attend a continuing legal education course on generative AI as part of the penalty (9). The lawyer was also ordered to provide a copy of the sanctions order to his client, underscoring the breach of trust with the client.
  • United States v. Hayes (E.D. Cal. 2025): In January 2025, a federal judge in California fined a defense attorney $1,500 for submitting a brief in a criminal case that cited fake case law. The attorney claimed the fake citation was merely an “inadvertent citation error,” but the court found that explanation inadequate. In addition to the fine, the judge directed that a copy of the sanctions order be sent to the state bar, potentially for disciplinary review (10).
  • Mid-Central Operating Engineers Fund v. Hoosiervac LLC (S.D. Ind. 2025): A judge in Indiana encountered yet another filing rife with AI-fabricated citations and took an even tougher stance. In February 2025, the court recommended a $15,000 sanction to drive home the seriousness of the misconduct (11). This appears to be one of the heaviest monetary sanctions to date for an AI-related filing violation, reflecting the court’s view that a strong deterrent was needed.
  • Wadsworth v. Walmart, Inc. (D. Wyo. 2025): Even large law firms fell prey to AI’s pitfalls. In this products liability case (involving an allegedly defective toy hoverboard), three lawyers from the prominent plaintiffs’ firm Morgan & Morgan filed a set of motions in limine that cited nine non-existent cases (5). The filing attorneys later admitted that they had used an internal AI research tool (a proprietary platform within their firm) which had “hallucinated” these case citations. Once the errors came to light – opposing counsel informed the judge they could not find the cited cases – the lawyers promptly withdrew the faulty filing and candidly acknowledged the mistake to the court. In their submission to the judge, they wrote: “This matter comes with great embarrassment and has prompted discussion and action regarding the training, implementation, and future use of artificial intelligence within our firm.” The Wyoming court imposed sanctions, albeit tailoring them to the attorneys’ respective roles. U.S. District Judge Kelly H. Rankin fined the junior attorney who drafted the AI-driven brief $3,000 and his two supervising co-counsel $1,000 each. Notably, Judge Rankin also revoked the drafting attorney’s pro hac vice admission (his permission to practice in that case as an out-of-state lawyer) as a sanction for the unethical conduct (6). In deciding the sanctions, the judge contrasted the Morgan & Morgan attorneys’ quick candor with the deceit in Mata, observing that unlike in the Avianca matter – where the lawyers misled the court for months – the Walmart case lawyers had been “forthcoming, honest, and apologetic” once the issue was identified (6). The court credited their remedial steps (they had already paid the opposing side’s attorney fees related to the motion and instituted firm-wide AI training and safeguards) but still found that a penalty was necessary given the lapse in due diligence (8).
  • Other Recent Examples: In Dehghani v. Castro (D.N.M. Apr. 2025), the court imposed sanctions after counsel submitted “non-existent cases” that were “likely the handiwork of a ChatGPT or similar ... AI program’s hallucinations” (12). And in April 2025, a federal magistrate judge in the Eastern District of New York sanctioned an attorney for using an AI tool called “ChatOn” to generate fake case citations in a motion to remand (the Guerline Benjamin v. Costco Wholesale Corp. case) (7). That judge opened his opinion by lamenting that these “phony submissions” are causing real problems: an “attorney who submits fake cases clearly has not read those nonexistent cases, which is a violation of Rule 11… These made-up cases create unnecessary work for courts and opposing attorneys alike. And perhaps most critically, they demonstrate a failure to provide competent representation to the client.” Such blunt language underscores that by 2025, courts viewed this pattern as a national problem requiring a firm response.

In sum, since the watershed Avianca incident in early 2023, there has been a stream of cases in which U.S. lawyers faced sanctions for filing AI-generated falsehoods. The sanctions have varied in severity – from monetary fines to non-monetary measures like compulsory CLE training on ethics and AI, letters of apology or notification to affected parties, referrals to disciplinary authorities, and even temporary or permanent practice restrictions in the case at hand. But across all these cases, the message from the judiciary has been consistent: lawyers will be held accountable for inaccuracies in their filings, regardless of whether those mistakes stem from an algorithm or a human junior associate.

Ramifications: Procedural, Ethical, & Reputational Consequences

These cases have serious ramifications for court procedure, legal ethics, and attorney credibility. Fundamentally, an attorney submitting AI-fabricated information to a court runs afoul of core professional obligations and undermines the integrity of the judicial process.

  • Disruption of Court Processes: When a filing contains nonexistent authority or false information, it forces the court and opposing counsel to spend time and resources unraveling the fiction. Judges have had to issue orders to show cause, hold hearings, and write lengthy sanctions opinions – time that could have been devoted to real cases. As one court put it, such antics “create unnecessary work for courts and opposing attorneys alike” (7). In Mata v. Avianca, for example, the case was sidetracked for months by the need to investigate and address the fake citations, including a special hearing where the lawyers were questioned under oath (1). The distraction can also delay resolution of the client’s claims. In the Walmart case, the plaintiffs had to withdraw their motions in limine entirely, effectively losing the opportunity to have legitimate arguments heard at trial on those points (6). Fake citations waste judicial time, drive up litigation costs, and knock proceedings off-course.
  • Violations of Ethical Duties: An attorney who files a document with false citations or facts likely breaches multiple ethics rules. The duty of candor to the tribunal (ABA Model Rule 3.3) is paramount – lawyers must not knowingly make false statements of fact or law to a court. In these AI cases, the lawyers at minimum failed to confirm whether what they were submitting was true, and in some instances, they persisted in asserting the truth of fake authorities even after doubts were raised. The court in Mata explicitly found the attorneys made “false and misleading statements to the court,” violating their candor obligation. Even if the lawyers did not set out to lie, Federal Rule of Civil Procedure 11 imposes a duty to make a reasonable inquiry into the factual and legal basis of any filing. Citing opinions that one has never actually read (because they don’t exist) is a clear violation of Rule 11’s requirement that legal contentions be warranted by existing law or a nonfrivolous argument for its extension. In the Costco case, the judge noted that an attorney citing fake cases “clearly has not read those nonexistent cases,” failing the Rule 11 obligation. Many courts found these violations sanctionable even absent a finding of subjective bad faith, since an objective lapse in due diligence is enough for Rule 11 sanctions.
  • Competence and Diligence: In addition to candor, the duty of competence and diligence (ABA Model Rules 1.1 and 1.3) is at stake. Lawyers must perform legal research and writing with the thoroughness and preparation reasonably necessary for the representation. Relying on unverified AI output betrays a lack of competence in using technology and in legal research methodology. As one commentator observed, such conduct likely violates “their state’s corollary to the ABA Model Rules of Professional Conduct 1.1 (Competence) and 1.3 (Diligence)” (3). It also implicates Model Rule 3.3’s duty of candor, since the lawyer is effectively misrepresenting fake cases as if they were real. In short, a lawyer cannot blame the machine – using an AI tool does not absolve the lawyer from knowing what is in their filings. The ethical responsibility remains squarely with the human lawyer to ensure accuracy.
  • Sanctions and Disciplinary Action: The immediate consequence in these cases has been court-imposed sanctions under Rule 11 or the court’s inherent powers. As detailed above, sanctions have included fines, orders to complete training or community service, and requirements to notify clients, other judges, or disciplinary bodies of the misconduct. Monetary sanctions, while sometimes modest in amount, send a strong signal and become part of the attorney’s record. Some judges have explicitly referred offending lawyers for potential disciplinary review by state bar authorities. In one case, the appellate court even referred an attorney to a grievance committee for possible suspension after a pattern of filing fictitious citations came to light. Beyond formal sanctions, being the subject of a public sanctions order (often reported in legal media and visible on databases) is itself damaging. The attorneys in Mata v. Avianca and Wadsworth v. Walmart found themselves mentioned in nationwide news coverage and became cautionary examples in law offices and CLE programs. Such notoriety can haunt a lawyer’s reputation for years.
  • Loss of Credibility and Client Trust: An attorney’s credibility before the court is a hard-won asset – once tarnished, it is hard to restore. Filing a brief full of falsehoods (even unintentional ones) is a fast way to lose a judge’s trust. Judges may become skeptical of that lawyer’s future filings, subjecting them to greater scrutiny or doubt. In the Avianca matter, for instance, the judge’s requirement that the lawyers send copies of the sanctions order to all the judges falsely cited as authors of the fake cases was not only punitive but also effectively flagged those attorneys’ names to multiple jurists. Likewise, the drafting attorney in the Walmart case had his pro hac vice admission revoked – a clear message that he had lost the court’s confidence. There is also the damage to the attorney-client relationship. Clients expect and deserve diligent, competent representation. If a lawyer’s misuse of AI leads to embarrassment or a setback in a case (as happened when motions had to be withdrawn, or cases got delayed), the client’s interests are harmed. In Mata, the client’s case was ultimately dismissed on procedural grounds, but not before his lawyers became mired in the sanctions dispute. In Wadsworth, the clients effectively forfeited certain pre-trial motions due to their lawyers’ mistake. At a minimum, the client is put in the uncomfortable position of receiving a court-mandated letter confessing their lawyer’s missteps. All of this erodes trust. As one federal judge noted, misuse of AI by lawyers “promotes cynicism about the legal profession and the American judicial system”. This cynicism can extend to clients, who may question whether their lawyers are exercising sound judgment or simply relying on dubious shortcuts.
  • Broader Impacts on the Profession: The legal community has taken note of these AI-related blunders. Bar associations and ethics committees are now actively discussing the proper use of AI. The American Bar Association has cautioned that while AI can be a useful aid, lawyers remain ultimately responsible for the accuracy of their work. Several state and local bar groups have issued ethics opinions reminding attorneys that using AI tools does not diminish obligations under the Rules of Professional Conduct, such as competence, confidentiality, and candor. Moreover, judges around the country have started instituting new procedural safeguards. For example, in the wake of the Avianca episode, a federal judge in Texas (Judge Brantley Starr of N.D. Texas) issued a standing order requiring that any attorney appearing in his court must file a certificate attesting either that no portion of a filing was drafted by generative AI, or that any AI-generated content was verified for accuracy by a human being (4). The order bluntly warns that AI output is not reliable: “These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up — even quotes and citations.” Other courts have adopted similar rules or guidelines requiring disclosure of AI use in filings. Law firms, too, have reacted by developing internal policies for the use of AI in legal research and writing. After the sanctions in the Walmart case, Morgan & Morgan’s attorneys affirmed that the incident prompted new training and safeguards on AI use firm-wide. In short, the profession is adjusting: there is growing recognition that attorneys must treat AI tools with caution and must train themselves and their teams on how to use these tools responsibly.

Best Practices for Responsible Use of AI in Legal Work

In light of the above, what lessons can legal professionals draw to avoid the pitfalls of AI-generated falsehoods? Below are several best-practice recommendations for attorneys using AI tools in any aspect of legal drafting or research:

1. Verify Every Citation and Quote: “One thing remains the same – checking and verifying the source.” This timeless mandate, as Judge Rankin put it, “remains unchanged” even when using AI. Never cite a case, statute, or quote that you have not personally checked in a trusted primary source. If an AI tool suggests a case or legal principle, find the case via Westlaw, Lexis, or official reporters and read it in full to confirm it actually exists and stands for the proposition asserted. Under no circumstances should an attorney copy-paste output from ChatGPT (or any AI) into a brief without rigorous vetting. The lawyer’s role as a gatekeeper of accuracy is non-delegable.

2. Understand AI’s Limitations – Don’t Trust, Verify: Treat generative AI as an unreliable first-draft assistant, not an authoritative source. These tools will at times “hallucinate” – e.g., fabricate case law, fake quotes, or misstate facts. Always assume that the AI’s output may be wrong or even invented. Approach its suggestions with healthy skepticism. If the AI cites a case you do not recognize – particularly one that seems perfectly on point – be suspicious and double-check it. Remember that AI has no accountability or understanding of truth: “Unbound by any sense of duty, honor, or justice, [AI] programs act according to computer code rather than conviction,” as one court observed. The burden is on you to verify everything.

3. Use AI as a Supplement – Not a Substitute – for Traditional Research: AI tools might be helpful for brainstorming search terms, summarizing known documents, or generating preliminary drafts. But they should not replace standard legal research in reliable databases or the reading of actual source materials. For finding law, there is no shortcut to using verified research tools and reading the applicable authorities. AI can sometimes point you toward a line of cases, but treat that only as a lead, not an answer. Always cross-reference AI research results with real databases. As Judge Starr warned, lawyers “can’t just trust those [AI] databases. They’ve got to actually verify it… through a traditional database” (4). In practice, this means if ChatGPT cites Smith v. Jones (Imaginary 5th Cir. 2010), you must attempt to locate Smith v. Jones in Westlaw or another source. If you can’t find it readily, that’s a red flag that it likely doesn’t exist.
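For those inclined to automate the drudgery of this cross-checking, the verification itself still has to be done by a human, but a small script can at least surface every citation-like string in a draft so nothing slips through unchecked. The sketch below is purely illustrative and rests on stated assumptions: the file name draft_brief.txt is hypothetical, and the regular expression covers only a handful of common federal reporter formats, so it will both miss and over-match. It produces a checklist of citations for the attorney to pull and read in Westlaw or Lexis – nothing more.

```python
import re
from pathlib import Path

# Rough, illustrative pattern for a few common federal reporter citations,
# e.g. "598 U.S. 175", "573 F. Supp. 3d 713", "22 F.4th 1125".
# Real citation formats vary widely; treat any output as a to-do list, not a result.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+"
    r"(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?)"
    r"\s+\d{1,5}\b"
)

def extract_citations(text: str) -> list[str]:
    """Return unique citation-like strings in order of first appearance."""
    seen: set[str] = set()
    found: list[str] = []
    for match in CITATION_PATTERN.finditer(text):
        cite = match.group(0)
        if cite not in seen:
            seen.add(cite)
            found.append(cite)
    return found

if __name__ == "__main__":
    # "draft_brief.txt" is a hypothetical file containing the draft filing.
    draft = Path("draft_brief.txt").read_text(encoding="utf-8")
    for cite in extract_citations(draft):
        # Each line is a manual verification task for the attorney.
        print(f"[ ] confirm in Westlaw/Lexis and read in full: {cite}")
```

Even with a helper like this, the list is only a starting point: the step described in point 1 – locating and reading each cited authority in a trusted database – is the part that cannot be delegated to software.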

 4. Maintain Transparency and Candor: If you do use AI in preparing a filing, consider being transparent (to the extent required or appropriate) about your process. Some courts now require attorneys to certify that they have checked their AI-derived content. Even if not required, it may be wise to internally document how you verified any AI-generated research. Never conceal a mistake or double down on a falsehood – that will only compound the ethical breach. If an error is discovered in a filed document (AI-related or not), immediately disclose it and correct the record with the court. Trying to cover it up (as happened initially in Mata) can lead to far worse consequences. Candor in remediation can mitigate sanctions, as seen in the Walmart case where the judge noted the attorneys’ honesty and remedial efforts in deciding on lesser sanctions.

 5. Preserve Client Confidentiality: Be mindful that using cloud-based AI tools could risk exposing confidential or privileged information. Most public AI chatbots retain input data. Avoid inputting any sensitive client facts or case specifics into a public AI tool. If your firm utilizes a proprietary or in-house AI platform (as some larger firms are developing), ensure it has proper data security and that its use is compliant with confidentiality rules. Always obtain client consent if there is any possibility that client data might be shared with an AI service outside the firm. Responsible AI use includes protecting client secrets as required by Model Rule 1.6.

6. Stay Educated on Evolving Standards: The landscape of AI in law is rapidly changing. Courts and bar associations continue to issue new guidelines, ethics opinions, and standing orders regarding AI. Attorneys should keep abreast of these developments – for example, by reading ethics committee reports, attending CLEs on AI in practice, and reviewing any standing orders of judges before whom they appear. Being ignorant of a court’s AI-related requirement (like Judge Starr’s certification rule) could itself lead to sanctions or at least judicial ire. In addition, competence today includes technological competence: comment 8 to Model Rule 1.1 advises lawyers to stay familiar with the “benefits and risks” of relevant technology. Make it a point to understand how generative AI works and its failure modes. The more you know about the tool, the better you can use it wisely (or decide not to use it in a given task).

 7. When in Doubt, Do It the Traditional Way: Finally, if you are unsure about the reliability of an AI output or you cannot practically verify an AI-derived assertion, err on the side of caution. It is far better to spend extra time doing conventional research than to risk your professional reputation and client’s case on an unverified AI result. Use AI’s efficiencies in low-stakes drafting (like generating form language or summarizing non-legal text) rather than in tasks that require absolute accuracy (like citing legal precedents). Exercising prudent judgment about when AI is or isn’t appropriate is now part of the attorney’s skillset.

Conclusion

The growth of AI tools in the legal field has been met with both excitement and trepidation. The U.S. cases since 2023 involving AI-generated fictitious citations serve as lessons in the importance of attorney oversight and integrity. Judges have made clear that while using AI in law practice is not forbidden, doing so irresponsibly – without the requisite human diligence – can lead to real penalties, from fines and sanctions to damaged careers. In the words of the court in the Avianca case, “existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings” (1). No matter how advanced our tools become, the lawyer’s fundamental duties of candor, competence, and diligence remain unchanged.

Legal professionals can harness the power of generative AI to enhance efficiency and productivity. But they must do so wisely – verifying all information, double-checking sources, and never relinquishing their critical oversight. The reputation of attorneys and the integrity of the judicial process depend on it. By learning from the early cautionary cases and adhering to best practices, lawyers can avoid the pitfalls of “hallucinated” legal filings and instead ensure that technology augments, rather than undermines, the quality of their advocacy.

Sources:

1. Sara Merken, “New York lawyers sanctioned for using fake ChatGPT cases in legal brief,” Reuters (June 22, 2023). https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/#:~:text=The%20judge%20found%20the%20lawyers,misleading%20statements%20to%20the%20court
 
2. Seyfarth Shaw LLP, “Update on the ChatGPT Case: Counsel Who Submitted Fake Cases Are Sanctioned,” Legal Update (June 26, 2023). https://www.seyfarth.com/news-insights/update-on-the-chatgpt-case-counsel-who-submitted-fake-cases-are-sanctioned.html#:~:text=York%20case%20involving%20lawyers%20who,nothing%20to%20do%20with%20AI
 
3. Onika K. Williams, “Use of ChatGPT for Research Leads to Bogus Cases, Sanctions,” ABA Litigation News (Nov. 1, 2023). https://www.americanbar.org/groups/litigation/resources/litigation-news/2023/use-chatgpt-research-bogus-cases-sanctions/#:~:text=Summary
 
4. Jacqueline Thomsen, “US judge orders lawyers to sign AI pledge, warning chatbots ‘make stuff up’,” Reuters (June 2, 2023). https://www.reuters.com/legal/transactional/us-judge-orders-lawyers-sign-ai-pledge-warning-they-make-stuff-up-2023-05-31/#:~:text=In%20an%20interview%20Wednesday%2C%20Starr,information%20without%20verifying%20it%20themselves
 
5. Sara Merken, “Lawyers in Walmart lawsuit admit AI ‘hallucinated’ case citations,” Reuters (Feb. 10, 2025). https://www.reuters.com/legal/legalindustry/lawyers-walmart-lawsuit-admit-ai-hallucinated-case-citations-2025-02-10/#:~:text=
 
6. Wadsworth v. Walmart Inc., No. 23-CV-118, 2025 WL 608073 (D. Wyo. Feb. 24, 2025) (Order on Sanctions). https://caselaw.findlaw.com/court/us-dis-crt-d-wyo/117003959.html#:~:text=Many%20harms%20flow%20from%20the,claiming%20doubt%20about%20its%20authenticity 
 
7. Guerline Benjamin v. Costco Wholesale Corp., No. 24-CV-6213, 2025 WL 2403482 (E.D.N.Y. Apr. 19, 2025) (Sanctions Order). https://caselaw.findlaw.com/court/us-dis-crt-ed-new-yor/117206693.html#:~:text=course%20of%20litigation%2C%20Plaintiff%20has,1
 
8. Eugene Volokh, “Sanctions on Lawyers for Filing Motion Containing AI-Hallucinated Cases,” The Volokh Conspiracy (Feb. 25, 2025). https://reason.com/volokh/2025/02/25/sanctions-on-lawyers-for-filing-motion-containing-ai-hallucinated-cases/#:~:text=,attorneys%20had%20to%20manually%20cross
 
9. Gauthier v. Goodyear Tire & Rubber Co., No. 1:23-CV-281 (E.D. Tex. Nov. 25, 2024). https://scholar.google.com/scholar_case?case=12295395131698828317&hl=en&as_sdt=2006

10. United States v. Hayes, No. 2:24-cr-0280-DJC (E.D. Cal. Jan. 17, 2025). https://caselaw.findlaw.com/court/us-dis-crt-e-d-cal/116862866.html

11. Mid Central Operating Engineers Health and Welfare Fund v. Hoosiervac LLC, No. 2:24-cv-00326-JPH-MJD (S.D. Ind. Feb. 21, 2025). https://scholar.google.com/scholar_case?case=1143802252603162019&q=Mid-Central+Operating+Engineers+Fund+v.+Hoosiervac+LLC+(S.D.+Ind.+2025)&hl=en&as_sdt=2006

12. Dehghani v. Castro, No. 2:25-cv-0052 MIS-DLM (D.N.M. 2025). https://law.justia.com/cases/federal/district-courts/new-mexico/nmdce/2:2025cv00052/511942/28/
