5/22/25
By: Lynn M. Dean
At least three federal courts in recent weeks have reviewed the conduct of lawyers and litigants who improperly used Artificial Intelligence (“AI”)-generated research in submissions to judges. The emerging consensus is that such submissions violate attorneys’ obligations under Rule 11 of the Federal Rules of Civil Procedure to ensure that the legal contentions in papers they sign and file are warranted by existing law.
First, on May 6, 2025, U.S. Magistrate Judge (Ret.) Michael Wilner issued a sanctions order in the Central District of California in which he struck briefing submitted in support of a discovery motion, declined to grant the motion, and fined plaintiff’s counsel $31,000.1 Why? Because the firm had submitted briefing containing AI-generated research with incorrect legal citations, including citations to two authorities that did not exist. Moreover, after Judge Wilner called this defect to their attention, counsel filed a partially corrected brief that still contained AI-generated errors. After a hearing on an Order to Show Cause re Sanctions, Judge Wilner wrote:
According to my after-the-fact review – and supported by the candid declarations of Plaintiff’s lawyers – approximately nine of the 27 legal citations in the ten-page brief were incorrect in some way. At least two of the authorities cited do not exist at all. Additionally, several quotations attributed to the cited judicial opinions were phony and did not accurately represent those materials. The lawyers’ declarations ultimately made clear that the source of this problem was the inappropriate use of, and reliance on, AI tools.
Counsel’s explanation for the failure was that one attorney used AI tools to generate an outline for the brief and forwarded that outline to his co-counsel at another firm, who incorporated it into the document filed with the Court. “No attorney or staff member at either firm… cite-checked or… reviewed the research before filing the brief…” Judge Wilner characterized the conduct of the lawyers as “deeply troubling.” They not only “failed to check the validity of the research sent to them,” but after he contacted them and let them know his concerns regarding the research, “the lawyers’ solution was to excise the phony material and submit the Revised Brief – still containing a half-dozen AI errors.” Judge Wilner also expressed concern that “even though the lawyers were on notice of a significant problem with the legal research,” they failed to disclose they had used AI to generate it.
Second, on May 9, 2025, Chief Judge Kenneth J. Gonzales of the United States District Court for the District of New Mexico issued a bulletin to litigants in that District in which he noted the “growing frequency” of the use of AI in motions and other filings.2 Judge Gonzales observed “examples of filings have been brought to my attention that appear to include AI-generated arguments and citations that are not warranted by existing law…” including documents that “appear to use AI and citations to non-existent cases.” Judge Gonzales warned that sanctions for such conduct could include monetary fines and referral to the disciplinary boards of state bar associations.
Finally, on May 15, 2025, a litigant in a case in the Northern District of Alabama accused defendants’ counsel of supporting a discovery motion with “wholly invented case citations… possibly through the use of generative artificial intelligence.”3 Counsel explained that four of the cases cited in his opponent’s brief appeared to have been fabricated by AI. He concluded that the “Defendant’s complete fabrication of case law is suggestive of an abuse of the utilization of generative artificial intelligence and should be taken very seriously by this Court.”
In response, the court issued an Order to Show Cause, noting “[i]n the light of the seriousness of the accusation, the court has conducted independent searches for each allegedly fabricated citation, to no avail.”4 The court ordered each of the attorneys to show good cause why they should not be sanctioned under Federal Rule of Civil Procedure 11, the court’s inherent authority, Local Rule 83.1(f), and/or Alabama Rule of Professional Conduct 3.3 for making false statements of fact or law to the court. In response, the attorneys admitted the research was generated by ChatGPT and included in the motion without verifying its accuracy.5 Admitting they had chosen “convenience at the expense of accuracy,” counsel apologized for the “extremely poor lapse in judgment” and promised to take “every necessary step” to “restor[e] the court’s confidence in our written and spoken words.” On May 21, 2025, the court permitted the offending party to withdraw the briefs and make an amended filing, but ordered additional briefing on the issue of appropriate sanctions. The matter is ongoing.
Generative AI is here to stay, and some clients are demanding it on cost-savings grounds. That said, these cautionary tales make it clear that “[t]he use of artificial intelligence must be accompanied by the application of actual intelligence in its execution.”6 Lawyers who care about their licenses and their reputations will make sure the results of AI research are cite-checked by a human being before filing any brief that relies on them.
For any questions or further clarification, please contact Lynn M. Dean at lynn.dean@fmglaw.com or your local FMG attorney.
Information conveyed herein should not be construed as legal advice or represent any specific or binding policy or procedure of any organization. Information provided is for educational purposes only. These materials are written in a general format and are not intended to be advice applicable to any specific circumstance. Legal opinions may vary when based on subtle factual distinctions. All rights reserved. No part of this presentation may be reproduced, published or posted without the written permission of Freeman Mathis & Gary, LLP.