Artificial Intelligence A.I.n’t the answer



By: Christopher J. Fulmer

The Commonwealth of Massachusetts recently joined a growing number of jurisdictions in handing down decisions condemning attorneys for using generative artificial intelligence (“AI”) to prepare and file court papers that rely on citations to fabricated authority.1 In Smith v. Farwell, et al., Civil Action No. 2282CV01197 (“Smith”), the Massachusetts Superior Court ordered plaintiff’s counsel to pay $2,000 in sanctions under Mass. R. Civ. P. 11 for submitting papers to the court that relied on at least four cases that were “made-up” by a generative AI website.

When the defendants in Smith each moved to dismiss the wrongful death action, counsel for the plaintiff filed multiple legal memoranda that cited and relied upon “wholly-fictitious” case law in opposing the defendants’ motions. The court discovered something was amiss when it could not locate the cases plaintiff’s counsel relied upon concerning the elements of a wrongful death action.

At oral argument for the motions to dismiss, the court announced it had discovered three fictitious case citations in plaintiff’s opposition briefs and asked plaintiff’s counsel for an explanation. Plaintiff’s counsel claimed to be unfamiliar with the cases and “had no idea where or how they were obtained.” The court then ordered plaintiff’s counsel to file a written explanation of the origin of the fabricated case citations. 

As it happened, the briefs had been prepared by an associate and two recent law school graduates who had yet to pass the bar exam. In a written letter submitted to the court on November 6, 2023, plaintiff’s counsel admitted that the cases did not exist and had been inadvertently included in the opposition briefs. Counsel apologized to the court and expressed regret for failing to “exercise due diligence in verifying the authenticity” of the cases.

After holding a sanctions hearing, the court ordered the attorney to pay $2,000 in sanctions under Mass. R. Civ. P. 11, “notwithstanding Counsel’s candor and admission of fault.”

In its decision, the court remarked on the myriad potential ethical risks posed by employing generative AI in the legal field. The court noted that while Smith involved violations of the Rules of Professional Conduct concerning competence, the use of AI could cause other rule violations concerning diligence, confidentiality of information, candor toward the tribunal, and the responsibilities of partners, managers, and supervisory lawyers, as well as the unauthorized practice of law, along with the general rules governing an attorney’s obligations as an advisor and the prohibition on misconduct.

The court noted: “AI can generate citations to totally fabricated court decisions bearing seemingly real party names, with seemingly real reporter, volume, and pages referenced, and seemingly real dates of decision.” The court also warned that entering confidential client information into an AI system can violate the attorney’s “obligation to maintain client confidences because the information can become part of the AI system’s database, then disclosed by the AI when it responds to other users’ inquiries.”   

In addition to the ethical pitfalls AI poses, attorneys who submit and rely on fictitious cases violate both the federal and Massachusetts Rules of Civil Procedure and open themselves up to sanctions for “bad faith” conduct. In Massachusetts, attorneys have a duty to make a “reasonable inquiry” to ensure an absence of bad faith and sham pleadings.   

As the Smith court held, “Any information supplied by a Generative AI system must be verified before it can be trusted.” The failure to review the case citations in the oppositions for accuracy, or, at the very least, to ensure that someone else in the office reviewed them before filing the memoranda, was a clear violation of Rule 11. “Simply stated, no inquiry is not a reasonable inquiry.”

Attorneys seeking more efficient ways to generate work product must be careful if they choose to do so through the use of generative AI. In addition to monetary sanctions under Rule 11, the Smith court warned that suspension and even disbarment are appropriate disciplinary actions for the blind acceptance of AI-generated content in the practice of law. The court’s final warning was this: “It is imperative that all attorneys practicing in the courts of this Commonwealth understand that they are obligated under Mass. R. Civ. P. 11 and 7 to know whether AI technology is being used in the preparation of court papers that they plan to file in their cases and, if it is, to ensure that appropriate steps are being taken to verify the truthfulness and accuracy of any AI-generated content before the papers are submitted.” This warning should be heeded by all attorneys, no matter where they practice.

For more information, please contact Chris Fulmer or your local FMG attorney.


  1. See Mata v. Avianca, Inc., 2023 WL 4114965, at *1 (S.D.N.Y. June 22, 2023) (ordering sanctions in the amount of $5,000 for submitting and defending use of non-existent authority generated by AI website); Park v. Kim, 2024 WL 332478, at *4 (2d Cir. Jan. 30, 2024) (referring plaintiff-appellant’s attorney for possible disciplinary action based upon submission of brief containing “non-existent authority” generated by ChatGPT); Will of Samuel, 2024 WL 238160, at *2 (N.Y. Sur. Jan. 11, 2024) (announcing court’s intention to sanction attorney for submitting papers containing “fictional” citations generated by an AI website); United States v. Cohen, 2023 WL 8635521, at *1 (S.D.N.Y. Dec. 12, 2023) (ordering counsel to “show cause in writing why he should not be sanctioned … for citing non-existent cases to the Court”).