The Importance of Verifying AI-Generated Legal Insights



By: Samuel C. Jeon

In recent months, Artificial Intelligence (“AI”) has become increasingly prevalent in the legal field. AI-powered tools are now being used for a variety of tasks, including legal research, contract drafting, and case prediction. While AI can be a powerful tool, it is important to be aware of the potential dangers of relying too heavily on it without verifying its work product.

A recent case involving the airline Avianca and a passenger named Roberto Mata highlights the potential pitfalls of not verifying AI’s legal research. In this case, Mata’s lawyer, Steven A. Schwartz of Levidow, Levidow & Oberman, turned to ChatGPT, an AI software, to research cases. Mata had sued Avianca, alleging that he was injured when a metal serving cart struck his knee during a flight. When Avianca requested the case be dismissed, Schwartz submitted a brief on Mata’s behalf, citing several court decisions.

However, a startling revelation soon followed. Several of the court decisions and opinions cited in the brief turned out not to exist; they had been fabricated by ChatGPT.

Schwartz, who has practiced law in New York for three decades, admitted that he had failed to confirm ChatGPT’s legal research. He stated that he had no intention of deceiving the court or the airline and expressed regret for relying on ChatGPT without verifying the authenticity of its citations.

Judge P. Kevin Castel, who presided over the case, described the situation as an “unprecedented circumstance.” He was presented with a legal submission filled with “bogus judicial decisions, with bogus quotes and bogus internal citations.” A hearing has been scheduled to discuss potential sanctions.

This incident has sparked discussion in the legal community about both the value and the dangers of AI software like ChatGPT. AI can be a powerful aid, but this case underscores the risks of using it in legal research and the importance of verifying the information it provides before relying on it in court.

Here are some tips for using artificial intelligence in legal research:

  • Be aware of AI’s limitations. AI software like ChatGPT is a machine learning model, not a legal database. It can make mistakes, including generating plausible-sounding but entirely fictitious cases, quotes, and citations.
  • Use artificial intelligence as a tool, not a replacement for human judgment. AI can be a valuable starting point for legal research, but its output should always be checked against primary sources and your own knowledge and experience.
  • Maintain human oversight and verification. Even as AI becomes increasingly prevalent in the legal world, this remains essential in professional fields like law, where the stakes are high and errors can have serious consequences.

For more information, contact Samuel C. Jeon or your local FMG attorney.