You are not hallucinating: Risks and problems associated with Generative AI for lawyers



By: Jenna N. Lofaro

The New York State Bar Association’s Task Force on Artificial Intelligence recently issued a Report and Recommendations regarding Artificial Intelligence (“AI”) in the legal field. The task force was created to examine the legal, social, and ethical impact of AI on the legal profession, including both the ways AI can enhance the profession and the risks posed by its use. These risks may affect the individual attorney as well as the integrity of the judicial process. The task force divided its discussion of AI’s effect on the legal profession into three areas of impact: ethical considerations, access to justice, and judicial response.

The primary area of concern is the set of ethical considerations and risks that the use of AI poses to legal practitioners. The task force identified six specific duties that may be most implicated. First is the duty of competency. This duty requires that lawyers understand the risks and benefits of the technology used to serve their clients and that they maintain their education, training, and proficiency with these tools. Therefore, to satisfy the duty of competency, lawyers utilizing AI must educate themselves on how to use these tools properly for the benefit of clients while remaining cognizant of the risks they pose.

Next, the task force identified overlapping issues involving the duty of confidentiality and privacy as well as attorney-client privilege and attorney work product. These concerns arise when information is entered into generative AI tools, such as chatbots and ChatGPT. Entries containing client information or attorney work product may become part of the technology’s training set, thereby storing sensitive information about the client and/or case strategy. That information can then be exposed when evaluative AI is used to examine the technology’s results, which can violate past and future protective orders as well as the client’s confidentiality. Additionally, communications made in the presence of a third party may not be entitled to attorney-client privilege. Again, the concern arises when entries into a chatbot are stored. This information may become accessible to third parties when developers analyze the training set and results to improve and develop AI services, when training sets are disclosed to AI vendors, and when data input can be viewed by other users on some public forms of AI.

An attorney’s duty of supervision is also implicated, as courts have found that nonhuman entities like artificial intelligence can be considered nonlawyers. Therefore, as with any other nonlawyer, it is the lawyer’s duty to verify the accuracy of work produced by AI. In many cases, the unsupervised use of AI may be considered the unauthorized practice of law. The lawyer must remain part of the “information loop” when using AI, meaning AI programs may direct clients to forms and templates but may not give advice as to the substance of those documents.

A lawyer’s duty of candor to the court is also implicated when verifying the accuracy of information and legal authorities produced by AI. To satisfy this duty, lawyers must identify and correct mistakes made by AI in information presented to the court. AI can produce “hallucinations”: fake cases, citations, and legal arguments that seem correct but do not actually exist.

One benefit specific to the legal industry identified by the task force is the potential for increased access to justice for underserved communities that cannot afford legal services. AI tools make it easier for these communities to obtain answers to their legal problems. However, this comes with several concerns. One is the inaccuracy of chatbots: a Stanford University study found that 75% of the answers generated by AI chatbots regarding a sample court ruling were incorrect. Further, AI cannot adequately address questions of law that implicate more than one practice area. For example, a legal issue implicating both immigration law and criminal law may yield an accurate answer for immigration law purposes while disregarding any criminal law issues and implications. AI may therefore actually widen the justice gap, as underserved communities may be limited to inferior, less expensive forms of AI, and these individuals may not know how to prompt the AI effectively to obtain the answers they are looking for.

The task force ultimately recommended that the NYSBA adopt guidelines specifically concerning the use of AI and create a standing committee to update those guidelines as AI technology evolves. While many of the concerns posed by the use of AI are more sophisticated versions of problems that already exist and are governed by court rules, rules of professional conduct, and other laws and regulations, the comments to the rules of professional conduct will need to be adjusted to better address concerns arising from AI. As for individual attorneys, all practicing attorneys must take the time to educate themselves on the use of AI within the framework of the rules of professional conduct and must ensure that the client’s interests are protected when opting to use AI as a tool in the legal profession.

For more information, please contact Jenna Lofaro at or your local FMG attorney.