You Are Not Hallucinating

Risks and problems associated with generative AI for lawyers

May 22, 2024

The New York State Bar Association’s Task Force on Artificial Intelligence recently issued a report and recommendations regarding artificial intelligence (AI) in the legal field. The task force was created to examine the legal, social, and ethical impact of AI on the legal profession, including both the ways AI can enhance the profession and the risks posed by its use. These risks may affect the individual attorney as well as the integrity of the judicial process. As to its effect on the legal profession, the task force divided its discussion of AI into three areas of impact: ethical considerations, access to justice, and judicial response.

Ethical Considerations

The main point of impact and concern is the set of ethical considerations and risks that the use of AI poses to legal practitioners. The task force identified six specific areas that may be most implicated. First is the duty of competency. This duty requires that lawyers be aware of the risks and benefits of the technology used to serve their clients and that they maintain their education, training, and proficiency with these tools. Therefore, to satisfy the duty of competency, lawyers utilizing AI must educate themselves on how to properly use these tools for the benefit of clients while remaining cognizant of the risks they pose.

Confidentiality and Privacy Issues

Next, the task force identified overlapping issues that may arise with regard to the duty of confidentiality and privacy, as well as attorney-client privilege and attorney work product. These concerns arise when information is entered into generative AI tools, such as chatbots and ChatGPT. Entries containing client information or attorney work product may become part of the technology’s training set, thereby storing sensitive information about the client and/or case strategy. That information can then be exposed when evaluative AI is used to examine the technology’s results, which can violate past and future protective orders as well as the client’s confidentiality.

Additionally, communications made in the presence of a third party may not be entitled to attorney-client privilege. Again, concern arises when entries into a chatbot are stored. This information may then become accessible to third parties when developers analyze the training set and results in an effort to improve and develop AI services, when training sets are disclosed to AI vendors, and when input data can be viewed by other parties on some public forms of AI.


An attorney’s duty of supervision is also implicated, as courts have found that nonhuman entities like AI can be considered nonlawyers. Therefore, as with any other nonlawyer, it is the lawyer’s duty to verify the accuracy of work produced by AI. In many cases, the use of AI without lawyer oversight may be considered the unauthorized practice of law. The lawyer must be part of the “information loop” when using AI, meaning that AI programs can direct clients to forms and templates but may not give advice as to the substance of those documents.

A lawyer’s duty of candor to the court is also implicated when verifying the accuracy of information and legal authorities produced by AI. To satisfy this duty, lawyers must identify and correct mistakes made by AI in information presented to the court. AI can produce “hallucinations”: fake cases, citations, and legal arguments that appear correct but do not actually exist.

Benefits of AI

One benefit specific to the legal industry identified by the task force is the possibility of increased access to justice for underserved communities that cannot afford legal services. AI tools make it easier for members of these communities to obtain answers to their legal problems. However, this comes with several concerns, one of which is the inaccuracy of chatbots. A Stanford University study found that 75% of the answers generated by AI chatbots about a sample court ruling were incorrect.

Further, AI cannot adequately address questions of law that implicate more than one practice area. For example, a legal issue implicating both immigration law and criminal law may yield an answer that is accurate for immigration law purposes but disregards criminal law issues and implications. AI may therefore actually widen the justice gap, as underserved communities may be relegated to inferior, less expensive forms of AI, and these individuals may not know how to prompt the AI effectively to obtain the answers they are looking for.

Recommendations

The task force ultimately recommended that the NYSBA adopt guidelines specifically concerning the use of AI and create a standing committee to update those guidelines as AI technology evolves. While many of the concerns posed by the use of AI are more sophisticated versions of problems that already exist and are governed by court rules, rules of professional conduct, and other laws and regulations, the comments to the rules of professional conduct will need to be adjusted to better address concerns that arise from AI.

As for individual attorneys, all practicing attorneys must take the time to educate themselves on the use of AI within the framework of the rules of professional conduct and must ensure that the client’s interests are protected when opting to use AI as a tool in their practice.

This article originally appeared on Freeman Mathis & Gary, LLP. 


About the Author

Jenna N. Lofaro is an associate at Freeman Mathis & Gary, LLP. jenna.lofaro@fmglaw.com
