The evolution of artificial intelligence programs presents an interesting dichotomy: if these tools prove successful at increasing efficiency and enhancing effectiveness, should there be a threshold mandate for their use in the legal profession, and if so, what ethical obligations should sit alongside such requirements?

Artificial intelligence (AI) enables machines to mimic human intelligence to perform tasks such as learning, decision making and problem solving, and its integration into various industrial and financial sectors continues to drive transformative change. The legal community has confronted the challenges of adapting to AI with optimism, and justifiably so. AI has been used to automate document analysis, enhance legal research, predict legal outcomes and facilitate case management, resulting in increased efficiency and accuracy in legal practice. Legal databases such as Lexis+ and Westlaw boast new AI programs to transform legal research, contract analysis and document review while promising "hallucination-free" legal support.

The integration of AI offers immense potential to enhance the delivery of legal services. In fact, a recent survey conducted by LexisNexis found that 80% of Fortune 1000 executives expect their outside counsel to become more efficient in their operations—and thus reduce billing—by leveraging the efficiencies of AI. While these tools may enhance efficiency, they introduce nuanced ethical challenges related to attorney responsibilities and the broader legal code of ethics.

For instance, large language models (LLMs) like ChatGPT have been known to "fill in the gaps" by outputting incorrect information when their training data (input) is inadequate or flawed. This phenomenon is commonly referred to as a "hallucination," and such mistakes have eroded trust in generative AI tools.

The legal community has observed how AI hallucinations can infect legal research, resulting in citations to non-existent cases, or even quotations from non-existent cases. The now infamous New York case Mata v. Avianca, involving two lawyers who submitted a ChatGPT-drafted brief containing false information produced by hallucinations, has served as a cautionary tale by revealing the consequences of using such AI tools without proper review.

Most recently, disgraced lawyer Michael Cohen lost his renewed bid for a shortened criminal sentence but dodged the sanctions bullet, as the court concluded that the false citations created by AI hallucination and cited by Mr. Cohen in filings before the court fell short of "bad faith" and were merely "negligent."

As this technology advances, its implications for attorney responsibilities, canons and the broader code of ethics become increasingly complex. Indeed, Chief Justice John Roberts lamented the role of hallucinations in his 2023 year-end report on the federal judiciary. Although AI has prompted a reevaluation of the ethical framework in many jurisdictions to address issues like hallucinations, perhaps reviewing the foundations of attorneys' ethics is sufficient.

For example, the underpinnings of the Model Rules of Professional Conduct, Rule 1.1 (Lawyer's Duty of Competence) and Rule 1.3 (Diligence), may already appropriately govern the ethical use of AI. Pursuant to disciplinary rules found in most jurisdictions, law firms and supervising attorneys are directed to ensure that the conduct of non-lawyers is compatible with the professional obligations of the lawyer. Thus, just as lawyers are responsible for ensuring that non-lawyers working under their supervision act in an ethical and competent manner, lawyers should impose the same obligations upon the AI tools they utilize.

Balancing Ethical Considerations and the Use of AI

As AI matures and its iterations are more readily available, the failure to utilize such technology may compromise an attorney's ability to provide "competent legal representation." The Federal Rules of Civil Procedure urge practitioners to ensure the "just, speedy and inexpensive" resolution of cases. Attorneys have responded by scaling up existing methods to make that possible (think smartphones, predictive coding in e-discovery and virtual assistants). However, with the arrival and sophistication of "smart machines" attorneys are challenged to find the ethical balance between replacing attorney tasks versus augmenting attorney work.

Even raising the question is controversial: a 2023 survey conducted by Thomson Reuters found that 82% of attorneys surveyed believed generative AI can be readily applied to legal work, whereas only 51% said generative AI should be applied to legal work.

When seeking a balance between the two, consideration of ethical obligations found in local disciplinary canons and professional ethical guidance is paramount. The American Bar Association amended Comment 8 to Model Rule of Professional Conduct 1.1 (Lawyer's Duty of Competence) to address "technology competency." To be technologically competent, a lawyer not only "should keep abreast of changes in the law and its practice" but must also be knowledgeable of the "benefits and risks associated with relevant technology." The amendment captures the balance between leveraging technology in meaningful ways while also ensuring lawyers abide by their ethical duty to maintain competency in the technology utilized.

Therefore, lawyers must have a basic understanding of what tasks, and under what circumstances, AI can be employed to perform. Perhaps this means attorneys must possess some appetite for engaging with AI in their everyday lives. It also means lawyers must exercise reasonable care and employ precautions when using such programs.

For instance, when GPT-4 was released, OpenAI claimed a factual accuracy rate between 70% and 80%, depending on the subject matter. Currently, the website warns users that the program "may occasionally generate incorrect information." In the same way that a lawyer has a duty to supervise any non-lawyer to whom the lawyer delegates work, a lawyer must supervise the inputs—what information is being shared with AI—as well as the outputs—what information AI provides as an answer. This skill takes practice.

While there is a general understanding that lawyers should consider AI accuracy rates and review AI-generated work, what else is expected of a lawyer to remain technologically competent is unclear. The Model Rules of Professional Conduct were written long before advanced AI programs existed, and their directive that lawyers understand the benefits and risks of AI is more complex to apply than originally contemplated.

For instance, the lawyers in Mata v. Avianca reasoned that they were unaware ChatGPT could hallucinate, producing incorrect information or citing entirely non-existent cases. Nevertheless, the court held such reasoning was insufficient and sanctioned the firm and the attorneys involved. With the advancement of AI technologies—derived from complex computer science and statistics—how competent a lawyer must be to employ AI is neither straightforward nor simple.

Shifts in the Ethical Framework by AI-Imposed Challenges

Cases such as Mata v. Avianca serve as cautionary tales, but some judges and other decision-making bodies within the legal community are actively taking steps to ensure such stories are not repeated.

Some believe this starts with education. For instance, the American Bar Association formed an AI group to assess AI's impact while also probing the ethical questions the technology poses. The group of seven "special advisors" is tasked with evaluating risk management, generative AI, access to justice, and AI governance in legal education. Similarly, the Delaware Supreme Court created the Commission on Law and Technology to educate both the bench and the bar on technology. The Illinois Judicial Conference task force, created this year, meets monthly to discuss how generative AI could help the court system improve access to the courts, promote procedural fairness and increase public confidence in the judiciary.

Others believe the ethical use of AI is achieved by court-imposed mandates. For instance, Chicago Magistrate Judge Gabriel Fuentes requires lawyers to disclose any "specific AI tool" used for legal research or document drafting. Judge Stephen Vaden of the U.S. Court of International Trade defined specific steps to safeguard data, asserting that AI technologies "challenge the Court's ability to protect confidential and business proprietary information from access by unauthorized parties." Similarly, the State Bar of Florida approved Ethics Advisory Opinion 24-1, which provides that lawyers may ethically use generative AI if they can guarantee compliance with their ethical obligations, including the duty of client confidentiality.

While the intent of the advisory opinion is to ensure client confidentiality, it effectively presents lawyers with a conundrum: guaranteeing that confidential data or other proprietary information is not inadvertently input into, and subsequently subsumed by, an AI system in ways that would compromise an organization's security and intellectual property.

When imposing a mandatory certification regarding generative AI, Judge Brantley Starr of the U.S. District Court for the Northern District of Texas explained:

While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law or the laws and Constitution of the United States (or, as addressed above, the truth).

What becomes evident as these new requirements emerge is that technological competence is not a skill lawyers can simply check off on their CLE list—it is recursive. With AI's promise of streamlining tasks, lawyers must consistently evaluate and reevaluate their current processes by engaging with these systems, understanding their blind spots and understanding how they work. Lawyers should consider what additional requirements are imposed in their jurisdiction and look for other meaningful ways to participate as technology continues to integrate into the legal profession.

* * * *

Reprinted with permission from the April 2, 2024, edition of the "New York Law Journal" © 2024 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or asset-and-logo-licensing@alm.com.
