In Spring 2025, Harvard Law School plans to offer a course on “Agentic Artificial Intelligence and the Law,” reflecting the growing significance of agentic artificial intelligence (AI) in legal practice. Agentic AI refers to systems capable of autonomous decision-making, task execution, and adaptation to dynamic environments without direct human oversight. Unlike Generative AI (GenAI), which creates content such as text or images, agentic AI focuses on goal-oriented actions and independent problem-solving.
The transformative potential of agentic AI lies in its ability to handle complex workflows, akin to a human employee, enabling automation of tasks traditionally performed by junior lawyers. This could reshape legal practice by enhancing efficiency, reducing costs, and requiring lawyers to adapt their skills to collaborate with such systems.
Definitions
Although definitions of AI vary widely among the states with AI legislation, several already appear broad enough to encompass agentic AI. California, which has one of the more aggressive AI laws to date, defines AI as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”
Colorado, meanwhile, defines AI as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations that can influence physical or virtual environments.”
To the extent that AI operates in the workplace, certain state definitions also suggest that agentic AI is covered. California’s AI definition, for example, states that the system “varies in its level of autonomy.” Both Colorado’s and Utah’s definitions of AI or AI systems encompass a system’s influence over physical or virtual environments.
Overseas, the European Union’s AI Act already appears to anticipate agentic AI, defining an AI system in a similar manner in Article 3(1), and such systems could be classified as high-risk depending on their use. The World Economic Forum is up to speed, specifically defining and addressing agentic AI in its recently published white paper on “Navigating the AI Frontier: A Primer on the Evolution and Impact of AI Agents.”
A Different Kind of Large Language Model – Move Over, GenAI!
Because agentic AI is characterized by the autonomous capability to plan and execute complex sequences of actions on behalf of users, it is not only different from GenAI but its applications are far ahead of the generative AI that lawyers may already use. Within the legal sphere, GenAI is already being used for tasks such as document review, contract analytics, predictive analytics, and basic legal research. Forbes has noted that GenAI is the “second wave” following predictive AI; agentic AI is the third.
As agentic AI systems are deployed across diverse applications, they are being entrusted with escalating levels of discretionary power. These systems will have a sophisticated grasp of adaptive decision-making processes and the ability to autonomously pursue various objectives. Experts predict that the next few years will bring more and more agentic AI systems into our daily lives—and the practice of law will not be insulated or exempt.
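To make the distinction concrete, the sketch below is a minimal, hypothetical agent loop in Python; every name in it (run_agent, the tools dictionary, the llm callable) is an illustrative assumption rather than any vendor’s actual API. Where GenAI returns a single generated answer, an agentic system repeatedly chooses an action, executes it, observes the result, and adapts until the goal is met.

```python
# A minimal, hypothetical sketch of an agentic loop, for illustration only.
# Unlike a single GenAI call (prompt in, text out), an agent iterates:
# it plans an action, executes it, observes the result, and adapts.

def run_agent(goal: str, tools: dict, llm, max_steps: int = 10) -> str:
    history = []  # record of actions taken and their observed results
    for _ in range(max_steps):
        # Ask the model to choose the next action toward the goal,
        # given everything that has happened so far.
        action = llm(f"Goal: {goal}\nHistory: {history}\nNext action?")
        if action.startswith("FINISH"):
            return action  # the agent judges the goal satisfied
        tool_name, _, argument = action.partition(":")
        result = tools[tool_name](argument)  # execute, e.g. search or draft
        history.append((action, result))     # observe and remember
    return "Stopped: step limit reached without finishing."
```

The loop, not any particular model, is what makes the system “agentic”: the same underlying large language model that drafts a memo in one shot becomes an agent once it is allowed to decide, act, and react in sequence.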
The fact that Spellbook named its first full-fledged legal AI agent “Associate” upon its launch in August 2024 may cause some concern; in some circles, AI is predicted to automate 100,000 legal positions by 2036. But lawyers would do well to approach AI with optimism instead of fear. Law firms need future generations of talented associates and partners.
So far, absent a few highly publicized glitches, AI technology has tended to help lawyers more than harm them—for example, by freeing up associates to do more substantive work than document review and to complete tasks such as research and drafting faster. This is still a profession that values independent decision-making and creativity. “It will undoubtedly become crucial for lawyers to master AI tools,” Forbes noted eloquently in October, “but these tools are most effective when wielded by those with uniquely human strengths.”
AI in the Legal Profession: Positives and Potential
At the close of 2024, law firms are reported to be quite interested in agentic AI—from offering agentic AI commercial services to clients to exploring the lawyering tools that are available so far. This comes as no surprise, as a survey by LexisNexis published at the start of the year found that 53 percent of Am Law 200 firms have purchased AI tools and 45 percent are using them for legal work.
Forty-three percent of Am Law 200 leaders said their firm had “a dedicated budget to invest in the growth opportunities presented by generative AI in 2024.” This will likely expand to agentic AI, as firms become more familiar with this new form. Wilson Sonsini has reportedly offered an agentic AI commercial contracting tool for cloud services companies; Clifford Chance reportedly is eyeing autonomous agents created through Microsoft Copilot Studio in order to stay on the leading edge.
The Thomson Reuters Future of Professionals Report 2024, published in July, shows that professionals increasingly anticipate AI will have a significant impact on their work within the next five years—79 percent of respondents said so, up ten percentage points from 2023. The data in the report suggests that AI could “free up additional work time at a pace of 4 hours freed up per week within one year; 8 hours in three years’ time, and 12 hours in five years.”
This will allow lawyers to focus on higher-level work, fostering both creativity and strategic problem-solving. Meanwhile, a Bloomberg Law 2024 State of Practice report says that 41 percent of firms have a dedicated internal team focusing on evaluating AI tools; 29 percent report having a legal team or practice group for AI law designed for clients. Clearly, firm leaders are eager to invest in AI.
The potential benefits in the law firm space appear almost limitless. Those in the field are already envisioning agentic AI legal project managers, legal research agents, and due diligence agents: “the key distinction here is that these agents would not just suggest actions or provide information—they would take concrete steps to complete tasks involving human oversight only when necessary or desired.”
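That “human oversight only when necessary or desired” idea can be sketched in code. The fragment below is a simplified illustration under assumed names (HIGH_STAKES, execute_with_oversight, the tools dictionary), not any product’s actual design: routine steps run autonomously, while high-stakes actions pause for a supervising attorney’s approval.

```python
# Hypothetical sketch: routine actions run autonomously, while
# high-stakes actions pause for a supervising attorney's approval.

HIGH_STAKES = {"file_with_court", "send_client_email", "sign_agreement"}

def execute_with_oversight(action: str, argument: str, tools: dict) -> str:
    if action in HIGH_STAKES:
        # Surface the proposed step to a human before acting on it.
        answer = input(f"Agent proposes {action}({argument!r}). Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "Action declined by supervising attorney."
    return tools[action](argument)  # routine or approved: execute the tool
```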
A New York City Bar podcast published at the end of October—entitled “Could Agentic AI Be Your Next Legal Intern”—pointed out that agentic AI could simulate litigation with simulated jurors, helping to figure out which jurors would be most favorable and which arguments would be most effective (although we’re not ready to have AI take on the role of judges).
Agentic AI could draft and send emails, as well as coordinate with court calendars when there’s a change in the schedule. An intern with a great AI tool could knock out a contract analysis project that might have taken an entire summer in the past. In smaller organizations—which may include sole practitioners or even corporate legal departments—there may be few or no interns to do the work, and AI can help. (Regarding in-house counsel, the Bloomberg State of Practice report indicates that 49 percent have advised their organizations on GenAI, at least with respect to data management and document preparation.)
Perhaps the most encouraging use of agentic AI is the freeing up of time devoted to mundane tasks, and time, of course, means money: Thomson Reuters is already predicting that AI in general could free up roughly 200 hours per professional per year, consistent with the four hours per week cited above.
The Challenges of Agentic AI
Of course, the rapid advancement and integration of agentic AI systems raise critical legal questions that demand scholarly attention for all industries. These questions include:
- Liability and Accountability: As agentic AI systems gain autonomy, determining liability for their actions becomes increasingly complex. Legal frameworks may need to be adapted to address scenarios where AI decisions lead to harm or unintended consequences.
- Regulatory Challenges: The dynamic nature of agentic AI systems poses unique challenges for regulatory bodies. Developing adaptive regulatory frameworks that can keep pace with technological advancements while ensuring public safety and ethical use is paramount.
- Intellectual Property Rights: The creation of novel solutions by agentic AI systems may blur traditional notions of authorship and inventorship, necessitating a reevaluation of intellectual property laws.
- Privacy and Data Protection: As these systems interact with and process vast amounts of personal data, ensuring compliance with existing data protection regulations and addressing potential new privacy concerns becomes crucial.
- Ethical Decision-Making: The incorporation of ethical decision-making capabilities in agentic AI systems raises questions about the legal standards and oversight mechanisms needed to ensure alignment with human values and societal norms.
These legal considerations underscore the need for interdisciplinary collaboration between legal scholars, technologists, and policymakers to develop robust legal frameworks that can effectively govern the deployment and use of agentic AI systems in society.
While GenAI offers lawyers the ability to augment their efficiency and provide greater responsiveness to their clients, the use of these tools, especially in the area of legal research, can be problematic. A recent Stanford study that claims to be the “first preregistered empirical evaluation of AI-driven legal research tools” indicates that, despite claims by providers, AI legal research tools using retrieval augmented generation (RAG) still have significant hallucination rates.
Even with enterprise models, like Lexis AI and Westlaw AI, the authors found that those systems “can fail to distinguish between arguments made by litigants and statements by the court” and “struggle with orders of authority.” Notably, RAG-based legal research tools were more accurate than general-purpose AI models like ChatGPT, but they did not eliminate hallucinations entirely.
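For readers unfamiliar with the technique, retrieval augmented generation grounds the model’s answer in documents fetched from a source collection before generation. The minimal sketch below, with a hypothetical retriever and llm, shows the two-step structure, and why grounding reduces hallucination without eliminating it: the model can still misread or misattribute what was retrieved.

```python
# Hypothetical sketch of retrieval augmented generation (RAG).
# Grounding the model in retrieved authorities reduces hallucination,
# but the model can still misread, misattribute, or over-claim them.

def rag_answer(question: str, retriever, llm, k: int = 5) -> str:
    # Step 1: retrieve the k most relevant passages (cases, statutes, etc.).
    passages = retriever.search(question, top_k=k)
    context = "\n\n".join(p.text for p in passages)
    # Step 2: generate an answer constrained to the retrieved context.
    prompt = (
        "Answer using ONLY the sources below, with citations; "
        "say 'not found' if the sources do not answer.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)  # output still requires verification by a lawyer
```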
Given the evolving state of agentic AI development and use, it is not certain whether agentic AI will lessen these issues of error, mistake, and confabulation. Still, agentic AI would appear to be more inclined toward accuracy and self-correction, something GenAI does not offer.
Professional Responsibility
For lawyers especially, the use of agentic AI as well as GenAI may implicate a number of the ABA Model Rules of Professional Conduct, along with the specific rules of individual states.
To adhere to their professional responsibilities, lawyers must ensure they are competent in using AI tools and understand the tools’ capabilities and limitations. According to Formal Opinion 512 on GenAI tools, released by the ABA in July 2024, lawyers “must have a reasonable understanding of the capabilities and limitations of the specific [GenAI] technology that the lawyer might use” and must “recognize inherent risks”—including the risk of producing inaccurate output. This pairs with lawyers’ duty of supervision.
Lawyers’ responsibility to supervise nonlawyers extends to overseeing any AI-generated work. Furthermore, “[m]anagerial lawyers must establish clear policies regarding the law firm’s permissible use of [GenAI], and supervisory lawyers must make reasonable efforts to ensure that the firm’s lawyers and nonlawyers comply with their professional obligations when using AI tools.”
Lawyers must protect client information when using AI tools, and they should inform and consult with their clients about any use of AI. The Florida Bar, for example, recommends that lawyers obtain the “affected client’s informed consent prior to utilizing a third-party generative AI program if the utilization would involve the disclosure of any confidential information.”
Formal Opinion 512 states that when lawyers rely on GenAI to influence significant decisions in the representation—such as relying on GenAI to “evaluate potential litigation outcomes or jury selection”—communication with the client regarding the use of GenAI is necessary. Ultimately, the facts of the case will determine whether lawyers are required to disclose their use of GenAI or obtain informed consent.
Lawyers must thoroughly review and verify the accuracy of any AI-generated outputs before submitting them to the court, in order to comply with their duty of candor towards the tribunal. This was emphasized in Formal Opinion 512 to ensure that any assertions a lawyer makes while using GenAI are not false. Courts have imposed sanctions on lawyers for submitting incorrect information generated by AI systems.
While AI tools may increase efficiency dramatically, lawyers still have an obligation to keep their fees and expenses reasonable, and thus to bill only for the actual time worked, not the time saved by AI. The ABA Journal has noted that “if AI saves 20 hours by doing 15 minutes of document searching, for example, the firm should only bill for the 15 minutes of actual time worked.” As a result, discussions surrounding new billing models and fee arrangements have emerged to account for the efficiency of AI systems.
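In concrete terms, the actual-time principle from the ABA Journal example works out as follows; this is a toy illustration, and the hourly rate is a hypothetical figure, not one drawn from the article.

```python
# Toy illustration of the actual-time principle: bill the 15 minutes
# of AI-assisted work, not the 20 hours the AI saved. The $400 rate
# is a hypothetical, not a figure from the article.

def billable_amount(minutes_worked: float, hourly_rate: float) -> float:
    return (minutes_worked / 60) * hourly_rate

rate = 400.0
print(billable_amount(15, rate))       # 100.0  -> billable: actual time worked
print(billable_amount(20 * 60, rate))  # 8000.0 -> time saved; NOT billable
```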
Next Steps
As agentic AI sweeps generative AI into last year, bar associations need to do more. While its principles may also apply to agentic AI, 2024 guidance that addresses generative AI alone is already outdated. Simply put, lawyers need more education—as do nonlawyers, since legal AI tools in the hands of unlicensed individuals, including unrepresented litigants, may constitute the unauthorized practice of law.
In addition to the ABA, the bars of New York, California, Florida, Illinois, Kentucky, Michigan, Minnesota, Missouri, North Carolina, New Jersey, Pennsylvania, Texas, Utah, Virginia, West Virginia, and Washington, D.C., are among those that have weighed in on the use of AI among lawyers and/or judges (Justia maintains a list).
On an international level, the bar associations and law societies of G7 countries signed a statement on artificial intelligence in March 2024, pledging to “cooperate with each other to watch and assess [the potential impacts of AI] carefully, both in terms of positive and negative implications.”
While continuing legal education (CLE) requirements for lawyers vary state by state, it would be advantageous for bar associations to adopt a new CLE category focused solely on artificial intelligence. Some jurisdictions are already incorporating technology developments and challenges into their CLE requirements.
For example, New York became the first state in the country to require experienced attorneys to complete one hour of CLE on cybersecurity, privacy, and data protection every two years. The New York State Bar Association’s (NYSBA) Committee on Technology and the Legal Profession tackles issues related to new and increasingly prevalent use of AI tools and systems, like agentic AI, in the practice of law. The NYSBA also has a separate Task Force on Artificial Intelligence.
The Virginia State Bar has a Technology and Future Practice of Law Committee. California mandates one hour of CLE on competence issues every three years, which could potentially cover AI-related competence. While lawyers can voluntarily take AI-related CLE courses, universal requirements would ensure that lawyers stay informed about AI developments, adhere to their professional responsibilities, and understand the evolving regulatory landscape surrounding AI in legal practice.
As AI reshapes the legal profession, lawyers should continue to educate themselves, but ultimately, bar associations should take more steps to ensure comprehensive education in this area. As agentic AI advances and begins to make autonomous decisions, it should complement an attorney’s work rather than take it over.
* * * *
Reprinted with permission from the January 22, 2025, edition of the “New York Law Journal” © 2025 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited; contact 877-256-2472 or asset-and-logo-licensing@alm.com.