As AI becomes more integrated into various industries, courts are debating whether liability for harms should be based on a "product" approach—focusing on defective design or warnings—or a "service" approach—focusing on the conduct of creators and deployers. Recent cases show courts applying these frameworks to AI algorithms and tools, examining design choices, safeguards, and professional standards. Moving forward, liability assessments will likely be nuanced, considering both the AI's technical features and the behavior of those responsible for its deployment, making legal guidance essential for companies involved with AI.
Regardless of industry, profession, or age, modern artificial intelligence (AI) is dominating our collective mindshare. AI tools now draft legal briefs, triage insurance claims, screen job applications, provide emotional companionship, and speak and appear believably through our phones and digital devices. The applications of AI are only as limited as our human capacity to create, deploy, and regulate them.
As these tools become ubiquitous, courts must decide whether liability for harms inflicted by AI should be analyzed through a “product” or “services” paradigm. A product paradigm analysis asks whether design defects or failures to warn led to alleged harm. A services paradigm analysis asks whether a duty of care existed between plaintiff and defendant, whether that duty was breached, and whether the breach led to alleged harm.
Negligence in a Products Paradigm
When applying a products liability framework, the central question is whether the product itself is defective, not whether the entity that created it and sold or deployed it behaved unreasonably. Under this paradigm, liability may attach through theories of manufacturing defect, design defect, and failure to warn.
A manufacturing defect claim argues that a product deviated from its intended design. A design defect claim argues that the product was conceived and built in a way that made the product unreasonably dangerous, even when used as intended. A failure-to-warn claim argues that the creator knew, or should have known, of risks that were not adequately disclosed to the end user.
Manufacturing and design defect claims often result in strict liability for resultant harms. Where strict liability applies, a plaintiff need not prove that the creator or distributor was careless; the plaintiff must show only that the product was defective and that the defect caused harm. Design defect claims, alongside failure-to-warn claims, can also require robust inquiries before liability attaches, e.g., whether the parties responsible for the product’s creation and dissemination acted unreasonably or negligently.
When considering AI offerings, a product liability framework scrutinizes the engineering and design choices baked into a model or platform. A judge or jury may scrutinize the objectives the AI was trained to optimize, the safeguards that were or were not built, and whether obvious or latent defects were or should have been known.
Negligence in a Services Paradigm
Under a services paradigm, the inquiry shifts from the product's architecture to the conduct of the humans and organizations that designed, deployed, and maintained the product. A fact finder will ask whether the defendant owed a duty of care to a foreseeable plaintiff, whether a breach of duty occurred, and whether damages flowed from that breach. When evaluating whether a breach occurred, courts ask whether the service reasonably fulfilled a standard of care. Often, the relevant standard of care is defined by professions like medicine, law, and finance.
Where AI is evaluated as a service, the negligence analysis focuses on whether the service provided meets the threshold of reasonable conduct as defined by the relevant profession or an existing standard.
A cluster of recent decisions has begun to sketch the contours of AI liability, demonstrating how courts apply products and services paradigms to modern fact patterns.
Algorithms and Their Platforms as Negligently Designed Products: K.G.M. v. Meta Platforms
Earlier this year, a California judge allowed a jury to apply a products analysis to a technology company’s algorithmic design. In K.G.M. v. Meta Platforms, a 20-year-old California woman alleged that the deliberate designs—including infinite scroll, unpredictable reward mechanisms, autoplay, and engagement-maximizing recommendation engines—of various social media platforms created defective products that caused her addiction, depression, body dysmorphia, and suicidal ideation.
In a bid for dismissal, Meta argued that the applications’ decisions about how to curate and recommend user-generated content constitute protected expression under the First Amendment. Judge Carolyn Kuhl rejected that framing as a basis for dismissal. The judge made clear that a conduct-versus-content distinction, which treats algorithmic design choices as the company's own conduct rather than as the protected publication of third-party speech, was a viable legal theory for a jury to evaluate. Accordingly, discussion of the allegedly defective design of Meta’s algorithm, and of whether the resultant harms were reasonably foreseeable, was allowed to take center stage.
On March 25, 2026, the jury returned a verdict for the plaintiff. That verdict, whatever its appellate fate, is a watershed moment for litigation in this space: a jury has now applied products liability logic to algorithmic design. Future claims may allege product defects where engineering choices shape the way AI directs human attention or action.
AI Services and Professional Liability: Nippon Life Insurance v. OpenAI
Filed on March 4, 2026, in the Northern District of Illinois, Nippon Life Insurance Company of America v. OpenAI Foundation and OpenAI Group PBC illustrates how a services liability analysis may apply to an AI tool.
In Nippon Life, the plaintiff alleges that an individual’s unsupervised use of ChatGPT for pro se litigation caused the chatbot to engage in tortious interference with a contract, abuse of process, and the unlicensed practice of law. The case frames OpenAI not as the manufacturer of a defective physical product, but as the provider of a service subject to professional liability. Indeed, the complaint notes that ChatGPT was able to pass the Uniform Bar Examination with a combined score of 297, though it “has not been admitted to practice law in the State of Illinois or in any other jurisdiction within the United States.”
In this case, the court must evaluate whether a general-purpose generative AI system, and the company that develops and deploys it, can be deemed to have practiced law without a license by generating tailored legal advice and litigation documents.
Nippon Life remains in its early stages; however, the arguments before the court concern clients across professional industries including, but not limited to, law, medicine, finance, and accounting.
As AI tools rapidly absorb the mindshare of licensed professionals, courts will have to decide how to address harms that would ordinarily be traced to a negligent licensed actor, with consequences imposed upon the applicable license. Courts may eventually weigh whether AI providers exercised a professional level of care when designing, deploying, and maintaining their services.
AI Services Invoking Professional Protections—The Current Trio: Warner v. Gilbarco; United States v. Heppner; and Morgan v. V2X Inc.
On Feb. 10, 2026, two federal decisions, from the Eastern District of Michigan and the Southern District of New York, addressed, respectively, whether a litigant’s use of an AI tool waived the attorney work product doctrine and the attorney-client privilege. The courts’ fact-sensitive analyses produced opposite results regarding the discoverability of AI-generated materials.
In Warner v. Gilbarco, Inc., decided in the Eastern District of Michigan, Magistrate Judge Anthony P. Patti denied the defendants’ motion to compel discovery of a pro se plaintiff's litigation materials, which included drafts, email analyses, and internal strategy work generated using a paid ChatGPT account. The court held that these materials were protected work product under Federal Rule of Civil Procedure 26(b)(3), rejecting the argument that uploading information to an AI platform constituted disclosure to a third party that waived the protection.
The court reasoned that generative AI programs are “tools, not persons,” and disclosure to them is no more a waiver of work product protection than dictating notes to a word processor. The court viewed AI use as analogous to traditional internal drafting and analysis and not a waiver of protection.
In United States v. Heppner, decided in the Southern District of New York, Judge Jed S. Rakoff reached the opposite conclusion on different facts. The defendant, facing federal fraud charges, had used the publicly available version of Anthropic’s Claude to generate defense strategy memos prior to retaining counsel and without any attorney direction.
The court denied the protection of both the attorney-client privilege and the work product doctrine, reasoning that the defendant did not demonstrate confidentiality, a legal relationship, or attorney involvement. Of note, Claude itself explicitly stated that it was not an attorney, and the terms of its free version reserved the right to share user data with third parties, including the government.
Taken together, Warner and Heppner demonstrate that the use of AI tools, evaluated under distinct circumstances, may produce markedly different legal outcomes. The protections afforded by the attorney-client privilege and the attorney work product doctrine will likely continue to turn on how the tool was used, why it was used, who directed its use, and what the platform's terms of service say about confidentiality. Indeed, a recent holding of the U.S. District Court for the District of Colorado, Morgan v. V2X Inc., echoed the spirit of Warner.
There, Judge Maritza Dominguez Braswell found that a pro se litigant’s materials, prepared using public AI tools, were protected by the attorney work product doctrine. Warner, Heppner, and Morgan are early reminders of how important it is for legal counsel to direct the use of AI tools in a manner that best preserves a client’s protections.
Future Implications
The product-versus-services designation carries immediate practical consequences for clients developing, deploying, or using AI. Products liability frameworks provide the public with aggressive legal tools with which to seek recourse for injury. Plaintiffs will allege design-defect and failure-to-warn claims that characterize internal knowledge of known or reasonably foreseeable risks as proof of defects in technical design.
Further, adequacy of warnings, foreseeable misuse, and failure to incorporate safety features will all become viable theories to assign liability for harm in court. As the K.G.M. verdict suggests, jurors are prepared to apply these robust negligence paradigms to AI tools.
Even where companies avoid products liability frameworks, litigants will attempt to assign liability for reasonably foreseeable limitations that result in harm. Though the services framework is friendlier to defendants, a services designation is not a complete shield. As Nippon Life suggests, an AI tool that encroaches upon the mindshare of a profession may be saddled with some of the liabilities of that profession.
Litigants will allege, among other claims, failure to exercise reasonable care in design, knowledge of foreseeable harm, and failure to implement safeguards or disclosures. As courts continue to apply negligence paradigms to AI tools, clients should seek out savvy counsel as a first line of defense against the judiciary’s growing and capable curiosity.
Conclusion
In the years to come, courts are unlikely to impose a single, clean answer to the question of whether AI is a product or a service. More likely, courts will continue to apply granular, feature-level analyses that assign negligence based on the characteristics of the specific AI tool, the conduct of the actors in its supply chain, and the nature of the harms claimed. For now, it is safest to understand AI as both a service and a product, allowing the facts and knowledgeable counsel to guide novel outcomes.
* * * *
Frances M. Green is Of Counsel at Epstein Becker Green. Bryan Hahm is an Associate with the firm. Ann W. Parks, a staff attorney at the firm, contributed to the preparation of this article.
Opinions are the authors’ own and not necessarily those of their employers.
Reprinted with permission from the April 21, 2026 edition of the New York Law Journal © 2026 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or asset-and-logo-licensing@alm.com.