Alaap B. Shah, Member of the Firm in the Health Care & Life Sciences practice, in the firm’s Washington, DC, office, was quoted in the ABA Journal, in “Generative Artificial Intelligence Developers Face Lawsuits Over User Suicides,” by Danielle Braff. (Read the full version – subscription required.)
Following is an excerpt:
Alaap Shah, a Washington, D.C.-based attorney with Epstein Becker Green, says there is no regulatory framework in place that applies to emotional or psychological harm caused by AI tools. But, he says, broad consumer protection authorities at the federal and state levels give the government some ability to protect the public and to hold AI companies accountable if they violate those consumer protection laws.
For example, Shah says, the Federal Trade Commission has broad authority under Section 5 of the FTC Act to bring enforcement actions against unfair or deceptive practices, which may apply to AI tools that mislead or emotionally exploit users.
Some state consumer protection laws might also apply if an AI developer misrepresents its safety or functionality.
Colorado has passed a comprehensive AI consumer protection law that’s set to take effect in February. The law creates several risk management obligations for developers of high-risk AI systems that make consequential decisions concerning consumers.
A major complication, Shah says, is the regulatory flux surrounding AI.
President Donald Trump rescinded President Joe Biden’s 2023 executive order governing the use, development and regulation of AI.
“This signaled that the Trump administration had no interest in regulating AI in any manner that would negatively impact innovation,” Shah says, adding that the original version of Trump’s One Big Beautiful Bill Act contained a proposed “10-year moratorium on states enforcing any law or regulation limiting, restricting or otherwise regulating artificial intelligence.” The moratorium was removed from the final bill.
Shah adds that if a court were to hold an AI company directly liable in a wrongful death or personal injury suit, it would create a precedent that could lead to additional lawsuits in a similar vein.
From a privacy perspective, some argue that AI programs that monitor conversations may infringe upon the privacy interests of AI users, Shah says.
“Yet many developers often take the position that if they are transparent as to the intended uses, restricted uses and related risks of an AI system, then users should be on notice, and the AI developer should be insulated from liability,” he says.