Artificial intelligence (AI) is poised to redefine and reshape health care, and the U.S. Department of Justice (DOJ) is taking notice.

In June, Deputy Attorney General Lisa Monaco convened the fourth session of the DOJ's Justice AI Initiative, which is tasked with bringing together experts from academia, science and industry to examine the potential effects of AI on the Department's mission and to report to the president. This session focused on the civil rights and civil liberties challenges of AI, particularly when algorithms and automated systems are used to make critical decisions in health care, employment, housing, and more. Past meetings have focused on "how malicious actors are using AI to supercharge their criminal schemes."

In announcing the launch of the Justice AI Initiative earlier this year, Monaco said, "every new technology is a double-edged sword, but AI may be the sharpest blade yet." From election security to U.S. national security to combatting discrimination, the DOJ has its sword at the ready. "Discrimination using AI is still discrimination. Price fixing using AI is still price fixing. Identity theft using AI is still identity theft. You get the picture," Monaco said. "Our laws will always apply. And—our enforcement will be robust." Monaco echoed these concerns most recently in Brussels with leaders from the European Parliament. Further, the DOJ is using AI to bring enforcement actions. In fact, "this approach has led to some of the Fraud Section's largest cases and initiatives."

We can expect that in the future, a False Claims Act (FCA) violation using AI will still be a False Claims Act violation. That federal statute, enacted in 1863, contains a qui tam provision allowing whistleblowers (relators), as well as the U.S. government, to file cases alleging fraud on the government. The DOJ was a party to 543 False Claims Act settlements and judgments in 2023—the most ever in a single year—with the Department recovering a total of $2.68 billion from FCA cases in fiscal year 2023.

While initially developed to combat Civil War-era fraud, the FCA is now used by the DOJ to combat fraud in the health care industry—with a whopping $1.8 billion recovered in fiscal year 2023 from managed care providers, hospitals, pharmacies, laboratories, long-term acute care facilities, physicians and more. With DOJ enforcement so heavily focused on uncovering health care fraud through the FCA, and with the Department focusing so strongly on AI, we expect that in the future those elements will combine in a significant way (The Hill, in fact, published a piece in June in the context of defense contractor fraud, aptly titled "AI companies, meet the False Claims Act").

The President's Executive Order Shapes DOJ Enforcement Priorities

These DOJ and other agency initiatives stem from President Joe Biden's Oct. 30, 2023, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, a government-wide approach to AI. The order noted that the irresponsible use of AI "could exacerbate societal harms such as fraud, discrimination, bias and disinformation."

Appropriate safeguards are especially important, the president noted, in critical fields like health care. Meanwhile, the DOJ's FCA enforcement—while restoring funds to federal programs like Medicare, Medicaid and TRICARE—also serves to "protect patients from medically unnecessary or potentially harmful actions."

In the executive order, "each agency" was tasked with designating a chief AI officer. In February, Attorney General Merrick B. Garland designated Jonathan Mayer to serve as the DOJ's first chief science and technology advisor and chief AI officer in the Office of Legal Policy—advising the AG and others on complex technological issues including AI and cybersecurity while leading a newly established Emerging Technology Board.

In separate remarks in March at the American Bar Association's 39th Annual National Institute on White Collar Crime, Monaco stated that DOJ prosecutors will be seeking stiffer sentences for both individual and corporate defendants where AI is "deliberately misused to make a white-collar crime significantly more serious." To that end, the DOJ will assess a company's ability to manage AI-related risks as part of its overall compliance efforts, and the Criminal Division will incorporate assessment of disruptive technology risks—including risks associated with AI—into its guidance on Evaluation of Corporate Compliance Programs.

In April, Monaco listed AI "at the top of the enforcement priority list" for the Disruptive Technology Strike Force, which helps protect advanced technology from being unlawfully acquired by foreign adversaries.

On the antitrust front, meanwhile, the DOJ in May announced the Antitrust Division's new Task Force on Health Care Monopolies and Collusion (HCMC) (see EBG's post on the subject). As reported by Bloomberg Law, the "task force is expected to bring cases against providers using algorithmic price setting databases to pool data and inflate drug prices, deals involving medical billing and health care IT services, and consolidation of payer and provider businesses, attorneys say." In June, news outlets including the New York Times reported that the DOJ's antitrust division and the Federal Trade Commission had agreed to split government oversight of AI with respect to different companies in the industry.

Of course, the DOJ is also cracking down on cybercrime through its Civil Cyber-Fraud Initiative; in May, one company agreed to pay $2.7 million to resolve FCA allegations that it failed to provide adequate cybersecurity for COVID-19 contact tracing data. The Computer Fraud and Abuse Act remains an important tool for prosecutors to address cyber-based crimes.

False Claims Act: New Wine into Old Wineskins

As AI develops, we expect to see more activity in the DOJ's federal FCA enforcement cases, both civil and criminal, each of which is on an upward trend, with the agency recovering $2.68 billion in fiscal year 2023 from civil FCA settlements and judgments. The agency recently announced that it is seeking in excess of $2.75 billion in cases stemming from the 2024 National Health Care Fraud Enforcement Action. Because the FCA requires no showing of specific intent to defraud, individuals and organizations must be careful to monitor and audit AI usage or risk enforcement actions (31 U.S.C. § 3729(b)(1)(B)).

The following examples of FCA enforcement actions may presage AI FCA-related enforcement to come.

Health Plans

Health plans that use technology to review claims or assess patient medical records to make diagnosis determinations have already been the subject of enforcement efforts. The DOJ has intervened in several cases alleging, at least in part, that the algorithms the plans used to gather diagnosis codes allowed them to submit inaccurate codes for Medicare Advantage, with the end result being higher reimbursements.
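To see why inaccurate codes translate directly into higher payments, consider a deliberately simplified sketch of Medicare Advantage risk-adjustment arithmetic. The base rate and condition weights below are invented for illustration (they are not actual CMS figures), but they show the structural point: because the capitated payment scales with a patient's risk score, every additional risk-adjusting diagnosis code that an algorithm sweeps in raises the reimbursement.

```python
# Deliberately simplified sketch of Medicare Advantage risk-adjustment arithmetic.
# BASE_RATE and the condition weights are hypothetical illustration values, not actual CMS figures.

BASE_RATE = 900.00  # hypothetical monthly benchmark payment, in dollars

# Hypothetical condition weights (stand-ins for real risk-adjustment coefficients)
CONDITION_WEIGHTS = {
    "diabetes_without_complications": 0.10,
    "major_depressive_disorder": 0.30,
    "cachexia": 0.50,
}

def monthly_payment(diagnosis_codes, demographic_score=0.40):
    """Risk score = demographic component + sum of condition weights; payment scales with it."""
    risk_score = demographic_score + sum(CONDITION_WEIGHTS.get(dx, 0.0) for dx in diagnosis_codes)
    return BASE_RATE * risk_score

accurate = monthly_payment(["diabetes_without_complications"])
inflated = monthly_payment(["diabetes_without_complications", "cachexia"])  # unsupported code added
print(f"Accurately coded: ${accurate:.2f}/month; with the unsupported code: ${inflated:.2f}/month")
```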

In 2021, the U.S. government intervened in six complaints against a health provider/plan's affiliates regarding inaccurate diagnosis codes for Medicare Advantage plan enrollees. The complaints alleged that the defendants used both automated algorithms and human reviewers to add improper diagnoses—unrelated to patient visits—to medical records and submitted them to the Centers for Medicare and Medicaid Services for payment in violation of the FCA.

In another such case, the U.S. District Court for the Northern District of California held that the government sufficiently alleged, among other things, that a provider submitted legally false claims for payment under Medicare Advantage—violating the FCA by presenting/causing to be presented false claims in the form of improper diagnosis codes (United States ex rel. Osinek v. Permanente Medical Group, 640 F. Supp. 3d 885 (N.D. Cal. Nov. 14, 2022)). "[The defendants used] algorithms to identify [certain] disease conditions for data mining…" the government alleged in its first amended complaint.

The defendants had allegedly created a data-mining algorithm to identify potential cachexia diagnoses (cachexia is commonly referred to as a wasting syndrome, e.g., anorexia cachexia), yet physicians were routinely sent queries asking them to update medical records for patients who were merely thin. The physicians reported that the queries were "garbage," yet cachexia diagnoses were added at a rate more than 120 times higher than in other areas. And while the defendants were put on notice through internal audits, they did not modify the cachexia data-mining algorithm, did not delete the diagnoses, and in fact continued to submit them.
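The allegations suggest how an over-broad flagging rule can generate exactly that kind of noise. The following is a hypothetical sketch, not a description of the defendants' actual system; the threshold and fields are invented. It contrasts a rule keyed to thinness alone, which would query physicians about patients who are merely thin, with one that requires clinical context before prompting an addendum.

```python
# Hypothetical sketch of an over-broad diagnosis-flagging rule; nothing here reflects
# the defendants' actual code. The threshold and fields are invented for illustration.

from dataclasses import dataclass

@dataclass
class PatientRecord:
    patient_id: str
    bmi: float
    documented_weight_loss: bool   # e.g., significant unintentional weight loss
    has_underlying_illness: bool   # e.g., cancer, heart failure, COPD

def naive_cachexia_query(record: PatientRecord) -> bool:
    """Over-broad rule: queries the physician for any low-BMI patient."""
    return record.bmi < 18.5

def context_aware_query(record: PatientRecord) -> bool:
    """Narrower rule: requires clinical context before prompting an addendum."""
    return record.bmi < 18.5 and record.documented_weight_loss and record.has_underlying_illness

merely_thin = PatientRecord("A-001", bmi=17.9, documented_weight_loss=False, has_underlying_illness=False)
print(naive_cachexia_query(merely_thin))    # True  -> physician receives a "garbage" query
print(context_aware_query(merely_thin))     # False -> no query generated
```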

"[T]he government has alleged that an audit revealed a high error rate with respect to cachexia diagnoses made through addenda but that [defendants] failed to respond…" the district court wrote, noting that despite this knowledge, defendants did not modify its cachexia data-mining algorithm or program for several years. "This further supports a finding of scienter. For false cachexia diagnoses thereafter, one could reasonably infer reckless disregard[.]"

Health Care Providers

Providers have also been subject to enforcement actions that may signal AI-related enforcement to come. In a 2020 case, the government alleged that defendant medical providers violated the FCA by (1) submitting false risk-adjusting diagnosis codes to CMS and (2) failing to return payments predicated on false diagnosis codes (United States ex rel. Ormsby v. Sutter Health, 444 F. Supp. 3d 1010 (N.D. Cal. Mar. 16, 2020)). Among other things, the defendants maintained a team of non-physician coders who could change these codes or encourage physicians to do so; certain codes also appeared in medical records before the physicians saw the patients. Audits later found many of the codes to be false.

The government alleged direct FCA violations based on knowingly submitting false diagnosis codes, and reverse FCA violations based on knowingly failing to delete false codes and to return payments received from the Centers for Medicare and Medicaid Services.

"Using data mining, [defendants] 'pushed' their physicians through messages in the electronic medical record to find and refresh especially high-paying risk-adjusting diagnosis codes to increase patients' risk scores," the Northern District of California wrote, adding that some of those physicians received "queries" in the electronic medical record from coders reminding the physicians to ensure that all such diagnosis codes were captured. "Numerous physicians disliked this practice and felt 'pressured' to add diagnosis codes that they did not believe to be clinically accurate or relevant." Audits revealed a problem when certain medical providers submitted diagnosis codes "much more frequently than industry average," yet no action was taken. The court allowed the FCA claims to go forward.

EHRs

Also in 2020, the DOJ obtained a $145 million settlement from a San Francisco-based electronic health records (EHR) vendor to resolve civil and criminal investigations into a kickback scheme in which EHR software was used to increase opioid prescriptions—a case that could be a harbinger of AI-enabled EHR enforcement. The government alleged that pharmaceutical companies were allowed to influence the development of clinical decision support alerts in the EHR software in exchange for kickbacks.

Medical Devices

In 2021, medical device manufacturers agreed to pay $38.75 million to resolve FCA allegations that they sold diagnostic devices they knew had a materially defective algorithm. The companies then billed, and caused others to bill, Medicare for defective devices that produced inaccurate and unreliable results.

DOJ Use of AI: The Enforcement Sword

AI offers advantages for enforcement as well—it may be, as Monaco suggested, "the sharpest blade yet." The Criminal Division's Health Care Fraud Unit—which protects health care benefit programs such as Medicare, Medicaid and TRICARE, and protects patients from egregious fraudulent schemes resulting in patient harm—is using advanced data analytics and algorithmic methods to identify newly emerging health care fraud schemes. Data analysts in the unit are working with prosecutors to identify, investigate, and prosecute cases. Health care entities with high-acuity patients should be prepared to defend against accusations that their billing exceeds national averages.
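For a sense of what such analytics can look like in practice, the following is a minimal sketch of a peer-comparison billing screen. It is an assumption about technique, not a description of the Health Care Fraud Unit's actual tools, and the rates and cutoff are invented.

```python
# Minimal sketch of a peer-comparison billing screen. The rates and the two-standard-
# deviation cutoff are illustrative assumptions, not the Health Care Fraud Unit's methods.

import statistics

# Hypothetical rates at which each provider bills a high-paying code (claims per 1,000 visits)
billing_rates = {
    "provider_A": 12.0,
    "provider_B": 14.5,
    "provider_C": 13.2,
    "provider_D": 11.8,
    "provider_E": 12.9,
    "provider_F": 13.8,
    "provider_G": 12.4,
    "provider_H": 14.1,
    "provider_I": 13.5,
    "provider_J": 49.7,  # bills the code far more often than peers
}

mean = statistics.mean(billing_rates.values())
stdev = statistics.stdev(billing_rates.values())

for provider, rate in billing_rates.items():
    z_score = (rate - mean) / stdev
    if z_score > 2.0:  # flag providers more than two standard deviations above the peer mean
        print(f"{provider}: rate {rate} (z = {z_score:.1f}) flagged for review")
```

A provider whose rate sits far above the peer mean is flagged for human review, which is why entities serving unusually sick populations should be ready to explain why their billing legitimately departs from national averages.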

Adding to existing self-disclosure processes in place at HHS-OIG and DOJ, on April 15, the DOJ's Criminal Division launched a Pilot Program on Voluntary Self-Disclosures for Individuals—under which culpable individuals can receive a non-prosecution agreement (NPA) if they (1) voluntarily, (2) truthfully and (3) completely self-disclose original information regarding misconduct unknown to the department in certain high-priority enforcement areas, (4) fully cooperate and are able to provide substantial assistance against those equally or more culpable, and (5) forfeit any ill-gotten gains and compensation.

As with the Corporate Enforcement Policy, the individual's self-disclosure must be made to the Criminal Division, and it must be voluntary. The Pilot Program is specifically open to disclosures in areas that include health care fraud and kickback schemes. The program for individuals will reinforce an existing corporate voluntary self-disclosure program and a developing whistleblower program, the DOJ has said.

While the DOJ is sharpening its sword in preparation for AI-related enforcement actions, health care organizations, providers, payors and device manufacturers should be donning their shields. Organizations are well advised to proceed with caution, and to undertake auditing and monitoring of contemplated or currently deployed AI systems and tools.

* * * *

This article was written with the assistance of Epstein Becker Green staff attorney Ann W. Parks.

Reprinted with permission from the July 12, 2024, edition of the "New York Law Journal" © 2024 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or asset-and-logo-licensing@alm.com.
