Bradley Merrill Thompson, Member of the Firm in the Health Care & Life Sciences practice, in the firm’s Washington, DC, office, was quoted in POLITICO, in “AI Has Arrived in Your Doctor’s Office. Washington Doesn’t Know What to Do About It,” by Daniel Payne.
Following is an excerpt:
Washington hasn’t written the rules for the new artificial intelligence in health care even though doctors are rapidly deploying it — to interpret tests, diagnose diseases and provide behavioral therapy.
Products that use AI are going to market without the kind of data the government requires for new medical devices or medicines. The Biden administration hasn’t decided how to handle emerging tools like chatbots that interact with patients and answer doctors’ questions — even though some are already in use. And Congress is stalled. Senate Majority Leader Chuck Schumer said this week that legislation was months away.
Advocates for patient safety warn that until there’s better government oversight, medical professionals could be using AI systems that steer them astray by misdiagnosing diseases, relying on racially biased data or violating their patients’ privacy. …
Safety and innovation
Students of the technology said AI systems that change — or “learn” — as they get more information could become more or less helpful over time, changing their safety or effectiveness profile.
And determining the impacts of those changes becomes even more difficult because companies closely guard the algorithms at the heart of their products — a proprietary “black box” that protects intellectual property but stands in the way of regulators and outside researchers.
The Office of the National Coordinator for Health Information Technology at HHS has proposed a policy aimed at getting more transparency about AI systems being used in health, but it doesn’t focus on the safety or efficacy of those systems. …
The World Health Organization’s approach is not unlike Washington’s: one of concern, guidance and discussion. But with no power of its own to regulate, the WHO recently suggested that its member governments step up the pace.
AI models “are being rapidly deployed, sometimes without a full understanding of how they may perform,” the body said in a statement.
Still, whenever it moves to tighten the rules, the FDA can expect pushback.
Some industry leaders have suggested that doctors are themselves a kind of regulator, since they are the experts making the final decision, with or without AI co-pilots.
Others argue even the current approval process is too complicated — and burdensome — to support rapid innovation.
“I kind of feel like I’m the technology killer,” said Brad Thompson, an attorney at Epstein Becker Green who counsels companies on their use of AI in health care, by “fully inform[ing] them of the regulatory landscape.”
‘Would I personally feel safe?’
In the past, Thompson would have gone to Congress with his concerns.
But lawmakers aren’t sure what to do about AI, and legislating slowed while Republicans selected a new speaker. Now, lawmakers have to reach a deal on funding the government in fiscal 2024.
“That avenue just isn’t available now or in the foreseeable future,” Thompson said of attempts to update regulations through Congress, “and it just breaks my heart.”