For some patients in the University of Wisconsin Health system, the next question they ask their doctor might be answered by artificial intelligence – as part of a pilot project to see if AI chatbots can save providers time. …
That’s just one of thousands of possible ways large AI language models like OpenAI’s ChatGPT or Google’s Med-PaLM could transform medicine. AI chatbots trained on publicly available internet data have shown they can pass medical licensing exams and provide cogent answers to complex diagnostic questions. One recent study even found that ChatGPT was better than doctors at responding to patient questions posted online. …
Are regulators ready?
One major unanswered question is whether generative AI models qualify as medical devices subject to Food and Drug Administration approval.
If an AI algorithm is intended for use in the diagnosis, treatment or prevention of disease, the FDA must approve it before it can be sold and used. If it’s used for administrative purposes, the algorithm doesn’t need FDA approval. For now, AI chatbots like the one UW Health is testing seem to fall outside the FDA’s purview. …
Where any given generative AI model falls along this continuum depends on its intended use, said Bradley Merrill Thompson, an attorney at Epstein Becker Green who specializes in FDA enforcement of AI. The fact that a single model can perform tasks all along that continuum “really challenges the regulatory framework,” he said.
That’s largely because the more a model can do, the more difficult it becomes to demonstrate its accuracy and assess its potential costs. Until now, most AI tools have been built for very specific functions. “That’s not a legal requirement that it be narrower,” Thompson said. “It’s a practical challenge of designing studies to cover and quantify each of the different functions or outcomes.” Doing those kinds of evaluations “could take a lifetime,” he said.