Bradley Merrill Thompson, Member of the Firm in the Health Care & Life Sciences practice, in the firm’s Washington, DC, office, was quoted in Healthcare IT News, in “FDA Action Plan Puts Focus on AI-Enabled Software as Medical Device,” by Mike Miliard.
Following is an excerpt:
The U.S. Food and Drug Administration this week published its first action plan for how it intends to spur development and oversight of safe, patient-centric artificial intelligence and machine learning-based software as a medical device. …
ON THE RECORD
Bradley Merrill Thompson of the law firm Epstein Becker & Green – where he counsels on medical devices, FDA regulatory issues and more – offered his thoughts on the action plan to HITN’s sister publication, MobiHealthNews. He said the new report is “good and bad news.”
The good news? “Many people in industry support the general approach – with one exception I’ll describe,” he said.
The bad news? “This appears to be a report in lieu of progress,” said Thompson. “What’s depressing is that the concept paper was published in April 2019, and here we are coming up on two years, and for the most part, on the critical elements like a guidance on the Predetermined Change Control Plan, they’re merely talking about that guidance in the future tense with the goal of publishing it in 2021.”
Thompson notes that FDA doesn’t even specify “fiscal year 2021” – which he suspects means the wait might be more toward the end of the calendar year. “We were really hoping for quicker action as we think that guidance is critically important to the further development of artificial intelligence in healthcare.”
As for industry reaction? Many stakeholders support the first four steps outlined in the action plan, he said.
“Developing a guidance document to implement the Predetermined Change Control Plan is extremely important, although I know that many AI developers are currently informally trying to work with FDA on a case-by-case basis to develop such plans,” he explained.
“The GMLP is likewise a very affirmative step, and I do appreciate the fact that they want to work in a consensus fashion with lots of standard-setting bodies,” said Thompson. “Having transparency on the required transparency is also extremely important. A workshop would be a very constructive next step, as the agency proposes. Transparency is a technically complex topic, but also a practically challenging idea given the possible audiences for the information.”
Likewise, “the regulatory science initiative is very important, as we need better and more specialized tools to identify bias and performance in AI used in healthcare. We also need to be able to identify the appropriate standards, such as how much bias is acceptable. There will always be some bias.”
Thompson said the biggest substantive disagreement among developers and device makers would center on the discussion of real-world performance.
“On the one hand, I think many in industry support the idea that companies need to develop systems to monitor the performance of their algorithms in the marketplace,” he said. “Performance changes, by the very nature of artificial intelligence, and companies must develop robust systems to monitor those changes and ensure that their products remain safe and effective.
“The point of departure is that we sense that FDA wants to be in the middle of that, getting frequent updates of data so that the agency can on a more real-time basis monitor that performance,” he added.
For most on the industry side, that’s “completely unacceptable,” said Thompson. “And the reason FDA proposes to proceed on a voluntary basis is that they have no statutory authority to require this.” That’s why he expects significant “disagreement with FDA over what data need to be shared and when during the post-market phase of AI-based product lifecycles.”