Bradley Merrill Thompson, Member of the Firm in the Health Care & Life Sciences practice, in the firm’s Washington, DC, office, was quoted in Law360 Healthcare Authority, in “Medical Board Group Keeps AI Liability Focus on Doctors,” by Mark Payne. (Read the full version – subscription required.)
Following is an excerpt:
When emergency room nurses and doctors make split-second decisions using an artificial intelligence tool, who ultimately bears responsibility if a patient winds up harmed?
According to recent guidance from the Federation of State Medical Boards, the onus remains squarely on doctors, even if the AI is supplied by the hospital or another employer. …
According to Bradley Merrill Thompson, a member at Epstein Becker Green, the FSMB's guidance aligns with the 21st Century Cures Act passed in 2016, which expanded the U.S. Food and Drug Administration's medical device definition to include AI used in clinical decision support.
The basic idea, he said, is that AI may analyze specific patient information to arrive at a recommendation, but the decision is ultimately left to the doctor.
"The view at the time was that Congress doesn't need to regulate that sort of software because the decision-making is still firmly the physician's decision," Thompson said. "And regulation of that falls into the practice of medicine by the state boards of medicine." …
The guidance also directs providers using AI tools to view informed consent as a "meaningful dialogue" rather than "a list of AI-generated risks and benefits."
"Bringing the patient into the discussion is kind of the last step, and the guidelines reiterate the need for transparency and the need for sharing information with a patient," Thompson said.