Who’s to Blame When AI Makes a Medical Error?

In a JAMA Health Forum brief released today, Associate Professor of Management Shefali Patil warns that assistive AI may place a new burden on physicians


Assistive artificial intelligence technologies hold significant promise for transforming health care by aiding physicians in diagnosing, managing, and treating patients. However, the current trend of assistive AI implementation could actually worsen challenges related to error prevention and physician burnout, according to a new brief published in JAMA Health Forum today.

The brief, written by researchers from the Johns Hopkins Carey Business School, Johns Hopkins Medicine, and The University of Texas at Austin McCombs School of Business, explains that physicians are increasingly expected to rely on AI to minimize medical errors. Yet despite the rapid adoption of these technologies across health care organizations, laws and regulations are not yet in place to support physicians as they make AI-guided decisions.

The researchers predict that medical liability will hinge on whom society considers at fault when the technology fails or makes a mistake, subjecting physicians to the unrealistic expectation of always knowing when to trust AI and when to override it. The authors warn that this expectation could increase the risk of burnout, and even of errors, among physicians.

“AI was meant to ease the burden, but instead, it’s shifting liability onto physicians — forcing them to flawlessly interpret technology even its creators can’t fully explain,” said Shefali Patil, visiting associate professor at the Carey Business School and associate professor at the University of Texas McCombs School of Business. “This unrealistic expectation creates hesitation and poses a direct threat to patient care.”

The brief suggests strategies health care organizations can use to support physicians, shifting the focus from individual performance to organizational support and learning. This shift, the authors argue, may alleviate pressure on physicians and foster a more collaborative approach to AI integration.

“Expecting physicians to perfectly understand and apply AI alone when making clinical decisions is like expecting pilots to also design their own aircraft — while they’re flying it,” said Christopher Myers, associate professor and faculty director of the Center for Innovative Leadership at the Carey Business School. “To ensure AI empowers rather than exhausts physicians, health care organizations must develop support systems that help physicians calibrate when and how to use AI so they don’t need to second-guess the tools they’re using to make key decisions.”

The full viewpoint is available on the JAMA Health Forum media site.