Doctors Need Better Guidance on AI
To avoid burnout and medical mistakes, health care organizations should train physicians in AI-assisted decision-making
Based on the research of Shefali Patil

Artificial intelligence is everywhere — whether you know it or not. In many fields, AI is being touted as a way to help workers at all levels accomplish tasks, from routine to complicated. Not even physicians are immune.
But AI puts doctors in a bind, says Shefali Patil, associate professor of management at Texas McCombs, in a recent article. Health care organizations increasingly push physicians to rely on assistive AI to minimize medical errors, yet give them little direct support in how to use it.
The result, Patil says, is that physicians risk burnout, as society decides whom to hold accountable when AI is involved in medical decisions. Paradoxically, they also face greater chances of making medical mistakes.

This interview has been edited for length and clarity.
Your article discusses the phenomenon of superhumanization. Unlike the rest of us, doctors are thought to have extraordinary mental, physical, and moral capacities, and they may be held to unrealistic standards. What pressures does this place on medical professionals?
AI is generally meant to aid and enhance clinical decisions. But when an adverse patient outcome arises, who gets the blame? It's up to the physician to decide whether to take the machine's recommendation and to anticipate what will happen if the outcome is bad.
There are two possible types of errors: false positives and false negatives. With a false positive, the system flags an illness as serious, and the doctor may order treatments that turn out to be unnecessary. With a false negative, the patient is seriously ill and the doctor doesn't catch it.
The doctor has to figure out how to use these AI software systems but has no control over which systems the hospital buys. It all comes down to liability, and there are no tight regulations around AI.
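To make the two error types Patil describes concrete, here is a minimal sketch, not drawn from the article, in which the function and the example data are hypothetical. It tallies false positives and false negatives by comparing an assistive-AI tool's flags against what actually happened to patients.

```python
def classify_errors(ai_flags, true_conditions):
    """Tally false positives and false negatives for a hypothetical assistive-AI tool.

    ai_flags:        list of 0/1 AI recommendations (1 = "disease likely, treat")
    true_conditions: list of 0/1 actual outcomes (1 = disease really present)
    """
    false_positives = false_negatives = 0
    for flagged, actual in zip(ai_flags, true_conditions):
        if flagged == 1 and actual == 0:
            false_positives += 1   # AI flagged illness that wasn't there: unnecessary treatment
        elif flagged == 0 and actual == 1:
            false_negatives += 1   # AI missed a real illness: the case the doctor must catch
    return false_positives, false_negatives

# Made-up example: 2 false positives, 1 false negative
print(classify_errors([1, 1, 0, 1, 0], [0, 1, 1, 0, 0]))
```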
AI diagnoses, which are supposed to make doctors’ lives easier and reduce medical errors, are potentially having the opposite effect. Why?
The promise of AI is to alleviate some of the decision-making pressure on physicians: to make their jobs easier and lead to less burnout.
But these tools come with liability issues. AI vendors do not reveal how their algorithms actually work. With such limited transparency into how a recommendation is made, it’s difficult to calibrate when to rely on AI and when not to.
If you don’t use it, and there’s a mistake, you’ll be asked why you did not take the AI recommendation. Or, if AI makes a mistake, you’re held responsible, because it’s not a human being. That’s the tension.
What risks does this situation pose to patient care?
People want a physician who is competent and decisive, not one stuck in analysis paralysis because of information overload. Decision-making uncertainty and anxiety cause physicians to second-guess themselves. That leads to poor decision-making and, in turn, poor patient care.
You predict that medical liability will depend on who people believe is at fault for a mistake. How could that expectation increase the risk of doctor burnout and mistakes?
Decision-making research suggests that people who suffer from performance anxiety and constantly second-guess themselves are not thinking logically through decisions. They’re questioning their own judgments.
That’s a very strong, accepted finding in the field of organizational behavior. It’s not specific to doctors, but we’re extrapolating to them.
What strategies can health care organizations use to alleviate those pressures and support physicians in using AI?
One of the big things that needs to be implemented in medical education is simulation training. It can be done as part of continuing education. It’s going to be very significant, because this is the future of medicine. There’s no turning back.
Learning how these systems actually work, and understanding how they update and make recommendations based on medical literature and past case outcomes, is essential to effective decision-making.
What do you mean when you write about a “regulatory gap”?
We mean that legal regulations always lag behind technological advances. You’re never going to get fair and effective regulations that meet everybody’s interests. The liability risk is always there, and the perception of blame always comes after the fact. That’s why we say the onus should be on administrators to help physicians deal with this issue.
Can you offer some practical advice for doctors, suggesting some do’s and don’ts for using AI assistance?
Right now, there is very little assistance from hospital administrators in teaching physicians how to calibrate the use of AI. More needs to be done.
Administrators need to implement more practical support that relies heavily on feedback from clinicians. At the moment, administrators don’t get that feedback. Performance outcomes, such as what was useful and what was not, need to be tracked.
“Calibrating AI Reliance – A Physician’s Superhuman Dilemma,” co-authored with Christopher Myers of Johns Hopkins University and Yemeng Lu-Myers of Johns Hopkins Medicine, is published in JAMA Health Forum.
Story by Sharon Jayson