When Should Humans Override AI?
Asking AI to explain its reasoning does not always help humans make better decisions
What They Studied
As businesses and other organizations increasingly rely on AI to help make decisions, that reliance can be problematic given AI’s history of bias. One mitigation practice is to have AI systems explain their decisions. Maria De-Arteaga, assistant professor of information, risk, and operations management, with co-authors Jakob Schoeffer, a UT postdoctoral research fellow, and Niklas Kühl of the University of Bayreuth, Germany, had an AI system read 134,436 biographies and predict whether each person was a teacher or a professor. Human participants then read the bios and decided whether to override the AI’s recommendations, each of which came with an explanation highlighting either task-relevant keywords, such as “research” or “schools,” or gender-related keywords, such as “he” or “she.”
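To make the setup concrete, the sketch below shows one common way a text classifier can produce keyword-style explanations: score a biography with a bag-of-words model, then report the words that pushed the prediction hardest. This is an illustration only, not the authors’ code; the scikit-learn logistic-regression model, the tiny invented bios, and the explain helper are all assumptions made for the example.

```python
# Illustrative sketch: an occupation classifier whose "explanation" is the
# list of words that contributed most to its prediction.
# The toy bios below are invented; the study used 134,436 real biographies.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

bios = [
    "She teaches science at a local school and loves her students.",
    "He leads a research lab and has published many journal articles.",
    "He works with elementary school children on reading skills.",
    "She directs graduate research and supervises doctoral students.",
]
labels = ["teacher", "professor", "teacher", "professor"]

vectorizer = TfidfVectorizer()          # keeps words like "he" and "she"
X = vectorizer.fit_transform(bios)
clf = LogisticRegression().fit(X, labels)

def explain(bio, top_k=3):
    """Predict an occupation and return the words that drove the prediction."""
    x = vectorizer.transform([bio])
    pred = clf.predict(x)[0]
    # coef_ points toward clf.classes_[1]; flip the sign for the other class
    # so larger values always mean "supports the predicted label."
    sign = 1 if pred == clf.classes_[1] else -1
    contributions = sign * x.toarray()[0] * clf.coef_[0]
    words = vectorizer.get_feature_names_out()
    top = np.argsort(contributions)[::-1][:top_k]
    return pred, [(words[i], round(float(contributions[i]), 3))
                  for i in top if contributions[i] > 0]

print(explain("He runs a research group and advises graduate students."))
```

A model like this can surface task words (“research,” “school”) and gender words (“he,” “she”) in the same way, which is why the study could show participants explanations emphasizing one kind of keyword or the other.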
What They Found
The explanations don’t “lead humans to make better quality decisions or fairer decisions,” De-Arteaga says. Participants were 4.5 percentage points more likely to override the AI’s recommendation when the explanation highlighted gender rather than task relevance, apparently because they suspected gender bias. But those overrides were no more accurate than task-based overrides.
Why It Matters
The AI system’s explanations may fuel a perception of fairness without being grounded in accuracy or equity. That underscores the need for tools that help humans successfully complement AI systems, rather than explanations that build a false sense of trust, De-Arteaga says. “Explanations, Fairness, and Appropriate Reliance in Human-AI Decision Making” is published in the Association for Computing Machinery’s Proceedings of the CHI Conference on Human Factors in Computing Systems.
– Omar L. Gallaga