What is it about?
This paper examines how AI oversight affects human decision-making, particularly when people know their mistakes can be corrected by an AI system. It focuses on professional tennis, where Hawk-Eye technology reviews umpires' calls on whether a ball is in or out. The study explores how umpires change their behavior under AI oversight: they sometimes make fewer mistakes overall, but the psychological pressure of being overruled by AI also shifts the types of errors they make.
Featured Image
Photo by Kevin Mueller on Unsplash
Why is it important?
It helps us understand how AI oversight influences human decision-making, not just in tennis but in many areas where AI could assist or overrule people. While AI can reduce mistakes, it can also create new pressures that change how people behave, sometimes leading to unintended consequences. This is crucial for designing AI systems in high-stakes settings like medicine, law, or public safety, where small shifts in decision-making can have big impacts on outcomes and fairness.
Read the Original
This page is a summary of: AI Oversight and Human Mistakes: Evidence from Centre Court, July 2024, ACM (Association for Computing Machinery), DOI: 10.1145/3670865.3673481.