What is it about?
Robots are becoming teammates in many areas of daily life, from workplaces to homes. But what happens when a robot makes a mistake and breaks our trust? In our study, we explored how robots can win that trust back. We examined five possible responses: apologizing, denying the mistake, explaining what happened, offering compensation, or staying silent. Participants worked with a robot that sometimes made errors, either by performing poorly or by crossing social and moral lines. We found that moral mistakes, such as breaking social rules, damaged trust more than performance errors did. Importantly, when the robot offered compensation, people were more willing to forgive it, trust it again, and even work with it in the future.
Why is it important?
This research shows that how robots respond to mistakes really matters. It highlights that, beyond trusting a robot's performance, people also trust a robot partner to act in a moral way. Our findings underscore the severity of violating this moral trust and highlight the potential of compensation to repair it. Understanding these trust repair strategies brings us one step closer to building robots people feel comfortable working with, even when things go wrong.
Read the Original
This page is a summary of: A Robot Should Compensate for Its Mistakes: An Exploration of the Dynamics of Trust Violation and Repair Strategies in Human-Robot Collaboration, ACM Transactions on Human-Robot Interaction, October 2025, ACM (Association for Computing Machinery).
DOI: 10.1145/3767729.