What is it about?
Long story short: we built a gender-biased robot, and we only realized it after deploying this robot behavior in more than 60 user study trials with participants. In this work, we dissect what happened and how easy it is to miss such biases. We also offer reflections and recommendations on preventative measures that researchers can take.
Featured image: photo by Eric Krull on Unsplash.
Why is it important?
As AI becomes more prominent in our society, so do examples of significant biases in these models that developers should not have overlooked (think, for example, of race-biased hiring models). This paper highlights one specific example and reflects on why it is probably not an isolated case. Its importance lies in its contribution to the ongoing conversation about the ethical and social implications of using robots in human interactions. By highlighting the unintended biases that can creep into these interactions, the paper underscores the need for greater awareness of and attention to issues of bias and diversity in the design and development of social robots.
Read the Original
This page is a summary of: How Did We Miss This?, March 2023, ACM (Association for Computing Machinery), DOI: 10.1145/3568294.3580032.