What is it about?

When we interact with people, we spontaneously estimate what the world looks like from their point of view. This is known as visual perspective taking (VPT). For successful human-robot collaboration, we should be able to do the same with robots. So far, however, research exploring human VPT towards robots has largely relied on videos of robots shown on screens. It is therefore unclear whether and how people take a robot's perspective in real-world interactions, and which non-verbal robot behaviours or features promote perspective taking. We present a novel experimental design and analysis that measures the extent to which people take a robot's perspective in a face-to-face human-robot interaction, and we investigate robot behaviours that may facilitate perspective taking.


Why is it important?

Visual perspective taking (VPT) is fundamental to human social interaction, underpinning everything from joint action to predicting others' future actions and mentalizing about their goals and affective and mental states. To design robots, and robot behaviours, with which people can collaborate, it is essential that we understand which robot features can trigger VPT.

Perspectives

I am super excited to present our work to the HRI community. Our robot (which broke down when we first started this project) is now back to work, so I am even more excited to update the community on our upcoming user study using the methods outlined in this paper.

Joel Currie
University of Aberdeen

Read the Original

This page is a summary of: More Than Meets the Eye? An Experimental Design to Test Robot Visual Perspective-Taking Facilitators Beyond Mere-Appearance, March 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3610978.3640684.
