What is it about?

This paper describes a system that combines an autonomous robot with natural language explanations of its behaviour. Using machine learning, the robot learns from previous demonstrations how to navigate and what to do in a remote environment. However, learned behaviour is difficult to verify, understand and, therefore, trust. The system described in the paper uses the learned behaviour to produce short explanations of what the robot is doing, e.g. "Turning left to avoid some obstacle", which, combined with visual interfaces, help users understand the robot's decision-making process.
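A minimal sketch of the idea in Python, assuming a template-based mapping from the robot's chosen action and its current observation to a short sentence; the policy, observation fields, and templates below are hypothetical illustrations, not the actual system described in the paper:

    from dataclasses import dataclass

    @dataclass
    class Observation:
        obstacle_left: bool   # hypothetical: obstacle detected on the left
        obstacle_right: bool  # hypothetical: obstacle detected on the right
        goal_bearing: float   # angle to the goal in degrees, 0 = straight ahead

    def learned_policy(obs: Observation) -> str:
        """Stand-in for a behaviour learned from demonstrations."""
        if obs.obstacle_right:
            return "turn_left"
        if obs.obstacle_left:
            return "turn_right"
        return "go_forward"

    def explain(action: str, obs: Observation) -> str:
        """Map the chosen action and the current observation to a short
        natural-language explanation shown alongside the visual interface."""
        if action == "turn_left" and obs.obstacle_right:
            return "Turning left to avoid an obstacle on the right."
        if action == "turn_right" and obs.obstacle_left:
            return "Turning right to avoid an obstacle on the left."
        return f"Moving forward toward the goal ({obs.goal_bearing:.0f} degrees ahead)."

    obs = Observation(obstacle_left=False, obstacle_right=True, goal_bearing=12.0)
    action = learned_policy(obs)
    print(explain(action, obs))  # "Turning left to avoid an obstacle on the right."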

Why is it important?

This work offers a way to make machine learning algorithms, often seen as black boxes, more transparent and understandable to users.

Perspectives

I think that steps toward understanding complex decision-making systems are crucial for the safe deployment of robots and, in the most general sense, artificially intelligent processes. Our approach moves in that direction by uncovering the robot's intentions without regard to its internal workings. Moreover, graphical and textual interfaces are natural means for humans to understand these systems.

Simón C. Smith
Imperial College London

Read the Original

This page is a summary of: Self-Explainable Robots in Remote Environments, March 2021, ACM (Association for Computing Machinery),
DOI: 10.1145/3434074.3447275.
