What is it about?

Visual simultaneous localisation and mapping (vSLAM) is widely used in autonomous navigation tasks. The camera sensors used in vSLAM provide rich semantic information, but also introduce visual complexities that hinder the performance of vSLAM algorithms. One complexity that is common in real-world environments but has been largely overlooked is mirror reflections. In this work, we collected the MirrEnv dataset, which contains RGBD image sequences captured in environments with mirror reflections, and systematically evaluated representative visual SLAM algorithms on it to determine the influence of planar mirror reflections on vSLAM algorithms.

Why is it important?

Reflections are ubiquitous in many domestic and industrial settings, and the human visual system often exploits them to understand its surroundings. Computer vision systems, however, have struggled to recognise reflective surfaces and to correctly recover the geometry of the environment. A reflection-aware vSLAM (RA-vSLAM) algorithm, one that recognises mirrors in the environment and can even make use of the reflections, would thus be of interest to both domestic and industrial applications. The work presented here is an initial step towards RA-vSLAM, aimed at understanding the influence of mirror reflections on vSLAM algorithms. The proposed MirrEnv dataset, which contains RGBD image sequences with ground-truth poses and ground-truth mirror-label masks, provides a benchmark to promote research on the robustness of vSLAM algorithms to visual complexities.

Read the Original

This page is a summary of: Benchmarking visual SLAM methods in mirror environments, Computational Visual Media, January 2024, Tsinghua University Press.
DOI: 10.1007/s41095-022-0329-x