What is it about?

This paper explains how machines combine data from different sensors (such as cameras, radar, and GPS) to better understand their surroundings, a process called sensor fusion. It reviews how the technology has evolved over the past 40 years, from simple rule-based methods to the advanced artificial intelligence and deep learning systems now used in autonomous vehicles, healthcare, and robotics. The key message is that while modern systems detect objects very well, they still struggle to fully understand situations or predict outcomes. The paper also highlights challenges such as limited transparency, scarce datasets, and reliability issues, and argues that future research should focus on making sensor fusion more intelligent, adaptable, and trustworthy.


Why is it important?

The paper shows that sensor fusion is important because it improves the accuracy and reliability of autonomous systems and enables them to operate in real-world conditions. By combining multiple sensors, a system can compensate for the weaknesses of any single one: if a camera fails in poor lighting, radar or LiDAR can still provide useful information. This leads to better decision-making, especially in critical applications like self-driving cars, healthcare monitoring, and robotics.

The results also highlight that sensor fusion has enabled major progress in AI-based systems, particularly in object detection and perception, making machines much better at understanding their environment. However, the paper finds that current approaches remain limited: systems detect objects well, but they are not yet strong at understanding complex situations or predicting future risks. This matters because real-world applications require not just seeing things, but also interpreting context and anticipating outcomes (e.g., predicting accidents). Overall, sensor fusion is essential for building safe, intelligent, and reliable autonomous systems, but further improvements are needed to make them fully trustworthy and effective.
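The idea of leaning on whichever sensor is currently more reliable can be illustrated with a simple inverse-variance weighted fusion. This is a minimal textbook sketch, not the paper's specific model; the sensor names and numeric values below are illustrative assumptions.

```python
# Minimal sketch of sensor fusion via inverse-variance weighting.
# Each estimate is a (value, variance) pair; noisier sensors get less weight.

def fuse(estimates):
    """Combine (value, variance) pairs into one estimate."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    # The fused variance is smaller than any single sensor's variance.
    return value, 1.0 / total

# Illustrative scenario: a camera degraded by poor lighting (high variance)
# and a radar unaffected by lighting (low variance) measure the same distance.
camera = (10.8, 4.0)   # metres, noisy
radar = (10.1, 0.25)   # metres, reliable
distance, variance = fuse([camera, radar])
```

Because the radar's variance is far lower, the fused distance lands close to the radar reading, while the fused variance drops below either sensor's alone, which is the basic reason fusion improves reliability.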

Perspectives

This is a strong and well-structured paper that clearly explains the evolution of sensor fusion over time. It effectively connects classical methods with modern AI approaches, making it easy to understand how the field has progressed. A key strength is its use of the JDL model to highlight important gaps, especially in higher-level reasoning, which adds real value to the discussion. The paper also covers a wide range of applications, showing its relevance across domains. Overall, it provides a clear, insightful, and forward-looking perspective, making it useful for both researchers and practitioners.

Prof. Dr. Dr. Varun Gupta
Gisma University of Applied Sciences, Germany

Read the Original

This page is a summary of: Sensor Fusion Models in Autonomous Systems: A Review, Computers, Materials & Continua, January 2026, Tsinghua University Press.
DOI: 10.32604/cmc.2025.071599.
You can read the full text via the DOI above.
