What is it about?

This paper presents Butter, a new model for detecting objects such as cars, pedestrians, and traffic signs in autonomous driving. It improves detection accuracy and speed by refining how the model processes features at different scales. Butter combines two techniques, frequency-adaptive feature consistency and progressive hierarchical feature fusion, to keep the model lightweight and efficient. It outperforms existing methods, especially in real-world driving scenes with complex environments.
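
To make those two ideas concrete, here is a minimal PyTorch sketch of progressive coarse-to-fine feature fusion with a crude frequency-domain consistency step. This is an illustration under stated assumptions, not the paper's implementation: the class names (FrequencyConsistency, ProgressiveFusion), the low-pass mask, the blend parameter, and all channel sizes are invented for this example.

```python
# Hedged sketch, NOT the authors' code: illustrates (1) progressive fusion of
# multi-scale backbone features and (2) a frequency-domain consistency step.
from typing import List

import torch
import torch.nn as nn
import torch.nn.functional as F


class FrequencyConsistency(nn.Module):
    """Toy frequency-adaptive step: blend a feature map with its own
    low-frequency content, encouraging consistency across scales."""

    def __init__(self, keep_ratio: float = 0.5):
        super().__init__()
        self.keep_ratio = keep_ratio
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learnable blend weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 2-D FFT over the spatial dimensions.
        freq = torch.fft.rfft2(x, norm="ortho")
        h, w = freq.shape[-2], freq.shape[-1]
        # Crude low-pass mask: keep only the lowest keep_ratio of frequencies.
        mask = torch.zeros(h, w, device=x.device)
        mask[: max(1, int(h * self.keep_ratio)),
             : max(1, int(w * self.keep_ratio))] = 1.0
        low = torch.fft.irfft2(freq * mask, s=x.shape[-2:], norm="ortho")
        return self.alpha * low + (1 - self.alpha) * x


class ProgressiveFusion(nn.Module):
    """Fuse coarse-to-fine pyramid levels one step at a time (FPN-style),
    applying the frequency step after each merge."""

    def __init__(self, channels: int):
        super().__init__()
        self.smooth = nn.Conv2d(channels, channels, 3, padding=1)
        self.freq = FrequencyConsistency()

    def forward(self, feats: List[torch.Tensor]) -> List[torch.Tensor]:
        # feats are ordered fine -> coarse, all with `channels` channels.
        fused = [feats[-1]]
        for f in reversed(feats[:-1]):
            # Upsample the coarser fused map and merge it into this level.
            up = F.interpolate(fused[0], size=f.shape[-2:], mode="nearest")
            fused.insert(0, self.freq(self.smooth(f + up)))
        return fused


if __name__ == "__main__":
    # Three pyramid levels, as a detector backbone might produce.
    feats = [torch.randn(1, 64, s, s) for s in (64, 32, 16)]
    out = ProgressiveFusion(64)(feats)
    print([tuple(o.shape) for o in out])  # same shapes, fused contents
```

The design point the sketch mirrors is that fusion happens one adjacent scale at a time rather than all at once, with a frequency-based refinement keeping the merged maps consistent; the real model's components are more sophisticated than this toy version.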


Why is it important?

The Butter model is important because it addresses a major challenge in autonomous driving: real-time, efficient object detection in complex environments. By focusing on multi-scale feature fusion and frequency-adaptive consistency, it improves detection accuracy without sacrificing computational efficiency. As demand grows for accurate yet lightweight models in autonomous systems, Butter offers an approach suited to wide real-world deployment, where computational resources are limited but performance is critical.

Perspectives

From my perspective, this work is a significant advance for real-time object detection in autonomous driving. By making detection both more accurate and more computationally efficient, the model helps bridge the gap between high-performing and deployable systems. Its innovations, the frequency-adaptive feature consistency (FAFCE) component and the progressive hierarchical feature fusion (PHFFNet) module, point to where autonomous driving technology is heading: optimization and real-time application. The balance between precision and efficiency gives this model the potential for wide adoption.

Xiaojian Lin
Tsinghua University

Read the Original

This page is a summary of: Butter: Frequency Consistency and Hierarchical Fusion for Autonomous Driving Object Detection, October 2025, ACM (Association for Computing Machinery), DOI: 10.1145/3746027.3754865.

