What is it about?
Although much progress has been made in visual emotion recognition, researchers have realized that modern deep networks tend to exploit dataset characteristics to learn spurious statistical associations between the input and the target. Such characteristics are usually treated as dataset bias, which harms the robustness and generalization of these recognition systems. In this work, we scrutinize the problem from the perspective of causal inference, where such a dataset characteristic is termed a confounder that misleads the system into learning the spurious correlation. To alleviate the negative effects of dataset bias, we propose a novel Interventional Emotion Recognition Network (IERN) that achieves backdoor adjustment, a fundamental deconfounding technique in causal inference. Specifically, IERN starts by disentangling the dataset-related context feature, which forms the confounder, from the actual emotion feature. The emotion feature is then forced to see each confounder stratum equally before being fed into the classifier. A series of designed tests validate the efficacy of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms state-of-the-art approaches for unbiased visual emotion recognition.
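The intervention step can be made concrete with the backdoor adjustment formula, P(y | do(x)) = Σ_c P(y | x, c) P(c): instead of conditioning on whatever dataset context a sample happens to carry, the classifier's prediction is averaged over every confounder stratum under an equal prior. Below is a minimal PyTorch-style sketch of this idea; the class name, the concatenation-based fusion, and the `confounder_strata` prototypes are illustrative assumptions, not the paper's actual IERN implementation.

```python
import torch
import torch.nn as nn

class BackdoorAdjustedClassifier(nn.Module):
    """Minimal sketch of backdoor adjustment: average the classifier's
    prediction over every confounder stratum with equal weight,
    approximating P(y | do(x)) = sum_c P(y | x, c) P(c)."""

    def __init__(self, feat_dim: int, num_classes: int,
                 confounder_strata: torch.Tensor):
        super().__init__()
        # confounder_strata: (num_strata, feat_dim) prototypes of the
        # dataset-related context (hypothetical here; IERN obtains the
        # confounder via its disentangling branches).
        self.register_buffer("strata", confounder_strata)
        self.classifier = nn.Linear(feat_dim * 2, num_classes)

    def forward(self, emotion_feat: torch.Tensor) -> torch.Tensor:
        # emotion_feat: (batch, feat_dim), already disentangled
        # from the dataset-related context feature.
        logits = []
        for c in self.strata:  # let the emotion feature "see" each stratum
            c = c.unsqueeze(0).expand_as(emotion_feat)
            logits.append(self.classifier(torch.cat([emotion_feat, c], dim=-1)))
        # Equal prior P(c) = 1 / num_strata over the confounder strata.
        return torch.stack(logits, dim=0).mean(dim=0)
```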
Why is it important?
We make three main contributions in this work:

* We are the first to tackle dataset bias in visual emotion recognition from a causality perspective.
* We propose a novel trainable framework, the Interventional Emotion Recognition Network (IERN), that realizes the backdoor adjustment theorem.
* Through rigorous experiments in both mixed-dataset and cross-dataset settings on existing benchmarks, we show that IERN effectively alleviates the negative effects of dataset bias and outperforms state-of-the-art approaches.
Read the Original
This page is a summary of: Towards Unbiased Visual Emotion Recognition via Causal Intervention, October 2022, ACM (Association for Computing Machinery). DOI: 10.1145/3503161.3547936.