What is it about?

We decompose the decisions of a similarity model and a classification model into multiple human-defined attribute representations, which highlights how much each attribute contributes to the model's decision. The saliency map lets people see intuitively where the model focuses its attention, while the attribute words make the model's decisions easier to understand.

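As a rough illustration of this idea (a minimal sketch, not the paper's actual Sim2Word implementation), the code below shows one generic way to combine a counterfactual-style saliency map with attribute importance scores: patches of an image are occluded to see which regions drive the similarity score, and then the most salient region is erased to see which attribute predictions drop. The names `similarity_model`, `attribute_model`, and `ATTRIBUTE_WORDS` are placeholders for any embedding network, multi-attribute classifier, and attribute vocabulary.

```python
# Sketch only: occlusion-based saliency for a similarity model, plus
# counterfactual attribute importance via a hypothetical attribute classifier.
import torch
import torch.nn.functional as F

ATTRIBUTE_WORDS = ["eyeglasses", "beard", "smiling"]  # illustrative vocabulary


def occlusion_saliency(similarity_model, img_a, img_b, patch=16, stride=16):
    """Estimate which regions of img_a drive its similarity to img_b by
    masking patches and measuring the drop in cosine similarity."""
    with torch.no_grad():
        emb_a = F.normalize(similarity_model(img_a), dim=-1)
        emb_b = F.normalize(similarity_model(img_b), dim=-1)
        base = (emb_a * emb_b).sum(-1)                      # baseline similarity

        _, _, H, W = img_a.shape
        saliency = torch.zeros(H, W)
        for y in range(0, H - patch + 1, stride):
            for x in range(0, W - patch + 1, stride):
                masked = img_a.clone()
                masked[:, :, y:y + patch, x:x + patch] = 0  # occlude one patch
                sim = (F.normalize(similarity_model(masked), dim=-1) * emb_b).sum(-1)
                saliency[y:y + patch, x:x + patch] = (base - sim).item()
    return saliency


def attribute_importance(attribute_model, img, saliency, threshold=0.5):
    """Score each attribute word by how much its predicted probability drops
    when the most salient region (per the occlusion map) is erased."""
    with torch.no_grad():
        mask = (saliency >= saliency.max() * threshold).float()  # top region
        erased = img * (1.0 - mask)                              # counterfactual image
        probs_full = torch.sigmoid(attribute_model(img)).squeeze(0)
        probs_erased = torch.sigmoid(attribute_model(erased)).squeeze(0)
        drops = (probs_full - probs_erased).tolist()
    return dict(zip(ATTRIBUTE_WORDS, drops))
```

In this sketch, a large drop for an attribute word suggests that the region the similarity model relies on also carries that attribute, which is the kind of attribute-level explanation described above.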

Why is it important?

Although existing deep learning models perform well on various evaluation metrics, it is difficult for humans to understand their decision-making mechanisms. Adding explainability to deep learning models helps build trustworthy AI. Since visual attributes are among the most intuitive cues people use to understand a visual system, it is meaningful to explain models through attributes.

Perspectives

In this paper, we propose a new baseline that simultaneously outputs saliency maps, text, and importance scores for related concepts. We hope our method can inspire further work on explaining similarity models. Our code is available at https://github.com/RuoyuChen10/Sim2Word.

Ruoyu Chen
University of the Chinese Academy of Sciences

Read the Original

This page is a summary of: Sim2Word: Explaining Similarity with Representative Attribute Words via Counterfactual Explanations, ACM Transactions on Multimedia Computing, Communications, and Applications, September 2022, ACM (Association for Computing Machinery). DOI: 10.1145/3563039.