What is it about?
We decompose the decisions of a similarity model and a classification model into multiple human-defined attribute representations, which highlights how much each attribute contributes to the model's decision. The saliency maps let people see intuitively where the model is attending, while the attribute words make the model's decisions easier to understand.
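To give a concrete sense of the idea, the sketch below scores how strongly each human-defined attribute aligns with an image embedding and ranks the attributes two images share as a rough explanation of their similarity. This is only an illustrative sketch under assumed names: the attribute list, embeddings, and functions are placeholders, not the actual Sim2Word pipeline, which additionally uses counterfactual explanations and saliency maps.

```python
import numpy as np

# Hypothetical, human-defined attribute vocabulary (placeholder, not the paper's list).
ATTRIBUTES = ["smiling", "eyeglasses", "beard", "blond hair", "young"]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def attribute_importance(image_emb, attribute_embs):
    """Score each attribute by how strongly the image embedding aligns with it."""
    return {name: cosine(image_emb, emb) for name, emb in attribute_embs.items()}

def explain_pair(emb_a, emb_b, attribute_embs, top_k=3):
    """Rank the attributes shared by two images as a rough explanation of their similarity."""
    imp_a = attribute_importance(emb_a, attribute_embs)
    imp_b = attribute_importance(emb_b, attribute_embs)
    shared = {name: min(imp_a[name], imp_b[name]) for name in attribute_embs}
    return sorted(shared.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 128
    # Random stand-ins for real attribute-word and image embeddings.
    attribute_embs = {name: rng.normal(size=dim) for name in ATTRIBUTES}
    emb_a, emb_b = rng.normal(size=dim), rng.normal(size=dim)
    print("overall similarity:", round(cosine(emb_a, emb_b), 3))
    for name, score in explain_pair(emb_a, emb_b, attribute_embs):
        print(f"{name}: {score:+.3f}")
```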
Why is it important?
Although existing deep learning models perform well on various evaluation metrics, their decision-making mechanisms are difficult for humans to understand. Adding explainability to deep learning models helps build trustworthy AI. Because visual attributes are among the most intuitive cues people use to interpret a visual system, explaining the model through attributes is particularly meaningful.
Read the Original
This page is a summary of: Sim2Word: Explaining Similarity with Representative Attribute Words via Counterfactual Explanations, ACM Transactions on Multimedia Computing, Communications, and Applications, September 2022, ACM (Association for Computing Machinery).
DOI: 10.1145/3563039.