What is it about?
Existing techniques for explaining machine learning models are often not comprehensible to the end user, and the lack of evaluation and selection criteria makes it difficult to choose the most suitable one. Our experiments strongly indicate that an ensemble of multiple interpretation techniques yields considerably more truthful explanations than any single technique alone.
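The ensemble idea can be sketched in a few lines: given the feature-importance scores produced by several local interpretation techniques for the same prediction, normalize and average them into one combined explanation. The explainer outputs and feature names below are purely illustrative assumptions, not results or code from the paper.

```python
# Hypothetical sketch: combine several local interpretations into one
# explanation by averaging normalized feature-importance scores.

def normalize(importances):
    """Scale absolute importances so they sum to 1."""
    total = sum(abs(v) for v in importances.values())
    return {f: abs(v) / total for f, v in importances.items()}

def ensemble_explanation(per_technique):
    """Average normalized importances across interpretation techniques."""
    combined = {}
    for importances in per_technique:
        for feature, weight in normalize(importances).items():
            combined[feature] = combined.get(feature, 0.0) + weight
    n = len(per_technique)
    return {f: w / n for f, w in combined.items()}

# Illustrative outputs of three hypothetical local explainers
outputs = [
    {"age": 0.6, "income": 0.3, "tenure": 0.1},
    {"age": 0.5, "income": 0.4, "tenure": 0.1},
    {"age": 0.7, "income": 0.2, "tenure": 0.1},
]
combined = ensemble_explanation(outputs)
print(max(combined, key=combined.get))  # most influential feature overall
```

Averaging smooths out the disagreements between individual explainers, which is one simple way an ensemble can produce a more trustworthy explanation than any single technique.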
Why is it important?
Evaluating an interpretation is of the utmost importance: the explanations provided to the end user must be correct and truthful. A tool that can both evaluate such interpretations and combine different interpretations into one is therefore needed. This work lays the foundation for this topic.
Read the Original
This page is a summary of: Altruist: Argumentative Explanations through Local Interpretations of Predictive Models, September 2022, ACM (Association for Computing Machinery),
DOI: 10.1145/3549737.3549762.