What is it about?

Existing techniques for explaining machine learning models are often not comprehensible to the end user, and the lack of evaluation and selection criteria makes it difficult to choose the most suitable one. Our experiments strongly indicate that an ensemble of multiple interpretation techniques yields considerably more truthful explanations.


Why is it important?

Evaluating an interpretation is of the utmost importance: the interpretations provided to the end user must be correct and truthful. A tool that can both evaluate such interpretations and combine different interpretations into a single one is therefore needed. This work lays the foundation for this topic.

Perspectives

This work was initially designed around the concept of argumentation; however, argumentation later became supplemental. The main idea this work proposes is that ensembling interpretation techniques using unsupervised interpretability metrics is a promising way to provide better explanations of machine learning models' decisions to end users, as sketched below.
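To make the idea concrete, here is a minimal, hypothetical sketch, not the paper's Altruist implementation: several feature-importance explanations of the same prediction (for example from LIME and SHAP) are scored with a simple perturbation-based faithfulness proxy and then averaged, using those scores as weights. The function names, the `predict_fn` interface, and the choice of metric are assumptions made purely for illustration.

```python
import numpy as np

def faithfulness(predict_fn, x, importance, baseline=0.0):
    """Unsupervised faithfulness proxy (assumed metric, not the paper's exact one):
    correlation between each feature's importance and the drop in the model's
    output when that feature is replaced by a baseline value."""
    original = predict_fn(x.reshape(1, -1))[0]
    drops = []
    for i in range(x.shape[0]):
        x_pert = x.copy()
        x_pert[i] = baseline
        drops.append(original - predict_fn(x_pert.reshape(1, -1))[0])
    return np.corrcoef(importance, drops)[0, 1]

def ensemble_explanation(predict_fn, x, explanations):
    """Combine several feature-importance vectors into one explanation by
    weighting each technique with its (non-negative) faithfulness score."""
    scores = np.array([max(faithfulness(predict_fn, x, e), 0.0) for e in explanations])
    if scores.sum() > 0:
        weights = scores / scores.sum()
    else:
        weights = np.full(len(explanations), 1.0 / len(explanations))
    return np.average(np.vstack(explanations), axis=0, weights=weights)
```

In use, `explanations` would be the per-feature importance vectors returned by different interpretation techniques for the same instance `x`, and `predict_fn` any callable mapping a batch of instances to model outputs; the more faithful a technique is for that instance, the more it contributes to the ensemble explanation.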

Ioannis Mollas
Aristotle University of Thessaloniki

Read the Original

This page is a summary of: Altruist: Argumentative Explanations through Local Interpretations of Predictive Models, September 2022, ACM (Association for Computing Machinery),
DOI: 10.1145/3549737.3549762.
