What is it about?

Explainable AI helps people get an idea of the inner workings of highly complicated AI systems. In our paper we investigate one of the most popular explainable AI methods, LIME. We identify situations where AI explanation systems like LIME become unfaithful, with the potential to misinform users. In addition, we illustrate a simple method for making an AI explanation system like LIME more faithful.
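To give a flavour of what "faithfulness" means in practice, the sketch below is a minimal illustration, not code from the paper. It uses the open-source lime Python package with a stand-in scikit-learn dataset and model (both arbitrary choices) to explain one prediction and then look at two rough signals of how well the local surrogate reflects the real model.

```python
# A minimal sketch (not the paper's code): explain one prediction with LIME
# and inspect how well the local surrogate actually fits the model nearby.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()  # stand-in dataset for illustration only
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME samples points around this instance, fits a weighted linear surrogate,
# and reports each feature's contribution to the prediction.
instance = data.data[0]
exp = explainer.explain_instance(instance, model.predict_proba, num_features=5)
print(exp.as_list())

# Two rough faithfulness signals: the surrogate's R^2 on its own weighted
# samples, and how close its prediction is to the model's output here.
print("local surrogate R^2:", exp.score)
print("surrogate vs. model:", exp.local_pred, model.predict_proba([instance])[0, 1])
```

A low R^2, or a large gap between the surrogate's prediction and the model's actual output, is one warning sign that the explanation may not be faithful for that particular instance.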


Why is it important?

Many users take the explanations provided by off-the-shelf methods, such as LIME, to be reliable. We discover that the faithfulness of AI explanation systems can vary drastically depending on where and what a user chooses to explain. Accordingly, we urge users to consider whether an AI explanation system is likely to be faithful before relying on it. We also empower users to construct more faithful AI explanation systems through our proposed change to the LIME algorithm.

Read the Original

This page is a summary of: Reconciling Training and Evaluation Objectives in Location Agnostic Surrogate Explainers, October 2023, ACM (Association for Computing Machinery),
DOI: 10.1145/3583780.3615284.
