What is it about?

SHAP scores apply the well-known game-theoretic Shapley values to explainability by feature attribution. In recent years, SHAP scores have been adopted in thousands of applications of feature-attribution explainability. Our paper shows that SHAP scores can produce highly unsatisfactory results, which may mislead human decision-makers.
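As an illustration (not taken from the paper), the sketch below computes exact SHAP scores for a small hypothetical Boolean classifier, using the standard Shapley-value definition and assuming a uniform, feature-independent input distribution; the classifier f, the instance POINT, and all other names are assumptions made only for this example.

# Minimal sketch (not from the paper): exact SHAP scores for a toy Boolean
# classifier under a uniform input distribution, following the standard
# Shapley value definition. The classifier f and the instance are hypothetical.
from itertools import combinations
from math import factorial

def f(x1, x2, x3):
    # Hypothetical classifier used only for illustration.
    return int(x1 and (x2 or x3))

FEATURES = [0, 1, 2]
POINT = (1, 1, 0)  # instance being explained

def value(subset):
    # v(S): expected prediction when features in S are fixed to the
    # instance's values and the remaining features are drawn uniformly
    # from {0, 1} (independence assumption).
    free = [i for i in FEATURES if i not in subset]
    total = 0
    for bits in range(2 ** len(free)):
        x = list(POINT)
        for k, i in enumerate(free):
            x[i] = (bits >> k) & 1
        total += f(*x)
    return total / (2 ** len(free))

def shap_score(i):
    # Shapley value of feature i for this instance:
    # sum over subsets S of the other features of
    # |S|! (n - |S| - 1)! / n!  *  (v(S ∪ {i}) - v(S)).
    others = [j for j in FEATURES if j != i]
    n = len(FEATURES)
    score = 0.0
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            score += w * (value(set(S) | {i}) - value(set(S)))
    return score

for i in FEATURES:
    print(f"feature {i}: SHAP score = {shap_score(i):.3f}")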


Why is it important?

There are literally tens of thousands of proposed uses of SHAP scores for explaining machine learning models. Our results demonstrate that, even in theory, SHAP scores can assign importance to irrelevant features and assign no importance to the most important features. For example, assigning undue importance to the wrong features might lead a medical doctor to prescribe an incorrect treatment.

Perspectives

The paper aims to raise awareness of important limitations of SHAP scores, which may mislead human decision-makers. As a consequence, the use of SHAP scores in high-risk or safety-critical domains should not be allowed.

Joao Marques-Silva
ICREA

Read the Original

This page is a summary of: Explainability is Not a Game, Communications of the ACM, June 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3635301.
