What is it about?

Experts should be able to understand and evaluate the algorithmic outputs they base their decisions on. One way to achieve this is by making those outputs interpretable. We draw on the psychology of decision-making literature and argue that effective interpretability could be achieved by designing interfaces that support experts' decision-making strategies.


Why is it important?

Decision-making in many areas (e.g., healthcare, finance, media) is increasingly based on algorithmic outputs, such as risk assessments or predictions. However, how these outputs were produced is not always clear to the experts who base their decisions upon them. Current attempts to make these predictive systems and their outputs more understandable to experts have been shown to be ineffective.

Perspectives

I hope this work-in-progress paper will challenge designers and others working on interpretability projects to deepen their understanding of how experts make sense of information and how they reason and make decisions, and to reflect on this knowledge in their work.

Ms Auste Simkute
University of Edinburgh

Read the Original

This page is a summary of: Experts in the Shadow of Algorithmic Systems, July 2020, ACM (Association for Computing Machinery). DOI: 10.1145/3393914.3395862.
