What is it about?
Experts should be able to understand and evaluate the algorithmic outputs they base their decisions on. One way to achieve this is by making those outputs interpretable. Drawing on the psychology of decision-making literature, we argue that effective interpretability can be achieved by designing interfaces that support experts' decision-making strategies.
Why is it important?
Decision-making in many areas (e.g., healthcare, finance, media) is increasingly based on algorithmic outputs, such as risk assessments or predictions. However, how these outputs were produced is not always clear to the experts who base their decisions upon them. Current attempts to make these predictive systems and their outputs more understandable to experts have proven ineffective.
Read the Original
This page is a summary of: Experts in the Shadow of Algorithmic Systems, July 2020, ACM (Association for Computing Machinery), DOI: 10.1145/3393914.3395862.