What is it about?

Algorithmic decision support systems are widely applied in domains ranging from healthcare to journalism. To ensure that these systems are fair and accountable, humans must retain meaningful agency over them: they need to be able to understand and oversee the algorithmic processes involved. Explainability is often seen as a promising mechanism for keeping humans in the loop; however, current approaches are ineffective and can introduce various biases. We argue that explainability should instead be tailored to support the naturalistic decision-making and sensemaking strategies employed by domain experts and novices.

Read the Original

This page is a summary of: Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable, Journal of Responsible Technology, November 2021, Elsevier,
DOI: 10.1016/j.jrt.2021.100017.
