What is it about?
Are you worried that artificial intelligence (AI) might be unfair and discriminate against people? We aim to help people without an AI background judge fairness and make machine learning (ML) systems fairer. This work explores the design of an interactive interface that allows ordinary end-users, without any technical or domain background, to identify potential fairness issues in the context of loan decisions and possibly fix them. Working with end-users, we designed and implemented a prototype system that let them see why predictions were made and then suggest changes to the system's decision-making. We evaluated this prototype through an online study. Because people around the globe hold diverse values about fairness, we also explored how cultural dimensions might play a role in how people use the prototype.
Featured image: Photo by Wesley Tingey on Unsplash
Why is it important?
Our work is among the first studies to support people without a background in AI or machine learning in investigating bias and fairness in AI systems. Our results inform the design of interfaces that allow such non-experts to be more involved in judging and addressing AI fairness.
Read the Original
This page is a summary of: Toward Involving End-users in Interactive Human-in-the-loop AI Fairness, ACM Transactions on Interactive Intelligent Systems, July 2022, ACM (Association for Computing Machinery). DOI: 10.1145/3514258.