What is it about?
Researchers studying AI fairness have attempted to apply the critical framework of intersectionality to their work; we argue that they often do so in an incomplete or incorrect way. We examine how AI fairness researchers engage with intersectionality by reviewing 30 relevant scientific articles, and find gaps between how they use it and its core purpose: identifying how power relations in society shape inequality. For example, AI researchers often narrowly interpret intersectionality as being only about the intersections of different demographic groups, neglecting societal and historical context. We provide actionable recommendations for AI researchers to engage with intersectionality more faithfully.
Why is it important?
Getting AI fairness right matters for everyone whose life is touched by AI. Using intersectionality as a lens enriches AI fairness work by encouraging us to pay attention to social and historical context, to power relations in society, and to how technical decisions about AI shape its impact on people in the world. It is therefore important to interpret intersectionality carefully for AI fairness work, by engaging with core intersectionality scholarship rather than diluting it.
Read the Original
This page is a summary of: Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness, August 2023, ACM (Association for Computing Machinery), DOI: 10.1145/3600211.3604705.