What is it about?
When decisions crucial to their lives and well-being are made by opaque, authoritative automated decision-making (ADM) systems powered by AI technologies, including machine learning and deep learning, individuals often find themselves disempowered. They may be unaware that such systems exist, or unaware of their rights related to ADM, such as those granted by the EU's General Data Protection Regulation (GDPR) and emerging AI regulations. Even when they are aware of ADM's adverse impacts, numerous barriers, ranging from algorithmic to organizational to regulatory, hinder their ability to exercise their rights and maintain agency and control over ADM decisions. This thesis aims to address this disempowerment by exploring how explainability, contestability, and related mechanisms within the ADM ecosystem can empower individuals. Through a combination of qualitative workshops, large-scale user experiments, and interdisciplinary studies at the intersection of human-computer interaction (HCI) and AI governance, this research investigates what empowerment entails at the personal, organizational, and societal levels. This paper provides an overview of the various streams of my research, contributing to understanding and addressing the complexity of empowering those affected by ADM.
This page is a summary of: Empowering Individuals in Automated Decision-Making: Explainability, Contestability and Beyond, November 2024, ACM (Association for Computing Machinery), DOI: 10.1145/3678884.3682043.