What is it about?

Many AI ethics frameworks are currently being developed by public, private, and non-profit organizations. However, it is unclear how the people who develop AI systems deal with ethics in their everyday work. This project focuses on understanding how AI developers handle ethical issues in practice. Specifically, it is based on an interview study with AI developers and practitioners working for public organizations in Sweden.


Why is it important?

There are currently many AI ethics frameworks, but it is not clear how AI practitioners adopt them in practice, especially in the public sector. Our analysis found that several AI ethics issues are not addressed consistently. Moreover, the participants did not perceive AI systems as part of a broader socio-technical system: involvement of affected stakeholders was missing in nearly all the projects they described.

Perspectives

This text summarizes the doctoral project of Clàudia Figueras, presented in the Student Track of the Conference on AI, Ethics, and Society (AIES '21). Given the small number of participants so far, I aim to continue interviewing relevant practitioners and to delve into power-imbalance issues of AI systems implemented in the public sector.

Clàudia Figueras

Read the Original

This page is a summary of: Trustworthy AI for the People?, July 2021, ACM (Association for Computing Machinery). DOI: 10.1145/3461702.3462470.
