What is it about?
When implementing AI solutions in decision-making contexts, problems may be caused not only by "technical" limitations - they may also stem from people's unwillingness to provide the data AI needs to succeed. In this paper, we present our ethnographic findings on this matter and discuss the implications for AI-supported practice and research.
Why is it important?
Our study focuses on caseworkers' decision-making tasks in a Danish jobcentre and their reasons for not writing down their own descriptions of citizens - descriptions that are crucial to their work but invisible in the records. When classifying people, the caseworkers know that they are producing a 'type' of person. These typifications of people are created, used, and reused in combination, but people can and do change. Keeping information 'confidential' allows the caseworkers not only to use their classifications but also to change them. Thus, our paper addresses broader and more fundamental questions: what data is (and should be) made available for AI, and for what purposes?
Read the Original
This page is a summary of: "We Would Never Write That Down", Proceedings of the ACM on Human-Computer Interaction, April 2021, ACM (Association for Computing Machinery). DOI: 10.1145/3449176.