What is it about?
In this paper, we analyze the relation between biased data-driven outcomes and practices of data annotation for vision models by placing them in the context of the market economy. Understanding data annotation as a sense-making process, we investigate which goals decision-makers prioritize throughout the annotation of datasets. Following a qualitative design, the study is based on 24 interviews with relevant actors and extensive participatory observations, including several weeks of fieldwork at two companies dedicated to data annotation for machine learning in Buenos Aires, Argentina, and Sofia, Bulgaria. We argue that market-oriented values prevail over socially responsible approaches, drawing on three corporate priorities that inform work practices in this field: profit, standardization, and opacity. Finally, we introduce three elements, namely transparency, education, and regulation, aimed at developing ethics-oriented practices of data annotation that could help prevent biased outcomes.
This page is a summary of: Biased Priorities, Biased Outcomes, February 2020, ACM (Association for Computing Machinery),
DOI: 10.1145/3375627.3375809.