What is it about?

Emotion AI technologies are becoming increasingly popular in education, where they are used to 'detect' student attention and engagement during lessons, typically by collecting data such as face scans, voice recordings, and physical movements. We argue that these technologies are harmful because the theories of emotion they are built on disempower students from pushing back against the technologies' claims about their own emotional expression. In light of these problems, we explore the political agendas behind these technologies and consider three alternative policy approaches to their use in classrooms. We ultimately argue that these technologies should be abandoned in education.

Why is it important?

Since emotion AI technologies (and AI technologies in general) are becoming increasingly popular and ubiquitous in our lives, there is an urgent need to interrogate the harms they can bring about. This paper draws attention to some of those harms, which will predominantly affect children, an especially vulnerable group. Given that we ultimately argue for the abandonment of emotion AI technologies in education, this work is especially pertinent and valuable for educators and policymakers.

Read the Original

This page is a summary of: (Anti)-Intentional Harms: The Conceptual Pitfalls of Emotion AI in Education, June 2023, ACM (Association for Computing Machinery),
DOI: 10.1145/3593013.3594088.
You can read the full text via the DOI above.
