What is it about?
When should systems talk to people or act socially, and when should they just act like the machines they are? Drawing on a combination of user studies, we identify a novel class of dark patterns (manipulative design tactics) that "chatty" interfaces use to influence user behaviour, and show how even seemingly helpful or friendly statements can backfire.
Featured image: Photo by Domingo Alvarez E on Unsplash
Why is it important?
While this study was conducted before the large language model (LLM) wave hit, the design implications and ethical considerations we identify for future conversational systems are now more pertinent than ever. This paper is the first to treat, in a unified manner, the various types of interfaces that proactively address and speak to people in humanlike ways, from smartphone notifications to self-checkout machines to chatbots. Whereas current work in language agent ethics predominantly considers the "helpful, honest, and harmless" (HHH) criteria when defining what makes for a good interaction, this work highlights the importance of pragmatic factors.
Read the Original
This page is a summary of: Computers as Bad Social Actors: Dark Patterns and Anti-Patterns in Interfaces that Act Socially, Proceedings of the ACM on Human-Computer Interaction, April 2024, ACM (Association for Computing Machinery). DOI: 10.1145/3653693.