What is it about?
We explore how system performance affects trust, the gap between stated trust and actual behavior, and how transparency might attenuate the effects of low system performance, specifically in decision-making and learning tasks assisted by conversational systems.
Why is it important?
Conversational systems are a key enabler of human-machine interaction. At the same time, their opacity, complexity, and humanness introduce issues of their own, including trust misalignment. While trust is viewed as a prerequisite for effective system use, few studies have considered how to calibrate for appropriate trust or have empirically tested the relationship between trust and related behavior.
Read the Original
This page is a summary of: Examining Trust in Conversational Systems: Conceptual and Empirical Findings on User Trust, Related Behavior, and System Trustworthiness, July 2022, ACM (Association for Computing Machinery),
DOI: 10.1145/3514094.3539525.
Resources
Trustworthy Conversational AI Research Project
What effects do intelligent, dialog-oriented systems and their individual design features have on users and their trust? How do people's perceptions and behavior change when they use such systems over a longer period of time? Which socio-technological design principles support a user-centered, trust-enhancing interaction with Smart Personal Assistants? The SNSF-funded project investigates these questions using tools from design and behavioral research.
Designing for Conversational System Trustworthiness: The Impact of Model Transparency on Trust and Task Performance
Designing for system trustworthiness promises to address the challenges of opacity and uncertainty introduced by Machine Learning (ML)-based systems by allowing users to understand and interpret a system's underlying working mechanisms. However, empirical exploration of trustworthiness measures and their effectiveness is scarce and inconclusive. We investigated how varying model confidence (70% versus 90%) and making confidence levels transparent to the user (explanatory statement versus no explanatory statement) influence perceived trust and task performance in an information retrieval task assisted by a conversational system.
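To make the transparency manipulation concrete, the following is a minimal Python sketch of how a conversational system might (or might not) disclose its model confidence alongside an answer. The function name, the wording of the explanatory statement, and the example answer are illustrative assumptions, not the study's actual materials; only the 2x2 structure (70% vs. 90% confidence, statement vs. no statement) comes from the study design.

```python
from itertools import product

def render_response(answer: str, confidence: float, transparent: bool) -> str:
    """Return the assistant's reply, optionally prefixed with an
    explanatory statement disclosing the model's confidence.

    Hypothetical sketch: wording and structure are assumptions,
    not the study's actual implementation.
    """
    if transparent:
        # Transparency condition: confidence level is disclosed to the user.
        disclosure = (
            f"Please note: I am {confidence:.0%} confident "
            "that the following answer is correct.\n"
        )
        return disclosure + answer
    # Control condition: the same answer without an explanatory statement.
    return answer

# The 2x2 design crosses model confidence (70% vs. 90%)
# with transparency (explanatory statement vs. none).
for confidence, transparent in product([0.70, 0.90], [True, False]):
    print(f"--- confidence={confidence:.0%}, transparent={transparent} ---")
    print(render_response("The Eiffel Tower is 330 m tall.", confidence, transparent))
```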
Towards a Trust Reliance Paradox? Exploring the Gap Between Perceived Trust in and Reliance on Algorithmic Advice
Researchers are undecided on whether erroneous advice acts as an impediment to system use or is blindly relied upon. In an experimental study, we examine the impact of incorrect system advice and how to design for failure-prone AI. In an experiment with 156 subjects, we find that although incorrect algorithmic advice is trusted less, users nevertheless adapt their answers to a system's incorrect recommendations. While transparency about a system's accuracy levels fosters trust and reliance in the context of incorrect advice, the opposite effect is found for users exposed to correct advice. Our findings point towards a paradoxical gap between stated trust and actual behavior, and they suggest that transparency mechanisms should be deployed with caution because their effectiveness is intertwined with system performance.