What is it about?
Many people rely on AI to help judge whether online information is accurate, but how the AI explains its reasoning can shape the decisions they make. This study compared three types of explanation an AI can give: one that focuses on the wording of a statement (content explanation), one that draws on social information such as who said it and how it spread (social explanation), and a combination of both. The study found that when the two kinds of explanation agree, people are better at spotting false claims; when they conflict, people can become confused, although some engage more deeply with the reasoning.
Why is it important?
AI systems are increasingly used to help people decide what information to trust online, yet most explanations they offer focus only on text features and ignore the social context in which information spreads. Our study is one of the first to compare how "content-based" and "social" explanations, alone and together, affect people's accuracy and reasoning. We also show that the order in which explanations are presented, and whether they align, can change how people interpret them. The results offer guidance on how to design AI tools that not only detect falsehoods but also explain their decisions in ways people find helpful and understandable.
Read the Original
This page is a summary of: Designing Effective AI Explanations for Misinformation Detection: A Comparative Study of Content, Social, and Combined Explanations, Proceedings of the ACM on Human-Computer Interaction, October 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3757577.
You can read the full text via the DOI above.