What is it about?

This study investigates what university students outside of computer science actually understand about artificial intelligence, and where their understanding goes wrong. Combining Item Response Theory analysis with in-depth cognitive interviews, we identified common misconceptions among 154 non-specialist students, such as confusing basic data collection with AI training, or assuming that an AI trained on one category can automatically recognize its opposite. We found that concepts related to machine learning were especially challenging, even when students felt confident in their answers. By blending statistical item analysis with direct insight into how students reason through problems, the study provides a clearer picture of where AI literacy breaks down and how to design more effective AI education for non-experts.

Why is it important?

As AI becomes embedded in everyday life, from social media algorithms to hiring tools, understanding how it works is no longer optional. Yet many people hold deep-seated misconceptions that can lead to over-trust, unfair judgments, or missed opportunities to use AI responsibly. Most existing assessments measure what people think they know, not what they actually understand. By pinpointing exactly where non-experts struggle and why, this research gives educators, policymakers, and technology developers a clear roadmap for building AI literacy programs that target real conceptual barriers rather than assumed ones.

Perspectives

As researchers, we see this work as more than a validation study. It's a necessary step toward making AI literacy genuinely accessible. Too often, educational tools are designed with experts in mind, leaving non-specialists to rely on intuition that can be misleading. What excites us most is how our mixed-methods approach didn't just confirm that students struggle with machine learning concepts; it showed us why, by letting their own voices explain the reasoning behind their answers. That kind of insight transforms how we teach, moving from generic lessons to targeted interventions that address the real mental models people bring to the table. We hope this work helps bridge the gap between how AI is built and how it is understood by the people who use it every day.

Xingyao Xiao
Stanford University

Read the Original

This page is a summary of: What do I Know about AI Beyond Everyday Knowledge?, Digital Threats: Research and Practice, March 2026, ACM (Association for Computing Machinery), DOI: 10.1145/3803805.
