What is it about?
This paper examines the potential dangers of integrating advertising into conversational AI search platforms such as ChatGPT. Using a mental health scenario, it presents speculative examples of how ads could be woven into AI-generated responses in ways that are hard for users to detect. It introduces the "fake friend dilemma": users trust a conversational AI to act in their best interest while it actually serves advertisers' goals.
Why is it important?
As conversational AI becomes a go-to source for information, undisclosed ads could erode trust, skew guidance, and even cause harm. Without clear guardrails, commercial incentives risk overshadowing user well-being, especially for vulnerable users. Acting now is crucial, before these practices become baked into the technology.
Read the Original
This page is a summary of: Fake Friends and Sponsored Ads: The Risks of Advertising in Conversational Search, July 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3719160.3737613.
You can read the full text at https://doi.org/10.1145/3719160.3737613.
Contributors
The following have contributed to this page