What is it about?
Recommender systems are expected to act as assistants that help users find relevant information automatically, without explicit queries. As recommender systems evolve, increasingly sophisticated learning techniques are applied and achieve better performance on user engagement metrics such as clicks and browsing time. The increase in measured performance, however, can have two possible explanations: a better understanding of user preferences, or a more proactive exploitation of human bounded rationality that lures users into over-consumption. A natural follow-up question is whether current recommendation algorithms are manipulating user preferences, and if so, whether we can measure the degree of manipulation. In this paper, we present a general framework for benchmarking the degree of manipulation by recommendation algorithms, in both slate recommendation and sequential recommendation scenarios. We benchmark representative recommendation algorithms on both synthetic and real-world datasets under the proposed framework. We observe that a high online click-through rate does not necessarily indicate a better understanding of users' initial preferences; it can instead result from prompting users to choose more documents they initially did not favor. Moreover, we find that the training data have a notable impact on the degree of manipulation, and that algorithms with more powerful modeling abilities are more sensitive to this impact. The experiments also verify the usefulness of the proposed metrics for measuring the degree of manipulation. We advocate that future recommendation algorithms be designed by treating recommendation as an optimization problem with constraints on user preference manipulation.
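To make the distinction concrete, here is a minimal, self-contained sketch of why a higher click-through rate need not imply better alignment with a user's initial preferences. This is not the simulator or the metrics from the paper: the choice model, the `exposure_boost` parameter, and the `alignment` measure below are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's actual metrics or simulator).
# It contrasts the online click-through rate (CTR) with how well the clicked
# items match the user's *initial* preferences.
import numpy as np

rng = np.random.default_rng(0)
n_items = 50

# Hypothetical initial preference: probability the user would genuinely enjoy each item.
initial_pref = rng.beta(2.0, 5.0, size=n_items)

def simulate_session(slate, exposure_boost=0.0):
    """Simulate clicks on a recommended slate.

    `exposure_boost` crudely models bounded rationality: merely being shown
    an item raises its chance of being clicked beyond the initial preference.
    """
    click_prob = np.clip(initial_pref[slate] + exposure_boost, 0.0, 1.0)
    return rng.random(len(slate)) < click_prob

def metrics(slate, clicks, top_k=10):
    """Return (CTR, share of clicks on items the user initially favored most)."""
    ctr = clicks.mean()
    initially_favored = set(np.argsort(initial_pref)[-top_k:])
    clicked_items = slate[clicks]
    alignment = (np.mean([i in initially_favored for i in clicked_items])
                 if clicks.any() else 0.0)
    return ctr, alignment

# An "honest" recommender ranks by initial preference; a "pushy" one shows
# arbitrary items but relies on the exposure effect to harvest clicks.
honest_slate = np.argsort(initial_pref)[-10:]
pushy_slate = rng.choice(n_items, size=10, replace=False)

for name, slate, boost in [("honest", honest_slate, 0.0),
                           ("pushy", pushy_slate, 0.5)]:
    clicks = simulate_session(slate, exposure_boost=boost)
    ctr, alignment = metrics(slate, clicks)
    print(f"{name}: CTR={ctr:.2f}, alignment with initial preference={alignment:.2f}")
```

Under these toy assumptions, the "pushy" recommender can report a higher CTR while a smaller share of its clicks fall on items the user initially favored, which is the kind of gap the paper's framework is designed to surface.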
Why is it important?
We live in an era of information explosion. With the popularity of the internet, we have access to far more information than ever before, which makes it difficult to find what we need among the vast amount of content. To save people from information overload and surface the most relevant content for each of us, modern recommender systems use rich user profiles to model user preferences. The more you use a recommender system, and the more other people use it, the more relevant your recommendations become. The other side of the market is thriving as well: for companies that deploy large-scale recommender systems on their products or platforms, personalized recommendation greatly improves user retention, as well as clicks or purchases, depending on the service they provide.

We focus on an ethical question that is seldom addressed in previous work: are recommender systems manipulating users' preferences for higher revenue? Human decision-making is boundedly rational and easily influenced by the items on display. The behavioral economics community has studied several such phenomena, e.g., the decoy effect, confirmation bias, and the anchoring effect. Recommender systems are likely to exploit these psychological weaknesses and biases to achieve higher performance on online metrics, which we call manipulation. While such manipulation can increase user clicks, the items clicked on are not necessarily those the user preferred in the first place. In this paper, we propose a general framework for evaluating representative recommendation algorithms for their degree of manipulation of user preferences, and we demonstrate the effectiveness of the proposed framework across various use cases.
Read the Original
This page is a summary of: Understanding or Manipulation: Rethinking Online Performance Gains of Modern Recommender Systems, ACM Transactions on Information Systems, December 2023, ACM (Association for Computing Machinery), DOI: 10.1145/3637869.