What is it about?

Our research looks at a hidden weakness in tools that protect people's privacy when data are collected. These tools, based on a technique called local differential privacy, add random noise to each person's information before it ever leaves their device, and they are widely used by companies and apps to keep individuals safe. However, we found that this noise can also be misused: because the server only ever sees noisy reports, it cannot tell carefully crafted fake reports apart from genuine ones. By creating only a small number of fake users, an attacker can subtly change the rankings of items inside the system, for example making something look more popular or pushing something else down the list. We show how these ranking manipulations can happen, how effective they can be, and why current defenses are not enough to stop them. Our work highlights the need for stronger protections so that privacy-preserving systems remain reliable and cannot be quietly influenced by fake data.
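To make the idea concrete, here is a minimal sketch, in Python, of how fake users can shift a ranking under one common local differential privacy protocol (k-ary randomized response). It is only an illustration under assumed parameters, not the protocol or attack analyzed in the paper; the items, population sizes, and privacy budget below are all hypothetical.

```python
import math
import random
from collections import Counter

# Hypothetical illustration: a simple LDP protocol (k-ary randomized
# response) for frequency/ranking estimation, plus a naive fake-user
# injection. This is NOT the exact protocol or attack from the paper;
# all items, counts, and parameters below are made up.

def krr_perturb(true_item, items, epsilon):
    """Each user randomizes their own item locally before reporting."""
    k = len(items)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)  # prob. of truth
    if random.random() < p:
        return true_item
    return random.choice([v for v in items if v != true_item])

def estimate_frequencies(reports, items, epsilon):
    """Server-side unbiased frequency estimates from the noisy reports."""
    k, n = len(items), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = 1.0 / (math.exp(epsilon) + k - 1)  # prob. of any specific lie
    counts = Counter(reports)
    return {v: (counts[v] / n - q) / (p - q) for v in items}

def ranking(reports, items, epsilon):
    est = estimate_frequencies(reports, items, epsilon)
    return sorted(items, key=lambda v: -est[v])

random.seed(0)
items = ["A", "B", "C", "D"]
epsilon = 1.0

# 10,000 genuine users; item A is truly the most popular, D the least.
genuine = ["A"] * 4000 + ["B"] * 3000 + ["C"] * 2000 + ["D"] * 1000
honest_reports = [krr_perturb(v, items, epsilon) for v in genuine]

# The attacker adds ~5% fake users who all report "D" verbatim. Because
# every report is noisy, the server cannot single these reports out.
poisoned_reports = honest_reports + ["D"] * 500

print("ranking without fakes:", ranking(honest_reports, items, epsilon))
print("ranking with fakes:   ", ranking(poisoned_reports, items, epsilon))
# Typically prints A, B, C, D without fakes but A, B, D, C with them:
# a small injection is enough to swap two items in the ranking.
```

Even this naive strategy can swap the estimated order of two items; the paper studies such attacks in detail and shows why current defenses do not stop them.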

Why is it important?

This work is important because it uncovers a hidden vulnerability in widely used privacy‑preserving data systems. Many organizations rely on local differential privacy to protect users, assuming it also protects the integrity of their data. Our findings show that this is not always true: even a small number of fake users can manipulate rankings in meaningful ways, influencing decisions in areas like recommendations, health, or finance. By revealing this gap and demonstrating how easily rankings can be altered, our research provides timely insight that can help developers and policymakers strengthen these systems before they are exploited in the real world.

Perspectives

Working on this publication gave me the opportunity to look deeply at a problem that is often overlooked in privacy research. I was surprised by how easily rankings could be manipulated even in systems designed to protect users, and this motivated me to explore practical ways to reveal and explain the risks. I hope that our findings will encourage both researchers and developers to rethink how we build privacy‑preserving technologies, and to design future systems that remain trustworthy not only for privacy, but also for the integrity of their results.

Pei Zhan
Shandong University

Read the Original

This page is a summary of: Poisoning Attacks to Local Differential Privacy for Ranking Estimation, November 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3719027.3744821.
