What is it about?

The rapid growth of machine learning over the past few years has led to its application across many fields. However, the large amounts of data needed to train machine learning models have created various privacy concerns. This has led researchers to explore a concept called federated learning (FL), which can protect users' data privacy to some extent. In FL, the idea is to train multiple local models on local datasets and send only the learned parameters to a central server, not the data itself. The server then creates a global model from these parameters. In this article, the authors look at the latest privacy concerns in FL-based approaches. Unlike previous reviews, they focus specifically on FL schemes aimed at preserving privacy. They provide a new way to categorize these techniques based on potential privacy leaks and how they can be addressed.
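To make the workflow concrete, here is a minimal sketch of the parameter-averaging idea described above, written in plain Python with NumPy. It is illustrative only, not the survey's own algorithm: the function names (local_update, server_aggregate) and the simple linear-regression task are assumptions chosen for clarity, and real FL systems layer privacy protections (e.g., secure aggregation, differential privacy) on top of this basic loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training pass; the raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def server_aggregate(client_weights, client_sizes):
    """Server combines only the shared parameters (FedAvg-style weighted mean)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulate three clients, each holding a private dataset from the same task.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# Repeated communication rounds: clients train locally, server averages.
global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = server_aggregate(updates, [len(y) for _, y in clients])

print("global model:", global_w)  # converges toward true_w
```

Note that even though only parameters cross the network, the survey's point is that these shared updates can still leak information about the underlying data, which is why privacy-preserving FL schemes are needed.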


Why is it important?

Many incidents of data leakage have occurred in recent years, most notably the recent leak of records on Facebook users. This has prompted governments to strengthen various laws related to data privacy, which in turn has boosted research in FL. An understanding of current privacy risks in FL and how to tackle them will lead to a future where users can benefit from machine learning without risking data leaks. This article can guide researchers in the field of machine learning to this end by pointing to future research directions and knowledge gaps. KEY TAKEAWAY: It is important to keep improving federated learning methods to protect user privacy.

Read the Original

This page is a summary of: A Comprehensive Survey of Privacy-preserving Federated Learning, ACM Computing Surveys, July 2021, ACM (Association for Computing Machinery). DOI: 10.1145/3460427.
