What is it about?

Malicious clients in federated learning can easily corrupt the global model, especially through attacks that make poisoned updates closely resemble benign ones. This paper proposes an aggregation method that improves the robustness of federated learning training. In experiments, the method effectively defends against both stealthy backdoor attacks and untargeted poisoning attacks.


Why is it important?

The method is built on the observation that analysing model parameters at the layer level can reveal the behavior of malicious clients in federated learning. This makes it possible to detect malicious model updates that differ little from benign ones when the parameters are compared as a whole.
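As a rough illustration of this idea (not the exact PnA algorithm, whose details are in the paper), the sketch below compares client updates layer by layer: for each layer it measures how similar every client's update is to a robust per-layer reference, and clients that score consistently low are excluded before averaging. All function names, the median reference, and the threshold are illustrative assumptions.

```python
# Illustrative sketch only: layer-wise screening of client updates before
# aggregation. This is NOT the exact PnA algorithm; the names, the median
# reference, and the threshold are assumptions made for illustration.
import numpy as np

def cosine(a, b, eps=1e-12):
    """Cosine similarity between two flattened parameter vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def layerwise_filter_and_average(client_updates, sim_threshold=0.0):
    """client_updates: list of dicts mapping layer name -> np.ndarray delta.

    For each layer, every client's update is compared with the coordinate-wise
    median update of that layer. Clients whose mean per-layer similarity falls
    below `sim_threshold` are excluded, and the remaining updates are averaged.
    """
    layer_names = list(client_updates[0].keys())
    scores = np.zeros(len(client_updates))

    for name in layer_names:
        stacked = np.stack([u[name].ravel() for u in client_updates])
        reference = np.median(stacked, axis=0)  # robust per-layer reference
        for i, row in enumerate(stacked):
            scores[i] += cosine(row, reference)
    scores /= len(layer_names)

    kept = [u for u, s in zip(client_updates, scores) if s >= sim_threshold]
    if not kept:  # fall back to all clients if the filter removes everyone
        kept = client_updates

    # Average the surviving updates layer by layer to form the global update.
    return {name: np.mean([u[name] for u in kept], axis=0) for name in layer_names}

# Example with two benign clients and one crafted outlier on a single layer.
rng = np.random.default_rng(0)
benign = [{"fc.weight": rng.normal(0, 0.01, size=(4, 4))} for _ in range(2)]
malicious = [{"fc.weight": -benign[0]["fc.weight"] * 50}]
global_update = layerwise_filter_and_average(benign + malicious)
print(global_update["fc.weight"].shape)
```

The point of the per-layer comparison is that an update crafted to look normal in aggregate can still stand out in individual layers, which is the kind of signal the paper exploits.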

Perspectives

Backdoor attacks in federated learning are becoming increasingly stealthy. In our experiments we observed a phenomenon in the relationships among layer-level model parameters and, based on it, propose a defense method. We hope this work enhances the robustness of federated learning and promotes its application.

Jingkai Liu
Beijing Jiaotong University

Read the Original

This page is a summary of: PnA: Robust Aggregation Against Poisoning Attacks to Federated Learning for Edge Intelligence, ACM Transactions on Sensor Networks, June 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3669902.
You can read the full text via the DOI above.
