What is it about?
Malicious clients in federated learning can easily corrupt the global model, especially through attacks crafted to increase their similarity to benign models. This paper proposes an aggregation method that improves the robustness of federated learning training. In experiments, the method effectively defends against both stealthy backdoor attacks and untargeted poisoning attacks.
Why is it important?
The paper argues that analyzing model parameters at the layer level can reveal the behavior of malicious clients in federated learning. This makes it possible to detect malicious models that are barely distinguishable from benign models when their parameters are compared only at the whole-model level.
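As a rough illustration of this idea (a minimal sketch, not the paper's exact PnA procedure), the code below compares each client's update to a robust reference layer by layer using cosine similarity and flags clients that diverge sharply in any single layer; all function names, the median-based reference, and the threshold are assumptions for illustration only.

```python
# Sketch of layer-level screening of client updates (hypothetical, not PnA itself):
# a client may look benign when its full parameter vector is compared, yet
# deviate strongly in one or two layers, which is what this check looks for.
import numpy as np

def layerwise_cosine(update, reference):
    """Cosine similarity between two updates, computed separately per layer."""
    sims = []
    for layer in update:
        a, b = update[layer].ravel(), reference[layer].ravel()
        denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
        sims.append(float(a @ b) / denom)
    return np.array(sims)

def detect_suspicious_clients(client_updates, threshold=0.5):
    """client_updates: list of dicts mapping layer name -> np.ndarray."""
    layers = client_updates[0].keys()
    # Robust reference: coordinate-wise median of all client updates, per layer.
    reference = {
        layer: np.median(np.stack([u[layer] for u in client_updates]), axis=0)
        for layer in layers
    }
    suspicious = []
    for idx, update in enumerate(client_updates):
        sims = layerwise_cosine(update, reference)
        # Flag a client if even one layer is far from the majority behavior.
        if sims.min() < threshold:
            suspicious.append(idx)
    return suspicious
```

In a robust aggregation pipeline of this kind, the updates not flagged as suspicious would then be aggregated (for example, averaged) to form the new global model.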
Perspectives
Read the Original
This page is a summary of: PnA: Robust Aggregation Against Poisoning Attacks to Federated Learning for Edge Intelligence, ACM Transactions on Sensor Networks, June 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3669902.
You can read the full text:
Contributors
The following have contributed to this page