What is it about?
Poisoning attacks are among the most immediate threats against the training process of machine learning models. The core idea of a poisoning attack is to introduce malicious data into the training dataset of a target model in order to hinder model training. The security challenges posed by poisoning attacks have prompted many researchers to develop countermeasures. Existing countermeasures, however, are largely attack-specific: they defend against only a handful of known attack methods, and once an adversary is aware of them, they are easy to bypass. Several factors have put defenders at this disadvantage; for example, countermeasures are often developed from isolated observations rather than from a global understanding of attack methods and learning algorithms. Therefore, to better counter poisoning attacks, a comprehensive and in-depth survey is needed.
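To make the core idea concrete, below is a minimal, hypothetical sketch of one classic poisoning technique, label flipping, in which an attacker corrupts the labels of a fraction of the training data. It uses NumPy and scikit-learn purely for illustration; the synthetic dataset, the logistic regression model, and the poison_rate parameter are assumptions for this sketch, not methods or results from the survey itself.

```python
# A minimal, hypothetical sketch of a label-flipping poisoning attack.
# Dataset, model, and poison_rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# A clean binary classification dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The attacker flips the labels of a fraction of the training data.
poison_rate = 0.3
n_poison = int(poison_rate * len(y_train))
idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1

# Train one model on clean labels and one on poisoned labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

With a large enough poison_rate, the poisoned model's test accuracy will typically drop, illustrating how corrupted training data alone can hinder learning without the attacker touching the model or the training code.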
Why is it important?
A comprehensive understanding of poisoning attacks will help guide academia and industry toward developing more robust machine learning methods.
Read the Original
This page is a summary of: A Comprehensive Survey on Poisoning Attacks and Countermeasures in Machine Learning, ACM Computing Surveys, December 2022, ACM (Association for Computing Machinery),
DOI: 10.1145/3551636.
You can read the full text via the DOI above.