What is it about?
In this work we propose an optimized strategy for attacking machine learning classifiers, e.g., for making malware look like benign software to the classifier. The strategy finds the best balance between two competing actions available to the attacker, thus boosting the effectiveness of the final attack.
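To make the idea of balancing two attacker actions concrete, here is a purely illustrative sketch: at each step the attacker queries the (black-box) model and greedily keeps whichever of two candidate modifications lowers the malware score more. This is not the AMEBA algorithm; the feature names, the toy scoring function, and the greedy policy are invented for exposition only.

```python
def black_box_score(x):
    # Toy stand-in for the target classifier's malware score (higher = more malicious).
    # Invented for illustration; a real attacker only observes the model's output.
    return max(0.0, 1.0 - 0.04 * x["benign_feats"] - 0.07 * x["obfuscated"])

def action_add_benign(x):
    # Hypothetical action 1: inject a benign-looking feature.
    y = dict(x); y["benign_feats"] += 1; return y

def action_obfuscate(x):
    # Hypothetical action 2: obfuscate one malicious feature.
    y = dict(x); y["obfuscated"] += 1; return y

def adaptive_attack(x, threshold=0.5, budget=50):
    """Greedily pick, at each step, whichever action lowers the score more."""
    queries = 0
    for _ in range(budget):
        if black_box_score(x) < threshold:  # sample already evades the classifier
            break
        candidates = [action_add_benign(x), action_obfuscate(x)]
        queries += len(candidates)  # each candidate costs one query to the model
        x = min(candidates, key=black_box_score)
    return x, queries
```

In this toy setting the obfuscation action happens to be uniformly better, so the greedy policy always selects it; the point is only that the choice is made adaptively from observed model responses rather than following a fixed pattern.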
Why is it important?
Our work proposes a more realistic attack strategy than the state of the art, in which the attacker is forced to follow a fixed attack pattern that is not representative of real-world scenarios. Our experiments show that the resulting increase in attack effectiveness is significant.
Read the Original
This page is a summary of: AMEBA: An Adaptive Approach to the Black-Box Evasion of Machine Learning Models, May 2021, ACM (Association for Computing Machinery),
DOI: 10.1145/3433210.3453114.