What is it about?
Adversarial attacks pose a considerable threat to machine learning systems: deliberately crafted inputs are fed to a model in order to alter its decision regions, causing it to misclassify or mispredict. The field is still relatively young and needs a stronger body of scientific research to close the gaps in current knowledge. This paper provides a literature review of adversarial attacks and defenses based on highly cited journal and conference articles indexed in the Scopus database. Through the classification and assessment of 128 articles published up to May 15, 2024 (80 original papers and 48 review papers), the study categorizes and reviews the literature across domains such as Graph Neural Networks, deep learning models for IoT systems, and others. The review reports findings on the identified metrics, citation analysis, and contributions of these studies, and suggests directions for further research and development in adversarial robustness and protection mechanisms. The objective of this work is to present the basic background of adversarial attacks and defenses and the need to maintain the adaptability of machine learning platforms, thereby contributing to efficient and sustainable protection mechanisms for AI applications across industries.
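To make the idea of an adversarial attack concrete, the sketch below shows one widely cited formulation, the Fast Gradient Sign Method (FGSM), which perturbs an input just enough to push it toward a wrong prediction. It is a minimal illustration only: the model, inputs, labels, and the epsilon value are placeholder assumptions, not drawn from the reviewed studies.

```python
# Minimal FGSM sketch (illustrative; model, x, y, and epsilon are hypothetical).
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Nudge each input feature in the direction that increases the
    classification loss, bounded in magnitude by epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp to a valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage (assuming a trained classifier `model` and a batch `x`, `y`):
#   x_adv = fgsm_attack(model, x, y)
#   model(x_adv) may now return a different class than model(x).
```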
Why is it important?
Addressing adversarial attacks is essential to safeguard the reliability, security, and effectiveness of machine learning systems, ensuring their safe integration into various industries and promoting the growth of AI technologies.
Read the Original
This page is a summary of: Adversarial Attacks in Machine Learning: Key Insights and Defense Approaches, Applied Data Science and Analysis, August 2024, Mesopotamian Academic Press, DOI: 10.58496/adsa/2024/011.