What is it about?

Adversarial attacks pose a considerable threat to machine learning: a system is deliberately fed manipulated data that shifts its decision regions so that the model misclassifies or mispredicts. The field of study is still relatively young, and a stronger body of scientific research is needed to close the gaps in current knowledge. This paper provides a literature review of adversarial attacks and defenses based on highly cited journal and conference articles indexed in the Scopus database. Through a systematic classification and assessment of 128 articles (80 original papers and 48 review papers published up to May 15, 2024), the study categorizes and reviews the literature across domains such as Graph Neural Networks, deep learning models for IoT systems, and others. The review reports findings on the metrics used, citation analysis, and the contributions of these studies, and it suggests directions for further research on adversarial robustness and protection mechanisms. The objective of this work is to present the essential background on adversarial attacks and defenses and the need to keep machine learning platforms adaptable, and thereby to contribute to building efficient and sustainable protection mechanisms for AI applications across industries.
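
For readers unfamiliar with how such attacks work in practice, the sketch below illustrates the widely used Fast Gradient Sign Method (FGSM), one of the classic evasion attacks covered in this literature. It is a generic illustration rather than code from the reviewed paper; the `model`, `image`, and `label` objects are hypothetical placeholders standing in for any image classifier, input tensor, and ground-truth class.

```python
# Minimal FGSM sketch (illustrative only; `model`, `image`, and `label`
# are hypothetical placeholders, not artifacts of the reviewed paper).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Craft an adversarial example by nudging every pixel a small step
    in the direction that increases the model's loss (FGSM)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # model is assumed to return logits
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

The perturbation is imperceptibly small for a typical `epsilon`, yet it is often enough to flip the model's prediction, which is what makes these attacks a practical security concern.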


Why is it important?

Addressing adversarial attacks is essential to safeguard the reliability, security, and effectiveness of machine learning systems, ensuring their safe integration into various industries and promoting the growth of AI technologies.

Perspectives

The perspectives on adversarial attacks in machine learning underscore the need for a multifaceted approach to building resilient AI systems. As machine learning continues to be integrated into critical industries such as healthcare, finance, and autonomous systems, there is an increasing demand for robust defense mechanisms to counteract these vulnerabilities. Future research must focus not only on advancing technical defenses like adversarial training, but also on exploring interdisciplinary collaborations between cybersecurity, machine learning, and ethical governance to create sustainable, secure AI models. Moreover, emerging technologies like Graph Neural Networks and IoT systems present new challenges and opportunities, requiring adaptive frameworks that can respond to the evolving nature of adversarial threats. Ultimately, the field's progression depends on continuous innovation in both the detection and prevention of adversarial attacks, ensuring AI's safe and trustworthy deployment across sectors.
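
As a concrete illustration of the adversarial-training defense mentioned above, the following sketch folds FGSM-perturbed inputs into an ordinary training loop. It is a generic outline under stated assumptions, not the specific method of any reviewed study; it reuses the hypothetical `fgsm_attack` helper from the earlier sketch, and `model`, `loader`, `optimizer`, and `epsilon` are placeholders.

```python
# Illustrative adversarial-training epoch (assumes the fgsm_attack helper
# defined earlier; all names and hyperparameters are placeholders).
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    """Train on a mix of clean and FGSM-perturbed examples so the model
    learns to classify both correctly."""
    model.train()
    for images, labels in loader:
        # Generate adversarial counterparts of the current batch.
        adv_images = fgsm_attack(model, images, labels, epsilon)
        optimizer.zero_grad()
        # Average the loss on clean and adversarial inputs.
        loss = 0.5 * (F.cross_entropy(model(images), labels) +
                      F.cross_entropy(model(adv_images), labels))
        loss.backward()
        optimizer.step()
```

The design choice of weighting clean and adversarial losses equally is one common convention; in practice the mix, attack strength, and attack type are tuned to balance robustness against clean-data accuracy.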

Yahya Layth Khaleel
Tikrit University

Read the Original

This page is a summary of: Adversarial Attacks in Machine Learning: Key Insights and Defense Approaches, Applied Data Science and Analysis, August 2024, Mesopotamian Academic Press,
DOI: 10.58496/adsa/2024/011.
