What is it about?
We want to understand which natural conditions will mislead AI models. Language is the most convenient way for users to describe such conditions. For example, users can describe numerous natural scenes in words (such as various weather conditions or different poses of objects), use text-to-image models to generate a large number of matching images, and then test an image classifier to find out which natural scenarios easily mislead it. To achieve this goal, we propose a natural-language-induced adversarial image generation method.
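The testing loop described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: `text_to_image` and `classify` are hypothetical stand-ins for a real text-to-image model and a trained classifier, and the foggy-scene failure behavior is simulated purely for demonstration.

```python
import random

def text_to_image(prompt: str, seed: int) -> tuple:
    """Placeholder generator: returns an (image, true_label) pair.
    A real pipeline would call a text-to-image model here."""
    return (f"image[{prompt}#{seed}]", "dog")

def classify(image: str, rng: random.Random) -> str:
    """Placeholder classifier that (by construction) often fails
    on foggy scenes, simulating a natural-condition weakness."""
    if "foggy" in image and rng.random() < 0.6:
        return "wolf"  # simulated misclassification
    return "dog"

def misclassification_rate(prompt: str, n_images: int = 100) -> float:
    """Generate many images for one scene prompt and measure how
    often the classifier gets them wrong."""
    rng = random.Random(0)  # fixed seed for reproducibility
    errors = 0
    for seed in range(n_images):
        image, true_label = text_to_image(prompt, seed)
        if classify(image, rng) != true_label:
            errors += 1
    return errors / n_images

# Compare error rates across language-described natural conditions.
prompts = ["a dog on a sunny day", "a dog in foggy weather"]
rates = {p: misclassification_rate(p) for p in prompts}
```

Comparing the per-prompt error rates then reveals which described natural conditions most often mislead the classifier.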
Why is it important?
Our work reveals the potential impact of text-to-image models on AI robustness and social fairness, and it encourages researchers to develop fairer and more robust AI models.
Read the Original
This page is a summary of: Natural Language Induced Adversarial Images, October 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3664647.3680902.