What is it about?

Our research proposes a new protection method called HAAD (h-space based Adversarial Attack for Diffusion models) that helps prevent the misuse of diffusion models for unauthorized few-shot personalization of personal images. HAAD works by adding tiny, imperceptible perturbations to an image that disrupt how diffusion models learn from it, making it much harder for them to reproduce or personalize the protected content. We also introduce a faster, more efficient variant, HAAD-KV, which focuses on the specific parts of the model most responsible for linking text and image information. Despite being lighter weight, HAAD-KV provides even stronger protection.
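To make the recipe concrete, below is a minimal, hypothetical sketch of the general idea, not the authors' released implementation: iteratively learn a small, norm-bounded perturbation that pushes the model's internal bottleneck ("h-space") features of the image away from those of the clean image, so the model can no longer learn the content faithfully. The BottleneckEncoder class is a toy stand-in for a real diffusion UNet's mid-block, and protect_image, eps, step, and iters are illustrative names and values, not parameters taken from the paper.

```python
# Illustrative PGD-style sketch: learn a small, invisible perturbation that
# pushes an image's "h-space" (bottleneck) features away from those of the
# clean image. All names and hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BottleneckEncoder(nn.Module):
    """Toy stand-in for a diffusion UNet's mid-block feature extractor."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def protect_image(image, encoder, eps=8 / 255, step=1 / 255, iters=50):
    """Return a protected copy of `image`: the added perturbation stays inside
    an L-infinity ball of radius `eps` while maximizing the distance between
    the encoder features of the perturbed image and the clean image."""
    for p in encoder.parameters():            # the model itself stays frozen
        p.requires_grad_(False)
    encoder.eval()

    with torch.no_grad():
        clean_feat = encoder(image)

    # Random start inside the eps-ball avoids a zero gradient at delta = 0.
    delta = (torch.rand_like(image) * 2 - 1) * eps
    delta.requires_grad_(True)

    for _ in range(iters):
        feat = encoder((image + delta).clamp(0, 1))
        loss = F.mse_loss(feat, clean_feat)   # feature distance to maximize
        loss.backward()
        with torch.no_grad():                 # gradient ascent step + projection
            delta += step * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()

    return (image + delta.detach()).clamp(0, 1)


if __name__ == "__main__":
    img = torch.rand(1, 3, 256, 256)          # placeholder image in [0, 1]
    protected = protect_image(img, BottleneckEncoder())
    print((protected - img).abs().max())      # stays within the eps budget
```

A variant in the spirit of HAAD-KV would compute the same kind of feature-distance loss only from the components that link text and image information, so that far less of the model needs to be touched while, as described above, the protection becomes even stronger.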

Why is it important?

Our work offers a practical and efficient defense that helps users and artists maintain control over their personal and creative visual content in the age of generative AI.

Perspectives

The motivation behind this work came from a growing concern: as diffusion models become more powerful, it’s increasingly easy for anyone to reproduce or modify personal images without consent. We wanted to create a practical defense that helps people maintain control over their visual identity and creative work. For me, this project is about more than algorithms — it’s about empowering users to defend their digital presence in an age where generative AI can easily blur the line between what’s real and what’s synthetic.

Xide Xu
Universitat Autònoma de Barcelona

Read the Original

This page is a summary of: An h-space Based Adversarial Attack for Protection Against Few-shot Personalization, October 2025, ACM (Association for Computing Machinery).
DOI: 10.1145/3746027.3755659.
