GIFT: Gradient-aware Immunization of diffusion models against malicious Fine-Tuning with safe concepts retention
Amro Abdalla, Ismail Shaheen, Dan DeGenaro, Rupayan Mallick, Bogdan Raita, Sarah Adel Bargal
arXiv.org Artificial Intelligence
We present GIFT: a Gradient-aware Immunization technique to defend diffusion models against malicious Fine-Tuning while preserving their ability to generate safe content. Existing safety mechanisms such as safety checkers are easily bypassed, and concept erasure methods fail under adversarial fine-tuning. GIFT addresses this by framing immunization as a bi-level optimization problem: the upper-level objective degrades the model's ability to represent harmful concepts using representation noising and maximization, while the lower-level objective preserves performance on safe data. GIFT achieves robust resistance to malicious fine-tuning while maintaining safe generative quality. Experimental results show that our method significantly impairs the model's ability to re-learn harmful concepts while maintaining performance on safe content, offering a promising direction for creating inherently safer generative models resistant to adversarial fine-tuning attacks.
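The abstract only sketches the bi-level structure at a high level. As a rough illustration, the following minimal PyTorch sketch shows one way such an objective could be arranged as a single-loop relaxation: a representation-noising term plus a maximized denoising loss on harmful data (upper level), combined with an ordinary denoising loss on safe data (lower level). The toy denoiser, the specific loss functions, and the weights `lam_max` and `lam_safe` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the bi-level immunization idea.
import torch
import torch.nn as nn

class ToyUNet(nn.Module):
    """Stand-in for a diffusion denoiser; exposes an intermediate feature."""
    def __init__(self, dim=16):
        super().__init__()
        self.enc = nn.Linear(dim, dim)
        self.dec = nn.Linear(dim, dim)

    def forward(self, x):
        h = torch.relu(self.enc(x))      # intermediate representation
        return self.dec(h), h

def denoising_loss(pred_noise, true_noise):
    # Standard epsilon-prediction MSE objective.
    return torch.mean((pred_noise - true_noise) ** 2)

def representation_noising_loss(h):
    # Push intermediate activations on harmful inputs toward Gaussian noise,
    # degrading the model's ability to represent the harmful concept.
    return torch.mean((h - torch.randn_like(h)) ** 2)

model = ToyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
lam_max, lam_safe = 1.0, 1.0   # assumed trade-off weights

for step in range(100):
    # Toy batches standing in for harmful / safe training data.
    x_harm, eps_harm = torch.randn(8, 16), torch.randn(8, 16)
    x_safe, eps_safe = torch.randn(8, 16), torch.randn(8, 16)

    pred_h, feat_h = model(x_harm)
    pred_s, _ = model(x_safe)

    # Upper level: noise harmful representations and *maximize* the
    # denoising loss on harmful data (hence the minus sign).
    upper = representation_noising_loss(feat_h) - lam_max * denoising_loss(pred_h, eps_harm)
    # Lower level: preserve ordinary denoising performance on safe data.
    lower = denoising_loss(pred_s, eps_safe)

    loss = upper + lam_safe * lower   # single-loop relaxation of the bi-level problem
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Collapsing the two levels into one weighted loss is only one possible relaxation; the paper's actual gradient-aware treatment of the bi-level problem is not described in this entry.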
Jul-21-2025
- Genre:
  - Research Report > New Finding (0.66)
- Industry:
  - Information Technology (0.46)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning > Neural Networks (0.69)
    - Natural Language (1.00)
    - Representation & Reasoning > Optimization (0.49)
    - Vision (1.00)