Perturbing Attention Gives You More Bang for the Buck: Subtle Imaging Perturbations That Efficiently Fool Customized Diffusion Models

Xu, Jingyao, Lu, Yuetong, Li, Yandong, Lu, Siyang, Wang, Dongdong, Wei, Xiang

arXiv.org Artificial Intelligence

Diffusion models (DMs) have ushered in a new era of generative modeling, offering new opportunities for efficiently generating high-quality, realistic data samples. However, their widespread use has also brought forth new challenges in model security, motivating the development of more effective adversarial attacks on DMs to understand their vulnerabilities. We propose CAAT, a simple but generic and efficient approach that requires no costly training to effectively fool latent diffusion models (LDMs). The approach is based on the observation that cross-attention layers exhibit higher sensitivity to gradient changes, allowing subtle perturbations on published images to significantly corrupt the generated images. We show that a subtle perturbation on an image can significantly impact the cross-attention layers, thus changing the mapping between text and image during the fine-tuning of customized diffusion models. Extensive experiments demonstrate that CAAT is compatible with diverse diffusion models and outperforms baseline attack methods, being both more effective (more noise) and more efficient (twice as fast as Anti-DreamBooth and Mist).
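The attack structure the abstract describes, bounded image perturbations optimized to corrupt a downstream model, follows the standard projected-gradient template. The sketch below is not the authors' CAAT implementation; it shows the generic PGD loop with a toy quadratic surrogate loss standing in for the cross-attention objective, and `grad_fn`, `eps`, and `alpha` are illustrative assumptions.

```python
import numpy as np

def pgd_perturb(x, grad_fn, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient ascent on a surrogate loss, keeping the
    perturbation inside an L-infinity ball of radius eps around the
    published image x (pixels assumed in [0, 1])."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x + delta)                     # gradient of the attacked loss
        delta = delta + alpha * np.sign(g)         # signed ascent step
        delta = np.clip(delta, -eps, eps)          # project into the eps-ball
        delta = np.clip(x + delta, 0.0, 1.0) - x   # keep pixel values valid
    return x + delta

# Toy surrogate: loss = ||x - t||^2, so grad = 2 * (x - t). A real
# attack would instead differentiate the LDM's cross-attention maps.
target = np.full((4, 4), 0.9)
grad_fn = lambda x: 2.0 * (x - target)
image = np.full((4, 4), 0.5)
adv = pgd_perturb(image, grad_fn, eps=0.03)
```

Ascent on the surrogate pushes the image away from the target activation, yet the final perturbation never exceeds the 0.03 budget, which is what keeps the published image visually near-identical to the original.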


Combining Adversaries with Anti-adversaries in Training

Zhou, Xiaoling, Yang, Nan, Wu, Ou

arXiv.org Artificial Intelligence

Adversarial training is an effective learning technique for improving the robustness of deep neural networks. In this study, the influence of adversarial training on deep learning models in terms of fairness, robustness, and generalization is theoretically investigated under a more general perturbation scope in which different samples can have different perturbation directions (the adversarial and anti-adversarial directions) and varied perturbation bounds. Our theoretical explorations suggest that combining adversaries and anti-adversaries (samples with anti-adversarial perturbations) in training can be more effective than standard adversarial training at achieving better fairness between classes and a better tradeoff between robustness and generalization in some typical learning scenarios (e.g., noisy-label learning and imbalanced learning). On the basis of our theoretical findings, a more general learning objective that combines adversaries and anti-adversaries with varied bounds on each training sample is presented. Meta learning is utilized to optimize the combination weights. Experiments on benchmark datasets under different learning scenarios verify our theoretical findings and the effectiveness of the proposed methodology.


Banksy donates funds from anti-arms artwork sale

BBC News

The anonymous street artist Banksy has donated £205,000 from the sale of a protest piece to anti-arms campaigners. The artwork, Civilian Drone Strike, was on display at the Stop the Arms Fair art exhibition in east London. It depicts drones destroying a cartoon image of a house, while a child and her dog look on in distress. The exhibition was held alongside the world's largest arms fair, the Defence and Security Equipment International - both exhibitions closed on Friday. The art exhibition claimed to highlight "the inhumanity of the arms trade" through art, with work by more than 1,600 exhibitors from 54 countries.