CLIPure: Purification in Latent Space via CLIP for Adversarially Robust Zero-Shot Classification
Mingkun Zhang, Keping Bi, Wei Chen, Jiafeng Guo, Xueqi Cheng
–arXiv.org Artificial Intelligence (arXiv:2502.18176v2)
ABSTRACT

In this paper, we aim to build an adversarially robust zero-shot image classifier. We ground our work on CLIP, a vision-language pre-trained encoder model that can perform zero-shot classification by matching an image with text prompts "a photo of a <class-name>.". Purification is the path we choose since it does not require adversarial training on specific attack types and can thus cope with unforeseen attacks. We formulate purification risk as the KL divergence between the joint distributions of the purification process of denoising adversarial samples and the attack process of adding perturbations to benign samples, through bidirectional Stochastic Differential Equations (SDEs). The derived results inspire us to explore purification in the multi-modal latent space of CLIP. We propose two variants of our CLIPure approach: CLIPure-Diff, which models the likelihood of images' latent vectors with the DiffusionPrior module of DALL-E 2 (which models the generation process of CLIP's latent vectors), and CLIPure-Cos, which models the likelihood with the cosine similarity between the embeddings of an image and the prompt "a photo of a.". To the best of our knowledge, CLIPure is the first purification method in a multi-modal latent space, and CLIPure-Cos is the first purification method not based on generative models, which substantially improves defense efficiency. We conducted extensive experiments on CIFAR-10, ImageNet, and 13 datasets that previous CLIP-based defense methods used for evaluating zero-shot classification robustness.

Among vision-language pre-trained models, CLIP (Radford et al., 2021) is a popular, effective, and efficient example. CLIP performs zero-shot classification by forming text prompts "a photo of a <class-name>." for all candidate categories and selecting the class whose prompt embedding has the highest similarity with the image embedding. Despite its efficacy, its accuracy can drop to zero under adversarial attacks, leaving it as vulnerable as other neural classifiers. Existing methods to enhance adversarial robustness follow two primary paths: adversarial training and purification. Adversarial Training (AT) (Madry et al., 2017; Rebuffi et al., 2021; Wang et al., 2023) incorporates adversarial examples into model training to boost robustness; it typically improves robustness against the attacks seen during training at the cost of clean accuracy and generalization to unseen attack types. FARE (Schlarmann et al., 2024) and TeCoA (Mao et al., 2022) are two AT approaches integrated with CLIP; they enhance CLIP's zero-shot classification robustness but significantly harm clean accuracy and do not generalize to other types of attacks.
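As a concrete illustration of the zero-shot classification procedure described above, the sketch below embeds an image and one "a photo of a <class-name>." prompt per candidate class with CLIP and picks the class with the highest cosine similarity. The model variant ("ViT-B/32"), image path, and class names are illustrative choices, not the paper's experimental setup.

```python
# Sketch of CLIP zero-shot classification: embed the image and one prompt per
# candidate class, then select the class whose text embedding is most similar
# to the image embedding.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["cat", "dog", "airplane"]  # illustrative candidate categories
prompts = [f"a photo of a {name}." for name in class_names]

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    image_emb = model.encode_image(image)
    text_emb = model.encode_text(text)
    # Normalize so the dot product equals cosine similarity.
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    similarity = image_emb @ text_emb.T  # shape: [1, num_classes]

predicted = class_names[similarity.argmax(dim=-1).item()]
print(predicted)
```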
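The purification risk is only stated informally above; the notational sketch below writes it as a KL divergence between the joint distributions induced by the purification and attack processes. The symbols (x for a benign sample, x_a for its adversarial counterpart, \hat{x} for the purified sample) are assumptions for illustration and do not reproduce the paper's exact derivation through bidirectional SDEs.

```latex
% Notational sketch only: P_pur is the joint distribution of the purification
% process that denoises the adversarial sample x_a into \hat{x}; P_atk is the
% joint distribution of the attack process that perturbs the benign sample x
% into x_a. The purification risk is their KL divergence.
\mathcal{R}_{\mathrm{pur}}
  \;=\;
  D_{\mathrm{KL}}\!\left(
    P_{\mathrm{pur}}\!\left(x_a, \hat{x}\right)
    \,\middle\|\,
    P_{\mathrm{atk}}\!\left(x, x_a\right)
  \right)
```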
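For CLIPure-Cos, the abstract states that the likelihood is modeled by the cosine similarity between the image embedding and the embedding of "a photo of a.". Below is a minimal sketch of how such latent-space purification might look, assuming plain gradient ascent on the image latent; the step count, step size, and optimizer are hypothetical and not the paper's reported configuration.

```python
# Hypothetical sketch of cosine-similarity-based purification in CLIP's latent
# space: nudge the image latent toward higher cosine similarity with the
# embedding of the blank template "a photo of a." before classification.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

with torch.no_grad():
    blank_emb = model.encode_text(clip.tokenize(["a photo of a."]).to(device))
    blank_emb = (blank_emb / blank_emb.norm(dim=-1, keepdim=True)).float()

def purify_latent(image_emb, steps=10, lr=0.05):
    """Gradient ascent on the image latent to increase its cosine similarity
    with the blank-template text embedding (illustrative settings)."""
    z = image_emb.clone().detach().float().requires_grad_(True)
    optimizer = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        z_norm = z / z.norm(dim=-1, keepdim=True)
        # Negative cosine similarity: minimizing it maximizes the similarity.
        loss = -(z_norm * blank_emb).sum(dim=-1).mean()
        loss.backward()
        optimizer.step()
    return z.detach()

# The purified latent would then be compared against the class-prompt
# embeddings exactly as in the zero-shot classification sketch above.
```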
Mar-2-2025