Understanding and Achieving Efficient Robustness with Adversarial Contrastive Learning

Anh Bui, Trung Le, He Zhao, Paul Montague, Seyit Camtepe, Dinh Phung

arXiv.org Artificial Intelligence 

Contrastive learning (CL) has recently emerged as an effective approach to learning representations for a range of downstream tasks. Central to this approach is the selection of positive (similar) and negative (dissimilar) sets that give the model the opportunity to 'contrast' between data and class representations in the latent space. In this paper, we investigate CL for improving model robustness using adversarial samples. We first designed and performed a comprehensive study to understand how adversarial vulnerability behaves in the latent space. Based on this empirical evidence, we propose an effective and efficient supervised contrastive learning method to achieve model robustness against adversarial attacks.

Among them, the adversarial training methods (e.g., FGSM and PGD adversarial training [13, 22] and TRADES [36]), which utilize adversarial examples as training data, have been one of the most effective approaches, as they truly boost model robustness without facing the problem of obfuscated gradients [3]. In adversarial training, recent works [34, 4] show that reducing the divergence between the representations of images and their adversarial examples in latent space (e.g., the feature space output by an intermediate layer of a classifier) can significantly improve robustness. For example, in [4], latent representations of images in the same class are pulled closer together than those in different classes, which leads to a more compact latent space and, consequently, better robustness.
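The passages above describe the two ingredients combined in this line of work: generating adversarial examples for training (e.g., with PGD) and a supervised contrastive objective that pulls same-class representations, including those of adversarial examples, closer together in the latent space. As an illustrative sketch only (not the paper's exact formulation), the snippet below pairs a standard L-infinity PGD attack with a generic supervised contrastive loss; the encoder interface, temperature, and attack hyper-parameters are assumptions.

```python
# Illustrative sketch: PGD adversarial example generation plus a supervised
# contrastive loss over clean and adversarial embeddings. Hyper-parameters
# (eps, alpha, steps, temperature) are assumed values, not the paper's.

import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-inf PGD: maximize cross-entropy within an eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()                 # gradient ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project
    return x_adv.detach()


def supervised_contrastive_loss(z, labels, temperature=0.1):
    """Pull together latent codes that share a label; push apart the rest.

    z: (2N, d) concatenated clean + adversarial embeddings.
    labels: (2N,) class labels (clean and adversarial copies share labels).
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                            # pairwise similarities
    mask_pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    mask_self = torch.eye(len(z), device=z.device)
    mask_pos = mask_pos - mask_self                          # exclude self-pairs
    logits = sim - 1e9 * mask_self                           # drop self from softmax
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # average log-likelihood of the positives for each anchor
    loss = -(mask_pos * log_prob).sum(1) / mask_pos.sum(1).clamp(min=1)
    return loss.mean()
```

In a full adversarial training step, a loss of this kind would typically be combined with a cross-entropy term on the adversarial examples, so that clean images and their adversarial counterparts of the same class act as mutual positives, encouraging the compact latent space discussed above.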
