Better Representations via Adversarial Training in Pre-Training: A Theoretical Perspective
Yue Xing, Xiaofeng Lin, Qifan Song, Yi Xu, Belinda Zeng, Guang Cheng
–arXiv.org Artificial Intelligence
Pre-training is known to generate universal representations for downstream tasks in large-scale deep learning, such as large language models. Existing literature, e.g., Kim et al. (2020), empirically observes that downstream tasks can inherit the adversarial robustness of the pre-trained model. We provide theoretical justification for this robustness-inheritance phenomenon. Our results reveal that feature purification plays an important role in connecting the adversarial robustness of the pre-trained model to that of the downstream tasks in two-layer neural networks. Specifically, we show that (i) with adversarial training, each hidden node tends to pick up only one feature (or a few features); (ii) without adversarial training, the hidden nodes can be vulnerable to attacks. This observation holds for both supervised pre-training and contrastive learning. With purified nodes, it turns out that clean training is enough to achieve adversarial robustness in downstream tasks.
Jan-26-2024