AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework
Xilie Xu, Jingfeng Zhang, Mohan Kankanhalli
arXiv.org Artificial Intelligence
With the emergence of foundation models (Bommasani et al., 2021), fine-tuning a pre-trained feature extractor (FE) has become a low-cost strategy for obtaining superior performance on downstream tasks. Notably, GPT-3 (Brown et al., 2020) can achieve state-of-the-art (SOTA) performance on the GLUE benchmark (Wang et al., 2018) via parameter-efficient fine-tuning (Hu et al., 2021). Due to the ubiquity of adversarial attacks (Goodfellow et al., 2014; Madry et al., 2018), deploying pre-trained FEs in safety-critical downstream areas such as medicine (Buch et al., 2018) and autonomous driving (Kurakin et al., 2018) necessitates robust fine-tuning (RFT) (Hendrycks et al., 2019), which incorporates an adversarial objective to learn features of adversarial data (Madry et al., 2018) and thereby yields adversarial robustness in downstream tasks. To further improve generalization, vanilla RFT (formulated in Eq. 1 and shown in the left panel of Figure 1c) optimizes both adversarial and natural objectives, learning the features of adversarial and natural data simultaneously via the FE (Zhang et al., 2019; Shafahi et al., 2019; Jiang et al., 2020).
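The joint objective described above can be sketched in a few lines. The following is a minimal, illustrative NumPy sketch, not the paper's exact Eq. 1: a toy logistic-regression head stands in for the fine-tuned feature extractor, a one-step FGSM-style perturbation stands in for the adversarial data generation, and the names `rft_step`, `fgsm`, and the weighting `lam` are assumptions introduced here for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, X, y):
    """Binary cross-entropy loss, with gradients w.r.t. weights and inputs."""
    p = sigmoid(X @ w)
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad_w = X.T @ (p - y) / len(y)          # gradient for the model update
    grad_X = np.outer(p - y, w) / len(y)     # gradient for crafting attacks
    return loss, grad_w, grad_X

def fgsm(w, X, y, eps=0.1):
    """One-step L_inf attack: perturb inputs along the loss-gradient sign."""
    _, _, grad_X = loss_and_grad(w, X, y)
    return X + eps * np.sign(grad_X)

def rft_step(w, X, y, lr=0.5, lam=1.0):
    """One vanilla-RFT update: natural loss + lam * adversarial loss."""
    X_adv = fgsm(w, X, y)
    nat_loss, nat_grad, _ = loss_and_grad(w, X, y)
    adv_loss, adv_grad, _ = loss_and_grad(w, X_adv, y)
    w_new = w - lr * (nat_grad + lam * adv_grad)
    return w_new, nat_loss + lam * adv_loss

# Toy linearly separable data standing in for downstream-task features.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
losses = []
for _ in range(100):
    w, total = rft_step(w, X, y)
    losses.append(total)
# The combined natural + adversarial objective decreases over training.
```

The key design point mirrored here is that a single set of parameters (`w`, standing in for the FE) receives gradients from both the natural and the adversarial loss in every step, which is what vanilla RFT does and what later LoRA-style variants disentangle.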
Oct-3-2023