Provable Robust Overfitting Mitigation in Wasserstein Distributionally Robust Optimization
Liu, Shuang, Wang, Yihan, Zhu, Yifan, Miao, Yibo, Gao, Xiao-Shan
–arXiv.org Artificial Intelligence
Wasserstein distributionally robust optimization (WDRO) optimizes against worst-case distributional shifts within a specified uncertainty set, leading to better generalization on unseen adversarial examples than standard adversarial training, which focuses on pointwise adversarial perturbations. However, WDRO still fundamentally suffers from robust overfitting, because it does not account for statistical error. We address this gap by proposing a novel robust optimization framework, called Statistically Robust WDRO, built on a new uncertainty set that models adversarial noise via the Wasserstein distance and statistical error via the Kullback-Leibler divergence. We establish a robust generalization bound for the new framework, implying that out-of-distribution adversarial performance is at least as good as the statistically robust training loss with high probability. Furthermore, we derive conditions under which Stackelberg and Nash equilibria exist between the learner and the adversary, yielding an optimal robust model in a certain sense. Finally, through extensive experiments, we demonstrate that our method significantly mitigates robust overfitting and enhances robustness within the WDRO framework.

Optimization in machine learning is often challenged by data ambiguity caused by natural noise, adversarial attacks (Goodfellow et al., 2014), or other distributional shifts (Malinin et al., 2021; 2022). To develop models that are robust to such ambiguity, Wasserstein Distributionally Robust Optimization (WDRO) (Kuhn et al., 2019) has emerged as a powerful modeling framework and has recently attracted significant attention owing to its connections to generalization and robustness (Lee & Raginsky, 2018; Mohajerin Esfahani & Kuhn, 2018; An & Gao, 2021). From both intuitive and theoretical perspectives, WDRO can be viewed as a more comprehensive framework that subsumes and extends adversarial training (Staib & Jegelka, 2017; Sinha et al., 2017; Bui et al., 2022; Phan et al., 2023). In contrast to standard adversarial training (Madry et al., 2017), WDRO operates on a broader scale, perturbing the entire data distribution rather than individual samples.
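To make the contrast concrete, the standard WDRO objective can be written as a worst-case expected loss over a Wasserstein ball around the empirical distribution. The following is a minimal sketch with generic notation (loss $\ell$, model $f_\theta$, transport cost $c$, radius $\rho$) assumed here rather than taken from the paper:

$$\min_{\theta}\ \sup_{Q:\ W_c(Q,\widehat{P}_n)\le\rho}\ \mathbb{E}_{(x,y)\sim Q}\big[\ell(f_\theta(x),y)\big],$$

where $\widehat{P}_n$ denotes the empirical distribution of the $n$ training samples. The abstract states that Statistically Robust WDRO couples this Wasserstein ball for adversarial noise with a Kullback-Leibler ball for statistical error, but it does not give the exact construction; one plausible reading, stated purely as an assumption, is a composed uncertainty set in which the data distribution is first reweighted within a KL ball and then perturbed within a Wasserstein ball:

$$\min_{\theta}\ \sup_{\substack{Q,\,Q':\ D_{\mathrm{KL}}(Q'\,\|\,\widehat{P}_n)\le\eta,\\ W_c(Q,Q')\le\rho}}\ \mathbb{E}_{(x,y)\sim Q}\big[\ell(f_\theta(x),y)\big],$$

with $\eta$ controlling the allowed statistical deviation from the sample and $\rho$ the adversarial perturbation budget.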
Mar-6-2025
- Country:
- Asia > China (0.14)
- North America (0.14)
- Genre:
- Research Report > New Finding (0.67)
- Industry:
- Government (0.34)
- Information Technology > Security & Privacy (0.34)
- Technology: