SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness
Jeong, Jongheon, Park, Sejun, Kim, Minkyu, Lee, Heung-Chang, Kim, Doguk, Shin, Jinwoo
–arXiv.org Artificial Intelligence
Under the randomized smoothing paradigm, the robustness of a classifier is aligned with its prediction confidence, i.e., higher confidence from a smoothed classifier implies better robustness. This motivates us to rethink the fundamental trade-off between accuracy and robustness in terms of calibrating the confidences of a smoothed classifier. In this paper, we propose a simple training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup: it trains on convex combinations of samples along the direction of adversarial perturbation for each input. The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness in the case of smoothed classifiers, and offers an intuitive way to adaptively set a new decision boundary between these samples for better robustness.
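The self-mixup idea described above can be sketched in a few lines: given a clean input and an adversarial endpoint found for it, the training batch is augmented with convex combinations between the two. The following is a minimal NumPy sketch, assuming the adversarial endpoint `x_adv` has already been computed (e.g., by a PGD-style attack on the smoothed classifier); the function name and the choice of equally spaced mixing coefficients are illustrative, not the paper's exact procedure.

```python
import numpy as np

def smoothmix_interpolations(x, x_adv, num_mix=4):
    """Hypothetical sketch of the self-mixup step: build convex
    combinations (1 - lam) * x + lam * x_adv along the adversarial
    direction, for interior mixing coefficients lam in (0, 1).

    x, x_adv : arrays of the same shape (clean input, adversarial endpoint)
    num_mix  : number of interpolated samples to generate
    Returns (mixed, lams): stacked interpolations and their coefficients.
    """
    # Equally spaced interior points, excluding the endpoints 0 and 1
    lams = np.linspace(0.0, 1.0, num_mix + 2)[1:-1]
    mixed = np.stack([(1.0 - lam) * x + lam * x_adv for lam in lams])
    return mixed, lams

# Example: interpolate between a clean image and its adversarial endpoint
x = np.zeros((3, 32, 32))      # placeholder clean input
x_adv = np.ones((3, 32, 32))   # placeholder adversarial endpoint
mixed, lams = smoothmix_interpolations(x, x_adv, num_mix=4)
```

In training, each interpolated sample would be paired with a correspondingly softened label, so that the classifier's confidence decays smoothly toward the decision boundary rather than staying over-confident on near off-class points.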
Nov-17-2021