Improving Calibration through the Relationship with Adversarial Robustness
Neural networks lack adversarial robustness, i.e., they are vulnerable to adversarial examples: small perturbations of the input that cause incorrect predictions. Further, trust is undermined when models give miscalibrated predictions, i.e., when the predicted probability is not a good indicator of how much we should trust the model. In this paper, we study the connection between adversarial robustness and calibration and find that inputs for which the model is sensitive to small perturbations (i.e., that are easily attacked) are more likely to receive poorly calibrated predictions. Based on this insight, we examine whether calibration can be improved by addressing those adversarially unrobust inputs. To this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS), which integrates the correlation between adversarial robustness and calibration into training by adaptively softening the labels of an example based on how easily it can be attacked by an adversary. We find that our method, by taking the adversarial robustness of the in-distribution data into consideration, yields better calibration even under distributional shift. In addition, AR-AdaLS can also be applied to an ensemble model to further improve model calibration.
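The core idea, softening an example's one-hot label in proportion to its adversarial vulnerability, can be sketched as follows. The function name, the smoothing cap, and the use of a per-example attack success rate as the robustness signal are illustrative assumptions, not the paper's exact AR-AdaLS formulation:

```python
import numpy as np

def adaptive_label_smoothing(one_hot, attack_success_rate, max_smooth=0.2):
    """Soften a one-hot label in proportion to how easily the input is attacked.

    `attack_success_rate` in [0, 1]: 1.0 means the input is very unrobust.
    Illustrative sketch; the mapping from robustness to smoothing strength
    is an assumption, not the paper's exact rule.
    """
    eps = max_smooth * attack_success_rate             # per-example smoothing strength
    num_classes = one_hot.shape[-1]
    # Standard label-smoothing form: shrink the target, spread mass uniformly.
    return one_hot * (1.0 - eps) + eps / num_classes
```

A robust example (attack success rate 0) keeps its hard one-hot label, while an easily attacked example is trained toward a softer target, which discourages overconfident predictions on exactly the inputs that tend to be miscalibrated.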
A Implementation Details
For all the experimental results on ResNet-29 v2 (He et al., 2016b), we use a fixed batch size and train the network with the Adam optimizer (Kingma & Ba, 2015) for 200 epochs. We randomly split the training dataset into 45,000 training images and 5,000 validation images. We train a Wide ResNet-28-10 v2 (Zagoruyko & Komodakis, 2016) to obtain state-of-the-art accuracy for CIFAR-10 (e.g., Table 2 in the main text). For mixup (Zhang et al., 2018; Thulasidasan et al., 2019), two images are combined according to a sampled mixing parameter. For CCAT (Stutz et al., 2020), models are trained with norm-bounded adversarial examples.

We also train a Wide ResNet-28-10 v2 (Zagoruyko & Komodakis, 2016) to obtain state-of-the-art accuracy for CIFAR-100. All the experiments on ImageNet were run by training a ResNet-101 v1 (He et al., 2016a) following a publicly available training script. The input image is normalized (divided by 255) to lie within [0, 1].
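As a point of reference, mixup (Zhang et al., 2018) trains on convex combinations of pairs of examples and their labels. A minimal sketch follows; the default `alpha` here is illustrative, not necessarily the value used in the experiments:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mix two examples with a Beta(alpha, alpha)-sampled coefficient.

    Minimal sketch of mixup (Zhang et al., 2018): inputs and (one-hot)
    labels are combined with the same mixing coefficient `lam`.
    """
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)           # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2        # convex combination of inputs
    y = lam * y1 + (1.0 - lam) * y2        # and of labels
    return x, y
```

Because the mixed label is itself a soft distribution, mixup acts as an implicit regularizer against overconfidence, which is why it is a natural calibration baseline here.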
Improving Uncertainty Estimates through the Relationship with Adversarial Robustness
Yao Qin, Xuezhi Wang, Alex Beutel, Ed H. Chi
Robustness issues arise in a variety of forms and are studied through multiple lenses in the machine learning literature. Neural networks lack adversarial robustness: they are vulnerable to adversarial examples, small perturbations of the input that cause incorrect predictions. Further, trust is undermined when models give miscalibrated or unstable uncertainty estimates, i.e., when the predicted probability is not a good indicator of how much we should trust the model and can vary greatly over multiple independent runs. In this paper, we study the connection between adversarial robustness, predictive uncertainty (calibration), and model uncertainty (stability) on multiple classification networks and datasets. We find that inputs for which the model is sensitive to small perturbations (i.e., that are easily attacked) are more likely to have poorly calibrated and unstable predictions. Based on this insight, we examine whether calibration and stability can be improved by addressing those adversarially unrobust inputs. To this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS), which integrates the correlation between adversarial robustness and uncertainty into training by adaptively softening the labels of an example conditioned on how easily it can be attacked by an adversary. We find that our method, by taking the adversarial robustness of the in-distribution data into consideration, leads to better calibration and stability even under distributional shift. In addition, AR-AdaLS can also be applied to an ensemble model to achieve the best calibration performance.
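Calibration as discussed in both abstracts is commonly quantified with the Expected Calibration Error (ECE): the bin-size-weighted average gap between confidence and accuracy. A minimal sketch under equal-width confidence binning (the bin count and binning scheme are standard but illustrative choices, not necessarily those of the paper):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: |accuracy - mean confidence| per confidence bin, weighted by bin size.

    `confidences`: predicted probability of the chosen class, in [0, 1].
    `correct`: 1 if the prediction was right, else 0.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)  # examples in this bin
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap                     # weight by bin fraction
    return ece
```

A perfectly calibrated model (e.g., 90% accuracy among predictions made with 0.9 confidence) scores an ECE of 0, while a model that is confident but often wrong scores high.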