A Finer Calibration Analysis for Adversarial Robustness
Pranjal Awasthi, Anqi Mao, Mehryar Mohri, Yutao Zhong
We present a more general analysis of H-calibration for adversarially robust classification. By adopting a finer definition of calibration, we can cover settings beyond the restricted hypothesis sets studied in previous work. In particular, our results hold for most common hypothesis sets used in machine learning. We both fix some previous calibration results (Bao et al., 2020) and generalize others (Awasthi et al., 2021). Moreover, our calibration results, combined with the previous study of consistency by Awasthi et al. (2021), also lead to more general H-consistency results covering common hypothesis sets.
May 6, 2021