Review for NeurIPS paper: Consistency Regularization for Certified Robustness of Smoothed Classifiers
Additional Feedback: Overall, this paper presents an efficient approach to training L2-robust models that outperforms existing approaches in the large-perturbation regime. While the experiments could be strengthened with multiple runs, they are extensive and include analyses of the different design choices. Releasing code and models would further improve the reproducibility of the work.

Additional comments:
- Why does m have to be larger than 1? How does the method perform with m = 1?
- The analysis behind Figure 1 focuses on the log-probability gap, or logit margin, of the various classifiers. However, this is not the only factor contributing to robustness for deep neural networks, which compute a highly non-linear mapping from inputs to logits. What we ultimately care about is the distance to the decision boundary in input space (the input margin), which is related to the logit margin through the Lipschitz constant of the input-to-logit mapping; see "Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks" (NeurIPS 2018) for a discussion.
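To make the m = 1 question concrete, here is a minimal NumPy sketch of a consistency term over m Gaussian-noise copies. It is an illustration, not the paper's exact objective: `model`, `sigma`, and the particular KL form are assumptions introduced here. It shows why such a regularizer degenerates when m = 1: the mean prediction then coincides with the single copy's prediction, so the KL term vanishes identically.

```python
import numpy as np

def log_softmax(z):
    """Numerically stable row-wise log-softmax."""
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def consistency_loss(model, x, sigma=0.25, m=2, rng=None):
    """Average KL(mean prediction || per-copy prediction) over m noisy copies.

    Illustrative sketch: with m = 1 the mean prediction equals the single
    copy's prediction, so every KL term is zero and the regularizer gives
    no training signal -- hence m > 1 is required for it to do anything.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    logps = []
    for _ in range(m):
        noisy = x + sigma * rng.standard_normal(x.shape)  # Gaussian-perturbed copy
        logps.append(log_softmax(model(noisy)))
    mean_p = np.exp(np.stack(logps)).mean(axis=0)  # averaged class probabilities
    kl = sum((mean_p * (np.log(mean_p + 1e-12) - lp)).sum(axis=1).mean()
             for lp in logps)
    return kl / m
```

Under this sketch, the loss is exactly zero for m = 1 and strictly positive (almost surely) for m > 1, which is one way to read the reviewer's question about the method's behavior at m = 1.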
Jan-25-2025, 23:06:52 GMT