Reviews: Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation
–Neural Information Processing Systems
This paper fills an important gap in the literature on the robustness of classifiers to adversarial examples by proposing the first (to the best of my knowledge) formal, instance-level guarantee on the robustness of a given classifier to adversarial examples. Unsurprisingly, the bound involves the Lipschitz constants of the class-score gradients, which the authors exploit to propose a cross-Lipschitz regularizer. Overall the paper is well written, and the material is well presented. The proof of Theorem 2.1 is correct; I did not check the proofs of Propositions 2.1 and 4.1. This is an interesting work.
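For intuition, the cross-Lipschitz idea mentioned above can be sketched in the special case of a linear classifier, where the gradient of each class score f_l(x) = W[l] @ x is simply the weight row W[l]; the penalty then averages the squared norms of all pairwise gradient differences. This is a minimal illustrative sketch, not the paper's exact formulation: the function name and the 1/K² normalization are assumptions.

```python
import numpy as np

def cross_lipschitz_penalty(W):
    """Sketch of a cross-Lipschitz regularizer for a linear classifier
    f_l(x) = W[l] @ x (hypothetical simplification; for nonlinear models
    the rows of W would be replaced by per-example input gradients).

    Returns the average squared norm of pairwise gradient differences.
    """
    K = W.shape[0]  # number of classes
    total = 0.0
    for l in range(K):
        for m in range(K):
            diff = W[l] - W[m]          # gradient difference between classes l, m
            total += float(diff @ diff)  # squared Euclidean norm
    return total / (K * K)

# Usage: two orthogonal unit-norm class weights give a nonzero penalty,
# identical weights give zero (the scores never cross, so no penalty).
W = np.array([[1.0, 0.0], [0.0, 1.0]])
print(cross_lipschitz_penalty(W))                       # → 1.0
print(cross_lipschitz_penalty(np.ones((3, 2))))         # → 0.0
```

Driving this penalty down flattens the differences between class scores around the data, which is the mechanism the bound ties to certified robustness.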
Oct-8-2024, 11:29:34 GMT