Reviews: Scaling provable adversarial defenses
–Neural Information Processing Systems
Based on the rebuttal letter, I would suggest emphasizing in the final version that the provable defense is guaranteed only in a probabilistic sense. Even though I agree that the geometric estimator is not necessary at test time, what is actually certified is the training data, not the test data.

This is a nice piece of work and I enjoyed reading it. In my opinion, this work makes important contributions to norm-bounded robustness verification by proposing a scalable and more generic toolkit for robustness certification. The autodual framework is both theoretically grounded and algorithmically efficient. However, I also have two major concerns about this work: (I) the proposed nonlinear random projection leads to an estimated (i.e., probabilistic) lower bound on the minimum distortion required for misclassification, which is a soft robustness certification and does not follow the mainstream definition of a deterministic lower bound; (II) since the method yields an estimated lower bound, the paper lacks a performance comparison against existing bound estimation methods.
Oct-7-2024, 08:17:01 GMT