Review for NeurIPS paper: Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond
Neural Information Processing Systems
Weaknesses: I see two main limitations in this work. First, while the methodology is impressive in its ability to generalize across LSTMs, CNNs, and Transformers, it appears limited in the scope of verification procedures it admits. In the abstract the authors mention that methods such as CROWN can be implemented within their LiRPA framework; however, CROWN also explicitly derives quadratic over- and under-approximations of pre-activation bounds, and I am curious whether the authors have a scheme for directly accommodating bounds of this form, along with higher-order convex relaxations (e.g., degree-3 polynomials for small networks), which are harder to propagate in general because they introduce more complex dependencies. Second, and further on this point, the method can really only handle linear constraints on the input. The authors do show how to bound discrete perturbations in the NLP setting, but in the end it appears that they build a linear region in the input space by taking maximally perturbed embeddings from the synonym set. Is there a plan for how to incorporate non-linear specifications?
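To make the second point concrete, the construction I understand the authors to be using can be sketched as follows: each discrete synonym substitution is relaxed into an axis-aligned box (a linear region) in embedding space by taking elementwise extremes over the allowed embeddings. This is a minimal illustrative sketch, not the authors' implementation; the embedding values and variable names are hypothetical.

```python
import numpy as np

# Hypothetical 4-dim embeddings for a word and two of its synonyms
# (illustrative values, not from any real embedding table).
word_emb = np.array([0.2, -0.1, 0.5, 0.0])
synonym_embs = np.array([
    [0.3, -0.2, 0.4, 0.1],
    [0.1, 0.0, 0.6, -0.1],
])

# Elementwise lower/upper bounds over the word and all its substitutes
# give an axis-aligned box containing every allowed substitution.
all_embs = np.vstack([word_emb[None, :], synonym_embs])
lower = all_embs.min(axis=0)
upper = all_embs.max(axis=0)

# A linear bound-propagation method (e.g. LiRPA/CROWN) can then take
# this box as its input specification; the original embedding lies inside.
assert np.all(lower <= word_emb) and np.all(word_emb <= upper)
```

A non-linear specification (say, an ellipsoid or a polytope tighter than this box) would not fit this elementwise min/max construction, which is the crux of my question above.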
Jan-21-2025, 11:45:03 GMT