




0cbc5671ae26f67871cb914d81ef8fc1-AuthorFeedback.pdf

Neural Information Processing Systems

We thank all reviewers for their encouraging and helpful comments. We will fix all typos. Results are presented in Table A. Following convex relaxation theory [32], our bound has … This makes the concretization problem (in Sec. …) … We plan to study high-order bounds on general graphs as future work. We will also discuss these extensions.


In Appendix A, we provide more discussion of LiRPA bounds, including a detailed algorithm and complexity analysis, a comparison of different LiRPA implementations, and a small numerical …

Neural Information Processing Systems

In Appendix B, we provide proofs of the theorems. In Table 6, we provide a list of oracle functions for three basic operation types: affine transformation, unary nonlinear function, and binary nonlinear function. This lower bound can be used for training ReLU networks with loss fusion. Figure 4 compares the linear bounds in LiRPA and IBP. We refer readers to those existing works for details.
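As a rough illustration of the interval bound propagation (IBP) baseline mentioned above, here is a minimal sketch of how intervals pass through an affine layer and a ReLU. This is not the paper's implementation; the weights and the input box are made-up values for illustration:

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Interval bound propagation through an affine layer y = W x + b.
    W is split into positive and negative parts so that each output bound
    uses the correct endpoint of the input interval."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def ibp_relu(l, u):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(l, 0), np.maximum(u, 0)

# Toy example: one affine layer followed by ReLU, input in [x - eps, x + eps].
x, eps = np.array([0.5, -0.2]), 0.1
W, b = np.array([[1.0, -2.0], [0.5, 1.0]]), np.array([0.1, -0.3])
l, u = ibp_affine(x - eps, x + eps, W, b)
l, u = ibp_relu(l, u)
```

LiRPA-style bounds keep a linear function of the input instead of a single interval per neuron, which is why they are typically much tighter than the boxes computed here.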


Efficient Preimage Approximation for Neural Network Certification

Björklund, Anton, Zaitsev, Mykola, Kwiatkowska, Marta

arXiv.org Artificial Intelligence

The growing reliance on artificial intelligence in safety- and security-critical applications demands effective neural network certification. A challenging real-world use case is "patch attacks", where adversarial patches or lighting conditions obscure parts of images, for example, traffic signs. A significant step towards certification against patch attacks was recently achieved using PREMAP, which uses under- and over-approximations of the preimage, the set of inputs that lead to a specified output, for the certification. While the PREMAP approach is versatile, it is currently limited to fully-connected neural networks of moderate dimensionality. In order to tackle broader real-world use cases, we present novel algorithmic extensions to PREMAP involving tighter bounds, adaptive Monte Carlo sampling, and improved branching heuristics. Firstly, we demonstrate that these efficiency improvements significantly outperform the original PREMAP and enable scaling to convolutional neural networks that were previously intractable. Secondly, we showcase the potential of preimage approximation methodology for analysing and certifying reliability and robustness on a range of use cases from computer vision and control.
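To make the notion of a preimage concrete: the preimage of an output property is the set of inputs mapped into it, and approximation quality can be sanity-checked by Monte Carlo sampling, as the abstract's "adaptive Monte Carlo sampling" suggests. The sketch below is a hypothetical toy, not PREMAP itself: the "network" is a fixed linear map and the function name is invented for illustration:

```python
import numpy as np

def mc_preimage_fraction(lo, hi, W, target, n=100_000, seed=0):
    """Monte Carlo estimate of the fraction of the input box [lo, hi]
    whose output argmax equals `target` -- the quantity that preimage
    under- and over-approximations bound from below and above."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(lo, hi, size=(n, len(lo)))
    preds = np.argmax(xs @ W.T, axis=1)
    return float(np.mean(preds == target))

# Toy linear "network" standing in for a classifier (hypothetical weights).
# For the identity map on the unit square, class 0 wins on roughly half
# the box, so the estimate should be close to 0.5.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
frac = mc_preimage_fraction(np.zeros(2), np.ones(2), W, target=0)
```

A certified under-approximation would report a fraction no larger than this estimate, and an over-approximation no smaller; sampling alone gives no guarantee, which is why symbolic preimage approximation is needed for certification.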


Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

Xu, Kaidi

Neural Information Processing Systems

The majority of LiRPA-based methods focus on simple feed-forward networks and require manual derivation and implementation when extended to other architectures. In this paper, we develop an automatic framework that enables perturbation analysis on general neural network structures by generalizing existing LiRPA algorithms such as CROWN to operate on general computational graphs.
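To give a flavor of the CROWN-style backward bounding that the framework generalizes, here is a minimal single-hidden-layer sketch: unstable ReLUs are replaced by linear relaxations, and the resulting linear function is minimized over the input box. This is a simplified illustration under assumed conventions (pre-activation intervals from one IBP pass, an adaptive lower-relaxation slope of 0 or 1), not the paper's implementation, and the example weights are made up:

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Interval propagation through y = W x + b (gives pre-activation bounds)."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b

def crown_lower(x0, eps, W1, b1, w2, b2):
    """Certified lower bound on f(x) = w2 . relu(W1 x + b1) + b2 over the
    box [x0 - eps, x0 + eps], via linear relaxation of each ReLU."""
    l, u = ibp_affine(x0 - eps, x0 + eps, W1, b1)
    active = l >= 0
    unstable = (l < 0) & (u > 0)
    denom = np.where(unstable, u - l, 1.0)
    up_s = np.where(unstable, u / denom, 0.0)   # upper relaxation: slope
    up_b = -up_s * l                            #   ...and intercept
    low_s = np.where(unstable, (u >= -l).astype(float), 0.0)  # adaptive lower slope
    # To lower-bound w2 . relu(z): neurons with positive coefficients take
    # the lower relaxation, those with negative coefficients the upper one.
    pos = w2 >= 0
    slope = np.where(active, 1.0, np.where(pos, low_s, up_s))
    icpt = np.where(active | pos, 0.0, up_b)
    A = (w2 * slope) @ W1                       # linear term w.r.t. the input
    c = (w2 * slope) @ b1 + w2 @ icpt + b2
    return A @ x0 + c - np.abs(A).sum() * eps   # minimize A x + c over the box

# Tiny illustrative network.
W1 = np.array([[1.0, 1.0], [1.0, -1.0]]); b1 = np.zeros(2)
w2 = np.array([1.0, -1.0]); b2 = 0.0
lb = crown_lower(np.zeros(2), 1.0, W1, b1, w2, b2)
```

The automatic framework in the paper performs this backward pass over arbitrary computational graphs rather than a fixed layer sequence, which is what removes the need for per-architecture manual derivations.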