Reviews: Beyond the Single Neuron Convex Barrier for Neural Network Certification
Originality: The authors propose a novel relaxation (to the best of my knowledge) for networks with ReLU activations that tightens previously proposed relaxations, which ignore the correlations between neurons in the network. The theoretical results are also novel (although unsurprising). However, it would be useful for the authors to better clarify the computational requirements and tightness of k-ReLU relative to DeepPoly and other similar relaxations and bound-propagation methods such as [13] and https://arxiv.org/abs/1805.12514.
Quality: The theoretical results are accurate (albeit unsurprising) in my opinion. The experimental section is missing several important details: 1) The authors say that experiments are performed on both MNIST and CIFAR-10, but Tables 2/3 only report numbers on MNIST.
Reviews: Beyond the Single Neuron Convex Barrier for Neural Network Certification
All reviewers were leaning towards acceptance. Unfortunately, in the discussion after the rebuttal it became clear that crucial parts of the paper could not be properly understood; e.g., the set S in line 147 is written as a union of polyhedra, whereas it seems it should be an intersection. Moreover, the notation introduced by the authors (a box cap symbol to mean convex hull) was not helpful either. The evaluation is not very informative, as the authors evaluate mainly on non-robust models; the gain on the only robust model (ConvBig) on MNIST is marginal, and the same is true for CIFAR-10. It is thus hard to judge how significant the impact of the improved relaxation is for the verification of robust models. On the other hand, the reviewers appreciated the idea of the k-ReLU relaxation, as it can also be used in other verification frameworks.
Beyond the Single Neuron Convex Barrier for Neural Network Certification
Singh, Gagandeep, Ganvir, Rupanshu, Püschel, Markus, Vechev, Martin
We propose a new parametric framework, called k-ReLU, for computing precise and scalable convex relaxations used to certify neural networks. The key idea is to approximate the output of multiple ReLUs in a layer jointly instead of separately. This joint relaxation captures dependencies between the inputs to different ReLUs in a layer and thus overcomes the convex barrier imposed by the single neuron triangle relaxation and its approximations. The framework is parametric in the number k of ReLUs it considers jointly and can be combined with existing verifiers in order to improve their precision. Our experimental results show that k-ReLU enables significantly more precise certification than existing state-of-the-art verifiers while maintaining scalability.
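As a toy illustration of the convex barrier the abstract refers to, the sketch below (the two-neuron example, variable names, and numbers are my own, not from the paper) compares the exact maximum of ReLU(y1) + ReLU(y2) with the bound obtained when each ReLU is replaced in isolation by its single-neuron triangle relaxation, whose upper face is ReLU(y) <= u*(y - l)/(u - l) for y in [l, u]:

```python
def relu(y):
    return max(y, 0.0)

def triangle_upper(y, l, u):
    # Upper face of the single-neuron triangle relaxation:
    # ReLU(y) <= u * (y - l) / (u - l) for y in [l, u] with l < 0 < u.
    return u * (y - l) / (u - l)

# Toy pre-activations y1 = x1 + x2 and y2 = x1 - x2 with x1, x2 in [-1, 1].
# Interval analysis gives y1, y2 in [-2, 2] but drops the coupling
# y1 + y2 = 2*x1, which is exactly what a joint relaxation can keep.
l, u = -2.0, 2.0
steps = 201
grid = [-1.0 + 2.0 * i / (steps - 1) for i in range(steps)]

exact = 0.0   # true maximum of ReLU(y1) + ReLU(y2) over the input box
single = 0.0  # bound when each ReLU is relaxed separately
for x1 in grid:
    for x2 in grid:
        y1, y2 = x1 + x2, x1 - x2
        exact = max(exact, relu(y1) + relu(y2))
        single = max(single, triangle_upper(y1, l, u) + triangle_upper(y2, l, u))

print(exact)   # 2.0 -- achievable, e.g. at x1 = 1, x2 = 0
print(single)  # 3.0 -- looser, since the two ReLUs are relaxed in isolation
```

A relaxation of the two ReLUs taken jointly over the achievable pairs (y1, y2) would recover the exact bound of 2.0 here, since the maximum of a linear objective over the convex hull of the true output set equals the true maximum; this is the kind of gap the joint treatment in the paper targets.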