Reasoning about Uncertainties in Discrete-Time Dynamical Systems using Polynomial Forms

Neural Information Processing Systems

In this paper, we propose polynomial forms to represent distributions of state variables over time for discrete-time stochastic dynamical systems. This problem arises in a variety of applications in areas ranging from biology to robotics. Our approach allows us to rigorously represent the probability distribution of state variables over time, and to provide guaranteed bounds on the expectations, moments, and probabilities of tail events involving the state variables. First, we recall ideas from interval arithmetic and use them to rigorously represent the state variables at time t as a function of the initial state variables and noise symbols that model the random exogenous inputs encountered before time t. Next, we show how concentration-of-measure inequalities can be employed to prove rigorous bounds on the tail probabilities of these state variables. Finally, we present applications showing how our approach can, in some situations, establish mathematically guaranteed bounds that are of a different nature from those obtained through simulations with pseudo-random numbers.
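To make the interval-arithmetic ingredient concrete, here is a minimal sketch of propagating rigorous bounds on a state variable through a simple linear stochastic recurrence x_{t+1} = a * x_t + w_t, where w_t is a bounded noise symbol. The recurrence, coefficient, and bounds are illustrative assumptions, not taken from the paper (which tracks noise symbols symbolically in polynomial forms rather than collapsing them to intervals at each step).

```python
# Illustrative interval arithmetic: every reachable state of the
# recurrence x_{t+1} = a * x_t + w_t is provably contained in the
# interval we compute, for any noise sequence within the stated bounds.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of intervals: add lower bounds and upper bounds.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, a):
        # Scalar multiplication flips the endpoints when a is negative.
        if a >= 0:
            return Interval(a * self.lo, a * self.hi)
        return Interval(a * self.hi, a * self.lo)

x = Interval(-0.1, 0.1)        # uncertain initial state (assumed bounds)
noise = Interval(-0.05, 0.05)  # bounded noise symbol at each step
a = 0.9                        # assumed stable dynamics coefficient

for _ in range(10):
    x = x.scale(a) + noise     # sound over-approximation after each step
```

A polynomial form refines this by keeping x as a symbolic polynomial in the initial-state and noise symbols, which avoids the over-approximation that pure interval propagation accumulates.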


Neural Network Verification with PyRAT

Lemesle, Augustin, Lehmann, Julien, Gall, Tristan Le

arXiv.org Artificial Intelligence

There is no doubt that Artificial Intelligence (AI) has taken over an important part of our lives, and its recent popularisation through large language models has anchored this change even more firmly in the current landscape. The use of AI is becoming ever more widespread, reaching new sectors such as health, aeronautics, and energy, where it can bring tremendous benefits but could also cause environmental, economic, or human damage in critical or high-risk systems. In fact, numerous issues are still being uncovered around the use of AI, ranging from its lack of robustness in the face of adversarial attacks [1, 2] to the confidentiality and privacy of the data used, the fairness of the decisions, etc. Faced with these threats and the exponential growth of AI use, regulations have started to emerge, such as the European AI Act. Without waiting for regulation, tools have already been developed to mitigate these threats by providing various guarantees, from the data collection phase through training and validation. Our interest here lies in this last phase: the formal validation of an AI system, and more specifically of a neural network, to allow its use in a high-risk system.


Fast and Effective Robustness Certification

Singh, Gagandeep, Gehr, Timon, Mirman, Matthew, Püschel, Markus, Vechev, Martin

Neural Information Processing Systems

We present a new method and system, called DeepZ, for certifying neural network robustness based on abstract interpretation. Compared to state-of-the-art automated verifiers for neural networks, DeepZ: (i) handles ReLU, Tanh and Sigmoid activation functions, (ii) supports feedforward and convolutional architectures, (iii) is significantly more scalable and precise, and (iv) is sound with respect to floating point arithmetic. These benefits are due to carefully designed approximations tailored to the setting of neural networks. As an example, DeepZ achieves a verification accuracy of 97% on a large network with 88,500 hidden units under $L_{\infty}$ attack with $\epsilon = 0.1$ with an average runtime of 133 seconds.
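To illustrate the abstract-interpretation idea behind such certifiers, the sketch below propagates interval bounds through one affine layer followed by ReLU. Note this uses the plain interval domain for simplicity; DeepZ itself uses the more precise zonotope domain. The weights, bias, input center, and epsilon are made-up values for illustration.

```python
# Interval bound propagation, the simplest abstract domain for
# neural network certification: compute sound elementwise output
# bounds valid for every input in an L_inf ball around the center.

def affine_bounds(W, b, lo, hi):
    """Sound bounds for W @ x + b given elementwise x in [lo, hi]."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # A positive weight attains its extremes at the matching input
        # extreme; a negative weight at the opposite one.
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Hypothetical 2-in / 2-out layer and perturbation budget.
W = [[1.0, -2.0], [0.5, 1.5]]
b = [0.1, -0.2]
center, eps = [0.5, 0.5], 0.1
lo = [c - eps for c in center]
hi = [c + eps for c in center]
lo, hi = relu_bounds(*affine_bounds(W, b, lo, hi))
```

Zonotopes improve on this by tracking linear correlations between neurons via shared noise symbols, which is what makes DeepZ markedly more precise than plain interval propagation at comparable cost.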

