
Collaborating Authors

 Lomuscio, Alessio


A Scalable Approach to Probabilistic Neuro-Symbolic Verification

arXiv.org Artificial Intelligence

Neuro-Symbolic Artificial Intelligence (NeSy AI) has emerged as a promising direction for integrating neural learning with symbolic reasoning. In the probabilistic variant of such systems, a neural network first extracts a set of symbols from sub-symbolic input, which are then used by a symbolic component to reason in a probabilistic manner towards answering a query. In this work, we address the problem of formally verifying the robustness of such NeSy probabilistic reasoning systems, therefore paving the way for their safe deployment in critical domains. We analyze the complexity of solving this problem exactly, and show that it is $\mathrm{NP}^{\# \mathrm{P}}$-hard. To overcome this issue, we propose the first approach for approximate, relaxation-based verification of probabilistic NeSy systems. We demonstrate experimentally that the proposed method scales exponentially better than solver-based solutions and apply our technique to a real-world autonomous driving dataset, where we verify a safety property under large input dimensionalities and network sizes.
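To make the pipeline being verified concrete, the following is a minimal sketch, under illustrative assumptions, of a probabilistic NeSy system of the kind described above: a neural network maps a sub-symbolic input to marginal probabilities over Boolean symbols, and a symbolic component combines them to answer a query. The symbol names, the toy query, and the independence assumption are illustrative, not the paper's.

# Minimal sketch (not the paper's implementation) of a probabilistic
# NeSy pipeline: a network produces symbol probabilities, and a symbolic
# component aggregates them into a query probability.
import numpy as np

def neural_symbol_extractor(x, weights):
    # Stand-in for a trained network: produces a probability for each of
    # two Boolean symbols, e.g. "obstacle_ahead" and "light_is_red".
    logits = weights @ x
    return 1.0 / (1.0 + np.exp(-logits))   # independent Bernoulli marginals

def symbolic_query(p_symbols):
    # Toy probabilistic reasoning step: P(brake) = P(obstacle OR red light)
    # under an independence assumption, by inclusion-exclusion.
    p_obstacle, p_red = p_symbols
    return p_obstacle + p_red - p_obstacle * p_red

x = np.array([0.2, -1.3, 0.7])
w = np.random.randn(2, 3)
print(symbolic_query(neural_symbol_extractor(x, w)))

Verifying robustness then amounts to bounding how much this query probability can change when the input is perturbed within a given region, which is the quantity the relaxation-based method bounds approximately.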


Verification of Neural Networks against Convolutional Perturbations via Parameterised Kernels

arXiv.org Artificial Intelligence

We develop a method for the efficient verification of neural networks against convolutional perturbations such as blurring or sharpening. To define input perturbations we use well-known camera shake, box blur and sharpen kernels. We demonstrate that these kernels can be linearly parameterised in a way that allows for a variation of the perturbation strength while preserving desired kernel properties. To facilitate their use in neural network verification, we develop an efficient way of convolving a given input with these parameterised kernels. The result of this convolution can be used to encode the perturbation in a verification setting by prepending a linear layer to a given network. This leads to tight bounds and high effectiveness in the resulting verification step. We add further precision by employing input splitting as a branch and bound strategy. We demonstrate that we are able to verify robustness on a number of standard benchmarks where the baseline is unable to provide any safety certificates. To the best of our knowledge, this is the first solution for verifying robustness against specific convolutional perturbations such as camera shake.
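As an illustration of the linear parameterisation idea, the sketch below interpolates between the identity kernel and a 3x3 box-blur kernel with a strength parameter t in [0, 1]; the kernel remains non-negative and sums to one, and the convolution is linear in the input, so it could in principle be prepended to a network as a fixed linear layer. This is a minimal sketch with assumed function names, not the paper's implementation.

# Minimal sketch of a linearly parameterised blur kernel (single-channel image).
import numpy as np
from scipy.signal import convolve2d

def parameterised_box_blur(t):
    # Linear in t; stays a valid averaging filter (non-negative, sums to 1).
    identity = np.zeros((3, 3))
    identity[1, 1] = 1.0
    box_blur = np.full((3, 3), 1.0 / 9.0)
    return (1.0 - t) * identity + t * box_blur

def perturb(image, t):
    return convolve2d(image, parameterised_box_blur(t), mode="same")

img = np.random.rand(28, 28)
blurred = perturb(img, 0.5)   # halfway between original and fully blurred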


Tight Verification of Probabilistic Robustness in Bayesian Neural Networks

arXiv.org Artificial Intelligence

We introduce two algorithms for computing tight guarantees on the probabilistic robustness of Bayesian Neural Networks (BNNs). Computing robustness guarantees for BNNs is a significantly more challenging task than verifying the robustness of standard Neural Networks (NNs) because it requires searching the parameters' space for safe weights. Moreover, tight and complete approaches for the verification of standard NNs, such as those based on Mixed-Integer Linear Programming (MILP), cannot be directly used for the verification of BNNs because of the polynomial terms resulting from the consecutive multiplication of variables encoding the weights. Our algorithms efficiently and effectively search the parameters' space for safe weights by using iterative expansion and the network's gradient and can be used with any verification algorithm of choice for BNNs. In addition to proving that our algorithms compute tighter bounds than the SoA, we also evaluate our algorithms against the SoA on standard benchmarks, such as MNIST and CIFAR10, showing that our algorithms compute bounds up to 40% tighter than the SoA.
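The following is a minimal sketch, under simplifying assumptions, of the general idea of iteratively expanding a box of safe weights and measuring its posterior mass; the paper additionally exploits the network's gradient to guide the expansion, which this sketch omits. is_region_safe is a placeholder for any verifier of choice that certifies every weight in the box, and the factorised Gaussian posterior is an illustrative assumption.

import numpy as np
from scipy.stats import norm

def grow_safe_weight_box(w_mean, is_region_safe, r0=1e-3, factor=1.5, max_iter=20):
    # Expand a symmetric box [w_mean - r, w_mean + r] while a verifier
    # (placeholder: is_region_safe) certifies every weight in it as safe.
    # The paper also uses gradient information to guide growth; this sketch
    # uses plain uniform expansion for brevity.
    r, best = r0, None
    for _ in range(max_iter):
        if is_region_safe(w_mean - r, w_mean + r):
            best = r          # box still certified safe: keep expanding
            r *= factor
        else:
            break             # expansion failed: return last safe radius
    return best

def safe_posterior_mass(w_std, r):
    # Mass of the box under a factorised Gaussian posterior centred at w_mean:
    # product of per-weight probabilities P(|w_i - mean_i| <= r). This mass
    # lower-bounds the probabilistic robustness of the BNN.
    return float(np.prod(norm.cdf(r / w_std) - norm.cdf(-r / w_std)))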


Expressive Losses for Verified Robustness via Convex Combinations

arXiv.org Artificial Intelligence

In order to train networks for verified adversarial robustness, previous work typically over-approximates the worst-case loss over (subsets of) perturbation regions or induces verifiability on top of adversarial training. The key to state-of-the-art performance lies in the expressivity of the employed loss function, which should be able to match the tightness of the verifiers to be employed post-training. We formalize a definition of expressivity, and show that it can be satisfied via simple convex combinations between adversarial attacks and IBP bounds. We then show that the resulting algorithms, named CC-IBP and MTL-IBP, yield state-of-the-art results across a variety of settings in spite of their conceptual simplicity. In particular, for $\ell_\infty$ perturbations of radius $\frac{1}{255}$ on TinyImageNet and downscaled ImageNet, MTL-IBP improves on the best standard and verified accuracies from the literature by $1.98\%$ to $3.92\%$ points while only relying on single-step adversarial attacks.
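As a hedged illustration of the convex-combination idea, the sketch below blends an adversarial loss with an IBP-bounded loss using a coefficient alpha, in the spirit of the MTL-IBP objective described above; attack_fn and ibp_worst_case_logits are placeholders for a single-step attack and an interval bound propagation routine, not the paper's implementation.

# Minimal sketch of a convex combination of adversarial and IBP losses.
import torch
import torch.nn.functional as F

def combined_loss(model, x, y, eps, alpha, attack_fn, ibp_worst_case_logits):
    # Adversarial branch: cross-entropy at an attacked input.
    x_adv = attack_fn(model, x, y, eps)
    loss_adv = F.cross_entropy(model(x_adv), y)
    # Verified branch: cross-entropy on IBP worst-case logits over the
    # l-infinity ball of radius eps around x.
    loss_ver = F.cross_entropy(ibp_worst_case_logits(model, x, y, eps), y)
    # Convex combination controlled by alpha in [0, 1]: alpha = 0 recovers
    # adversarial training, alpha = 1 recovers pure IBP training.
    return (1.0 - alpha) * loss_adv + alpha * loss_ver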


Approximating Perfect Recall when Model Checking Strategic Abilities: Theory and Applications

Journal of Artificial Intelligence Research

The model checking problem for multi-agent systems against specifications in the alternating-time temporal logic ATL, hence ATL∗, under perfect recall and imperfect information is known to be undecidable. To tackle this problem, in this paper we investigate a notion of bounded recall under incomplete information. We present a novel three-valued semantics for ATL∗ in this setting and analyse the corresponding model checking problem. We show that the three-valued semantics here introduced is an approximation of the classic two-valued semantics, then give a sound, albeit partial, algorithm for model checking two-valued perfect recall via its approximation as three-valued bounded recall. Finally, we extend MCMAS, an open-source model checker for ATL and other agent specifications, to incorporate bounded recall; we illustrate its use and present experimental results.
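To illustrate how a three-valued verdict can soundly approximate a two-valued one, the sketch below uses a standard Kleene-style interpretation in which subformulas evaluate to True, False, or undefined; a definite answer under bounded recall is taken as the answer for perfect recall, while an undefined result means the partial algorithm returns no verdict. This is an illustrative reading, not the exact semantics defined in the paper.

# Minimal sketch of three-valued evaluation with an "undefined" value.
UNDEF = None

def and3(a, b):
    # Strong Kleene conjunction: False dominates, True requires both.
    if a is False or b is False:
        return False
    if a is True and b is True:
        return True
    return UNDEF

def not3(a):
    return UNDEF if a is UNDEF else (not a)

def verdict(three_valued_result):
    # A definite bounded-recall answer is sound for perfect recall;
    # UNDEF means the approximation is inconclusive at this bound.
    if three_valued_result is UNDEF:
        return "undetermined: increase the recall bound"
    return str(three_valued_result)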


Formal Verification of CNN-based Perception Systems

arXiv.org Machine Learning

We address the problem of verifying neural-based perception systems implemented by convolutional neural networks. We define a notion of local robustness based on affine and photometric transformations. We show that this notion cannot be captured by previously employed notions of robustness. The method proposed is based on reachability analysis for feed-forward neural networks and relies on MILP encodings of both the CNNs and the transformations in question. We present an implementation and discuss the experimental results obtained for a CNN trained on the MNIST data set.
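To see why such transformations fall outside norm-ball robustness, the sketch below applies a photometric (brightness/contrast) perturbation in which every pixel undergoes the same affine change; a uniform brightness shift of 0.1 already corresponds to an $\ell_\infty$ distance of roughly 0.1, far larger than the radii typically certified. The parameter values and function names are illustrative assumptions, not the paper's MILP encoding.

# Minimal sketch of a photometric (brightness/contrast) perturbation.
import numpy as np

def photometric(image, contrast_a, brightness_b):
    # Every pixel undergoes the same affine change x -> a*x + b,
    # clipped back to the valid intensity range.
    return np.clip(contrast_a * image + brightness_b, 0.0, 1.0)

img = np.random.rand(28, 28)
bright = photometric(img, contrast_a=1.0, brightness_b=0.1)
print(np.max(np.abs(bright - img)))   # shift of up to 0.1 per pixel (modulo clipping)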


Parameterised Verification of Infinite State Multi-Agent Systems via Predicate Abstraction

AAAI Conferences

We define a class of parameterised infinite state multi-agent systems (MAS) that is unbounded in both the number of agents composing the system and the domain of the variables encoding the agents. We analyse their verification problem by combining and extending existing techniques in parameterised model checking with predicate abstraction procedures. The resulting methodology addresses both forms of unboundedness and provides a technique for verifying unbounded MAS defined on infinite-state variables. We illustrate the effectiveness of the technique on an infinite-domain variant of an unbounded version of the train-gate-controller.
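As a hedged illustration of the predicate abstraction half of the methodology, the sketch below collapses a concrete state containing an unbounded integer variable onto a finite abstract state that records only the truth values of a fixed set of predicates; the predicates and the train-gate flavour are assumptions made for illustration, not the encoding used in the paper.

# Minimal sketch of predicate abstraction over an infinite-domain variable.
def abstract(concrete_time, gate_open):
    # The concrete state has an unbounded integer (a requested crossing
    # time); the abstract state keeps only which predicates hold.
    predicates = {
        "time_positive": concrete_time > 0,
        "time_exceeds_limit": concrete_time > 60,
        "gate_open": gate_open,
    }
    return frozenset(name for name, holds in predicates.items() if holds)

# Infinitely many concrete states collapse onto finitely many abstract ones.
print(abstract(5, True) == abstract(42, True))    # True: same abstract state
print(abstract(5, True) == abstract(120, True))   # False: limit predicate differs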


Verifying Emergent Properties of Swarms

AAAI Conferences

We investigate the general problem of establishing whether a swarm satisfies an emergent property. We put forward a formal model for swarms that accounts for their nature of unbounded collections of agents following simple local protocols. We formally define the decision problem of determining whether a swarm satisfies an emergent property. We introduce a sound and complete procedure for solving the problem. We illustrate the technique by applying it to the Beta aggregation algorithm.


Finite Abstractions for the Verification of Epistemic Properties in Open Multi-Agent Systems

AAAI Conferences

We develop a methodology to model and verify open multi-agent systems (OMAS), where agents may join in or leave at run time. Further, we specify properties of interest on OMAS in a variant of first-order temporal-epistemic logic, whose characterising features include epistemic modalities indexed to individual terms, interpreted on agents appearing at a given state. This formalism notably allows group knowledge to be expressed dynamically. We study the verification problem of these systems and show that, under specific conditions, finite bisimilar abstractions can be obtained.


A Counter Abstraction Technique for the Verification of Robot Swarms

AAAI Conferences

We study parameterised verification of robot swarms against temporal-epistemic specifications. We relax some of the significant restrictions assumed in the literature and present a counter abstraction approach that enables us to verify a potentially much smaller abstract model when checking a formula on a swarm of any size. We present an implementation and discuss experimental results obtained for the alpha algorithm for robot swarms.
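As a hedged illustration of counter abstraction, the sketch below represents the global state of a swarm by the number of robots in each local protocol state rather than by robot identities, so the abstract state space does not grow with the swarm size; the local state names and the transition are illustrative assumptions, not the alpha algorithm itself.

# Minimal sketch of counter abstraction for an identity-free swarm model.
from collections import Counter

def abstract_state(concrete_states):
    # concrete_states: one local state per robot, for a swarm of any size.
    return Counter(concrete_states)

def abstract_step(counters, src, dst, k=1):
    # k robots move from local state src to dst; identities are irrelevant,
    # only the counts change.
    new = Counter(counters)
    new[src] -= k
    new[dst] += k
    return new

s = abstract_state(["forward"] * 8 + ["turning"] * 2)
print(abstract_step(s, "forward", "coherence_lost"))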