Marta Kwiatkowska is a co-proposer of the Validate AI Conference. She is Professor of Computing Systems and Fellow of Trinity College, University of Oxford. Prior to this she was Professor in the School of Computer Science at the University of Birmingham, Lecturer at the University of Leicester and Assistant Professor at the Jagiellonian University in Cracow, Poland. Kwiatkowska has made fundamental contributions to the theory and practice of model checking for probabilistic systems, focusing on automated techniques for verification and synthesis from quantitative specifications. More recently, she has been working on safety and robustness verification for neural networks with provable guarantees.

Jeannette M. Wing* | Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213-3890

Context: Formal Specification and Verification. Given formal specifications of a system and a desired property of that system, the verification question is: "Does this property hold of this system?" My longstanding interest in the specification and verification area has recently turned to autonomous and embedded systems. Typical autonomous systems are unmanned spacecraft and planetary exploration robots. Typical embedded systems are controllers found in aircraft and automobiles, though they are increasingly found in household appliances and consumer electronics. There are some technically challenging formal modeling problems that arise from these kinds of applications because of their operating environments: the system's environment is unknown or unpredictable.

Junges, Sebastian (RWTH Aachen University) | Jansen, Nils (University of Texas at Austin) | Katoen, Joost-Pieter (RWTH Aachen University) | Topcu, Ufuk (University of Texas at Austin)

Many robotics applications and scenarios that involve interaction with humans are safety- or performance-critical. A natural path to assessing such notions is to incorporate a cognitive model describing typical human behaviors into a larger modeling context. In this work, we set out to investigate a combination of such a model with formal verification. We present a general and flexible framework utilizing methods from probabilistic model checking and discuss current pitfalls. We start from information about typical behavior, obtained by generalizing specific scenarios via inverse reinforcement learning. We translate this information into a formal model that exhibits stochastic behavior (whenever significant data is present) or nondeterminism (if the model is underspecified or no significant data is present) and that can be analyzed. This model of a human can be combined with a robot model using standard parallel composition. The benefit is manifold: first, depending on the given model, safe or optimal strategies for the robots involved can be synthesized with respect to the human; in general, verification can determine whether such benign strategies are even possible. Furthermore, the cognitive model itself can be analyzed for possibly unnatural behaviors, thereby providing feedback to the developers of such models. We evaluate and describe our approaches by means of a well-known model for visuomotor tasks and provide a framework that can readily incorporate other models.
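The parallel composition step the abstract describes can be illustrated with a minimal sketch. The toy models and action names below are hypothetical and are not taken from the paper; the sketch only shows the standard construction: an MDP-like model of the human is composed with a robot model so that shared actions synchronize while all other actions interleave, and the resulting product can then be handed to a probabilistic model checker.

```python
def compose(m1, m2, shared):
    """Synchronous parallel composition of two MDP-like models given as
    {state: {action: {successor: probability}}} dictionaries.
    Actions in `shared` synchronize (joint product distribution);
    all other actions interleave."""
    product = {}
    for s1 in m1:
        for s2 in m2:
            trans = {}
            for a, dist in m1[s1].items():
                if a in shared:
                    if a in m2[s2]:
                        # Joint move: product of the two distributions.
                        trans[a] = {(t1, t2): p1 * p2
                                    for t1, p1 in dist.items()
                                    for t2, p2 in m2[s2][a].items()}
                else:
                    # Human moves alone; robot state unchanged.
                    trans[a] = {(t1, s2): p for t1, p in dist.items()}
            for a, dist in m2[s2].items():
                if a not in shared:
                    # Robot moves alone; human state unchanged.
                    trans[a] = {(s1, t2): p for t2, p in dist.items()}
            product[(s1, s2)] = trans
    return product

# Hypothetical two-state models: the human may "look" (stochastic outcome),
# the robot may "proceed"; both must synchronize on "handover".
human = {"idle":  {"look": {"aware": 0.8, "idle": 0.2},
                   "handover": {"done": 1.0}},
         "aware": {"handover": {"done": 1.0}},
         "done":  {}}
robot = {"wait":  {"proceed": {"ready": 1.0}},
         "ready": {"handover": {"done": 1.0}},
         "done":  {}}

prod = compose(human, robot, shared={"handover"})
```

Note that the synchronized "handover" is only enabled in product states where both components offer it, which is exactly how safety constraints on joint human-robot actions are enforced in the composed model.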

Huang, Xiaowei (University of Oxford) | Kwiatkowska, Marta Zofia (University of Oxford)

We consider the setting of stochastic multiagent systems and formulate an automated verification framework for quantifying and reasoning about agents' trust. To capture human trust, we work with a cognitive notion of trust defined as a subjective evaluation that agent A makes about agent B's ability to complete a task, which in turn may lead to a decision by A to rely on B. We propose a probabilistic rational temporal logic PRTL*, which extends the logic PCTL* with reasoning about mental attitudes (beliefs, goals and intentions), and includes novel operators that can express concepts of social trust such as competence, disposition and dependence. The logic can express, for example, that "agent A will eventually trust agent B with probability at least p that B will behave in a way that ensures the successful completion of a given task". We study the complexity of the automated verification problem and, while the general problem is undecidable, we identify restrictions on the logic and the system that result in decidable, or even tractable, subproblems.
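The probabilistic backbone of such a verifier can be sketched with the basic PCTL reachability computation that PCTL* (and hence PRTL*) builds on. The Markov chain below is a toy example of my own, not from the paper: agent B repeatedly attempts a task and may abort, and we compute the probability of eventually reaching the "done" state, i.e. the quantity a formula like P>=p [F done] is checked against. Reachability probabilities satisfy a linear fixed point, solved here by value iteration.

```python
def reach_prob(chain, targets, iters=1000):
    """chain: {state: {successor: probability}} (a discrete-time Markov chain).
    Returns, for each state, the probability of eventually reaching `targets`,
    computed by iterating the fixed-point equations
    p(s) = 1 for s in targets, p(s) = sum_t P(s,t) * p(t) otherwise."""
    p = {s: (1.0 if s in targets else 0.0) for s in chain}
    for _ in range(iters):
        for s in chain:
            if s not in targets:
                p[s] = sum(pr * p[t] for t, pr in chain[s].items())
    return p

# Hypothetical chain: B succeeds with 0.6, retries with 0.3, aborts with 0.1.
chain = {"try":   {"done": 0.6, "retry": 0.3, "abort": 0.1},
         "retry": {"try": 1.0},
         "done":  {"done": 1.0},
         "abort": {"abort": 1.0}}

probs = reach_prob(chain, {"done"})
# From "try": p = 0.6 + 0.3 * p, so p = 6/7.
```

A trust operator over B's competence can then be evaluated by comparing such reachability probabilities against the threshold p in the formula; the decidable fragments identified in the paper restrict the logic and system so that computations of this kind remain feasible.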

Cardelli, Luca | Kwiatkowska, Marta | Laurenti, Luca | Paoletti, Nicola | Patane, Andrea | Wicker, Matthew

We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction differs between the two. Such a measure can be used, for instance, to quantify the probability of the existence of adversarial examples. Building on statistical verification techniques for probabilistic models, we develop a framework that allows us to estimate probabilistic robustness for a BNN with statistical guarantees, i.e., with a priori error and confidence bounds. We provide an experimental comparison of several approximate BNN inference techniques on image classification tasks over MNIST and a two-class subset of the GTSRB dataset. Our results enable quantification of the uncertainty of BNN predictions in adversarial settings.
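The "a priori error and confidence bounds" can be made concrete with a sketch of the underlying statistical estimation scheme. The Chernoff-Hoeffding bound gives the sample size n >= ln(2/delta) / (2 * eps^2) needed so that the empirical frequency of the Bernoulli event "a network sampled from the posterior changes its prediction within the perturbation ball" is within eps of the true probability with confidence 1 - delta. The "posterior" below is a toy one-dimensional stand-in of my own (a threshold classifier with a Gaussian threshold), not the paper's BNN pipeline; it only illustrates the sampling logic.

```python
import math
import random

def required_samples(eps, delta):
    """Chernoff-Hoeffding sample size for additive error eps, confidence 1 - delta."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

def estimate_robustness(sample_weights, prediction_flips, n):
    """Monte Carlo estimate of P(a posterior sample flips its prediction
    somewhere inside the perturbation ball around the test point)."""
    flips = sum(prediction_flips(sample_weights()) for _ in range(n))
    return flips / n

random.seed(0)

x0 = 0.0        # test point
radius = 0.3    # perturbation budget

# Toy "posterior": a 1-D threshold classifier whose threshold is Gaussian.
sample_weights = lambda: random.gauss(0.2, 0.1)

def prediction_flips(thresh):
    # The prediction at x0 differs somewhere in [x0 - radius, x0 + radius]
    # exactly when the sampled threshold falls inside that interval.
    return x0 - radius < thresh <= x0 + radius

eps, delta = 0.05, 0.01
n = required_samples(eps, delta)   # 1060 samples for eps=0.05, delta=0.01
p_hat = estimate_robustness(sample_weights, prediction_flips, n)
```

The guarantee is distribution-free: it holds for any BNN posterior from which i.i.d. samples can be drawn, which is why the framework applies uniformly across the approximate inference techniques compared in the experiments.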