Logic & Formal Reasoning


r/MachineLearning - [R] HOList: An Environment for Machine Learning of Higher-Order Theorem Proving

#artificialintelligence

Abstract: We present an environment, benchmark, and deep learning driven automated theorem prover for higher-order logic. Higher-order interactive theorem provers enable the formalization of arbitrary mathematical theories and thereby present an interesting, open-ended challenge for deep learning. We provide an open-source framework based on the HOL Light theorem prover that can be used as a reinforcement learning environment. HOL Light comes with a broad coverage of basic mathematical theorems on calculus and the formal proof of the Kepler conjecture, from which we derive a challenging benchmark for automated reasoning. We also present a deep reinforcement learning driven automated theorem prover, DeepHOL, with strong initial results on this benchmark.


Nonmonotonic Reasoning

Journal of Artificial Intelligence Research

Nonmonotonic reasoning concerns situations in which information is incomplete or uncertain, so the conclusions drawn lack the iron-clad certainty that comes with classical logical reasoning. New information may change those conclusions even when all of the original information is retained. Formal ways to capture the mechanisms involved in nonmonotonic reasoning, and to exploit them for computation as in the answer set programming paradigm, are at the heart of this research area. The six papers accepted for the special track contain significant contributions to the foundations of logic programming under the answer set semantics, to nonmonotonic extensions of description logics, to belief change in restricted settings, and to argumentation.
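The defining feature described above, that adding information can retract a conclusion, can be shown with a minimal Python sketch of the classic "birds fly by default" example (the predicate names are illustrative, not from the track's papers):

```python
def conclusions(facts):
    """Default rule: a bird flies unless contrary information is known.
    Nonmonotonic: adding a fact can remove a previously drawn conclusion."""
    concl = set(facts)
    if "bird" in facts and "penguin" not in facts:
        concl.add("flies")  # the default fires only absent contrary info
    return concl

# Classical logic is monotonic; this reasoner is not:
print(conclusions({"bird"}))             # concludes "flies"
print(conclusions({"bird", "penguin"}))  # more facts, conclusion withdrawn
```

Note that the original fact "bird" is retained in the second call; the conclusion changes anyway, which is exactly what monotonic classical entailment forbids.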


Research in Theoretical Computer Science

Communications of the ACM

Theoretical computer science has been a vibrant part of computing research in India for the past 30 years. India has always had a strong mathematical tradition. One could also argue that in the 1980s and 1990s, theory offered a unique opportunity to keep up with international research in computing despite limited access to state-of-the-art hardware. The annual international conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS) was launched in 1981, and it gave Indian researchers a natural opportunity to interact with leading academics worldwide.


Knowledge of Uncertain Worlds: Programming with Logical Constraints

arXiv.org Artificial Intelligence

Programming with logic for sophisticated applications must deal with recursion and negation, which have created significant challenges in logic, leading to many different, conflicting semantics of rules. This paper describes a unified language, DA logic, for design and analysis logic, based on the unifying founded semantics and constraint semantics, that supports the power and ease of programming with different intended semantics. The key ideas are to provide meta-constraints, to support the use of uncertain information in the form of either undefined values or possible combinations of values, and to promote the use of knowledge units that can be instantiated by any new predicates, including predicates with additional arguments.
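The "conflicting semantics" that negation creates can be seen on the smallest interesting program, the pair of rules p :- not q and q :- not p. Under the answer set (stable model) semantics this program has two models. Here is a brute-force sketch of that semantics in Python (a toy checker, not the paper's DA logic):

```python
# Program with negation:  p :- not q.   q :- not p.
# Each rule is (head, set of atoms negated in its body).
atoms = ["p", "q"]
rules = [("p", {"q"}), ("q", {"p"})]

def is_stable(model):
    # Gelfond-Lifschitz reduct: delete every rule whose negated body
    # intersects the candidate model.  The surviving rules here are bare
    # facts, so the least model of the reduct is just their heads.
    reduct_heads = {h for h, neg in rules if not (neg & model)}
    return reduct_heads == model

# Enumerate all candidate interpretations and keep the stable ones.
answer_sets = []
for k in range(2 ** len(atoms)):
    m = {a for i, a in enumerate(atoms) if k >> i & 1}
    if is_stable(m):
        answer_sets.append(m)
print(answer_sets)  # the two answer sets: [{'p'}, {'q'}]
```

Neither the empty interpretation nor {p, q} is stable, which is why different semantics (well-founded, stable, founded) disagree on what such programs mean.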


Recapping my Practical Program Synthesis presentation at AI DevWorld SnapLogic

#artificialintelligence

One person who followed up with me after my session wasn't familiar with the research area but gained an appreciation for the complexity of the problem. When we write software, even in a high-level language, we are really doing all the heavy lifting to get computers to do what we want them to do. That is, as humans we still need to work at the level of the machine. Programs require an incredible amount of detail. So it is challenging to go from high-level goals expressed in natural language, which lacks detail, to actual code.
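One common way to bridge that gap is to state the goal as input/output examples and let a search fill in the detail. A minimal enumerative-synthesis sketch in Python (a toy grammar and search, not any real tool from the talk):

```python
def synthesize(examples, rounds=2):
    """Brute-force enumerative synthesis: grow arithmetic expressions
    over the variable x until one matches every input/output example."""
    exprs = ["x", "1", "2"]
    for _ in range(rounds):
        exprs = exprs + [f"({a} + {b})" for a in exprs for b in exprs] \
                      + [f"({a} * {b})" for a in exprs for b in exprs]
        for e in exprs:
            # eval on a toy grammar of x, constants, +, * is safe here
            if all(eval(e, {"x": i}) == out for i, out in examples):
                return e
    return None

# The spec is just examples: f(1)=3, f(2)=5, f(3)=7 (i.e. 2*x + 1).
print(synthesize([(1, 3), (2, 5), (3, 7)]))  # e.g. "(x + (x + 1))"
```

The human supplies only the high-level intent (three examples); the machine-level detail, which exact expression realizes it, comes from the search.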


#ValidateAI Conference

#artificialintelligence

Marta Kwiatkowska is a co-proposer of the Validate AI Conference. She is Professor of Computing Systems and Fellow of Trinity College, University of Oxford. Prior to this she was Professor in the School of Computer Science at the University of Birmingham, Lecturer at the University of Leicester and Assistant Professor at the Jagiellonian University in Cracow, Poland. Kwiatkowska has made fundamental contributions to the theory and practice of model checking for probabilistic systems, focusing on automated techniques for verification and synthesis from quantitative specifications. More recently, she has been working on safety and robustness verification for neural networks with provable guarantees.



Computer-supported Analysis of Positive Properties, Ultrafilters and Modal Collapse in Variants of Gödel's Ontological Argument

arXiv.org Artificial Intelligence

Three variants of Kurt Gödel's ontological argument, as proposed by Dana Scott, C. Anthony Anderson and Melvin Fitting, are encoded and rigorously assessed on the computer. In contrast to Scott's version of Gödel's argument, the two variants contributed by Anderson and Fitting avoid modal collapse. Although they appear quite different on a cursory reading, they are in fact closely related, as our computer-supported formal analysis (conducted in the proof assistant system Isabelle/HOL) reveals. Key to our formal analysis is the utilization of suitably adapted notions of (modal) ultrafilters, and a careful distinction between extensions and intensions of positive properties.
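For readers unfamiliar with the term: modal collapse is the (usually unwelcome) theorem that whatever is true is necessarily true, erasing the distinction between truth and necessity. Schematically, in the modal logic of these formalizations:

```latex
% Modal collapse: truth implies necessity.
% Derivable in Scott's version of the argument;
% avoided in the Anderson and Fitting variants.
\forall \varphi.\; (\varphi \rightarrow \Box\,\varphi)
```

Avoiding this consequence is the main motivation for the Anderson and Fitting variants discussed in the abstract.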


Analysing Machine Learning Models with Imandra

#artificialintelligence

The vast majority of work within formal methods (the area of computer science that reasons about hardware and software as mathematical objects in order to prove they have certain properties) has involved analysing models that are fully specified by the user. More and more, however, critical parts of algorithmic pipelines are constituted by models that are instead learnt from data using artificial intelligence (AI). The task of analysing these kinds of models presents fresh challenges for the formal methods community and has seen exciting progress in recent years. While scalability is still an important, open research problem -- with state-of-the-art machine learning (ML) models often having millions of parameters -- in this post we give an introduction to the paradigm by analysing two simple yet powerful learnt models using Imandra, a cloud-native automated reasoning engine bringing formal methods to the masses! Verifying properties of learnt models is a difficult task, but is becoming increasingly important in order to make sure that the AI systems using such models are safe, robust, and explainable.
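To give a flavour of what "verifying a property of a learnt model" means, here is a language-agnostic sketch in Python (the model, property, and grid are all hypothetical; the post itself uses Imandra, which proves such properties symbolically over all inputs rather than checking a finite grid):

```python
# Toy "learnt" model: hand-fixed linear weights standing in for a
# trained scorer (hypothetical, not a model from the post).
W, B = [2.0, -1.0], 0.5

def approve(income, debt):
    return W[0] * income + W[1] * debt + B > 0.0

def verify_monotone_in_income(debts, incomes):
    """Check a formal property: raising income (debt held fixed) never
    flips an approval into a rejection.  Exhaustive over a finite grid;
    a real verifier would prove it for all real inputs."""
    for d in debts:
        for lo, hi in zip(incomes, incomes[1:]):
            if approve(lo, d) and not approve(hi, d):
                return ("counterexample", lo, hi, d)
    return "verified"

grid = [i * 0.5 for i in range(11)]
print(verify_monotone_in_income(grid, grid))  # "verified" for this model
```

The point of tools like Imandra is to replace the grid with a symbolic proof, so the property holds for every input, not just the sampled ones.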


Blameworthiness in Security Games

arXiv.org Artificial Intelligence

Security games are an example of a successful real-world application of game theory. The paper defines blameworthiness of the defender and the attacker in security games using the principle of alternative possibilities and provides a sound and complete logical system for reasoning about blameworthiness in such games.

Introduction: In this paper we study the properties of blameworthiness in security games (von Stackelberg 1934). Security games are used for canine airport patrol (Pita et al. 2008; Jain et al. 2010), airport passenger screening (Brown et al. 2016), protecting endangered animals and fish stocks (Fang, Stone, and Tambe 2015), U.S. Coast Guard port patrol (Sinha et al. 2018; An, Tambe, and Sinha 2016), and randomized deployment of U.S. air marshals (Sinha et al. 2018).

Figure 1: Expected Human Losses in Security Game G1

Defender \ Attacker    Terminal 1    Terminal 2
Terminal 1                     20           120
Terminal 2                    200            16

As an example, consider a security game G1 in which a defender is trying to protect two terminals in an airport from an attacker. Due to limited resources, the defender can patrol only one terminal at a given time. If the defender chooses to patrol Terminal 1 and the attacker chooses to attack Terminal 2, then the human losses at Terminal 2 are estimated at 120; see Figure 1. However, if the defender chooses to patrol Terminal 2 while the attacker still chooses to attack Terminal 2, then the expected human losses at Terminal 2 are only 16; see Figure 1. Generally speaking, the goal of the defender is to minimize human losses, while the goal of the attacker is to maximize them. However, the utility functions in security games usually take into account not only the human losses, but also the cost of protecting the target for the defender and of attacking it for the attacker.
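The leader-follower structure of G1 can be sketched in a few lines of Python using the losses from Figure 1. This is a pure-strategy simplification: deployed security games compute randomized (mixed) patrol strategies, which this sketch omits.

```python
# Expected human losses from Figure 1:
# losses[d][a] = losses when the defender patrols terminal d
# and the attacker attacks terminal a.
losses = {1: {1: 20, 2: 120}, 2: {1: 200, 2: 16}}

def attacker_best_response(d):
    # The attacker observes the patrol and maximizes expected losses.
    return max(losses[d], key=lambda a: losses[d][a])

def defender_choice():
    # The defender moves first and minimizes the losses that the
    # attacker's best response will inflict (the Stackelberg pattern).
    return min(losses, key=lambda d: losses[d][attacker_best_response(d)])

d = defender_choice()
a = attacker_best_response(d)
print(d, a, losses[d][a])  # patrol Terminal 1, attack Terminal 2, losses 120
```

Patrolling Terminal 1 caps the losses at 120, whereas patrolling Terminal 2 would invite an attack on Terminal 1 with losses of 200, so the defender patrols Terminal 1.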