Logic & Formal Reasoning


Automata Modulo Theories

Communications of the ACM

Finite automata are one of the most fundamental models of computation and are taught in almost all undergraduate computer-science curricula. Although automata are typically presented as a theoretical model of computation, they have found their place in a variety of practical applications, such as natural language processing, networking, program verification, and regular-expression matching.
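
The title refers to automata whose transitions are labelled by predicates drawn from a background theory rather than by individual alphabet symbols; assuming that reading, the toy Python sketch below (all names invented, not code from the article) shows such a machine accepting a run of digits followed by a single letter.

    # Toy symbolic automaton: transitions carry predicates over characters
    # instead of concrete characters. Illustrative only.
    def make_sfa():
        """Accepts strings of zero or more digits followed by exactly one letter."""
        is_digit = lambda c: c.isdigit()
        is_alpha = lambda c: c.isalpha()
        # state -> list of (predicate, next_state)
        delta = {
            "q0": [(is_digit, "q0"), (is_alpha, "q1")],
            "q1": [],
        }
        return delta, "q0", {"q1"}

    def accepts(sfa, word):
        delta, state, accepting = sfa
        for ch in word:
            for pred, nxt in delta[state]:
                if pred(ch):
                    state = nxt
                    break
            else:
                return False              # no outgoing predicate matched this character
        return state in accepting

    sfa = make_sfa()
    print(accepts(sfa, "2024x"))          # True
    print(accepts(sfa, "20a4"))           # False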


Let's Be Honest

Communications of the ACM

We have a serious problem with how we have been teaching computability theory, a central component of the ACM/IEEE computer science curriculum. For a fair number of years, I taught a computability course.


A Satisfying Result

Communications of the ACM

A team of researchers has settled the last open case of the Keller conjecture, first proposed in 1930, which concerns the packing of squares, cubes, and their higher-dimensional analogues. The conjecture states that any tiling of identical hypercubes that fills space must contain a pair of neighbors that share an entire face. You might convince yourself it is true for two or three dimensions by toying with squares or blocks. Mathematicians later established the status of the conjecture for all dimensions except seven. The new result, which shared the best paper award at the 2020 International Joint Conference on Automated Reasoning (IJCAR 2020), fills that gap.
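
For reference, the conjecture admits a compact formal statement (a paraphrase, not taken from the article): two unit cubes share a full (n-1)-dimensional face exactly when their translation vectors differ by a standard basis vector.

    % Keller's conjecture in dimension n (paraphrase in assumed notation):
    \text{if } \bigcup_{t \in T}\big(t + [0,1)^n\big) = \mathbb{R}^n \text{ is a tiling, then }
    \exists\, t_1, t_2 \in T:\; t_1 - t_2 = e_i \text{ for some } i \in \{1,\dots,n\}.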


ossu/computer-science

#artificialintelligence

The OSSU curriculum is a complete education in computer science using online materials. It's for those who want a proper, well-rounded grounding in concepts fundamental to all computing disciplines, and for those who have the discipline, will, and (most importantly!) good habits to obtain this education largely on their own, but with support from a worldwide community of fellow learners. It is designed according to the degree requirements of undergraduate computer science majors, minus general education (non-CS) requirements, as it is assumed most of the people following this curriculum are already educated outside the field of CS. The courses themselves are among the very best in the world, often coming from Harvard, Princeton, MIT, etc., but specifically chosen to meet the project's selection criteria. When no course meets those criteria, the coursework is supplemented with a book.


CoCoMoT: Conformance Checking of Multi-Perspective Processes via SMT (Extended Version)

arXiv.org Artificial Intelligence

Conformance checking is a key process mining task for comparing the expected behavior captured in a process model and the actual behavior recorded in a log. While this problem has been extensively studied for pure control-flow processes, conformance checking with multi-perspective processes is still in its infancy. In this paper, we attack this challenging problem by considering processes that combine the data and control-flow dimensions. In particular, we adopt data Petri nets (DPNs) as the underlying reference formalism, and show how solid, well-established automated reasoning techniques can be effectively employed for computing conformance metrics and data-aware alignments. We do so by introducing the CoCoMoT (Computing Conformance Modulo Theories) framework, with a fourfold contribution. First, we show how SAT-based encodings studied in the pure control-flow setting can be lifted to our data-aware case, using SMT as the underlying formal and algorithmic framework. Second, we introduce a novel preprocessing technique based on a notion of property-preserving clustering, to speed up the computation of conformance checking outputs. Third, we provide a proof-of-concept implementation that uses a state-of-the-art SMT solver and report on preliminary experiments. Finally, we discuss how CoCoMoT directly lends itself to a number of further tasks, like multi- and anti-alignments, log analysis by clustering, and model repair.
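
As a rough illustration of the general idea (not the paper's actual DPN encoding; the guard, costs, and variable names below are invented), a toy data-aware conformance check can be phrased as an SMT optimization with the Z3 Python API: each log event is either replayed as modeled, subject to a data guard, or charged as a deviation, and the solver minimizes the total deviation cost.

    # Toy sketch only: minimize deviations of a log trace against a model whose
    # single data guard requires the variable x to strictly increase.
    from z3 import Int, Bool, If, And, Optimize, sat

    observed = [3, 2, 7]                  # values of x recorded by three log events
    opt = Optimize()
    prev = Int("x0")
    opt.add(prev == 0)                    # initial value of the data variable
    costs = []
    for i, obs in enumerate(observed):
        x, sync = Int(f"x{i+1}"), Bool(f"sync{i}")
        # synchronous move: replay the event and satisfy the guard x > prev;
        # otherwise count a deviation and leave the variable unchanged
        opt.add(If(sync, And(x == obs, x > prev), x == prev))
        costs.append(If(sync, 0, 1))
        prev = x
    total = Int("cost")
    opt.add(total == sum(costs))
    opt.minimize(total)
    assert opt.check() == sat
    print("minimal deviation cost:", opt.model()[total])   # 1 for this toy trace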


Flexible FOND Planning with Explicit Fairness Assumptions

arXiv.org Artificial Intelligence

We consider the problem of reaching a propositional goal condition in fully-observable non-deterministic (FOND) planning under a general class of fairness assumptions that are given explicitly. The fairness assumptions are of the form A/B and say that state trajectories containing infinitely many occurrences of an action a from A in a state s, but only finitely many occurrences of actions from B, must also contain infinitely many occurrences of action a in s followed by each one of its possible outcomes. The infinite trajectories that violate this condition are deemed unfair, and the solutions are policies for which all the fair trajectories reach a goal state. We show that strong and strong-cyclic FOND planning, as well as QNP planning, a planning model recently introduced for generalized planning, are all special cases of FOND planning with fairness assumptions of this form, which can also be combined. FOND+ planning, as this form of planning is called, combines the syntax of FOND planning with some of the versatility of LTL for expressing fairness constraints. A new planner is implemented by reducing FOND+ planning to answer set programs, and its performance is evaluated in comparison with FOND and QNP planners and LTL synthesis tools.
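
Stated a bit more formally (a paraphrase in assumed notation, with F(a, s) denoting the possible outcomes of applying a in s), an infinite trajectory tau = s_0, a_0, s_1, a_1, ... satisfies an A/B fairness assumption when:

    % paraphrase of the A/B fairness condition; the notation F(a,s) is an assumption here
    a \in A,\ \big|\{\, i : s_i = s,\ a_i = a \,\}\big| = \infty
    \;\wedge\; \big|\{\, i : a_i \in B \,\}\big| < \infty
    \;\Longrightarrow\;
    \forall s' \in F(a,s):\ \big|\{\, i : s_i = s,\ a_i = a,\ s_{i+1} = s' \,\}\big| = \infty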


A conditional, a fuzzy and a probabilistic interpretation of self-organising maps

arXiv.org Artificial Intelligence

In this paper we establish a link between preferential semantics for description logics and self-organising maps (SOMs), which have been proposed as possible candidates to explain the psychological mechanisms underlying category generalisation. In particular, we show that a concept-wise multipreference semantics, which takes into account preferences with respect to different concepts and has recently been proposed for defeasible description logics, can be used to provide a logical interpretation of SOMs. We also provide a logical interpretation of SOMs in terms of a fuzzy description logic, as well as a probabilistic account.
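
Independently of the logical interpretations, a self-organising map itself is easy to sketch: a grid of units whose weight vectors are pulled toward the inputs under a neighbourhood that shrinks over time. The minimal NumPy implementation below is generic, not the construction analysed in the paper.

    import numpy as np

    def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
        """Minimal self-organising map trained by iterating over the data."""
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.random((h, w, data.shape[1]))
        # grid coordinates of every unit, used for neighbourhood distances
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)
            sigma = sigma0 * (1 - t / epochs) + 1e-3
            for x in data[rng.permutation(len(data))]:
                # best-matching unit = unit whose weight vector is closest to the input
                d = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(np.argmin(d), d.shape)
                # Gaussian neighbourhood around the BMU on the grid
                g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma**2))
                weights += lr * g[..., None] * (x - weights)
        return weights

    # toy usage: map 200 random 2-D points onto an 8x8 grid of units
    som = train_som(np.random.default_rng(1).random((200, 2)))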


Consensus Maximisation Using Influences of Monotone Boolean Functions

arXiv.org Artificial Intelligence

Consensus maximisation (MaxCon), which is widely used for robust fitting in computer vision, aims to find the largest subset of data that fits the model within some tolerance level. In this paper, we outline the connection between the MaxCon problem and the abstract problem of finding the maximum upper zero of a Monotone Boolean Function (MBF) defined over the Boolean cube. Then, we link the concept of influences (in an MBF) to the concept of outliers (in MaxCon) and show that the influences of points belonging to the largest structure in the data would generally be smaller under certain conditions. Based on this observation, we present an iterative algorithm to perform consensus maximisation. Results for experiments on both synthetic and real visual data show that the MBF-based algorithm is capable of generating a near-optimal solution relatively quickly. This is particularly important where there is a large number of outliers (gross or pseudo) in the observed data.
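
A toy way to see the MBF view (hypothetical code, not the paper's algorithm): treat each subset of points as a vertex of the Boolean cube, let a monotone function flag subsets that cannot be fit within tolerance, and estimate each point's influence by how often flipping its membership flips that flag. Least squares is used below as a stand-in for a proper minimax fit, so monotonicity holds only approximately.

    import numpy as np

    def infeasible(mask, points, tol=0.1):
        """True when the selected points cannot all be fit by one line within tol.
        Adding points can only keep or make a subset infeasible, so (with an exact
        minimax fit) this is a monotone Boolean function over the Boolean cube."""
        idx = np.flatnonzero(mask)
        if len(idx) < 2:
            return False
        x, y = points[idx, 0], points[idx, 1]
        a, b = np.polyfit(x, y, 1)            # least-squares proxy for the minimax fit
        return np.max(np.abs(y - (a * x + b))) > tol

    def influence(i, points, samples=500, seed=0):
        """Fraction of random subsets whose feasibility flips when point i is toggled."""
        rng = np.random.default_rng(seed)
        flips = 0
        for _ in range(samples):
            mask = rng.integers(0, 2, len(points)).astype(bool)
            other = mask.copy()
            other[i] = ~other[i]
            flips += infeasible(mask, points) != infeasible(other, points)
        return flips / samples

    # toy usage: 20 points near a line plus 5 gross outliers;
    # the inlier's influence is typically much smaller than the outlier's
    rng = np.random.default_rng(1)
    xs = np.linspace(0, 1, 25)
    ys = 2 * xs + 0.01 * rng.standard_normal(25)
    ys[-5:] += 1 + 3 * rng.random(5)
    pts = np.column_stack([xs, ys])
    print(round(influence(0, pts), 3), round(influence(24, pts), 3))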


Training a First-Order Theorem Prover from Synthetic Data

arXiv.org Artificial Intelligence

A major challenge in applying machine learning to automated theorem proving is the scarcity of training data, which is a key ingredient in training successful deep learning models. To tackle this problem, we propose an approach that relies on training purely with synthetically generated theorems, without any human data aside from axioms. We use these theorems to train a neurally-guided saturation-based prover. Our neural prover outperforms the state-of-the-art E-prover on this synthetic data in both time and search steps, and shows significant transfer to the unseen human-written theorems from the TPTP library, where it solves 72% of first-order problems without equality. Most work applying machine learning to theorem proving takes the following approach: 1) pick a dataset of formalized mathematics, such as Mizar or Metamath, or the standard library of a major proof assistant such as HOL-Light or Coq; 2) split the dataset into train and test; 3) use imitation learning or reinforcement learning on the training set to learn a policy; and finally 4) evaluate the policy on the test set (Loos et al. (2017), Bansal et al. (2019), Yang & Deng (2019), Han et al. (2021), Polu & Sutskever (2020)). Such methods are fundamentally limited by the size of the training set, particularly when relying on deep neural networks (Kaplan et al., 2020). Unfortunately, unlike in computer vision and natural language processing, theorem proving datasets are comparatively tiny.
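
The synthetic-data idea can be illustrated at a much smaller scale (a toy propositional sketch, not the paper's first-order generator): sample random Horn theories, forward-chain from the axioms, and keep every derived fact as a (theory, theorem) training pair.

    import random

    def generate_synthetic_theorems(n_atoms=20, n_rules=30, n_theories=100, seed=0):
        """Toy illustration of training data without human proofs: random
        propositional Horn theories whose forward-chaining consequences are
        emitted as (theory, theorem) pairs."""
        rng = random.Random(seed)
        data = []
        for _ in range(n_theories):
            atoms = [f"p{i}" for i in range(n_atoms)]
            axioms = set(rng.sample(atoms, 3))                      # ground facts
            rules = [(frozenset(rng.sample(atoms, 2)), rng.choice(atoms))
                     for _ in range(n_rules)]                       # body -> head
            derived, changed = set(axioms), True
            while changed:                                          # forward chain to a fixpoint
                changed = False
                for body, head in rules:
                    if body <= derived and head not in derived:
                        derived.add(head)
                        changed = True
            for theorem in derived - axioms:
                data.append({"axioms": sorted(axioms), "rules": rules, "theorem": theorem})
        return data

    pairs = generate_synthetic_theorems()
    print(len(pairs), "synthetic (theory, theorem) training pairs")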


Measuring Mathematical Problem Solving With the MATH Dataset

arXiv.org Artificial Intelligence

Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution, which can be used to teach models to generate answer derivations and explanations. To facilitate future research and increase accuracy on MATH, we also contribute a large auxiliary pretraining dataset which helps teach models the fundamentals of mathematics. Even though we are able to increase accuracy on MATH, our results show that accuracy remains relatively low, even with enormous Transformer models. Moreover, we find that simply increasing budgets and model parameter counts will be impractical for achieving strong mathematical reasoning if scaling trends continue. While scaling Transformers automatically solves most other text-based tasks, scaling is not currently solving MATH. To have more traction on mathematical problem solving, we will likely need new algorithmic advancements from the broader research community.