Logic & Formal Reasoning


How to Create AI That Can Safely Navigate Our World - An Interview With Andre Platzer - Future of Life Institute


Over the last few decades, the unprecedented pace of technological progress has allowed us to upgrade and modernize much of our infrastructure and solve many long-standing logistical problems. For example, Babylon Health's AI-driven smartphone app is helping assess and prioritize 1.2 million patients in North London, electronic transfers allow us to instantly send money nearly anywhere in the world, and, over the last 20 years, GPS has revolutionized how we navigate, how we track and ship goods, and how we regulate traffic. However, exponential growth comes with its own set of hurdles that must be navigated. The foremost issue is that it's exceedingly difficult to predict how various technologies will evolve. As a result, it becomes challenging to plan for the future and ensure that the necessary safety features are in place.


The Church-Turing Thesis

Communications of the ACM

Chapter in New Computational Paradigms: Changing Conceptions of What Is Computable, S.B. Cooper, B. Lowe, and A. Sorbi, Eds.


Proceedings of the Eleventh Workshop on Answer Set Programming and Other Computing Paradigms 2018

arXiv.org Artificial Intelligence

This is the Proceedings of the Eleventh Workshop on Answer Set Programming and Other Computing Paradigms (ASPOCP) 2018, which was held in Oxford, UK, on July 18th, 2018.


Toward the Engineering of Virtuous Machines

arXiv.org Artificial Intelligence

While various traditions under the 'virtue ethics' umbrella have been studied extensively and advocated by ethicists, it has not been clear that there exists a version of virtue ethics rigorous enough to be a target for machine ethics (which we take to include the engineering of an ethical sensibility in a machine or robot itself, not only the study of ethics in the humans who might create artificial agents). We begin to address this by presenting an embryonic formalization of a key part of any virtue-ethics theory: namely, the learning of virtue by a focus on exemplars of moral virtue. Our work is based in part on a computational formal logic previously used to formally model other ethical theories and principles therein, and to implement these models in artificial agents.


Scaling up Probabilistic Inference in Linear and Non-Linear Hybrid Domains by Leveraging Knowledge Compilation

arXiv.org Artificial Intelligence

Weighted model integration (WMI) extends weighted model counting (WMC) in providing a computational abstraction for probabilistic inference in mixed discrete-continuous domains. WMC has emerged as an assembly language for state-of-the-art reasoning in Bayesian networks, factor graphs, probabilistic programs and probabilistic databases. In this regard, WMI shows immense promise to be much more widely applicable, especially as many real-world applications involve attribute and feature spaces that are continuous and mixed. Nonetheless, state-of-the-art tools for WMI are limited and less mature than their propositional counterparts. In this work, we propose a new implementation regime that leverages propositional knowledge compilation for scaling up inference. In particular, we use sentential decision diagrams, a tractable representation of Boolean functions, as the underlying model counting and model enumeration scheme. Our regime performs competitively with state-of-the-art WMI systems and is also shown, for the first time, to handle non-linear constraints over non-linear potentials.
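
As a concrete illustration of the discrete core that WMI extends, the sketch below computes a weighted model count for a tiny CNF formula by exhaustive enumeration. This is a minimal, hypothetical Python example; it is not the SDD-based compilation regime proposed in the paper.

    # Weighted model counting (WMC) by exhaustive enumeration over a tiny CNF.
    # Illustrative only: real WMC/WMI systems compile the formula (e.g. into SDDs)
    # rather than enumerating all assignments.
    from itertools import product

    def wmc(clauses, weights, variables):
        """Sum the weights of all satisfying assignments of a CNF formula.
        clauses: list of clauses, each a list of signed ints ([1, -2] means x1 or not x2).
        weights: dict mapping each signed literal to its weight.
        variables: list of positive variable indices."""
        total = 0.0
        for values in product([False, True], repeat=len(variables)):
            assignment = dict(zip(variables, values))
            # A clause is satisfied if at least one of its literals is true.
            if all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses):
                weight = 1.0
                for v in variables:
                    weight *= weights[v] if assignment[v] else weights[-v]
                total += weight
        return total

    # Example: (x1 or x2), with literal weights acting like independent probabilities.
    clauses = [[1, 2]]
    weights = {1: 0.7, -1: 0.3, 2: 0.4, -2: 0.6}
    print(wmc(clauses, weights, [1, 2]))  # 0.82 = 1 - 0.3 * 0.6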


Counting Complexity for Reasoning in Abstract Argumentation

arXiv.org Artificial Intelligence

In this paper, we consider counting and projected model counting of extensions in abstract argumentation for various semantics. When asking for projected counts, we are interested in counting the number of extensions of a given argumentation framework, where multiple extensions that are identical when restricted to the projected arguments count as only one projected extension. We establish classical complexity results and parameterized complexity results when the problems are parameterized by the treewidth of the undirected argumentation graph. To obtain upper bounds for counting projected extensions, we introduce novel algorithms that exploit the small treewidth of the undirected argumentation graph of the input instance by dynamic programming (DP). Our algorithms run in time double or triple exponential in the treewidth, depending on the considered semantics. Finally, we take the exponential time hypothesis (ETH) into account and establish lower bounds for bounded-treewidth algorithms for counting extensions and projected extensions.
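
To make the two counting problems concrete, the following sketch enumerates the stable extensions of a tiny argumentation framework by brute force and then counts the distinct projections onto a chosen argument set. It is a hypothetical, illustrative example; the paper's algorithms instead use dynamic programming over tree decompositions.

    # Brute-force counting of stable extensions and projected stable extensions.
    # A stable extension is a conflict-free set that attacks every argument outside it.
    from itertools import combinations

    def stable_extensions(arguments, attacks):
        attacks = set(attacks)
        extensions = []
        for r in range(len(arguments) + 1):
            for subset in combinations(arguments, r):
                s = set(subset)
                conflict_free = not any((a, b) in attacks for a in s for b in s)
                attacks_rest = all(any((a, b) in attacks for a in s)
                                   for b in arguments if b not in s)
                if conflict_free and attacks_rest:
                    extensions.append(s)
        return extensions

    args = ['a', 'b', 'c']
    atts = [('a', 'b'), ('b', 'a'), ('b', 'c')]   # a and b attack each other, b attacks c
    exts = stable_extensions(args, atts)          # {a, c} and {b}
    print(len(exts))                              # 2 stable extensions
    projected = {frozenset(e & {'a'}) for e in exts}
    print(len(projected))                         # 2 projected extensions ({a} and {})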


Partial Evaluation of Logic Programs in Vector Spaces

arXiv.org Artificial Intelligence

In this paper, we introduce methods of encoding propositional logic programs in vector spaces. Interpretations are represented by vectors and programs are represented by matrices. The least model of a definite program is computed by multiplying an interpretation vector by a program matrix. To optimize computation in vector spaces, we provide a method of partial evaluation of programs using linear algebra. Partial evaluation is done by unfolding rules in a program, and it is realized in a vector space by multiplying program matrices. We perform experiments using randomly generated programs and show that partial evaluation has the potential to realize efficient computation for large-scale programs.
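
The sketch below illustrates the flavor of such an encoding on a toy definite program with at most one rule per atom: interpretations are 0/1 vectors, each rule h <- b1,...,bk contributes entries 1/k in the row of h, and an atom becomes true once the weighted sum of its body atoms reaches 1. The details (for instance, handling several rules with the same head, and the partial evaluation step itself) are simplified relative to the paper.

    # Toy matrix encoding of a definite logic program (assumes one rule per head).
    # The least model is reached by iterating the immediate-consequence operator
    # as a thresholded matrix-vector product.
    import numpy as np

    atoms = ['true', 'p', 'q', 'r']              # 'true' is a dummy atom for facts
    idx = {a: i for i, a in enumerate(atoms)}

    # Program: p.   q <- p.   r <- p, q.
    rules = [('p', ['true']), ('q', ['p']), ('r', ['p', 'q'])]

    M = np.zeros((len(atoms), len(atoms)))
    M[idx['true'], idx['true']] = 1.0            # keep the dummy atom true
    for head, body in rules:
        for b in body:
            M[idx[head], idx[b]] = 1.0 / len(body)

    v = np.zeros(len(atoms))
    v[idx['true']] = 1.0                         # start from the empty interpretation
    while True:
        new_v = (M @ v >= 1.0 - 1e-9).astype(float)   # one thresholded application
        if np.array_equal(new_v, v):
            break
        v = new_v

    print([a for a in atoms if v[idx[a]] == 1.0 and a != 'true'])  # ['p', 'q', 'r']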


Stepping Stones to Inductive Synthesis of Low-Level Looping Programs

arXiv.org Artificial Intelligence

Inductive program synthesis, from input/output examples, can provide an opportunity to automatically create programs from scratch without presupposing the algorithmic form of the solution. For induction of general programs with loops (as opposed to loop-free programs, or synthesis for domain-specific languages), the state of the art is at the level of introductory programming assignments. Most problems that require algorithmic subtlety, such as fast sorting, have remained out of reach without the benefit of significant problem-specific background knowledge. A key challenge is to identify cues that are available to guide search towards correct looping programs. We present MAKESPEARE, a simple delayed-acceptance hillclimbing method that synthesizes low-level looping programs from input/output examples. During search, delayed acceptance bypasses small gains to identify significantly-improved stepping stone programs that tend to generalize and enable further progress. The method performs well on a set of established benchmarks, and succeeds on the previously unsolved "Collatz Numbers" program synthesis problem. Additional benchmarks include the problem of rapidly sorting integer arrays, in which we observe the emergence of comb sort (a Shell sort variant that is empirically fast). MAKESPEARE has also synthesized a record-setting program on one of the puzzles from the TIS-100 assembly language programming game.
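
For readers unfamiliar with the acceptance idea, the toy sketch below runs late-acceptance hill climbing, a standard relative of delayed acceptance, on a bit-string objective: a move is compared against the solution quality from several steps earlier rather than only the current one, which lets the search cross small dips. This is illustrative only and is not MAKESPEARE's program representation, mutation operators, or acceptance schedule.

    # Late-acceptance hill climbing on a toy maximization problem (count of ones).
    import random

    def late_acceptance_hillclimb(n_bits=30, history_len=20, iterations=5000, seed=0):
        rng = random.Random(seed)
        current = [rng.randint(0, 1) for _ in range(n_bits)]
        fitness = lambda bits: sum(bits)          # toy objective: number of ones
        f_cur = fitness(current)
        history = [f_cur] * history_len           # fitness values from past iterations
        for i in range(iterations):
            candidate = current[:]
            j = rng.randrange(n_bits)
            candidate[j] ^= 1                     # flip one bit
            f_cand = fitness(candidate)
            # Accept if no worse than the fitness recorded history_len steps ago,
            # or no worse than the current fitness.
            if f_cand >= history[i % history_len] or f_cand >= f_cur:
                current, f_cur = candidate, f_cand
            history[i % history_len] = f_cur
        return current, f_cur

    best, score = late_acceptance_hillclimb()
    print(score)   # typically reaches the optimum of 30 on this toy objective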


Search-based Program Synthesis

Communications of the ACM

Writing programs that are both correct and efficient is challenging. A potential solution lies in program synthesis aimed at automatic derivation of an executable implementation (the "how") from a high-level logical specification of the desired input-to-output behavior (the "what"). A mature synthesis technology can have a transformative impact on programmer productivity by liberating the programmer from low-level coding details. For instance, for the classical computational problem of sorting a list of numbers, the programmer has to simply specify that given an input array A of n numbers, compute an output array B consisting of exactly the same numbers as A such that B[i] ≤ B[i+1] for 1 ≤ i < n, leaving it to the synthesizer to figure out the sequence of steps needed for the desired computation. Traditionally, program synthesis is formalized as a problem in deductive theorem proving [17]: A program is derived from the constructive proof of the theorem that states that for all inputs, there exists an output, such that the desired correctness specification holds.
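
The sorting specification (the "what") from the example above can be written as a simple checker: the output must contain exactly the same numbers as the input and be non-decreasing. A sketch of such a checker, which a synthesizer's candidate programs would have to satisfy on all input/output examples, follows; it is illustrative only and not the deductive derivation discussed in the article.

    # Checker for the sorting specification: B holds the same numbers as A (as a
    # multiset) and is non-decreasing.
    from collections import Counter

    def meets_sorting_spec(A, B):
        same_multiset = Counter(A) == Counter(B)                      # same numbers as A
        ordered = all(B[i] <= B[i + 1] for i in range(len(B) - 1))    # B[i] <= B[i+1]
        return same_multiset and ordered

    print(meets_sorting_spec([3, 1, 2], [1, 2, 3]))   # True
    print(meets_sorting_spec([3, 1, 2], [1, 2, 2]))   # False: not the same multiset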


Backdoor Decomposable Monotone Circuits and their Propagation Complete Encodings

arXiv.org Artificial Intelligence

We describe a compilation language of backdoor decomposable monotone circuits (BDMCs), which generalizes several concepts appearing in the literature, e.g. DNNFs and backdoor trees. A BDMC sentence is a monotone circuit that satisfies the decomposability property (as in DNNF) and in which the inputs (or leaves) are associated with CNF encodings of some functions. We consider two versions of BDMCs. In the case of PC-BDMCs, the encodings in the leaves are propagation complete encodings, and in the case of URC-BDMCs, they are unit refutation complete encodings of the respective functions. We show that a representation of a Boolean function with a PC-BDMC can be transformed into a propagation complete encoding of the same function whose size is polynomial in the size of the input PC-BDMC sentence. We obtain a similar result in the case of URC-BDMCs. We also relate the size of PC-BDMCs to the size of DNNFs and backdoor trees.
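
Since propagation completeness is defined relative to unit propagation, the small sketch below implements unit propagation and shows it deriving all entailed literals for a propagation complete encoding of x3 <-> (x1 and x2). This is an illustrative toy, not the BDMC compilation procedure from the paper.

    # Unit propagation over a CNF given as lists of signed integer literals.
    # An encoding is propagation complete (PC) roughly when unit propagation alone
    # derives every entailed literal (or a conflict) under any partial assignment.
    def unit_propagate(clauses, assignment):
        """Repeatedly assert literals from unit clauses; return the extended set of
        asserted literals, or None if a conflict (fully falsified clause) is reached."""
        literals = set(assignment)
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                if any(l in literals for l in clause):
                    continue                                  # clause already satisfied
                unassigned = [l for l in clause if -l not in literals]
                if not unassigned:
                    return None                               # conflict: every literal falsified
                if len(unassigned) == 1:                      # unit clause: assert its literal
                    literals.add(unassigned[0])
                    changed = True
        return literals

    # CNF encoding of x3 <-> (x1 and x2): (-3 1)(-3 2)(-1 -2 3)
    cnf = [[-3, 1], [-3, 2], [-1, -2, 3]]
    print(unit_propagate(cnf, [1, 2]))   # propagation derives x3
    print(unit_propagate(cnf, [3]))      # propagation derives x1 and x2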