Logic & Formal Reasoning


Efficient Search-Based Weighted Model Integration

arXiv.org Artificial Intelligence

Weighted model integration (WMI) extends weighted model counting (WMC) to the integration of functions over mixed discrete-continuous domains. It has shown tremendous promise for solving inference problems in graphical models and probabilistic programming. Yet, state-of-the-art tools for WMI are limited in terms of performance and ignore the independence structure that is crucial to improving efficiency. To address this limitation, we propose an efficient model integration algorithm for theories with tree primal graphs. We exploit the sparse graph structure by using search to perform integration. Our algorithm greatly improves the computational efficiency on such problems and exploits context-specific independence between variables. Experimental results show dramatic speedups compared to existing WMI solvers on problems with tree-shaped dependencies.
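As a toy illustration of the quantity WMI computes (not the paper's search-based algorithm): enumerate the Boolean assignments, and for each assignment integrate the continuous weight over the interval its literals induce. The example formula, weights, and all names below are illustrative assumptions.

```python
from fractions import Fraction

def integrate_monomial(deg, lo, hi):
    """Exact integral of x**deg over [lo, hi] (power rule)."""
    lo, hi = Fraction(lo), Fraction(hi)
    return (hi**(deg + 1) - lo**(deg + 1)) / (deg + 1)

def toy_wmi(branches):
    """Sum, over Boolean assignments, of (Boolean weight) times the
    integral of the monomial weight over the induced interval."""
    total = Fraction(0)
    for bool_weight, deg, lo, hi in branches:
        total += Fraction(bool_weight) * integrate_monomial(deg, lo, hi)
    return total

# phi = (b -> 0<=x<=1) and (not b -> 1<=x<=2), weight = (2 if b else 1) * x
branches = [(2, 1, 0, 1),  # b = true : 2 * integral of x over [0,1] = 1
            (1, 1, 1, 2)]  # b = false: 1 * integral of x over [1,2] = 3/2
print(toy_wmi(branches))   # 5/2
```

The paper's contribution is avoiding exactly this exhaustive enumeration by exploiting the tree primal graph; the sketch only pins down what is being computed.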


A Formal Framework for Robot Construction Problems: A Hybrid Planning Approach

arXiv.org Artificial Intelligence

We study robot construction problems where multiple autonomous robots rearrange stacks of prefabricated blocks to build stable structures. These problems are challenging due to ramifications of actions, true concurrency, and the requirement that blocks be supported by other blocks and that the structure remain stable at all times. We propose a formal hybrid planning framework to solve a wide range of robot construction problems, based on Answer Set Programming. This framework not only decides on a stable final configuration of the structure, but also computes the order of manipulation tasks for multiple autonomous robots to build the structure from an initial configuration, while simultaneously ensuring the stability, supportedness and other desired properties of the partial construction at each step of the plan. We prove the soundness and completeness of our formal method with respect to these properties. We introduce a set of challenging robot construction benchmark instances, including bridge building and stack overhanging scenarios, discuss the usefulness of our framework on these instances, and demonstrate the applicability of our method using a bimanual Baxter robot.
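A minimal sketch of the supportedness requirement alone, under a deliberately simplified model in which each block rests on exactly one block or on the table; the paper's actual stability reasoning also involves geometry and forces, which this toy check ignores.

```python
def supported(blocks, on):
    """Check that every block is transitively supported by the table.
    `on[b]` names the block that b rests on, or 'table'.
    A cycle of mutually resting blocks is not grounded."""
    def grounded(b, seen=()):
        if b in seen:                      # cycle: no path to the table
            return False
        target = on.get(b)
        return target == 'table' or (
            target in blocks and grounded(target, seen + (b,)))
    return all(grounded(b) for b in blocks)

print(supported({'a', 'b'}, {'a': 'table', 'b': 'a'}))  # True
print(supported({'a', 'b'}, {'a': 'b', 'b': 'a'}))      # False (mutual support)
```

In the paper this kind of condition is enforced declaratively, at every step of the plan, by the ASP encoding rather than by an imperative check.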


Probabilistic Temporal Logic over Finite Traces (Technical Report)

arXiv.org Artificial Intelligence

Temporal logics over finite traces have recently gained attention due to their use in real-world applications, in particular in business process modelling and planning. In real life, processes contain some degree of uncertainty that is impossible to handle with classical logics. We propose a new probabilistic temporal logic over finite traces based on a superposition semantics, in which all possible evolutions coexist until observed. We study the properties of the logic and provide automata-based mechanisms for deriving probabilistic inferences from its formulas. We ground the approach in the context of declarative process modelling, showing how the temporal patterns used in Declare can be lifted to our setting, discussing how probabilistic inferences can support key offline and runtime reasoning tasks, and explaining how probabilistic Declare patterns can be discovered from event data through minor adjustments to existing discovery algorithms.
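A toy illustration of the kind of probabilistic query involved, under an assumed i.i.d. trace model rather than the paper's superposition semantics: the probability that an activity eventually occurs within a bounded-length trace, computed by brute-force enumeration of traces.

```python
from itertools import product

def prob_eventually(symbol, alphabet, probs, horizon):
    """P(F symbol) over all traces of length `horizon`, where each
    position is drawn i.i.d. with the given per-symbol probabilities
    (toy semantics for illustration only)."""
    total = 0.0
    for trace in product(alphabet, repeat=horizon):
        p = 1.0
        for s in trace:
            p *= probs[s]
        if symbol in trace:        # "eventually symbol" holds on this trace
            total += p
    return total

# Two activities; 'a' occurs with probability 0.5 per step; length-3 traces.
p = prob_eventually('a', ['a', 'b'], {'a': 0.5, 'b': 0.5}, 3)
print(round(p, 6))  # 0.875, i.e. 1 - 0.5**3
```

The automata-based mechanisms in the paper avoid this exponential enumeration; the sketch only makes the reasoning task concrete.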


Knowledge compilation languages as proof systems

arXiv.org Artificial Intelligence

In this paper, we study proof systems in the sense of Cook-Reckhow for problems that lie higher in the polynomial hierarchy than coNP, in particular #SAT and maxSAT. We start by explaining how the notion of Cook-Reckhow proof systems can be applied to these problems, and show how existing knowledge compilation languages such as decision-DNNF can be adapted so that they serve as proof systems for problems such as #SAT and maxSAT.
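A minimal sketch of why decision-DNNF certifies #SAT: on a smooth circuit, decomposable AND nodes (disjoint variable sets) multiply model counts, and decision nodes (deterministic ORs) add them. The tuple-based node encoding below is made up for illustration.

```python
def count(node):
    """Model count of a (smooth) decision-DNNF node.
    Node shapes: ('lit', v), ('and', children), ('dec', var, low, high)."""
    kind = node[0]
    if kind == 'lit':
        return 1
    if kind == 'and':               # decomposable: children share no variables
        result = 1
        for child in node[1]:
            result *= count(child)
        return result
    if kind == 'dec':               # decision on a variable: branches are disjoint
        return count(node[2]) + count(node[3])
    raise ValueError(kind)

# A decision-DNNF for (x or y), smooth over {x, y}:
f = ('dec', 'x',
     ('and', [('lit', '-x'), ('lit', 'y')]),                 # x = 0 forces y = 1
     ('and', [('lit', 'x'),
              ('dec', 'y', ('lit', '-y'), ('lit', 'y'))]))   # x = 1, y free
print(count(f))  # 3 models of (x or y)
```

Read as a proof system, the circuit itself is the certificate: the count can be re-derived from it in time linear in its size.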


Learning $\textit{Ex Nihilo}$

arXiv.org Artificial Intelligence

This paper introduces, philosophically and to a degree formally, the novel concept of learning $\textit{ex nihilo}$, intended (obviously) to be analogous to the concept of creation $\textit{ex nihilo}$. Learning $\textit{ex nihilo}$ is an agent's learning "from nothing," by the suitable employment of schemata for deductive and inductive reasoning. This reasoning must be in machine-verifiable accord with a formal proof/argument theory in a $\textit{cognitive calculus}$ (i.e., roughly, an intensional higher-order multi-operator quantified logic), and this reasoning is applied to percepts received by the agent, in the context of both some prior knowledge, and some prior and current interests. Learning $\textit{ex nihilo}$ is a challenge to contemporary forms of ML, indeed a severe one, but the challenge is offered in the spirit of seeking to stimulate attempts, on the part of non-logicist ML researchers and engineers, to collaborate with those in possession of learning-$\textit{ex nihilo}$ frameworks, and eventually attempts to integrate directly with such frameworks at the implementation level. Such integration will require, among other things, the symbiotic interoperation of state-of-the-art automated reasoners and high-expressivity planners, with statistical/connectionist ML technology.


Specifying and Computing Causes for Query Answers in Databases via Database Repairs and Repair Programs

arXiv.org Artificial Intelligence

A correspondence between database tuples as causes for query answers in databases and tuple-based repairs of inconsistent databases with respect to denial constraints has already been established. In this work, answer-set programs that specify repairs of databases are used as a basis for solving computational and reasoning problems about causes. Here, causes are also introduced at the attribute level by appealing to a repair semantics that is both null-based and attribute-based. The corresponding repair programs are presented and used as a basis for computation and reasoning about attribute-level causes. They are then extended to deal with causality under integrity constraints. Several examples with the DLV system are shown.
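A brute-force check of the standard actual-cause definition (via contingency sets), which the repair programs of the paper compute declaratively; the database, query, and tuple encoding here are illustrative assumptions, not the paper's.

```python
from itertools import combinations

def causes(db, query):
    """Tuples t that are actual causes for the Boolean `query` being true
    on `db`: there exists a contingency set G in db\\{t} such that
    query(db - G) is true and query(db - G - {t}) is false."""
    db = set(db)
    assert query(db), "the query must hold on the full database"
    result = set()
    for t in db:
        rest = db - {t}
        for k in range(len(rest) + 1):
            for g in combinations(rest, k):
                d = db - set(g)
                if query(d) and not query(d - {t}):
                    result.add(t)
                    break
            else:
                continue
            break           # cause found for t; try the next tuple
    return result

# Query: does some constant a satisfy the join r(a), s(a)?
q = lambda d: any(('r', a) in d and ('s', a) in d for a in {1, 2})
db = {('r', 1), ('s', 1), ('r', 2)}
print(sorted(causes(db, q)))  # [('r', 1), ('s', 1)]
```

The repair-program encoding replaces this exponential search with answer-set computation, which is the point of the paper.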


Homunculus' Brain and Categorical Logic

arXiv.org Artificial Intelligence

The interaction between syntax (formal language) and its semantics (the meanings of the language) is well studied in categorical logic. Results of this study are employed to understand how the brain could create meanings. To emphasize the toy character of the proposed model, we prefer to speak of the homunculus' brain rather than of the brain itself. The homunculus' brain consists of neurons, each of which is modeled by a category, and of axons between neurons, which are modeled by functors between the corresponding neuron-categories. Each neuron (category) has its own program that enables it to work, i.e. a "theory" of this neuron. In analogy with what is known from categorical logic, we postulate the existence of a pair of adjoint functors, called Lang and Syn, from a category of categories, here called BRAIN, to a category of theories, here called MIND. Our homunculus is a kind of "mathematical robot" whose neuronal architecture is not important. Its only aim is to provide us with the opportunity to study how such a simple brain-like structure could "create meanings" out of its purely syntactic program. The pair of adjoint functors Lang and Syn models the mutual dependencies between the syntactic structure of a given theory of MIND and the internal logic of its semantics given by a category of BRAIN. In this way, a formal language (syntax) and its meanings (semantics) are interwoven with each other in a manner corresponding to the adjointness of the functors Lang and Syn. The categories BRAIN and MIND interact with each other through their entire structures and, at the same time, these very structures are shaped by this interaction.
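In categorical logic such a pair of functors is characterized by a natural bijection of hom-sets. Under the abstract's reading (Lang from BRAIN to MIND, Syn back), one conventional way to state the adjunction is the following; which of the two functors is the left adjoint depends on the authors' conventions.

```latex
\mathrm{Hom}_{\mathbf{MIND}}\bigl(\mathrm{Lang}(B),\, T\bigr)
\;\cong\;
\mathrm{Hom}_{\mathbf{BRAIN}}\bigl(B,\, \mathrm{Syn}(T)\bigr),
\qquad B \in \mathbf{BRAIN},\ T \in \mathbf{MIND}.
```

The bijection, natural in both $B$ and $T$, is what lets syntax and semantics determine each other up to the stated correspondence.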


Community-based 3-SAT Formulas with a Predefined Solution

arXiv.org Artificial Intelligence

It is crucial to generate crafted SAT formulas with predefined solutions for the testing and development of SAT solvers, since many SAT formulas from real-world applications are satisfiable. Although some generating algorithms have been proposed to produce SAT formulas with predefined solutions, they do not take the community structure of SAT formulas into account. We propose a 3-SAT formula generating algorithm that not only guarantees the existence of a predefined solution, but also simultaneously considers community structures and clause distributions. The proposed algorithm controls the quality of community structures through (1) the number of clauses whose variables share a common community, which we call intra-community clauses, and (2) the number of variables that belong to only one community, which we call intra-community variables. To study the combined effect of community structures and clause distributions on the hardness of SAT formulas, we measure the solving runtimes of two solvers, gluHack (a leading CDCL solver) and CPSparrow (a leading SLS solver), on formulas generated under different groups of parameter settings. Through extensive experiments, we obtain some noteworthy observations: (1) The community structure has little or no effect on the hardness of SAT formulas for CPSparrow, but a strong effect for gluHack. (2) SAT formulas are hard to solve for gluHack only when the proportion of literals that are true under the predefined solution is 0.5; when this proportion is below 0.5, they are hard to solve for CPSparrow. (3) When the clause-to-variable ratio is around 4.25, the generated formulas are hard to solve for both gluHack and CPSparrow.
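A toy version of such a generator, to make the two controls concrete: every clause is forced to contain at least one literal satisfied by the predefined assignment, and an intra-community ratio decides whether a clause's three variables are drawn from a single community. The round-robin community assignment, parameter names, and overall scheme are illustrative, not the paper's algorithm.

```python
import random

def gen_3sat(n_vars, n_clauses, n_comms, intra_ratio, solution, seed=0):
    """Generate 3-SAT clauses (DIMACS-style signed ints) satisfied by
    `solution` (a list of bools), with a tunable share of clauses whose
    variables come from one community."""
    rng = random.Random(seed)
    comm = {v: v % n_comms for v in range(n_vars)}   # round-robin communities
    clauses = []
    while len(clauses) < n_clauses:
        if rng.random() < intra_ratio:               # intra-community clause
            c = rng.randrange(n_comms)
            pool = [v for v in range(n_vars) if comm[v] == c]
        else:
            pool = list(range(n_vars))
        if len(pool) < 3:
            continue
        vs = rng.sample(pool, 3)
        lits = [v + 1 if rng.random() < 0.5 else -(v + 1) for v in vs]
        # Force satisfaction: make one literal agree with the solution.
        i = rng.randrange(3)
        v = abs(lits[i]) - 1
        lits[i] = (v + 1) if solution[v] else -(v + 1)
        clauses.append(lits)
    return clauses

sol = [True] * 10
f = gen_3sat(10, 30, 2, 0.8, sol)
ok = all(any(sol[abs(l) - 1] == (l > 0) for l in c) for c in f)
print(ok)  # True: the predefined assignment satisfies every clause
```

The paper additionally controls intra-community variables and clause distributions; this sketch only shows why the predefined solution is guaranteed by construction.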


Experimental Study on CTL model checking using Machine Learning

arXiv.org Artificial Intelligence

The core methods employed by popular CTL model checking tools face the well-known state explosion problem. In our previous study, a method based on Machine Learning (ML) algorithms was proposed to address this problem; however, its accuracy was not satisfactory. First, we conduct a comprehensive experiment on GraphLab to seek the optimal accuracy using five machine learning algorithms. Second, given the optimal accuracy, the average runtime is sought. The results show that the Logistic Regression (LR)-based approach can simulate CTL model checking with an accuracy of 98.8% and an average efficiency 459 times higher than that of the existing method, while the Boosted Tree (BT)-based approach can simulate CTL model checking with an accuracy of 98.7% and an average efficiency 639 times higher than that of the existing method.
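For contrast with the learned approximation, the classical approach being benchmarked against is an explicit fixpoint computation over the state space; a minimal sketch for the CTL operator EF p on an assumed example Kripke structure (not the paper's code) shows the traversal whose blow-up motivates the ML substitute.

```python
def check_EF(states, trans, labels, prop):
    """Least-fixpoint computation of EF prop: the set of states from
    which some path reaches a state labeled with prop."""
    sat = {s for s in states if prop in labels.get(s, ())}
    changed = True
    while changed:                         # iterate to the fixpoint
        changed = False
        for s in states:
            if s not in sat and any(t in sat for t in trans.get(s, ())):
                sat.add(s)
                changed = True
    return sat

states = {0, 1, 2, 3}
trans = {0: [1], 1: [2], 2: [2], 3: [3]}   # 3 is an isolated self-loop
labels = {2: ('p',)}
print(sorted(check_EF(states, trans, labels, 'p')))  # [0, 1, 2]
```

On industrial models the state set is astronomically large, which is exactly why replacing this traversal with a trained classifier is attractive if the accuracy is acceptable.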


Founded World Views with Autoepistemic Equilibrium Logic

arXiv.org Artificial Intelligence

Defined by Gelfond in 1991 (G91), epistemic specifications (or programs) are an extension of logic programming under the stable models semantics that introduces subjective literals. A subjective literal allows checking whether some regular literal is true in all (or in some of) the stable models of the program; those models are collected in a set called a world view. One epistemic program may yield several world views but, under the original G91 semantics, some of them resulted from self-supported derivations. During the last eight years, several alternative approaches have been proposed to get rid of these self-supported world views. Unfortunately, their success could only be measured by studying their behaviour on a set of common examples from the literature, since no formal property of "self-supportedness" had been defined. To fill this gap, we extend in this paper the idea of unfounded sets from standard logic programming to the epistemic case. We define when a world view is founded with respect to some program and propose the foundedness property for any semantics whose world views are always founded. Using counterexamples, we show that the previous approaches violate foundedness, and proceed to propose a new semantics based on a combination of Moore's Autoepistemic Logic and Pearce's Equilibrium Logic. Our main result proves that this new semantics precisely captures the set of founded G91 world views.
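The meaning of subjective literals relative to a fixed world view can be sketched in a few lines: K l quantifies over all stable models in the view, M l over some. Plain sets of atoms stand in for stable models here; the program generating the view is omitted, and the encoding is illustrative.

```python
def K(literal, world_view):
    """Subjective literal 'K l': l holds in every stable model of the view."""
    return all(literal in model for model in world_view)

def M(literal, world_view):
    """Subjective literal 'M l': l holds in some stable model of the view."""
    return any(literal in model for model in world_view)

# A candidate world view containing two stable models (toy sets of atoms).
wv = [{'a', 'b'}, {'a', 'c'}]
print(K('a', wv), K('b', wv), M('b', wv))  # True False True
```

The hard part, which the paper addresses, is not evaluating K and M against a given view but selecting which candidate world views are legitimate, i.e. founded rather than self-supported.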