Logic & Formal Reasoning


GitHub - ossu/computer-science: Path to a free self-taught education in Computer Science!

#artificialintelligence

The OSSU curriculum is a complete education in computer science using online materials. It's for those who want a proper, well-rounded grounding in concepts fundamental to all computing disciplines, and for those who have the discipline, will, and (most importantly!) good habits to obtain this education largely on their own, but with support from a worldwide community of fellow learners. It is designed according to the degree requirements of undergraduate computer science majors, minus general education (non-CS) requirements, as it is assumed most of the people following this curriculum are already educated outside the field of CS. The courses themselves are among the very best in the world, often coming from Harvard, Princeton, MIT, etc., and are specifically chosen to meet a set of curation criteria. When no course meets those criteria, the coursework is supplemented with a book.


A Clustering and Demotion Based Algorithm for Inductive Learning of Default Theories

arXiv.org Artificial Intelligence

We present a clustering- and demotion-based algorithm called Kmeans-FOLD to induce nonmonotonic logic programs from positive and negative examples. Our algorithm is inspired by, and improves upon, the FOLD algorithm, which is itself an improvement over the FOIL algorithm. Kmeans-FOLD generates a more concise logic program than FOLD. It uses K-means clustering to group the input positive examples before applying the FOLD algorithm. Positive examples that are covered by the partially learned program in intermediate steps are not discarded, as in FOLD; rather, they are demoted, i.e., their weights are reduced in subsequent iterations of the algorithm. Our experiments on UCI datasets show that the combination of K-means clustering and our demotion strategy yields significant improvements on datasets with more than one cluster of positive examples. The resulting induced program is also more concise, and therefore easier to understand, than those produced by FOLD and ALEPH, two state-of-the-art inductive logic programming (ILP) systems.
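
The abstract describes the core loop: cluster the positive examples, learn clauses per cluster, and down-weight (rather than discard) the positives covered so far. Below is a minimal, illustrative sketch of that loop, assuming scikit-learn's KMeans; the toy learn_clause stand-in, the clause representation, and the demotion factor of 0.5 are assumptions made for illustration, not the paper's FOLD implementation.

```python
# Illustrative sketch of the clustering-and-demotion loop (not the paper's code).
import numpy as np
from sklearn.cluster import KMeans

def learn_clause(pos, weights, neg):
    """Toy stand-in for one FOLD step: pick the feature/threshold pair that
    best separates the weighted positives from the negatives."""
    best = None
    for j in range(pos.shape[1]):
        t = neg[:, j].max()                      # simplest separating threshold
        score = weights[pos[:, j] > t].sum()     # weighted positives covered
        if best is None or score > best[0]:
            best = (score, j, t)
    return best[1], best[2]                      # clause: "feature j > t"

def covers(clause, examples):
    j, t = clause
    return examples[:, j] > t

def kmeans_fold(positives, negatives, n_clusters=2, demotion=0.5, max_iters=5):
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(positives)
    program = []
    for c in range(n_clusters):
        pos = positives[clusters == c]
        weights = np.ones(len(pos))              # every positive starts with weight 1
        for _ in range(max_iters):
            if weights.sum() < 1e-3:             # (almost) everything covered
                break
            clause = learn_clause(pos, weights, negatives)
            if clause in program:                # this toy learner has converged
                break
            program.append(clause)
            weights[covers(clause, pos)] *= demotion   # demote instead of discarding
    return program

positives = np.array([[1.0, 0.2], [0.9, 0.1], [0.2, 0.9], [0.1, 1.0]])
negatives = np.array([[0.3, 0.3], [0.2, 0.2]])
print(kmeans_fold(positives, negatives))
```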


RuleBert: Teaching Soft Rules to Pre-trained Language Models

arXiv.org Artificial Intelligence

While pre-trained language models (PLMs) are the go-to solution for many natural language processing problems, they are still very limited in their ability to capture and use common-sense knowledge. In fact, even if information is available in the form of approximate (soft) logical rules, it is not clear how to transfer it to a PLM in order to improve its performance on deductive reasoning tasks. Here, we aim to bridge this gap by teaching PLMs how to reason with soft Horn rules. We introduce a classification task where, given facts and soft rules, the PLM should return a prediction with a probability for a given hypothesis. We release the first dataset for this task, and we propose a revised loss function that enables the PLM to learn how to predict precise probabilities for the task. Our evaluation results show that the resulting fine-tuned models achieve very high performance, even on logical rules that were unseen during training. Moreover, we demonstrate that logical notions expressed by the rules are transferred to the fine-tuned model, yielding state-of-the-art results on external datasets.
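
The task format is: given a textual context of facts and soft rules plus a hypothesis, the fine-tuned PLM should output a probability for that hypothesis. Below is a minimal sketch of one training step, assuming a Hugging Face sequence-classification head on roberta-base; the soft-target binary cross-entropy used here is an illustrative stand-in, not necessarily the paper's revised loss, and the example rule and its confidence are made up.

```python
# One illustrative fine-tuning step for the facts+rules -> probability task.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=1)

context = ("If someone is a bird then they can fly (confidence 0.9). "
           "Tweety is a bird.")                      # facts + one soft rule
hypothesis = "Tweety can fly."
target_prob = torch.tensor([0.9])                    # soft label taken from the rule

inputs = tokenizer(context, hypothesis, return_tensors="pt")
logit = model(**inputs).logits.squeeze(-1)           # single unnormalized score
loss = torch.nn.functional.binary_cross_entropy_with_logits(logit, target_prob)
loss.backward()                                      # one gradient step; optimizer omitted
print(float(loss))
```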



Union and Intersection of all Justifications

arXiv.org Artificial Intelligence

We present new algorithms for computing the union and intersection of all justifications for a given ontological consequence without first computing the set of all justifications. Through an empirical evaluation, we show that our approach works well in practice for expressive description logics. In particular, the union of all justifications can be computed much faster than with existing justification-enumeration approaches. We further discuss how to use these results to repair ontologies.
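
To make the terminology concrete: a justification is a minimal subset of the axioms that still entails the consequence, and the naive baseline, which the algorithms above are designed to avoid, enumerates all justifications before taking their union or intersection. The toy sketch below implements exactly that baseline on a tiny Horn knowledge base; it is purely illustrative and is neither the paper's method nor a description logic reasoner.

```python
# Naive union/intersection of all justifications on a toy Horn knowledge base.
from itertools import combinations

def entails(axioms, goal):
    """Forward chaining over facts (strings) and rules ((body_atoms), head)."""
    facts = {a for a in axioms if isinstance(a, str)}
    rules = [a for a in axioms if isinstance(a, tuple)]
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if set(body) <= facts and head not in facts:
                facts.add(head)
                changed = True
    return goal in facts

def all_justifications(axioms, goal):
    """Enumerate minimal entailing subsets, smallest first."""
    axioms, justs = list(axioms), []
    for k in range(1, len(axioms) + 1):
        for subset in combinations(axioms, k):
            if entails(subset, goal) and not any(set(j) <= set(subset) for j in justs):
                justs.append(subset)
    return justs

axioms = ["a", "b", (("a",), "c"), (("b",), "c"), (("c",), "d")]
justs = all_justifications(axioms, "d")
print(justs)                                   # two justifications for "d"
print(set().union(*map(set, justs)))           # union: axioms in some justification
print(set.intersection(*map(set, justs)))      # intersection: axioms in every justification
```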


Reactive Answer Set Programming

arXiv.org Artificial Intelligence

The Logic Production System (LPS) is a logic-based framework for modelling reactive behaviour. Based on abductive logic programming, it combines reactive rules with logic programs, a database, and a causal theory that specifies transitions between the states of the database. This paper proposes a systematic mapping of the Kernel of this framework (called KELPS) into an answer set program (ASP). For this purpose a new variant of KELPS with finite models, called $n$-distance KELPS, is introduced. A formal definition of the mapping from this $n$-distance KELPS to ASP is given and proven sound and complete. The Answer Set Programming paradigm makes it possible to capture behaviours beyond the basic reactivity of KELPS, in particular proactive, preemptive and prospective behaviours. These are all discussed and illustrated with examples. A hybrid framework is then proposed that integrates KELPS and ASP, combining the strengths of both paradigms. Under consideration in Theory and Practice of Logic Programming (TPLP).
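
As a toy illustration of the flavour of such an encoding (not the paper's formal KELPS-to-ASP translation), the sketch below writes a single reactive rule over a finite time horizon as an ASP program and runs it with the clingo Python package, which is assumed to be available.

```python
# A reactive rule ("if the alarm fires at T, evacuate at T+1") over a finite
# horizon, encoded directly in ASP. Illustrative only; assumes clingo is installed.
import clingo

program = """
time(1..5).
happens(fire_alarm, 2).                                   % observed event
happens(evacuate, T+1) :- happens(fire_alarm, T), time(T), time(T+1).
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m))   # expect happens(evacuate,3) in the answer set
```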


The Horn Non-Clausal Class and its Polynomiality

arXiv.org Artificial Intelligence

The expressiveness of propositional non-clausal (NC) formulas is exponentially richer than that of clausal formulas, yet clausal reasoning remains far more efficient than non-clausal reasoning. A major weakness of the latter is that, while Horn clausal formulas, along with Horn algorithms, are crucial for the high efficiency of clausal reasoning, no Horn-like formulas in non-clausal form had been proposed. To overcome this weakness, we define the hybrid class $\mathbb{H_{NC}}$ of Horn Non-Clausal (Horn-NC) formulas by suitably lifting the Horn pattern to NC form, and argue that $\mathbb{H_{NC}}$, along with future Horn-NC algorithms, will increase non-clausal efficiency just as the Horn class has increased clausal efficiency. Secondly, we: (i) give a compact, inductive definition of $\mathbb{H_{NC}}$; (ii) prove that syntactically $\mathbb{H_{NC}}$ subsumes the Horn class but semantically both classes are equivalent; and (iii) characterize the non-clausal formulas belonging to $\mathbb{H_{NC}}$. Thirdly, we define the Non-Clausal Unit-Resolution calculus, $UR_{NC}$, and prove that it checks the satisfiability of $\mathbb{H_{NC}}$ in polynomial time. To our knowledge, this makes $\mathbb{H_{NC}}$ the first characterized polynomial class in NC reasoning. Finally, we prove that $\mathbb{H_{NC}}$ is linearly recognizable, and that it is both strictly more succinct and exponentially richer than the Horn class. We discuss how NC automated reasoning, e.g. satisfiability solving, theorem proving, and logic programming, can directly benefit from $\mathbb{H_{NC}}$ and $UR_{NC}$, and how, as a by-product of its proven properties, $\mathbb{H_{NC}}$ arises as a new alternative for analysing Horn functions and implication systems.
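
As a purely illustrative example of how Horn-like structure can survive outside clausal form (this is not the paper's formal definition of $\mathbb{H_{NC}}$): the two Horn clauses $(\neg p \lor q)$ and $(\neg p \lor r)$ can be factored into the logically equivalent non-clausal formula $\neg p \lor (q \land r)$, which is more compact than the clausal original; this kind of succinctness gain is the sort of advantage the authors establish in general for $\mathbb{H_{NC}}$ relative to the Horn class.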


Aggregate Semantics for Propositional Answer Set Programs

arXiv.org Artificial Intelligence

Answer Set Programming (ASP) emerged in the late 1990s as a paradigm for Knowledge Representation and Reasoning. The attractiveness of ASP builds on an expressive high-level modeling language along with the availability of powerful off-the-shelf solving systems. While the utility of incorporating aggregate expressions in the modeling language was recognized almost simultaneously with the inception of the first ASP solving systems, a general semantics of aggregates and its efficient implementation have been long-standing challenges. Aggregates have been proposed and widely used in database systems, and also in the deductive database language Datalog, one of the main precursors of ASP. The use of aggregates was, however, still restricted in Datalog (by either disallowing recursion or only allowing monotone aggregates), while several ways to integrate unrestricted aggregates evolved in the context of ASP. In this survey, we pick up at this point of development, presenting and comparing the main aggregate semantics that have been proposed for propositional ASP programs. We highlight crucial properties such as computational complexity and expressive power, and outline the capabilities and limitations of different approaches through illustrative examples.
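
As a concrete example of an aggregate expression in the modeling language, the toy program below uses a #count aggregate inside an integrity constraint; since the aggregate is not involved in recursion, this is a case on which the semantics compared in the survey agree. The sketch assumes the clingo Python package and is illustrative only.

```python
# Choose vertices, but reject any answer set with more than two chosen vertices.
import clingo

program = """
vertex(1..4).
{ chosen(V) : vertex(V) }.
:- #count{ V : chosen(V) } > 2.
#show chosen/1.
"""

ctl = clingo.Control(["0"])              # "0": enumerate all answer sets
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m))   # every printed answer set has at most two chosen/1 atoms
```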


On the Convergence of Tsetlin Machines for the AND and the OR Operators

arXiv.org Artificial Intelligence

The Tsetlin Machine (TM) is a novel machine-learning algorithm based on propositional logic, which has obtained state-of-the-art performance on several pattern recognition problems. Previous studies have analyzed the convergence properties of the TM for the 1-bit operation and the XOR operation. To complete the analyses for the basic digital operations, in this article we analyze convergence when the input training samples follow the AND and OR operators, respectively. Our analyses reveal that the TM can converge almost surely to reproduce the AND and OR operators, which are learnt from training data over an infinite time horizon. Together with the previously analyzed 1-bit and XOR operations, the analyses of the AND and OR operators complete the convergence analyses for the basic operators in Boolean algebra.
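
To make the statement concrete, the sketch below shows the clause sets that a converged TM is expected to represent for the two operators: a single conjunctive clause for AND, and two single-literal clauses whose votes are summed and thresholded for OR. This is a deliberately simplified, single-polarity evaluation of TM clauses, not the Tsetlin-automata learning procedure whose convergence the paper analyzes.

```python
# Verify that the expected converged clause sets reproduce AND and OR.
from itertools import product

def clause(included, sample):
    """A TM clause is a conjunction of included literals; `included` is a list
    of (bit_index, negated) pairs over the input bits."""
    return all((not sample[i]) if neg else bool(sample[i]) for i, neg in included)

def tm_output(clauses, sample):
    """Sum the clause votes and threshold at >= 1 (single-polarity toy TM)."""
    return int(sum(clause(c, sample) for c in clauses) >= 1)

and_clauses = [[(0, False), (1, False)]]      # one clause: x1 AND x2
or_clauses = [[(0, False)], [(1, False)]]     # two clauses: x1, x2

for x in product([0, 1], repeat=2):
    assert tm_output(and_clauses, x) == (x[0] and x[1])
    assert tm_output(or_clauses, x) == (x[0] or x[1])
print("expected clause sets reproduce AND and OR on all inputs")
```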


Repurposing of Resources: from Everyday Problem Solving through to Crisis Management

arXiv.org Artificial Intelligence

The human ability to repurpose objects and processes is universal, but it is not a well-understood aspect of human intelligence. Repurposing arises in everyday situations such as finding substitutes for missing ingredients when cooking, or for unavailable tools when doing DIY. It also arises in critical, unprecedented situations needing crisis management: after natural disasters and during wartime, people must repurpose the materials and processes available to make shelter, distribute food, and so on. Repurposing is equally important in professional life (e.g. clinicians often repurpose medicines off-license) and in addressing societal challenges (e.g. finding new roles for waste products). Despite the importance of repurposing, the topic has received little academic attention. By considering examples from a variety of domains such as everyday activities, drug repurposing and natural disasters, we identify some principal characteristics of the process and describe some technical challenges that would be involved in modelling and simulating it. We consider cases of both substitution, i.e. finding an alternative for a missing resource, and exploitation, i.e. identifying a new role for an existing resource. We argue that these ideas could be developed into a general formal theory of repurposing, which could in turn lead to AI methods, based on commonsense reasoning, argumentation, ontological reasoning, and various machine learning techniques, that support repurposing in practice.