Logic & Formal Reasoning



Neural Theorem Provers Do Not Learn Rules Without Exploration

arXiv.org Machine Learning

Neural symbolic processing aims to combine the generalization of logical learning approaches with the performance of neural networks. The Neural Theorem Proving (NTP) model by Rocktäschel et al. (2017) learns embeddings for concepts and performs logical unification. While NTP is promising and effective in predicting facts accurately, we have little knowledge of how well it can extract true relationships from data. To this end, we create synthetic logical datasets with injected relationships, which can be generated on the fly, to test neural relation-learning algorithms, including NTP. We show that it has difficulty recovering relationships in all but the simplest settings. Critical analysis and diagnostic experiments suggest that the optimization algorithm suffers from poor local minima due to its greedy winner-takes-all strategy for identifying the most informative structure (proof path) to pursue. We alter the NTP algorithm to increase exploration, which sharply improves performance. We argue and demonstrate that benchmarking on synthetic data with ground-truth relationships is insightful for both evaluating models and revealing algorithmic issues.
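
As a toy illustration of the failure mode described here: in NTP-style provers, symbols unify softly via a kernel over their embeddings, and the prover scores a goal by its best proof path. The sketch below (the kernel form, dimensions, and the `top_k` knob are illustrative assumptions, not the authors' exact method) contrasts greedy winner-takes-all aggregation with a more exploratory top-k variant:

```python
import numpy as np

def soft_unify(a, b, mu=1.0):
    """RBF-kernel similarity between two symbol embeddings (NTP-style)."""
    return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * mu ** 2))

def proof_score(goal_emb, rule_embs, top_k=1):
    """Score a goal against candidate rule heads.

    top_k=1 reproduces the greedy winner-takes-all strategy the paper
    criticizes: only the single best-matching rule receives gradient.
    top_k>1 is the kind of extra exploration the paper argues for.
    """
    scores = np.array([soft_unify(goal_emb, r) for r in rule_embs])
    best = np.sort(scores)[-top_k:]   # keep the k best proof paths
    return best.mean()                # with top_k=1 this is the pure max

rng = np.random.default_rng(0)
goal = rng.normal(size=8)
rules = rng.normal(size=(5, 8))
print(proof_score(goal, rules, top_k=1))  # greedy
print(proof_score(goal, rules, top_k=3))  # more exploratory
```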


Self-organized inductive reasoning with NeMuS

arXiv.org Artificial Intelligence

Neural Multi-Space (NeMuS) is a weighted multi-space representation for a portion of first-order logic designed for use with machine learning and neural network methods. It was demonstrated that it can be used to perform reasoning based on regions forming patterns of refutation, and also in the process of inductive learning in ILP-like style. In this direction, patterns of concepts can be used to justify (and explain) "shortcuts" that generate recursive hypotheses from very large sets of relations without the need to compute the entire path to justify them. This is critical when the background knowledge has huge amounts of data; it could be adequately handled as regions of concepts and categories, similar to the human brain map organization. This will allow symbolic deduction …


A Computational-Hermeneutic Approach for Conceptual Explicitation

arXiv.org Artificial Intelligence

We present a computer-supported approach for the logical analysis and conceptual explicitation of argumentative discourse. Computational hermeneutics harnesses recent progress in automated reasoning for higher-order logics and aims at formalizing natural-language argumentative discourse using flexible combinations of expressive non-classical logics. In doing so, it allows us to render explicit the tacit conceptualizations implicit in argumentative discursive practices. Our approach operates on networks of structured arguments and is iterative and two-layered. At one layer we search for logically correct formalizations for each of the individual arguments. At the next layer we select among those correct formalizations the ones which honor the argument's dialectic role, i.e. attacking or supporting other arguments as intended. We operate at these two layers in parallel and continuously rate sentences' formalizations by using, primarily, inferential adequacy criteria. An interpretive, logical theory will thus gradually evolve. This theory is composed of meaning postulates serving as explications for concepts playing a role in the analyzed arguments. Such a recursive, iterative approach to interpretation does justice to the inherent circularity of understanding: the whole is understood compositionally on the basis of its parts, while each part is understood only in the context of the whole (hermeneutic circle). We summarily discuss previous work on exemplary applications of human-in-the-loop computational hermeneutics in metaphysical discourse. We also discuss some of the main challenges involved in fully automating our approach. By sketching some design ideas and reviewing relevant technologies, we argue for the technological feasibility of a highly automated computational hermeneutics.
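
A schematic of that two-layered selection loop, with the prover and ratings stubbed out (all names and fields below are hypothetical; the actual approach targets automated reasoners for higher-order logics):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Formalization:
    premises: list        # formulas in some target logic (hypothetical)
    conclusion: str
    plays_role: bool      # does it attack/support as the argument map intends?
    adequacy: float       # inferential-adequacy rating in [0, 1]

def best_formalization(candidates: list[Formalization],
                       prove: Callable[[list, str], bool]):
    """Two-layered selection: layer 1 keeps logically correct
    formalizations; layer 2 keeps those honoring the argument's
    dialectic role; the best-rated survivor wins."""
    ok = [phi for phi in candidates
          if prove(phi.premises, phi.conclusion)   # layer 1: correctness
          and phi.plays_role]                      # layer 2: dialectic role
    return max(ok, key=lambda phi: phi.adequacy, default=None)
```

Iterating this per-argument choice over the whole argument network, and re-rating as the evolving theory changes what is provable, is what makes the procedure hermeneutically circular.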


Dynamic Term-Modal Logics for Epistemic Planning

arXiv.org Artificial Intelligence

Classical planning frameworks are built on first-order languages. The first-order expressive power is desirable for compactly representing actions via schemas, and for specifying goal formulas such as $\neg\exists x\mathsf{blocks\_door}(x)$. In contrast, several recent epistemic planning frameworks build on propositional modal logic. The modal expressive power is desirable for investigating planning problems with epistemic goals such as $K_{a}\neg\mathsf{problem}$. The present paper introduces an epistemic planning framework with the first-order expressiveness of classical planning, extending fully to the epistemic operators. In this framework, e.g. $\exists xK_{x}\exists y\mathsf{blocks\_door}(y)$ is a formula. Logics with this expressive power are called "term-modal" in the literature. This paper presents a rich but well-behaved semantics for term-modal logic. The semantics are given a dynamic extension using first-order "action models" allowing for epistemic planning, and it is shown how corresponding "action schemas" allow for a very compact action representation. Concerning metatheory, the paper defines axiomatic normal term-modal logics, shows a Canonical Model Theorem-like result, presents non-standard frame characterization formulas, shows decidability for the finite-agent case, and shows a general completeness result for the dynamic extension by reduction axioms.
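
The term-modal syntax is straightforward to pin down as a data structure. A hypothetical Python encoding (not from the paper) of formulas such as $\exists xK_{x}\exists y\mathsf{blocks\_door}(y)$, in which the knowledge operator is indexed by a first-order term rather than a fixed agent name:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Const:
    name: str

@dataclass(frozen=True)
class Atom:
    pred: str
    args: tuple

@dataclass(frozen=True)
class Exists:
    var: Var
    body: object

@dataclass(frozen=True)
class K:              # knowledge operator indexed by a *term*
    agent: object     # Var or Const: agents are first-class objects
    body: object

# ∃x K_x ∃y blocks_door(y): "someone knows that something blocks the door"
x, y = Var("x"), Var("y")
phi = Exists(x, K(x, Exists(y, Atom("blocks_door", (y,)))))
print(phi)
```

Because the index of `K` is an ordinary term, it can be bound by the surrounding quantifier, which is exactly the expressiveness propositional epistemic planning lacks.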


Efficient predicate invention using shared "NeMuS"

arXiv.org Artificial Intelligence

Amao is a cognitive agent framework that tackles the invention of predicates with a strategy different from recent advances in Inductive Logic Programming (ILP), such as the Meta-Interpretive Learning (MIL) technique. It uses a Neural Multi-Space (NeMuS) graph structure to anti-unify atoms from the Herbrand base that pass the inductive momentum check. Inductive Clause Learning (ICL), as it is called, is extended here by using the weights of logical components, already present in NeMuS, to support inductive learning by expanding clause candidates with anti-unified atoms. An efficient invention mechanism is achieved, including the learning of recursive hypotheses, while the shape of the hypothesis is restricted by adding bias definitions or idiosyncrasies of the language.
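
Anti-unification, the operation at the heart of this mechanism, computes a least general generalization of two atoms. A minimal Plotkin-style sketch over plain first-order atoms (Amao's version additionally uses NeMuS weights and the inductive momentum check, which are not modeled here):

```python
def lgg(t1, t2, subst):
    """Plotkin-style least general generalization of two terms.

    Terms are strings (constants) or tuples ('functor', arg, ...).
    subst maps each disagreeing pair of subterms to one fresh variable,
    so repeated disagreements reuse the same variable."""
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(a, b, subst)
                                for a, b in zip(t1[1:], t2[1:]))
    if (t1, t2) not in subst:
        subst[(t1, t2)] = f"X{len(subst)}"
    return subst[(t1, t2)]

# Anti-unifying parent(ann, bob) and parent(ann, carl) yields parent(ann, X0)
print(lgg(("parent", "ann", "bob"), ("parent", "ann", "carl"), {}))
```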


Inductive Logic Programming via Differentiable Deep Neural Logic Networks

arXiv.org Artificial Intelligence

We propose a novel paradigm for solving Inductive Logic Programming (ILP) problems via deep recurrent neural networks. The proposed ILP solver is designed around a differentiable implementation of deduction via forward chaining. In contrast to the majority of past methods, instead of searching through the space of possible first-order logic rules using restrictive rule templates, we directly learn the symbolic logical predicate rules by introducing a novel differentiable Neural Logic (dNL) network. The proposed dNL network is able to learn and represent Boolean functions efficiently and in an explicit manner. We show that the proposed dNL-ILP solver supports desirable features such as recursion and predicate invention. Further, we investigate the performance of the proposed ILP solver in classification tasks involving benchmark relational datasets. In particular, we show that our proposed method outperforms state-of-the-art ILP solvers in classification tasks on the Mutagenesis, Cora and IMDB datasets.
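
The dNL building blocks admit a compact statement. A sketch of differentiable conjunction and disjunction neurons in the spirit of the paper, where trainable membership weights decide which inputs participate (the sigmoid scaling constant is an illustrative assumption):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dnl_conjunction(x, w, c=4.0):
    """Differentiable AND: each input is gated by a trainable membership
    weight m_i = sigmoid(c * w_i). If m_i ~ 0 the input is ignored; if
    m_i ~ 1 the neuron behaves like x_1 AND ... AND x_n.
    (The scaling constant c is illustrative.)"""
    m = sigmoid(c * w)
    return np.prod(1.0 - m * (1.0 - x))

def dnl_disjunction(x, w, c=4.0):
    """Differentiable OR, dual to the conjunction neuron."""
    m = sigmoid(c * w)
    return 1.0 - np.prod(1.0 - m * x)

x = np.array([1.0, 0.0, 1.0])    # fuzzy truth values of the inputs
w = np.array([5.0, -5.0, 5.0])   # has learned to select inputs 1 and 3
print(dnl_conjunction(x, w))     # ~1.0: the selected inputs are all true
print(dnl_disjunction(x, w))     # ~1.0: at least one selected input is true
```

Because the learned memberships saturate toward 0 or 1, the trained network can be read back as an explicit Boolean rule, which is what makes the learned predicates symbolic rather than template-bound.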


CoAPI: An Efficient Two-Phase Algorithm Using Core-Guided Over-Approximate Cover for Prime Compilation of Non-Clausal Formulae

arXiv.org Artificial Intelligence

Prime compilation, i.e., the generation of all prime implicates or implicants (primes for short) of formulae, is a prominent fundamental issue for AI. Recently, prime compilation for non-clausal formulae has received great attention. The state-of-the-art approaches generate all primes along with a prime cover constructed from prime implicates using dual-rail encoding. However, the dual-rail encoding potentially expands the search space. In addition, constructing a prime cover, which is necessary for their methods, is time-consuming. To address these issues, we propose a novel two-phase method, CoAPI. The two phases are the key to constructing a cover without using dual-rail encoding. Specifically, given a non-clausal formula, in the first phase we propose a core-guided method to rewrite the non-clausal formula into a cover constructed from over-approximate implicates. We then generate all the primes based on that cover in the second phase. In order to reduce the size of the cover, we provide a multi-order-based shrinking method, offering a good trade-off between cover size and efficiency, to compress the cover considerably. The experimental results show that CoAPI outperforms state-of-the-art approaches. In particular, for generating all prime implicates, CoAPI consumes about one order of magnitude less time.
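
For orientation, recall the objects involved: an implicate of a formula F is a clause entailed by F, and it is prime if removing any literal breaks the entailment. The brute-force sketch below only illustrates these definitions on a tiny non-clausal formula; it bears no resemblance to CoAPI's core-guided algorithm:

```python
from itertools import combinations, product

def entails(formula, clause, n_vars):
    """Check formula |= clause by enumerating all assignments."""
    for bits in product([False, True], repeat=n_vars):
        if formula(bits) and not any(bits[v] if pos else not bits[v]
                                     for v, pos in clause):
            return False
    return True

def prime_implicates(formula, n_vars):
    """All entailed clauses that are minimal under literal removal."""
    lits = [(v, pos) for v in range(n_vars) for pos in (True, False)]
    implicates = [frozenset(c)
                  for r in range(1, n_vars + 1)
                  for c in combinations(lits, r)
                  if len({v for v, _ in c}) == r    # no repeated variables
                  and entails(formula, c, n_vars)]
    return [c for c in implicates
            if not any(o < c for o in implicates)]  # keep minimal clauses

# F = (x0 AND x1) OR x2, a non-clausal formula over three variables
f = lambda b: (b[0] and b[1]) or b[2]
for c in prime_implicates(f, 3):
    print(sorted(c))   # the two prime implicates: x0 OR x2, and x1 OR x2
```

The exponential enumeration above is exactly what serious approaches avoid; CoAPI instead builds an over-approximate cover first and extracts the primes from it.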


One-shot Information Extraction from Document Images using Neuro-Deductive Program Synthesis

arXiv.org Artificial Intelligence

Our interest in this paper is in meeting a rapidly growing industrial demand for information extraction from images of documents such as invoices, bills, receipts etc. In practice users are able to provide a very small number of example images labeled with the information that needs to be extracted. We adopt a novel two-level 'neuro-deductive' approach where (a) we use pre-trained deep neural networks to populate a relational database with facts about each document image; and (b) we use a form of deductive reasoning, related to meta-interpretive learning of transition systems, to learn extraction programs: given task-specific transitions defined using the entities and relations identified by the neural detectors and a small number of instances (usually 1, sometimes 2) of images …

With the rapid advancement of Deep Learning (DL) for computer vision problems, many DL architectures are available today for document image understanding ([11], [18], [22], [28]). But like most DL-based techniques, training these models from scratch is resource- and data-intensive. This is a major stumbling block for industrial problems for which collecting and annotating data incur significant costs in time and money. In this paper, we use two complementary forms of learning to address this problem: (1) Neural learning: using pre-trained DL models for reading document images and converting them into a structured form by populating a predefined database schema.
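
A sketch of the two-level idea with both levels stubbed out (the schema, identifiers, and the hand-written rule below are invented for illustration; the paper's deductive level learns such programs via meta-interpretive learning rather than taking them as given):

```python
# Level (a): a pre-trained neural reader (stubbed here) emits facts
# about one document image into a relational store.
facts = {
    ("word", "w1", "Total:"),
    ("word", "w2", "$42.00"),
    ("right_of", "w2", "w1"),   # token w2 lies to the right of w1
}

# Level (b): the deductive level would induce an extraction program
# from 1-2 labeled examples; this clause is the *kind* of program such
# learning might produce, written out by hand:
#   amount(W) :- word(K, "Total:"), right_of(W, K).
def amount(facts):
    keys = {w for rel, w, txt in facts if rel == "word" and txt == "Total:"}
    return [w for rel, w, k in facts if rel == "right_of" and k in keys]

words = {w: txt for rel, w, txt in facts if rel == "word"}
print([words[w] for w in amount(facts)])   # ['$42.00']
```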


Learning dynamic polynomial proofs

arXiv.org Machine Learning

Polynomial inequalities lie at the heart of many mathematical disciplines. In this paper, we consider the fundamental computational task of automatically searching for proofs of polynomial inequalities. We adopt the framework of semi-algebraic proof systems that manipulate polynomial inequalities via elementary inference rules that infer new inequalities from the premises. These proof systems are known to be very powerful, but searching for proofs remains a major difficulty. In this work, we introduce a machine learning based method to search for a dynamic proof within these proof systems. We propose a deep reinforcement learning framework that learns an embedding of the polynomials and guides the choice of inference rules, taking the inherent symmetries of the problem as an inductive bias. We compare our approach with powerful and widely-studied linear programming hierarchies based on static proof systems, and show that our method reduces the size of the linear program by several orders of magnitude while also improving performance. These results hence pave the way towards augmenting powerful and well-studied semi-algebraic proof systems with machine learning guiding strategies for enhancing the expressivity of such proof systems.
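
The elementary inference rules in question are easy to state: from premises p ≥ 0 and q ≥ 0 one may derive p + q ≥ 0 and p·q ≥ 0 (and any square s² ≥ 0 may be introduced). A minimal sketch of such derivation steps using sympy; the learned policy that decides which of these moves to apply, and when, is precisely what the paper's RL framework provides:

```python
import sympy as sp

x = sp.symbols("x")

def derive_step(premises):
    """One round of elementary semi-algebraic inference: close the set
    of known-nonnegative polynomials under addition and multiplication."""
    derived = set(premises)
    for p in premises:
        for q in premises:
            derived.add(sp.expand(p + q))
            derived.add(sp.expand(p * q))
    return derived

# Premises: x >= 0 and x - 1 >= 0 (both polynomials nonnegative).
premises = {x, x - 1}
known = derive_step(premises)

target = sp.expand(x * (x - 1))   # goal: x**2 - x >= 0
print(target in known)            # True: derived by the product rule
```

The blow-up is apparent even here: each round squares the candidate set, so an unguided search drowns quickly, which is why a learned selection of inference steps can beat static LP hierarchies.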