logic programming


A new perspective of paramodulation complexity by solving massive 8 puzzles

arXiv.org Artificial Intelligence

A sliding puzzle is a combination puzzle in which a player slides pieces along certain routes on a board to reach a certain end-configuration. In this paper, we propose a novel measure of the complexity of massive sliding puzzles using paramodulation, an inference method from automated reasoning. It turns out that by counting the number of clauses yielded by paramodulation, we can evaluate the difficulty of each puzzle. In our experiments, we generated 100 8-puzzles that passed a solvability check based on counting inversions. This allows us to distinguish the complexity of 8-puzzles by the number of clauses generated with paramodulation. For example, board [2,3,6,1,7,8,5,4, hole] is the easiest with score 3008 and board [6,5,8,7,4,3,2,1, hole] is the most difficult with score 48653. Moreover, we observed several distinct layers of complexity (in the number of clauses generated) across the 100 puzzles. We conclude that the proposed method provides a new perspective on paramodulation complexity for sliding block puzzles.
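The solvability check mentioned in the abstract is a classic result: an 8-puzzle configuration is solvable exactly when its inversion count is even. A minimal sketch in Python; the board encoding with None for the hole is our assumption, not the paper's.

```python
from itertools import combinations

def is_solvable(board):
    """Check 8-puzzle solvability by counting inversions.

    `board` is a length-9 list read row by row, with None for the hole,
    e.g. [2, 3, 6, 1, 7, 8, 5, 4, None]. On a 3x3 board the configuration
    is solvable iff the number of inversions among the tiles is even.
    """
    tiles = [t for t in board if t is not None]  # ignore the hole
    inversions = sum(1 for a, b in combinations(tiles, 2) if a > b)
    return inversions % 2 == 0

# Both example boards from the abstract pass the check (10 and 24 inversions).
print(is_solvable([2, 3, 6, 1, 7, 8, 5, 4, None]))  # True
print(is_solvable([6, 5, 8, 7, 4, 3, 2, 1, None]))  # True
```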


Online Action Recognition

arXiv.org Artificial Intelligence

Recognition in planning seeks to find agent intentions, goals or activities given a set of observations and a knowledge library (e.g. goal states, plans or domain theories). In this work we introduce the problem of Online Action Recognition. It consists of recognizing, in an open world, the planning action that best explains a partially observable state transition from a knowledge library of first-order STRIPS actions, which is initially empty. We frame this as an optimization problem and propose two algorithms to address it: Action Unification (AU) and Online Action Recognition through Unification (OARU). The former builds on logic unification and generalizes two input actions using weighted partial MaxSAT. The latter looks for an action within the library that explains an observed transition. If such an action exists, OARU generalizes it using AU, building up an AU hierarchy in this way. Otherwise, OARU inserts a Trivial Grounded Action (TGA) into the library that explains just that transition. We report results on benchmarks from the International Planning Competition and PDDLGym, where OARU recognizes actions accurately with respect to expert knowledge, and shows real-time performance.
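To make the loop described above concrete, here is a hedged Python sketch of one OARU step over ground states. The set-based action representation and the intersection-based generalization are simplifications we introduce for illustration; the paper performs Action Unification over first-order actions via weighted partial MaxSAT.

```python
def trivial_grounded_action(pre, post):
    """Build a Trivial Grounded Action (TGA): the whole pre-state is the
    precondition; the effects are exactly the facts that changed."""
    return {"pre": set(pre), "add": post - pre, "del": pre - post}

def explains(action, pre, post):
    """An action explains a transition if its preconditions hold in the
    pre-state and applying its effects yields the post-state."""
    return action["pre"] <= pre and (pre - action["del"]) | action["add"] == post

def generalize(a, b):
    """Crude stand-in for Action Unification: keep only the preconditions
    and effects shared by both actions (the paper solves this step as a
    weighted partial MaxSAT problem over first-order actions)."""
    return {k: a[k] & b[k] for k in ("pre", "add", "del")}

def oaru_step(library, pre, post):
    """One OARU step: generalize a matching library action, or insert a
    TGA that covers just this transition."""
    tga = trivial_grounded_action(pre, post)
    for i, action in enumerate(library):
        if explains(action, pre, post):
            library[i] = generalize(action, tga)
            return library[i]
    library.append(tga)
    return tga

library = []
oaru_step(library, {"at_a"}, {"at_b"})  # first transition: a TGA is inserted
```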


Comprehension and Knowledge

arXiv.org Artificial Intelligence

The ability of an agent to comprehend a sentence is tightly connected to the agent's prior experiences and background knowledge. The paper suggests interpreting comprehension as a modality and proposes a complete bimodal logical system that describes the interplay between comprehension and knowledge modalities.


NSL: Hybrid Interpretable Learning From Noisy Raw Data

arXiv.org Artificial Intelligence

Inductive Logic Programming (ILP) systems learn generalised, interpretable rules in a data-efficient manner utilising existing background knowledge. However, current ILP systems require training examples to be specified in a structured logical format. Neural networks learn from unstructured data, although their learned models may be difficult to interpret and are vulnerable to data perturbations at run-time. This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data. NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics. Features extracted by the neural components define the structured context of labelled examples and the confidence of the neural predictions determines the level of noise of the examples. Using the scoring function of FastLAS, NSL searches for short, interpretable rules that generalise over such noisy examples. We evaluate our framework on propositional and first-order classification tasks using the MNIST dataset as raw data. Specifically, we demonstrate that NSL is able to learn robust rules from perturbed MNIST data and achieve comparable or superior accuracy when compared to neural network and random forest baselines whilst being more general and interpretable.
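To illustrate the division of labour NSL describes, here is a minimal Python sketch of turning neural predictions into weighted, structured examples for an ILP learner. The example format and the `classify` function are our placeholders; FastLAS's actual ASP-based input language is not reproduced here.

```python
def to_weighted_examples(images, labels, classify):
    """Convert raw inputs into weighted ILP examples (illustrative sketch).

    `classify` stands in for a pre-trained neural network and returns a
    (symbolic_feature, confidence) pair per image. The feature becomes the
    example's structured context; the confidence becomes the penalty paid
    for not covering the example, so low-confidence (noisy) examples are
    cheaper for the rule learner to ignore.
    """
    examples = []
    for image, label in zip(images, labels):
        feature, confidence = classify(image)
        examples.append({
            "context": feature,     # e.g. the atom digit(3)
            "label": label,         # the supervised target
            "penalty": confidence,  # weight used by the scoring function
        })
    return examples
```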


Towards Coinductive Models for Natural Language Understanding. Bringing together Deep Learning and Deep Semantics

arXiv.org Artificial Intelligence

This article contains a proposal to add coinduction to the computational apparatus of natural language understanding. This, we argue, will provide a basis for more realistic, computationally sound, and scalable models of natural language dialogue, syntax and semantics. Given that the bottom-up, inductively constructed semantic and syntactic structures are brittle, and seemingly incapable of adequately representing the meaning of longer sentences or realistic dialogues, natural language understanding is in need of a new foundation. Coinduction, which uses top-down constraints, has been successfully used in the design of operating systems and programming languages. Moreover, it has been implicitly present in text mining, machine translation, and in some attempts to model intensionality and modalities, which provides evidence that it works. This article shows high-level formalizations of some such uses. Since coinduction and induction can coexist, they can provide a common language and a conceptual model for research in natural language understanding. In particular, such an opportunity seems to be emerging in research on compositionality. This article shows several examples of the joint appearance of induction and coinduction in natural language processing. We argue that the known individual limitations of induction and coinduction can be overcome in empirical settings by a combination of the two methods. We see an open problem in providing a theory of their joint use.
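For readers unfamiliar with the inductive/coinductive distinction, a small example of our own (not from the article): a Python generator defines an infinite stream corecursively, producing elements on demand rather than building a finite structure bottom-up.

```python
from itertools import islice

def fibonacci_stream():
    """A corecursively defined infinite stream of Fibonacci numbers.

    There is no base case that finishes the structure; each element is
    produced on demand from the previous two. Consumers impose top-down
    constraints by taking only the prefix they need -- the coinductive
    counterpart of an inductively built finite list.
    """
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

print(list(islice(fibonacci_stream(), 8)))  # [0, 1, 1, 2, 3, 5, 8, 13]
```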


The AI patent boom

#artificialintelligence

The World Intellectual Property Organization's (WIPO) first report of a series called WIPO Technology Trends, an extensive study of patent applications and other scientific documents, offers clues to the next big thing in AI. Rather than treating 'AI' as a single homogeneous discipline (see our guide to AI terminology), the WIPO report divides it into AI techniques, AI functional applications and AI application fields, offering a finer-grained analysis. AI techniques are advanced forms of statistical and mathematical models used in AI, including machine learning, logic programming, ontology engineering, probabilistic reasoning and fuzzy logic. Machine learning is included in more than one third of all identified inventions and represents 89 per cent of AI filings, the report finds. Between 2013 and 2016, filings related to deep learning rocketed by about 175 per cent.


The Model Counting Competition 2020

arXiv.org Artificial Intelligence

Many computational problems in modern society reduce to probabilistic reasoning, statistics, and combinatorics. A variety of these real-world questions can be solved by representing the question as a (Boolean) formula and associating the number of models of the formula directly with the answer to the question. Since there has been increasing interest in practical problem solving for model counting over the last years, the Model Counting (MC) Competition was conceived in fall 2019. The competition aims to foster applications, identify new challenging benchmarks, and promote new solvers and improve established solvers for the model counting problem and versions thereof. We hope that the results can be a good indicator of the current feasibility of model counting and spark many new applications. In this paper, we report on the details of the Model Counting Competition 2020, on carrying out the competition, and on the results. The competition encompassed three versions of the model counting problem, which we evaluated in separate tracks. The first track featured the model counting problem (MC), which asks for the number of models of a given Boolean formula. On the second track, we challenged developers to submit programs that solve the weighted model counting problem (WMC). The last track was dedicated to projected model counting (PMC). In total, we received a surprising number of 9 solvers in 34 versions from 8 groups.
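To fix ideas about what "the number of models" means, here is a tiny brute-force model counter of our own over DIMACS-style CNF clauses; competition solvers are vastly more sophisticated (e.g. using knowledge compilation), so this is purely illustrative.

```python
from itertools import product

def count_models(n_vars, clauses):
    """Count satisfying assignments of a CNF formula by enumeration.

    `clauses` uses DIMACS-style signed integers: literal k means variable
    k is true, -k means it is false. Runs in O(2^n_vars), so it is only
    usable on toy inputs.
    """
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x2) has exactly 2 models: x2 must be true.
print(count_models(2, [[1, 2], [-1, 2]]))  # 2
```

For WMC one would sum a per-assignment weight instead of adding 1, and for PMC one would count the distinct restrictions of models to a chosen subset of variables.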


Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Fair and Explainable Automatic Recruitment

arXiv.org Artificial Intelligence

Machine learning methods are growing in relevance for biometrics and personal information processing in domains such as forensics, e-health, recruitment, and e-learning. In these domains, white-box (human-readable) explanations of systems built on machine learning methods can become crucial. Inductive Logic Programming (ILP) is a subfield of symbolic AI aimed at automatically learning declarative theories about the processes underlying data. Learning from Interpretation Transition (LFIT) is an ILP technique that can learn a propositional logic theory equivalent to a given black-box system (under certain conditions). The present work takes a first step toward a general methodology for incorporating accurate declarative explanations into classic machine learning by checking the viability of LFIT in a specific AI application scenario: fair recruitment based on an automatic tool, generated with machine learning methods, for ranking Curricula Vitae that incorporates soft biometric information (gender and ethnicity). We show the expressiveness of LFIT for this specific problem and propose a scheme that can be applicable to other domains.
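As a rough illustration of what "learning from interpretation transitions" means, here is a naive positive-rules-only learner of our own devising; real LFIT handles negation and is considerably more refined.

```python
from itertools import combinations

def lfit_naive(variables, transitions):
    """Naive illustration of Learning from Interpretation Transitions.

    `transitions` is a list of (state, next_state) pairs, where each state
    is the set of variables that are true in it. For each target variable
    we search for the shortest positive rule bodies that are consistent
    with every observed transition they fire on.
    """
    rules = []
    for head in sorted(variables):
        for size in range(len(variables) + 1):
            found = []
            for body in combinations(sorted(variables), size):
                body = set(body)
                fired = [nxt for cur, nxt in transitions if body <= cur]
                if fired and all(head in nxt for nxt in fired):
                    found.append((head, body))
            if found:  # keep only the shortest bodies per head
                rules.extend(found)
                break
    return rules

# Toy Boolean network traces: q holds next exactly when p holds now.
print(lfit_naive({"p", "q"}, [({"p"}, {"q"}), (set(), set())]))
# -> [('q', {'p'})], i.e. the rule  q(t+1) :- p(t).
```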


Latent Programmer: Discrete Latent Codes for Program Synthesis

arXiv.org Artificial Intelligence

In many sequence learning tasks, such as program synthesis and document summarization, a key problem is searching over a large space of possible output sequences. We propose to learn representations of the outputs that are specifically meant for search: rich enough to specify the desired output but compact enough to make search more efficient. Discrete latent codes are appealing for this purpose, as they naturally allow sophisticated combinatorial search strategies. The latent codes are learned using a self-supervised learning principle, in which first a discrete autoencoder is trained on the output sequences, and then the resulting latent codes are used as intermediate targets for the end-to-end sequence prediction task. Based on these insights, we introduce the Latent Programmer, a program synthesis method that first predicts a discrete latent code from input/output examples, and then generates the program in the target language. We evaluate the Latent Programmer on two domains: synthesis of string transformation programs, and generation of programs from natural language descriptions. We demonstrate that the discrete latent representation significantly improves synthesis accuracy.
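The two-stage decoding the abstract describes can be sketched generically: first propose a few discrete latent codes from the spec, then expand each into a program and keep the best. Everything below (`top_codes`, `decode`, `score`, and the toy string-transformation demo) is our hypothetical stand-in, not the paper's API.

```python
def synthesize(spec, top_codes, decode, score, beam=3):
    """Two-stage synthesis in the style of the Latent Programmer (sketch).

    `top_codes(spec, k)` proposes k discrete latent codes, `decode(spec,
    code)` expands one code into a candidate program, and `score(spec,
    program)` ranks the candidates; in the paper these roles are played
    by trained neural networks and beam search.
    """
    candidates = [decode(spec, code) for code in top_codes(spec, beam)]
    return max(candidates, key=lambda program: score(spec, program))

# Toy demo: latent "codes" name string operations; a program is the method.
spec = ("hello", "HELLO")  # one input/output example
best = synthesize(
    spec,
    top_codes=lambda s, k: ["upper", "lower", "title"][:k],
    decode=lambda s, code: code,
    score=lambda s, prog: getattr(s[0], prog)() == s[1],
)
print(best)  # 'upper'
```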


Automated Reasoning

#artificialintelligence

Automated reasoning is the general process that gives AI systems an organized framework to define, approach and solve problems. While more a field of research than a specific technique itself, automated reasoning underpins many AI practices, such as logic programming, fuzzy logic, Bayesian inference, and maximum entropy reasoning. The ultimate goal is to create systems that can mimic human deduction without human intervention.