Description Logic


Lutz's Spoiler Technique Revisited: A Unified Approach to Worst-Case Optimal Entailment of Unions of Conjunctive Queries in Locally-Forward Description Logics

arXiv.org Artificial Intelligence

We present a unified approach to (both finite and unrestricted) worst-case optimal entailment of (unions of) conjunctive queries ((U)CQs) in the wide class of "locally-forward" description logics. The main technique that we employ is a generalisation of Lutz's spoiler technique, originally developed for CQ entailment in ALCHQ. Our result closes numerous gaps present in the literature, most notably implying ExpTime-completeness of (U)CQ querying for any superlogic of ALC contained in ALCHb_reg Q, and, we believe, is abstract enough to be employed as a black box in many new scenarios.
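
A small illustration of the reasoning task (our example, not taken from the paper): consider the knowledge base K = { A ⊑ ∃r.B, A(a) } and the Boolean conjunctive query q = ∃x ∃y ( r(x, y) ∧ B(y) ). Every model of K must give a an r-successor that belongs to B, possibly an anonymous element, so K entails q even though no named individual is asserted to be in B. Deciding such entailments over all models, or over all finite models, is the problem whose worst-case complexity the paper settles for locally-forward logics.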


Semantic Reasoning with Differentiable Graph Transformations

arXiv.org Artificial Intelligence

This paper introduces a differentiable semantic reasoner, where rules are represented as a relevant set of graph transformations. These rules can be written manually or inferred from a set of facts and goals presented as a training set. While the internal representation uses embeddings in a latent space, each rule can be expressed as a set of predicates conforming to a subset of Description Logic.
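
As a purely symbolic, hypothetical sketch (the paper works with differentiable embeddings in a latent space, not with this code), a rule viewed as a graph transformation can be read as "whenever the pattern graph matches the facts, add the instantiated head edges". The triple store, the matching routine, and the toy rule below are ours, for illustration only.

    # Minimal, hypothetical sketch of a rule as a graph transformation over a
    # set of facts stored as (subject, predicate, object) triples.
    # Pattern terms starting with "?" are variables.

    def match(pattern, facts):
        """Return variable bindings under which every pattern triple is a fact."""
        bindings = [{}]
        for (s, p, o) in pattern:
            new_bindings = []
            for b in bindings:
                for fact in facts:
                    trial = dict(b)
                    ok = True
                    for term, value in zip((s, p, o), fact):
                        if term.startswith("?"):
                            if trial.get(term, value) != value:
                                ok = False
                                break
                            trial[term] = value
                        elif term != value:
                            ok = False
                            break
                    if ok:
                        new_bindings.append(trial)
            bindings = new_bindings
        return bindings

    def apply_rule(pattern, head, facts):
        """Add the instantiated head triples for every match of the pattern."""
        derived = set(facts)
        for b in match(pattern, facts):
            for (s, p, o) in head:
                derived.add(tuple(b.get(t, t) for t in (s, p, o)))
        return derived

    # Toy rule: a parent of a parent is a grandparent.
    facts = {("ann", "parentOf", "bob"), ("bob", "parentOf", "cia")}
    pattern = [("?x", "parentOf", "?y"), ("?y", "parentOf", "?z")]
    head = [("?x", "grandparentOf", "?z")]
    print(apply_rule(pattern, head, facts))  # adds ("ann", "grandparentOf", "cia")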


A Rational Entailment for Expressive Description Logics via Description Logic Programs

arXiv.org Artificial Intelligence

Lehmann and Magidor's rational closure is acknowledged as a landmark in the field of non-monotonic logics, and it has also been reformulated in the context of Description Logics (DLs). We show here how to model a rational form of entailment for expressive DLs, such as SROIQ, providing a novel reasoning procedure that compiles a non-monotonic DL knowledge base into a description logic program (dl-program).
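
A standard illustration of the intended behaviour (our example, not the paper's): take the axiom Penguin ⊑ Bird together with the defeasible statements "birds typically fly" and "penguins typically do not fly". Reading the defeasible statements as strict subsumptions makes Penguin unsatisfiable, whereas rational closure ranks penguins as more exceptional than birds and concludes that typical birds fly while typical penguins do not, blocking the inheritance of flying by penguins.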


A Description Logic for Analogical Reasoning

arXiv.org Artificial Intelligence

Ontologies formalise how the concepts from a given domain are interrelated. Despite their clear potential as a backbone for explainable AI, existing ontologies tend to be highly incomplete, which acts as a significant barrier to their more widespread adoption. To mitigate this issue, we present a mechanism to infer plausible missing knowledge, which relies on reasoning by analogy. To the best of our knowledge, this is the first paper to study analogical reasoning within the setting of description logic ontologies. After showing that the standard formalisation of analogical proportion has important limitations in this setting, we introduce an alternative semantics based on bijective mappings between sets of features. We then analyse the properties of analogies under the proposed semantics, and show, among other things, how it enables two plausible inference patterns: rule translation and rule extrapolation.
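
For orientation, the sketch below shows the usual feature-set reading of an analogical proportion "A is to B as C is to D": what is dropped and what is added when turning A into B must be exactly what is dropped and added when turning C into D. This is the kind of standard formalisation whose limitations the paper discusses, not its new bijection-based semantics, and the feature sets are hypothetical.

    # Minimal sketch of the standard set-based analogical proportion test
    # (not the bijective-mapping semantics proposed in the paper).

    def analogical_proportion(a: set, b: set, c: set, d: set) -> bool:
        """A : B :: C : D  iff  A minus B = C minus D and B minus A = D minus C."""
        return (a - b == c - d) and (b - a == d - c)

    # Hypothetical feature sets for illustration.
    calf  = {"bovine", "young"}
    cow   = {"bovine", "adult"}
    foal  = {"equine", "young"}
    horse = {"equine", "adult"}

    print(analogical_proportion(calf, cow, foal, horse))   # True
    print(analogical_proportion(calf, cow, horse, foal))   # False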


Signature-Based Abduction with Fresh Individuals and Complex Concepts for Description Logics (Extended Version)

arXiv.org Artificial Intelligence

Given a knowledge base and an observation as a set of facts, ABox abduction aims at computing a hypothesis that, when added to the knowledge base, is sufficient to entail the observation. In signature-based ABox abduction, the hypothesis is further required to use only names from a given set. This form of abduction has applications in diagnosis, KB repair, and explaining missing entailments. It is possible that hypotheses for a given observation only exist if we admit the use of fresh individuals and/or complex concepts built from the given signature, something that most approaches to ABox abduction so far either do not support or support only with restrictions. In this paper, we investigate the computational complexity of this form of abduction -- allowing either fresh individuals, complex concepts, or both -- for various description logics, and give size bounds on the hypotheses if they exist.
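
A small illustrative instance (ours, not from the paper): given the knowledge base { ∃hasSymptom.Fever ⊑ Infection } and the observation Infection(p), the hypothesis { hasSymptom(p, x), Fever(x) } entails the observation but requires the fresh individual x, while the complex-concept assertion (∃hasSymptom.Fever)(p) achieves the same without one; restricting the hypothesis signature to {hasSymptom, Fever} additionally rules out the trivial hypothesis Infection(p).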


Finding Good Proofs for Description Logic Entailments Using Recursive Quality Measures (Extended Technical Report)

arXiv.org Artificial Intelligence

Logic-based approaches to AI have the advantage that their behavior can in principle be explained to a user. If, for instance, a Description Logic reasoner derives a consequence that triggers some action of the overall system, then one can explain such an entailment by presenting a proof of the consequence in an appropriate calculus. How comprehensible such a proof is depends not only on the employed calculus, but also on the properties of the particular proof, such as its overall size, its depth, the complexity of the employed sentences and proof steps, etc. For this reason, we want to determine the complexity of generating proofs that are below a certain threshold w.r.t. a given measure of proof quality. Rather than investigating this problem for a fixed proof calculus and a fixed measure, we aim for general results that hold for wide classes of calculi and measures. In previous work, we first restricted our attention to a setting where proof size is used to measure the quality of a proof. We then extended the approach to a more general setting, but important measures such as proof depth were not covered. In the present paper, we provide results for a class of measures called recursive, which yields lower complexities and also encompasses proof depth. In addition, we close some gaps left open in our previous work, thus providing a comprehensive picture of the complexity landscape.
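
To make the notion of a recursively defined measure concrete, the sketch below computes two such measures on a toy proof tree, the number of proof steps and the depth of the longest branch; the data structure and the toy proof are ours and are not meant to reflect the paper's formal framework.

    # Minimal sketch of recursively defined proof-quality measures on proof trees.

    from dataclasses import dataclass, field

    @dataclass
    class ProofStep:
        conclusion: str
        premises: list = field(default_factory=list)

    def size(step: ProofStep) -> int:
        """Total number of proof steps."""
        return 1 + sum(size(p) for p in step.premises)

    def depth(step: ProofStep) -> int:
        """Length of the longest branch."""
        return 1 + max((depth(p) for p in step.premises), default=0)

    # Toy proof of A ⊑ C from the premises A ⊑ B and B ⊑ C.
    proof = ProofStep("A ⊑ C", [ProofStep("A ⊑ B"), ProofStep("B ⊑ C")])
    print(size(proof), depth(proof))  # 3 2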


Learning Description Logic Ontologies. Five Approaches. Where Do They Stand?

arXiv.org Artificial Intelligence

The quest for acquiring a formal representation of the knowledge of a domain of interest has attracted researchers with various backgrounds into a diverse field called ontology learning. We highlight classical machine learning and data mining approaches that have been proposed for (semi-)automating the creation of description logic (DL) ontologies. These are based on association rule mining, formal concept analysis, inductive logic programming, computational learning theory, and neural networks. We provide an overview of each approach and how it has been adapted for dealing with DL ontologies. Finally, we discuss the benefits and limitations of each of them for learning DL ontologies.


On the Complexity of Learning Description Logic Ontologies

arXiv.org Artificial Intelligence

Ontologies are a popular way of representing domain knowledge, in particular, knowledge in domains related to life sciences. (Semi-)automating the process of building an ontology has attracted researchers from different communities into a field called "Ontology Learning". We provide a formal specification of the exact and the probably approximately correct learning models from computational learning theory. Then, we recall from the literature complexity results for learning lightweight description logic (DL) ontologies in these models. Finally, we highlight other approaches proposed in the literature for learning DL ontologies.
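
For orientation (our gloss, not the survey's exact definitions): in the exact model the learner must identify the target ontology, typically by posing membership and equivalence queries to an oracle, while in the probably approximately correct (PAC) model it must, with probability at least 1 − δ over examples drawn from an unknown distribution, output a hypothesis whose error is at most ε, using resources polynomial in 1/ε, 1/δ and the size of the target.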


A conditional, a fuzzy and a probabilistic interpretation of self-organising maps

arXiv.org Artificial Intelligence

In this paper we establish a link between preferential semantics for description logics and self-organising maps (SOMs), which have been proposed as possible candidates to explain the psychological mechanisms underlying category generalisation. In particular, we show that a concept-wise multipreference semantics, which takes into account preferences with respect to different concepts and has recently been proposed for defeasible description logics, can be used to provide a logical interpretation of SOMs. We also provide a logical interpretation of SOMs in terms of a fuzzy description logic as well as a probabilistic account.


First Order-Rewritability and Containment of Conjunctive Queries in Horn Description Logics

#artificialintelligence

We study FO-rewritability of conjunctive queries in the presence of ontologies formulated in a description logic between EL and Horn-SHIF, along with related query containment problems. Apart from providing characterizations, we establish complexity results ranging from ExpTime via NExpTime to 2ExpTime, pointing out several interesting effects. In particular, FO-rewriting is more complex for conjunctive queries than for atomic queries when inverse roles are present, but not otherwise.
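
A small example of the task (ours, not from the paper): under the EL axiom ∃hasPart.Engine ⊑ Machine, the atomic query Machine(x) is FO-rewritable into Machine(x) ∨ ∃y (hasPart(x, y) ∧ Engine(y)), which can then be evaluated directly over the data; the paper characterises when such first-order rewritings exist for arbitrary conjunctive queries under Horn ontologies and how hard this is to decide.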