Cleaning Inconsistent Data in Temporal DL-Lite Under Best Repair Semantics

arXiv.org Artificial Intelligence

In this paper, we address the problem of handling inconsistent data in Temporal Description Logic (TDL) knowledge bases. Considering the data part of the knowledge base as the source of inconsistency over time, we propose an ABox repair approach; to the best of our knowledge, this is the first work to handle repair in TDL knowledge bases. Our goal is twofold: 1) detect temporal inconsistencies and 2) propose a temporal repair of the data. For inconsistency detection, we propose a reduction from TDL to DL that allows us to establish a tight NP-complete upper bound for TDL concept satisfiability and to use highly optimised DL reasoners that can provide precise explanations (the sets of inconsistent data assertions). From the obtained explanations, we then propose a method for automatically computing the best repair in the temporal setting, based on the allowed rigid predicates and the time order of the assertions.
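
As a rough illustration of the repair step, the sketch below greedily resolves conflict sets (the inconsistency explanations returned by the DL reasoner) over a timestamped ABox, preferring to keep rigid and more recent assertions. All names (Assertion, best_repair) and the preference order are illustrative assumptions, not the paper's actual algorithm or semantics.

```python
# Hypothetical greedy sketch of ABox repair over timestamped assertions.
# The paper's "best repair" semantics is more refined; this only shows the
# shape of the computation: break every conflict set by dropping assertions,
# preferring to drop flexible (non-rigid) and older facts first.
from dataclasses import dataclass

@dataclass(frozen=True)
class Assertion:
    predicate: str   # concept or role name
    args: tuple      # individuals
    time: int        # time point at which the assertion is stated
    rigid: bool      # rigid predicates must hold at all time points

def best_repair(abox, conflict_sets):
    """Return a consistent subset of `abox` that breaks every conflict set.

    `conflict_sets` is a collection of frozensets of assertions, each of
    which is jointly inconsistent.
    """
    kept = set(abox)
    # Drop candidates in preference order: flexible before rigid, old before new.
    for a in sorted(abox, key=lambda a: (a.rigid, a.time)):
        if any(a in c and c <= kept for c in conflict_sets):
            kept.discard(a)   # `a` still participates in an intact conflict
    assert not any(c <= kept for c in conflict_sets)   # result is consistent
    return kept
```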


Finding Good Proofs for Description Logic Entailments Using Recursive Quality Measures (Extended Technical Report)

arXiv.org Artificial Intelligence

Logic-based approaches to AI have the advantage that their behavior can in principle be explained to a user. If, for instance, a Description Logic reasoner derives a consequence that triggers some action of the overall system, then one can explain such an entailment by presenting a proof of the consequence in an appropriate calculus. How comprehensible such a proof is depends not only on the employed calculus, but also on the properties of the particular proof, such as its overall size, its depth, the complexity of the employed sentences and proof steps, etc. For this reason, we want to determine the complexity of generating proofs that are below a certain threshold w.r.t. a given measure of proof quality. Rather than investigating this problem for a fixed proof calculus and a fixed measure, we aim for general results that hold for wide classes of calculi and measures. In previous work, we first restricted attention to a setting where proof size is used to measure the quality of a proof. We then extended the approach to a more general setting, but important measures such as proof depth were not covered. In the present paper, we provide results for a class of measures, called recursive, which yields lower complexities and also encompasses proof depth. In addition, we close some gaps left open in our previous work, thus providing a comprehensive picture of the complexity landscape.
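
The defining property of a recursive measure, namely that the quality of a proof is computed from the qualities of its subproofs, can be sketched in a few lines. The tree representation below is an assumption for illustration, not the paper's formalism.

```python
# Minimal sketch of "recursive" proof measures: the measure of a proof is
# a function of the measures of its immediate subproofs.
from dataclasses import dataclass, field

@dataclass
class ProofStep:
    conclusion: str
    premises: list = field(default_factory=list)  # subproofs (ProofStep)

def size(p: ProofStep) -> int:
    # Proof size: total number of inference steps.
    return 1 + sum(size(q) for q in p.premises)

def depth(p: ProofStep) -> int:
    # Proof depth: length of the longest branch.
    return 1 + max((depth(q) for q in p.premises), default=0)

# A two-premise toy proof: A ⊑ B and B ⊑ C entail A ⊑ C.
root = ProofStep("A ⊑ C", [ProofStep("A ⊑ B"), ProofStep("B ⊑ C")])
print(size(root), depth(root))  # 3 2
```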


Conjunctive Queries: Unique Characterizations and Exact Learnability

arXiv.org Artificial Intelligence

We answer the question of which conjunctive queries are uniquely characterized by polynomially many positive and negative examples, and show how such examples can be constructed efficiently. As a consequence, we obtain a new efficient exact learning algorithm for a class of conjunctive queries. At the core of our contributions lie two new polynomial-time algorithms for constructing frontiers in the homomorphism lattice of finite structures. We also discuss implications for the unique characterizability and learnability of schema mappings and of description logic concepts.
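
The basic operation underlying the homomorphism lattice, and hence frontiers and example-based characterizations, is the homomorphism test between finite structures. The naive exhaustive version below is exponential (the problem is NP-complete in general); the paper's contribution is polynomial-time frontier construction, which this sketch does not attempt.

```python
# Brute-force homomorphism test between finite relational structures.
# A structure is a dict mapping relation names to sets of tuples.
from itertools import product

def homomorphism_exists(A, B):
    dom_a = sorted({x for ts in A.values() for t in ts for x in t})
    dom_b = sorted({x for ts in B.values() for t in ts for x in t})
    def ok(h):
        # h is a total map dom_a -> dom_b; check it preserves all relations.
        return all(tuple(h[x] for x in t) in B.get(r, set())
                   for r, ts in A.items() for t in ts)
    return any(ok(dict(zip(dom_a, img)))
               for img in product(dom_b, repeat=len(dom_a)))

# A path of length 2 maps homomorphically into a single self-loop:
P3 = {"E": {(0, 1), (1, 2)}}
loop = {"E": {(0, 0)}}
print(homomorphism_exists(P3, loop))  # True
```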


Extending Description Logic EL++ with Linear Constraints on the Probability of Axioms

arXiv.org Artificial Intelligence

One of the main reasons to employ a description logic such as EL or EL++ is that it has efficient, polynomial-time algorithmic properties, such as deciding consistency and inferring subsumption. However, simply adding negation of concepts yields the expressivity of description logics whose decision procedure is ExpTime-complete. A similar complexity explosion occurs if we add probability assignments on concepts. To lower the resulting complexity, we instead concentrate on assigning probabilities to axioms (general concept inclusions, GCIs). We show that the consistency detection problem for such a probabilistic description logic is NP-complete, and present a deterministic, linear-algebraic algorithm to solve it, based on the column generation technique. We also examine and provide algorithms for the probabilistic extension problem, which consists of inferring the minimum and maximum probabilities of a new axiom, given a consistent probabilistic knowledge base.
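
On a toy scale, the probabilistic extension problem is a pair of linear programs over a distribution on "worlds" (subsets of axioms). The sketch below enumerates all worlds explicitly, which is exactly the blowup the paper's column generation technique avoids; the probabilities and the entailment pattern are made-up examples, and we assume all worlds are logically consistent.

```python
# Toy instance of the probabilistic extension problem as two LPs.
# Requires numpy and scipy.
import numpy as np
from scipy.optimize import linprog

worlds = [(0, 0), (0, 1), (1, 0), (1, 1)]   # which of axiom1/axiom2 hold
p_ax1, p_ax2 = 0.7, 0.5                     # given axiom probabilities

# Equality constraints: the marginal of each axiom, and total mass 1.
A_eq = np.array([[w[0] for w in worlds],
                 [w[1] for w in worlds],
                 [1, 1, 1, 1]], dtype=float)
b_eq = np.array([p_ax1, p_ax2, 1.0])

# Toy assumption: the new axiom is entailed exactly where both axioms hold.
c = np.array([1.0 if w == (1, 1) else 0.0 for w in worlds])

lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)   # min P(new)
hi = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)  # max P(new)
print(lo.fun, -hi.fun)  # 0.2 and 0.5: the entailed probability interval
```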


Consequence-Based Reasoning for Description Logics with Disjunctions and Number Restrictions

Journal of Artificial Intelligence Research

Classification of description logic (DL) ontologies is a key computational problem in modern data management applications, so considerable effort has been devoted to the development and optimisation of practical reasoning calculi. Consequence-based calculi combine ideas from hypertableau and resolution in a way that has proved very effective in practice. However, existing consequence-based calculi can handle either Horn DLs (which do not support disjunction) or DLs without number restrictions. In this paper, we overcome this important limitation and present the first consequence-based calculus for deciding concept subsumption in the DL ALCHIQ+. Our calculus runs in exponential time assuming unary coding of numbers, and on ELH ontologies it runs in polynomial time. The extension to disjunctions and number restrictions is technically involved: we capture the relevant consequences using first-order clauses, and our inference rules adapt paramodulation techniques from first-order theorem proving. By using a well-known preprocessing step, the calculus can also decide concept subsumptions in SRIQ---a rich DL that covers all features of OWL 2 DL apart from nominals and datatypes. We have implemented our calculus in a new reasoner called Sequoia. We present the architecture of our reasoner and discuss several novel and important implementation techniques such as clause indexing and redundancy elimination. Finally, we present the results of an extensive performance evaluation, which revealed Sequoia to be competitive with existing reasoners. Thus, the calculus and the techniques we present in this paper provide an important addition to the repertoire of practical implementation techniques for description logic reasoning.
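
For intuition on the polynomial ELH case, the classic EL completion algorithm, which consequence-based calculi generalise, can be written down directly. The normal form and rule set below follow the well-known EL reasoning literature; they are not the ALCHIQ+ calculus of this paper.

```python
# EL completion: saturate subsumer sets S(C) and role edges R under four
# rules, for a TBox normalised as:
#   ("sub", A, B)           A ⊑ B
#   ("conj", A1, A2, B)     A1 ⊓ A2 ⊑ B
#   ("exists_r", A, r, B)   A ⊑ ∃r.B
#   ("exists_l", r, A, B)   ∃r.A ⊑ B
def el_classify(concepts, tbox):
    S = {c: {c, "⊤"} for c in concepts}   # subsumers of each concept
    R = set()                             # derived role edges (C, r, D)
    changed = True
    while changed:
        changed = False
        def add(c, d):
            nonlocal changed
            if d not in S[c]:
                S[c].add(d); changed = True
        for c in concepts:
            for ax in tbox:
                if ax[0] == "sub" and ax[1] in S[c]:
                    add(c, ax[2])
                elif ax[0] == "conj" and ax[1] in S[c] and ax[2] in S[c]:
                    add(c, ax[3])
                elif ax[0] == "exists_r" and ax[1] in S[c]:
                    if (c, ax[2], ax[3]) not in R:
                        R.add((c, ax[2], ax[3])); changed = True
        for (c, r, d) in list(R):
            for ax in tbox:
                if ax[0] == "exists_l" and ax[1] == r and ax[2] in S[d]:
                    add(c, ax[3])
    return S

# A ⊑ ∃r.B, B ⊑ C, ∃r.C ⊑ D together entail A ⊑ D:
tbox = [("exists_r", "A", "r", "B"), ("sub", "B", "C"),
        ("exists_l", "r", "C", "D")]
print(el_classify(["A", "B", "C"], tbox)["A"])  # contains 'D'
```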


Probabilistic DL Reasoning with Pinpointing Formulas: A Prolog-based Approach

arXiv.org Artificial Intelligence

When modeling real-world domains, we have to deal with information that is incomplete or that comes from sources with different trust levels. This motivates the need for managing uncertainty in the Semantic Web. To this purpose, we introduced a probabilistic semantics, named DISPONTE, in order to combine description logics with probability theory. The probability of a query can then be computed from the set of its explanations by building a Binary Decision Diagram (BDD). The set of explanations can be found using the tableau algorithm, which has to handle non-determinism. Prolog, with its efficient handling of non-determinism, is well suited for implementing the tableau algorithm. TRILL and TRILLP are systems offering a Prolog implementation of the tableau algorithm. TRILLP builds a pinpointing formula that compactly represents the set of explanations and can be directly translated into a BDD. Both reasoners were shown to outperform state-of-the-art DL reasoners. In this paper, we present an improvement of TRILLP, named TORNADO, in which the BDD is built directly during the construction of the tableau, further speeding up the overall inference process. An experimental comparison shows the effectiveness of TORNADO. All systems can be tried online in the TRILL on SWISH web application at http://trill.ml.unife.it/.
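
Once a BDD for the explanations is available, the query probability follows by one bottom-up pass, with each internal node weighted by the probability of the axiom it tests. The node layout below is a toy stand-in, not TORNADO's data structures.

```python
# Standard probability computation over a BDD, as used by DISPONTE-style
# reasoners: P(node) = p(axiom)·P(high) + (1 - p(axiom))·P(low).
class Node:
    def __init__(self, prob=None, high=None, low=None, leaf=None):
        self.prob, self.high, self.low, self.leaf = prob, high, low, leaf

def bdd_prob(n: Node) -> float:
    if n.leaf is not None:                 # terminal node: 1 or 0
        return float(n.leaf)
    return n.prob * bdd_prob(n.high) + (1 - n.prob) * bdd_prob(n.low)

true, false = Node(leaf=1), Node(leaf=0)
# Query provable iff axiom1 holds or axiom2 holds (two explanations):
n2 = Node(prob=0.4, high=true, low=false)   # test axiom2 (P = 0.4)
n1 = Node(prob=0.3, high=true, low=n2)      # test axiom1 (P = 0.3)
print(bdd_prob(n1))   # 0.3 + 0.7 * 0.4 = 0.58
```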


Finite Query Answering in Expressive Description Logics with Transitive Roles

arXiv.org Artificial Intelligence

We study the problem of finite ontology-mediated query answering (FOMQA), the variant of OMQA where the represented world is assumed to be finite, and thus only finite models of the ontology are considered. We adopt the most typical setting, with unions of conjunctive queries and ontologies expressed in description logics (DLs). The study of FOMQA is relevant in settings that are not finitely controllable: this is the case not only for DLs without the finite model property, but also for those allowing transitive role declarations. When transitive roles are allowed, evaluating queries is challenging: FOMQA is undecidable for SHOIF and only known to be decidable for the Horn fragment of ALCIF. We show decidability of FOMQA for three proper fragments of SOIF: SOI, SOF, and SIF. Our approach is to characterise models relevant for deciding finite query entailment. Relying on a certain regularity of these models, we develop automata-based decision procedures with optimal complexity bounds.


Exploiting Partial Assignments for Efficient Evaluation of Answer Set Programs with External Source Access

Journal of Artificial Intelligence Research

Answer Set Programming (ASP) is a well-known declarative problem solving approach based on nonmonotonic logic programs, which has been successfully applied to a wide range of applications in artificial intelligence and beyond. To address the needs of modern applications, HEX-programs were introduced as an extension of ASP with external atoms for accessing information outside programs via an API-style bi-directional interface mechanism. To evaluate such programs, conflict-driven learning algorithms for SAT and ASP solving have been extended in order to capture the semantics of external atoms. However, a drawback of the state-of-the-art approach is that external atoms are only evaluated under complete assignments (i.e., input to the external source), while in practice their values can often be determined already based on partial assignments alone (i.e., from incomplete input to the external source). This prevents early backtracking in case of conflicts, and hinders more efficient evaluation of HEX-programs. We thus extend the notion of external atoms to allow for three-valued evaluation under partial assignments, while the two-valued semantics of the overall HEX-formalism remains unchanged. This paves the way for three enhancements: first, to evaluate external sources at any point during model search, which can trigger learning knowledge about the source behavior and/or early backtracking in the spirit of theory propagation in SAT modulo theories (SMT). Second, to optimize the knowledge learned in terms of so-called nogoods, which roughly speaking are impossible input-output configurations. Shrinking nogoods to their relevant input part leads to more effective search space pruning. And third, to make a necessary minimality check of candidate answer sets more efficient by exploiting early external evaluation calls. As this check usually accounts for a large share of the total runtime, optimization is here particularly important. We further present an experimental evaluation of an implementation of a novel HEX-algorithm that incorporates these enhancements using a benchmark suite. Our results demonstrate a clear efficiency gain over the state-of-the-art HEX-solver for the benchmarks, and provide insights regarding the most effective combinations of solver configurations.
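
The nogood-shrinking idea can be illustrated independently of HEX: if an external atom's output is already determined by a partial input, then atoms whose removal leaves the output determined can be dropped from the learned nogood. Everything below (the function names, the toy external source) is an assumption for illustration, not the actual HEX API.

```python
# Partial assignments map atoms to True/False; absent atoms are unknown.
def external(assignment):
    # Toy external source under three-valued evaluation: returns True/False
    # once determined by the partial input, or None while still unknown.
    # Here the output happens to depend only on atom "a".
    return assignment.get("a")

def shrink_nogood(assignment, output):
    """Drop input atoms whose truth value does not affect the output."""
    relevant = dict(assignment)
    for atom in list(relevant):
        trial = {k: v for k, v in relevant.items() if k != atom}
        if external(trial) == output:   # output still determined without it
            del relevant[atom]
    return relevant

full = {"a": True, "b": False, "c": True}
print(shrink_nogood(full, external(full)))   # {'a': True}
```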


Ontologies Come of Age

AITopics Original Links

This is an updated version of "Usability Issues in Description Logic Systems", published in the Proceedings of the International Workshop on Description Logics, Gif-sur-Yvette (Paris), France, September 1997.


The Complexity of Answering Conjunctive and Navigational Queries over OWL 2 EL Knowledge Bases

Journal of Artificial Intelligence Research

OWL 2 EL is a popular ontology language that supports role inclusions---that is, axioms that capture compositional properties of roles. Role inclusions closely correspond to context-free grammars, a correspondence that has been used to show that answering conjunctive queries (CQs) over OWL 2 EL knowledge bases with unrestricted role inclusions is undecidable. However, OWL 2 EL inherits from OWL 2 DL the syntactic regularity restriction on role inclusions, which ensures that the role chains implying a particular role can be described using a finite automaton (FA). This is sufficient to ensure decidability of CQ answering; however, the FAs can be worst-case exponential in size, so the known approaches do not provide a tight upper complexity bound. In this paper, we solve this open problem and show that answering CQs over OWL 2 EL knowledge bases is PSPACE-complete in combined complexity (i.e., the complexity measured in the total size of the input). To this end, we use a novel encoding of regular role inclusions using bounded-stack pushdown automata---that is, FAs extended with a stack of bounded size. Apart from its theoretical interest, our encoding can be used in practical tableau algorithms to avoid the exponential blowup due to role inclusions. In addition, we sharpen the lower complexity bound and show that the problem is PSPACE-hard even if we consider only role inclusions as part of the input (i.e., the query and all other parts of the knowledge base are fixed). Finally, we turn our attention to navigational queries over OWL 2 EL knowledge bases, and we show that answering positive, converse-free conjunctive graph XPath queries is PSPACE-complete as well; this is interesting since allowing the converse operator in queries is known to make the problem EXPTIME-hard. Thus, in this paper we present several important contributions to the landscape of the complexity of answering expressive queries over description logic knowledge bases.
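
A bounded-stack pushdown automaton has only finitely many configurations, so membership reduces to plain reachability over them. The toy recogniser below shows this principle on a^n b^n; it is not the paper's construction from regular role inclusions, which is substantially more involved.

```python
# Toy bounded-stack PDA membership test by breadth-first reachability over
# (state, stack, position) configurations. Transitions consume one input
# symbol; acceptance requires a final state and an empty stack.
from collections import deque

def accepts(word, start, finals, delta, bound):
    """delta: (state, symbol, top) -> set of (state, replacement).
    `top` is the current stack top ('' if empty); a transition replaces the
    top with `replacement` ('' = pop, two symbols = replace and push)."""
    seen = set()
    queue = deque([(start, "", 0)])
    while queue:
        q, stack, i = queue.popleft()
        if i == len(word) and q in finals and not stack:
            return True
        if i == len(word) or (q, stack, i) in seen:
            continue
        seen.add((q, stack, i))
        top = stack[-1] if stack else ""
        for q2, repl in delta.get((q, word[i], top), set()):
            stack2 = (stack[:-1] if top else stack) + repl
            if len(stack2) <= bound:   # the bounded-stack restriction
                queue.append((q2, stack2, i + 1))
    return False

# Recognise a^n b^n with stack depth ≤ bound: push X per 'a', pop per 'b'.
delta = {("p", "a", ""): {("p", "X")}, ("p", "a", "X"): {("p", "XX")},
         ("p", "b", "X"): {("q", "")},  ("q", "b", "X"): {("q", "")}}
print(accepts("aabb", "p", {"q"}, delta, bound=3))   # True
print(accepts("aab",  "p", {"q"}, delta, bound=3))   # False
```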