Stoilos, Giorgos
A Framework and Positive Results for IAR-answering
Trivela, Despoina (Athens University of Economics and Business) | Stoilos, Giorgos (Babylon Health) | Vassalos, Vasilis (Athens University of Economics and Business)
Inconsistency-tolerant semantics, like the IAR semantics, have been proposed as a means to compute meaningful query answers over inconsistent Description Logic (DL) ontologies. So far, query answering under the IAR semantics (IAR-answering) is known to be tractable only for arguably weak DLs like DL-Lite and the quite restricted EL⊥nr fragment of EL⊥. Towards providing a systematic study of IAR-answering, in the current paper we first present a general framework/algorithm for IAR-answering which applies to arbitrary DLs but need not terminate. Nevertheless, this framework allows us to develop a sufficient condition for tractability of IAR-answering and hence for termination of our algorithm. We then show that this condition is always satisfied by the arguably expressive DL DL-Lite_bool, providing the first positive result for IAR-answering over a non-Horn DL. In addition, recent results show that this condition usually holds for real-world ontologies, and techniques and algorithms for checking it in practice have also been studied recently; thus, overall our results are highly relevant in practice. Finally, we provide a prototype implementation and a preliminary evaluation, obtaining encouraging results.
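To make the IAR semantics concrete, the following is a minimal brute-force sketch, not the paper's framework or algorithm: the toy ABox, the single disjointness constraint, and all names are illustrative assumptions. It enumerates the repairs (maximal consistent subsets) of an inconsistent ABox and keeps only the answers supported by their intersection.

```python
# Minimal, hypothetical sketch of IAR-answering on a toy inconsistent ABox.
from itertools import combinations

# Toy ABox: class assertions C(a).
abox = {("Student", "ann"), ("Staff", "ann"), ("Student", "bob")}

# Toy TBox constraint: Student and Staff are disjoint.
def consistent(assertions):
    people = {ind for (_, ind) in assertions}
    return not any(("Student", p) in assertions and ("Staff", p) in assertions
                   for p in people)

# Repairs = maximal consistent subsets of the ABox.
subsets = [set(s) for r in range(len(abox) + 1) for s in combinations(abox, r)]
consistent_subsets = [s for s in subsets if consistent(s)]
repairs = [s for s in consistent_subsets
           if not any(s < t for t in consistent_subsets)]

# IAR-answers to the query Student(x): assertions present in every repair.
intersection = set.intersection(*repairs)
iar_answers = {ind for (cls, ind) in intersection if cls == "Student"}
print(iar_answers)  # {'bob'} -- 'ann' is dropped because her assertions conflict
```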
Computing Datalog Rewritings Beyond Horn Ontologies
Grau, Bernardo Cuenca (University of Oxford) | Motik, Boris (University of Oxford) | Stoilos, Giorgos (National Technical University of Athens) | Horrocks, Ian (University of Oxford)
Rewriting-based approaches for answering queries over an OWL 2 DL ontology have so far been developed mainly for Horn fragments of OWL 2 DL. In this paper, we study the possibilities of answering queries over non-Horn ontologies using datalog rewritings. We prove that this is impossible in general even for very simple ontology languages, and even if PTIME = NP. Furthermore, we present a resolution-based procedure for SHI ontologies that, in case it terminates, produces a datalog rewriting of the ontology. We also show that our procedure necessarily terminates on DL-Lite_bool^{H,+} ontologies, an extension of OWL 2 QL with transitive roles and Boolean connectives.
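As a standard illustration of the gap the paper addresses (the axioms below are not taken from the paper), a Horn axiom translates directly into a datalog rule, whereas the clausal form of a non-Horn axiom has a disjunctive head and therefore no plain datalog counterpart:

```latex
% Illustrative only: a Horn axiom and its datalog rule, versus a non-Horn
% axiom whose clausal form needs a disjunctive head.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\[
  \exists R.A \sqsubseteq B
  \quad\rightsquigarrow\quad
  B(x) \leftarrow R(x,y) \wedge A(y)
\]
\[
  A \sqsubseteq B \sqcup C
  \quad\rightsquigarrow\quad
  B(x) \vee C(x) \leftarrow A(x)
  \quad\text{(disjunctive head, not plain datalog)}
\]
\end{document}
```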
Benchmarking Ontology-Based Query Rewriting Systems
Imprialou, Martha (University of Oxford) | Stoilos, Giorgos (National Technical University of Athens) | Grau, Bernardo Cuenca (University of Oxford)
Query rewriting is a prominent reasoning technique in ontology-based data access applications. A wide variety of query rewriting algorithms have been proposed in recent years and implemented in highly optimised reasoning systems. Query rewriting systems are complex software programs; even if they are based on provably correct algorithms, sophisticated optimisations make them more complex and errors more likely. In this paper, we present an algorithm that, given an ontology as input, synthetically generates "relevant" test queries. Intuitively, each of these queries can be used to verify whether the system correctly performs a certain set of "inferences", each of which can be traced back to axioms in the input ontology. Furthermore, we present techniques that allow us to determine whether a system is unsound and/or incomplete for a given test query and ontology. Our evaluation shows that most publicly available query rewriting systems are unsound and/or incomplete, even on commonly used benchmark ontologies; more importantly, our techniques revealed the precise causes of their correctness issues, and the systems were then corrected based on our feedback. Finally, since our evaluation is based on a larger set of test queries than existing benchmarks, which rely on hand-crafted queries, it also provides a better understanding of the scalability behaviour of each system.
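A minimal sketch of the unsoundness/incompleteness check described above (the function and the toy answer sets are hypothetical, not the paper's implementation): a system is flagged unsound if it returns answers outside the reference answers for a test query, and incomplete if it misses some of them.

```python
# Hypothetical sketch: classify a system's behaviour on one test query
# by comparing its answers against the reference (certain) answers.
def classify(system_answers, certain_answers):
    verdicts = []
    if system_answers - certain_answers:
        verdicts.append("unsound")      # spurious answers returned
    if certain_answers - system_answers:
        verdicts.append("incomplete")   # required answers missing
    return verdicts or ["sound and complete on this test query"]

# Toy usage: reference answers {a, c}; the system returned {a, b}.
print(classify({"a", "b"}, {"a", "c"}))  # ['unsound', 'incomplete']
```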
How Incomplete Is Your Semantic Web Reasoner?
Stoilos, Giorgos (Oxford University Computing Laboratory) | Grau, Bernardo Cuenca (Oxford University Computing Laboratory) | Horrocks, Ian (Oxford University Computing Laboratory)
Conjunctive query answering is a key reasoning service for many ontology-based applications. In order to improve scalability, many Semantic Web query answering systems give up completeness (i.e., they do not guarantee to return all query answers). It may be useful or even critical to the designers and users of such systems to understand how much and what kind of information is (potentially) being lost. We present a method for generating test data that can be used to provide at least partial answers to these questions, a purpose for which existing benchmarks are not well suited. In addition to developing a general framework that formalises the problem, we describe practical data generation algorithms for some popular ontology languages, and present some very encouraging results from our preliminary evaluation.
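The following is a small, hypothetical illustration of the kind of test such generated data gives rise to (the axiom, data, query, and names are assumptions made here for illustration): the data yields an answer only via reasoning with an ontology axiom, so a system that skips that inference visibly loses the answer.

```python
# Hypothetical illustration only, not the paper's generator. Under the axiom
#   exists hasSupervisor.Professor  subClassOf  PhDStudent
# the data below yields 'ann' for the query PhDStudent(x) only by reasoning.
test_data = {("hasSupervisor", "ann", "bob"), ("Professor", "bob")}
expected_answers = {"ann"}

def completeness_ratio(returned, expected):
    """Fraction of the expected answers a system actually returned."""
    return len(returned & expected) / len(expected)

print(completeness_ratio({"ann"}, expected_answers))  # 1.0: nothing lost
print(completeness_ratio(set(), expected_answers))    # 0.0: the inference was skipped
```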