"An ontology defines the terms used to describe and represent an area of knowledge. … Ontologies include computer-usable definitions of basic concepts in the domain and the relationships among them."
– from OWL Web Ontology Language Use Cases and Requirements. W3C Recommendation (10 February 2004). Jeff Heflin, editor.
The seminaïve algorithm can materialise all consequences of arbitrary datalog rules, and it also forms the basis for incremental algorithms that update a materialisation as the input facts change. Certain (combinations of) rules, however, can be handled much more efficiently using custom algorithms. To integrate such algorithms into a general reasoning approach that can handle arbitrary rules, we propose a modular framework for materialisation computation and its maintenance. We split a datalog program into modules that can be handled using specialised algorithms, and handle the remaining rules using the seminaïve algorithm. We also present two algorithms for computing the transitive and the symmetric-transitive closure of a relation that can be used within our framework. Finally, we show empirically that our framework can handle arbitrary datalog programs while outperforming existing approaches, often by orders of magnitude.
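The seminaïve strategy mentioned above can be illustrated on the transitive-closure rules themselves. The sketch below is a minimal, generic seminaïve evaluation in Python (function and variable names are illustrative, not from the paper), not the paper's specialised closure algorithm: in each round, only the facts derived in the previous round (the "delta") are joined with the base relation, which avoids rederiving known facts.

```python
def transitive_closure(edges):
    """Seminaïve materialisation of:
         T(x, z) :- E(x, z).
         T(x, z) :- T(x, y), E(y, z).
    `edges` is a set of (x, y) pairs; returns the set of all T-facts.
    """
    closure = set(edges)   # all facts derived so far
    delta = set(edges)     # facts that are new in the last round
    while delta:
        # Join only the NEW T-facts with E, instead of re-joining all of T.
        derived = {(x, z) for (x, y) in delta
                          for (y2, z) in edges if y == y2}
        delta = derived - closure   # keep only genuinely new facts
        closure |= delta
    return closure
```

A specialised closure algorithm, as proposed in the paper, can beat this generic loop because it can exploit the specific shape of the two rules rather than performing general joins.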
This paper develops the concept of knowledge and its exchange using Semantic Web technologies. It points out that knowledge is more than information because it embodies meaning, that is to say, semantics and context. These characteristics influence our approach to representing and processing knowledge. In order to be adopted, the developed system needs to be simple and to use standards. The goal of the paper is to find standards to model knowledge and exchange it with another person. We therefore propose to model knowledge using UML models, to provide a graphical representation, and to exchange it via XML, to ensure portability at low cost. We introduce the concept of ontology for organizing knowledge and for facilitating knowledge exchange. The proposals have been tested by implementing an application on the design knowledge of a pen.
We study the problem of finite ontology mediated query answering (FOMQA), the variant of OMQA where the represented world is assumed to be finite, and thus only finite models of the ontology are considered. We adopt the most typical setting with unions of conjunctive queries and ontologies expressed in description logics (DLs). The study of FOMQA is relevant in settings that are not finitely controllable. This is the case not only for DLs without the finite model property, but also for those allowing transitive role declarations. When transitive roles are allowed, evaluating queries is challenging: FOMQA is undecidable for SHOIF and only known to be decidable for the Horn fragment of ALCIF. We show decidability of FOMQA for three proper fragments of SOIF: SOI, SOF, and SIF. Our approach is to characterise models relevant for deciding finite query entailment. Relying on a certain regularity of these models, we develop automata-based decision procedures with optimal complexity bounds.
Combined approaches have become a successful technique for solving conjunctive query (CQ) answering over description logics (DL) ontologies. Nevertheless, existing approaches are restricted to tractable DL languages. In this work, we extend the combined method to the more expressive DL Horn-ALCHOIQ—a language for which CQ answering is EXPTIME-complete—in order to develop an efficient and scalable CQ answering procedure which is worst-case optimal for Horn-ALCHOIQ and ELHO ontologies. We implement and study the feasibility of our algorithm, and compare its performance to the DL reasoner Konclude.
Baader, Franz (Technische Universität Dresden) | Kriegel, Francesco (Technische Universität Dresden) | Nuradiansyah, Adrian (Technische Universität Dresden) | Peñaloza, Rafael (Free University of Bozen-Bolzano)
The classical approach for repairing a Description Logic ontology O in the sense of removing an unwanted consequence c is to delete a minimal number of axioms from O such that the resulting ontology O' does not have the consequence c. However, the complete deletion of axioms may be too rough, in the sense that it may also remove consequences that are actually wanted. To alleviate this problem, we propose a more gentle notion of repair in which axioms are not deleted, but only weakened. On the one hand, we investigate general properties of this gentle repair method. On the other hand, we propose and analyze concrete approaches for weakening axioms expressed in the Description Logic EL.
The problem of representing and reasoning with context-dependent knowledge has been of interest since the beginning of AI. Among the available solutions, we consider the Contextualized Knowledge Repository (CKR) framework. In CKR applications it is often useful to reason over a hierarchical organization of contexts; however, the CKR model is not able to represent exception handling in the inheritance of knowledge across contexts. In this paper we develop a proposal, based on a recent principle for exception handling for inheritance in description logics, that allows CKRs with context-dependent defeasible axioms which can be overridden by more specific local knowledge. We provide an alternative semantics for a core (simple) version of CKR that copes with contextual defeasible axioms, and we define a datalog translation generating programs that are complete w.r.t. the proposed semantics.
Machine learning explanation can significantly boost machine learning's application in decision making, but the usability of current methods is limited in human-centric explanation, especially for transfer learning, an important machine learning branch that aims at utilizing knowledge from one learning domain (i.e., a pair of dataset and prediction task) to enhance prediction model training in another learning domain. In this paper, we propose an ontology-based approach for human-centric explanation of transfer learning. Three kinds of knowledge-based explanatory evidence, with different granularities, including general factors, particular narrators and core contexts, are first proposed and then inferred with both local ontologies and external knowledge bases. The evaluation with US flight data and DBpedia has demonstrated their confidence and availability in explaining the transferability of feature representation in flight departure delay forecasting.
Concept diagrams form a visual language that is aimed at non-experts for the specification of ontologies and reasoning about them. Empirical evidence suggests that they are more accessible to ontology users than symbolic notations typically used (e.g., DL, OWL). Here, we report on iCon, an interactive theorem prover for concept diagrams that allows reasoning about ontologies diagrammatically. The input to iCon is a theorem that needs proving to establish how an entailment, in an ontology that needs debugging, is caused by a minimal set of axioms. Such a minimal set of axioms is called an entailment justification. Carrying out inference in iCon provides a diagrammatic proof (i.e., explanation) that shows how the axioms in an entailment justification give rise to the entailment under investigation. iCon proofs are formally verified and guaranteed to be correct.
Andresel, Medina (Vienna University of Technology) | Ibanez-Garcia, Yazmin Angelica (Vienna University of Technology) | Ortiz, Magdalena (Vienna University of Technology) | Simkus, Mantas (Vienna University of Technology)
We investigate query reformulation rules in OBDA to obtain either more or fewer answers. We extend DL-Lite with complex role inclusions and define rules that produce query relaxations/restrictions over any dataset. We also introduce a set of data-driven rules to obtain more fine-grained reformulations. In ontology-based data access (OBDA), an ontology provides a conceptual view of a collection of data sources and describes knowledge about the domain of interest at a high level of abstraction. Users can thus formulate queries over data sources using a familiar vocabulary provided by the ontology, while the represented knowledge can be leveraged to retrieve more complete answers.
Metadata, such as mappings or constraints, is used in a variety of scenarios to facilitate query answering; these include data integration and exchange, consistent query answering, and ontology-based data access. A common feature of these scenarios is that data and metadata together produce multiple databases, and answers to queries must be certain, i.e., true in all such databases. This usually incurs prohibitively high complexity outside very restricted classes of queries such as conjunctive queries and their unions. To overcome this, we propose to approximate such query answering by reducing it to another scenario where multiple databases need to be taken into account, namely incomplete information in databases. For them, well-behaved approximation schemes exist for much larger classes of queries. We give a generic representation of query answering via incomplete data, and show how it works in the scenarios listed above. We use the connection to show how to effectively approximate several intractable query answering problems, and discuss differences between applying this framework under open and closed world semantics.
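The reduction above targets certain answers over incomplete databases, for which a classical well-behaved case is conjunctive queries over tables with labelled nulls: evaluating the query as if nulls were ordinary constants and keeping only null-free answers yields exactly the certain answers. The sketch below illustrates this on a single fixed query (all relation, class, and function names are illustrative assumptions, not from the paper):

```python
class Null:
    """A labelled null: a value known to exist but whose identity is unknown.
    Distinct Null objects only match themselves (Python identity semantics)."""
    def __init__(self, label):
        self.label = label
    def __repr__(self):
        return f"_:{self.label}"

def certain_answers(emp, dept):
    """Certain answers to  q(n) :- Emp(n, d), Dept(d, 'Sales')
    over incomplete relations `emp` and `dept` (sets of tuples that may
    contain Null values): evaluate naively, then drop answers with nulls."""
    answers = set()
    for (name, d) in emp:
        for (d2, unit) in dept:
            # Nulls compare by identity, so a shared null still joins.
            if d == d2 or d is d2:
                # A null unit cannot certainly equal 'Sales';
                # a null name is not a certain answer.
                if unit == "Sales" and not isinstance(name, Null):
                    answers.add(name)
    return answers
```

For queries beyond (unions of) conjunctive queries this naive evaluation is no longer correct in general, which is precisely the gap the approximation framework described in the abstract aims to bridge.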