Al-Bakri, Mustafa (University of Grenoble Alpes) | Atencia, Manuel (University of Grenoble Alpes) | Lalande, Steffen (Institut National de l’Audiovisuel) | Rousset, Marie-Christine (University of Grenoble Alpes)
In this paper we model the problem of data linkage in Linked Data as a reasoning problem over possibly decentralized data. We describe a novel import-by-query algorithm that alternates steps of sub-query rewriting and of tailored querying of the Linked Data cloud in order to import data as specific as possible for inferring or contradicting given target same-as facts. Experiments conducted on a real-world dataset demonstrate the feasibility of this approach and its usefulness in practice for data linkage and disambiguation.
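The import-by-query loop described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the rewriting rules, the key properties, and the mocked remote source are all assumptions chosen to show the alternation between sub-query rewriting and targeted remote querying.

```python
# Hypothetical sketch of an import-by-query loop. A target same-as fact is
# rewritten into sub-queries (here, one per key property); each sub-query is
# answered against an external source (mocked as a dict), and only the facts
# needed are imported, until the target is inferred or contradicted.

# Local facts and a mocked remote source, keyed by (subject, predicate).
local = {("ina:f1", "dc:title"): "Apostrophes", ("ina:f1", "dc:date"): "1990"}
remote = {("bnf:n1", "dc:title"): "Apostrophes", ("bnf:n1", "dc:date"): "1990"}

# Assumption: sameAs(x, y) is inferable iff x and y agree on all key properties.
KEY_PROPERTIES = ["dc:title", "dc:date"]

def import_by_query(x, y):
    """Alternate sub-query rewriting and remote querying to decide sameAs(x, y)."""
    for prop in KEY_PROPERTIES:             # rewriting: one sub-query per key property
        if (y, prop) not in local:          # missing fact: query the remote source
            value = remote.get((y, prop))
            if value is None:
                return None                 # neither inferable nor contradicted
            local[(y, prop)] = value        # import only the facts we need
        if local[(y, prop)] != local.get((x, prop)):
            return False                    # contradicting evidence found
    return True                             # all key properties agree

print(import_by_query("ina:f1", "bnf:n1"))  # → True
```

The point of the alternation is that only the facts relevant to the pending sub-query are imported, rather than crawling the remote dataset wholesale.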
Effective communication in open environments relies on the ability of agents to reach a mutual understanding of the messages exchanged by reconciling the vocabularies (ontologies) used. Various approaches have considered how mutually acceptable mappings between corresponding concepts in the agents' own ontologies may be determined dynamically through argumentation-based negotiation (such as Meaning-based Argumentation). However, the complexity of this process is high, reaching Π₂ᵖ-completeness in some cases. As reducing this complexity is non-trivial, we propose the use of ontology modularization as a means of reducing the space over which possible concepts are negotiated. The suitability of different modularization approaches as filtering mechanisms for reducing the negotiation search space is investigated, and a framework that integrates modularization with Meaning-based Argumentation is proposed. We empirically demonstrate that some modularization approaches not only reduce the number of alignments required to reach consensus, but also predict those cases where a service provider is unable to satisfy a request, without the need for negotiation.
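The filtering idea above can be illustrated with a minimal module-extraction sketch. This is an assumption-laden toy (real locality-based module extraction is considerably more involved): the ontology is reduced to direct subclass edges, and the module is the upward closure of the request's signature.

```python
# Minimal sketch of signature-based modularization as a negotiation filter.
# The ontology maps each concept to its direct superclasses; the module is
# the set of concepts reachable from the signature via superclass edges.

ontology = {
    "SportsCar": ["Car"], "Car": ["Vehicle"], "Truck": ["Vehicle"],
    "Vehicle": [], "Boat": ["Vehicle"], "Engine": [],
}

def extract_module(signature):
    """Return the concepts reachable from the signature via superclass edges."""
    module, frontier = set(), list(signature)
    while frontier:
        concept = frontier.pop()
        if concept in module or concept not in ontology:
            continue
        module.add(concept)
        frontier.extend(ontology[concept])
    return module

# Only module concepts need to be negotiated over, shrinking the search space.
print(sorted(extract_module({"SportsCar"})))  # → ['Car', 'SportsCar', 'Vehicle']
```

Concepts outside the module (here `Truck`, `Boat`, `Engine`) never enter the argumentation process, which is exactly the search-space reduction the abstract refers to.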
There are many significant research projects focused on providing semantic web repositories that are scalable and efficient. However, the true value of the semantic web architecture is its ability to represent meaningful knowledge and not just data. Therefore, a semantic web knowledge base should do more than retrieve collections of triples. We propose RDFKB (Resource Description Knowledge Base), a complete semantic web knowledge base. RDFKB is a solution for managing, persisting and querying semantic web knowledge. Our experiments with real-world and synthetic datasets demonstrate that RDFKB achieves superior query performance to other state-of-the-art solutions. The key features of RDFKB that differentiate it from other solutions are: 1) a simple and efficient process for data additions, deletions and updates that does not involve reprocessing the dataset; 2) materialization of inferred triples at addition time without performance degradation; 3) materialization of uncertain information and support for queries involving probabilities; 4) distributed inference across datasets; 5) the ability to apply alignments to the dataset and perform queries against multiple sources using alignments. RDFKB allows more knowledge to be stored and retrieved; it is a repository not just for RDF datasets, but also for inferred triples, probability information, and lineage information. RDFKB provides a complete and efficient RDF data repository and knowledge base.
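Feature 2, materialization of inferred triples at addition time, can be sketched as eager forward chaining on insert. This is an illustrative toy, not RDFKB's actual machinery: the single `rdfs:subClassOf` rule and the in-memory set are assumptions.

```python
# Hypothetical sketch of addition-time materialization: when a triple is
# added, forward chaining immediately materializes its consequences, so
# queries are plain lookups with no inference cost at read time.

triples = set()

# One illustrative rule: rdfs:subClassOf propagates rdf:type upward.
subclass_of = {"SportsCar": "Car", "Car": "Vehicle"}

def add_triple(s, p, o):
    """Insert a triple and eagerly materialize its consequences."""
    if (s, p, o) in triples:
        return
    triples.add((s, p, o))
    if p == "rdf:type" and o in subclass_of:   # forward-chain the type upward
        add_triple(s, "rdf:type", subclass_of[o])

add_triple("ex:herbie", "rdf:type", "SportsCar")
# Querying for Vehicles is now a set lookup, with no inference at query time:
print(("ex:herbie", "rdf:type", "Vehicle") in triples)  # → True
```

The trade-off is classic: materialization spends space and insert-time work to make queries cheap, which is why the abstract stresses that additions avoid reprocessing the whole dataset.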
Ontology matching is the problem of determining correspondences between concepts, properties, and individuals of different heterogeneous ontologies. In this paper we present a novel probabilistic-logical framework for ontology matching based on Markov logic. We define the syntax and semantics and provide a formalization of the ontology matching problem within the framework. The approach has several advantages over existing methods, such as ease of experimentation, incoherence mitigation during the alignment process, and the incorporation of a-priori confidence values. We show empirically that the approach is efficient and more accurate than existing matchers on an established ontology alignment benchmark dataset.
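The Markov-logic view can be illustrated in miniature: candidate correspondences are weighted ground atoms (the a-priori confidences), incoherent combinations are ruled out by hard constraints, and MAP inference selects the consistent subset of maximum total weight. The candidates, the one-target-per-source constraint, and the brute-force search below are illustrative assumptions; real systems use ILP or MLN solvers.

```python
# Illustrative MAP-style selection over weighted candidate correspondences
# with a hard coherence constraint (brute force over subsets).

from itertools import combinations

# (source concept, target concept): a-priori confidence from a base matcher.
candidates = {("Car", "Automobile"): 0.9, ("Car", "Auto"): 0.6,
              ("Boat", "Ship"): 0.8}

def incoherent(a, b):
    """Hard constraint (assumed): a source concept maps to at most one target."""
    return a[0] == b[0]

def map_alignment(cands):
    """Return the consistent subset of correspondences with maximum total weight."""
    best, best_weight = set(), 0.0
    atoms = list(cands)
    for r in range(len(atoms) + 1):
        for subset in combinations(atoms, r):
            if any(incoherent(a, b) for a, b in combinations(subset, 2)):
                continue                      # violates a hard constraint
            weight = sum(cands[c] for c in subset)
            if weight > best_weight:
                best, best_weight = set(subset), weight
    return best

print(map_alignment(candidates))
```

Note how the weaker `("Car", "Auto")` candidate is dropped not by thresholding but because keeping the stronger `("Car", "Automobile")` atom forbids it, which is the "incoherence mitigation during the alignment process" the abstract highlights.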
Luz, Nuno (GECAD - Knowledge Engineering and Decision Support Research Center) | Silva, Nuno (GECAD - Knowledge Engineering and Decision Support Research Center, Institute of Engineering - Polytechnic of Porto (ISEP/IPP)) | Maio, Paulo (GECAD - Knowledge Engineering and Decision Support Research Center, Institute of Engineering - Polytechnic of Porto (ISEP/IPP)) | Novais, Paulo (CCTC - Computer Science and Technology Center, University of Minho)
Currently, the majority of matchers are able to establish simple correspondences between entities, but are not able to provide complex alignments. Furthermore, the resulting alignments do not contain additional information on how they were extracted and formed. Not only does it become hard to debug the alignment results, but it is also difficult to justify correspondences. We propose a method to generate complex ontology alignments that captures the semantics of matching algorithms and human-oriented ontology alignment definition processes. Through these semantics, arguments that provide an abstraction over the specificities of the alignment process are generated and used by agents to share, negotiate and combine correspondences. After the negotiation process, the resulting arguments and their relations can be visualized by humans in order to debug and understand the given correspondences.
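A minimal data-structure sketch shows what such arguments might carry. The `Argument` fields and the majority-based acceptance rule are hypothetical simplifications of the paper's argumentation framework, meant only to show how recorded evidence makes correspondences debuggable and justifiable after negotiation.

```python
# Hypothetical argument structure over correspondences: each argument records
# the correspondence it supports or attacks and the matcher evidence behind
# it, so agents can exchange arguments and humans can later inspect them.

from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    correspondence: tuple   # (source entity, target entity, relation)
    pro: bool               # supports (True) or attacks (False)
    evidence: str           # which matcher or process produced it

def accepted(correspondence, arguments):
    """Naive acceptance rule (assumed): supporters strictly outnumber attackers."""
    pro = sum(1 for a in arguments if a.correspondence == correspondence and a.pro)
    con = sum(1 for a in arguments if a.correspondence == correspondence and not a.pro)
    return pro > con

args = [
    Argument(("Car", "Automobile", "="), True, "label similarity"),
    Argument(("Car", "Automobile", "="), True, "structural match"),
    Argument(("Car", "Automobile", "="), False, "instance mismatch"),
]
print(accepted(("Car", "Automobile", "="), args))  # → True
```

Because every accepted correspondence keeps its supporting and attacking arguments attached, a human reviewer can trace exactly why it survived negotiation, which is the debugging capability the abstract emphasizes.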