Ontologies


Introducing the viewpoint in the resource description using machine learning

#artificialintelligence

Search engines aim to provide users with information that matches their interests and specialty. This requires exploiting resource descriptions that take viewpoints into consideration. Resource descriptions are generally available in RDF (e.g., DBpedia, built from Wikipedia content), but these descriptions do not account for viewpoints. In this paper, we propose a new approach for converting a classic RDF resource description into a resource description that takes viewpoints into consideration. To detect viewpoints in a document, a machine learning technique is applied to an instantiated ontology, which represents the viewpoints of a given domain. An experimental study shows that converting the classic RDF resource description into a viewpoint-aware description yields very relevant responses to users' requests.
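As a rough illustration of the kind of conversion described above, the sketch below (Python with rdflib) contrasts a plain RDF triple with a reified statement carrying a viewpoint annotation. The ex: vocabulary and the viewpoint property are assumptions made for illustration, not the paper's actual schema.

```python
from rdflib import Graph, Namespace, Literal, BNode, RDF

EX = Namespace("http://example.org/")  # hypothetical vocabulary, not the paper's schema

g = Graph()
g.bind("ex", EX)

# Classic description: a single triple with no notion of viewpoint.
g.add((EX.Java, EX.definedAs, Literal("An island of Indonesia")))

# Viewpoint-aware description: the same statement reified and tagged with the
# viewpoint (e.g., geography vs. computer science) under which it holds.
stmt = BNode()
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX.Java))
g.add((stmt, RDF.predicate, EX.definedAs))
g.add((stmt, RDF.object, Literal("An island of Indonesia")))
g.add((stmt, EX.viewpoint, EX.Geography))

print(g.serialize(format="turtle"))
```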


Expressing High-Level Scientific Claims with Formal Semantics

arXiv.org Artificial Intelligence

The use of semantic technologies is gaining significant traction in science communication, with a wide array of applications in disciplines including the Life Sciences, Computer Science, and the Social Sciences. Languages like RDF, OWL, and other formalisms based on formal logic are applied to make scientific knowledge accessible not only to human readers but also to automated systems. These approaches have mostly focused on the structure of scientific publications themselves, on the scientific methods and equipment used, or on the structure of the underlying datasets. The core claims or hypotheses of scientific work have only been covered in a shallow manner, such as by linking mentioned entities to established identifiers. In this research, we therefore want to find out whether we can use existing semantic formalisms to fully express the content of high-level scientific claims using formal semantics in a systematic way. Analyzing the main claims from a sample of scientific articles from all disciplines, we find that their semantics are more complex than what a straightforward application of formalisms like RDF or OWL accounts for, but we were able to elicit a clear semantic pattern which we call the 'super-pattern'. We show here how the instantiation of the five slots of this super-pattern leads to a strictly defined statement in higher-order logic. We successfully applied this super-pattern to an enlarged sample of scientific claims. We show that knowledge representation experts, when instructed to independently instantiate the super-pattern with given scientific claims, show a high degree of consistency and convergence given the complexity of the task and the subject. These results therefore open the door to expressing high-level scientific findings in a manner in which they can be automatically interpreted, which in the longer run can enable automated consistency checking and much more.


Union and Intersection of all Justifications

arXiv.org Artificial Intelligence

We present new algorithms for computing the union and intersection of all justifications for a given ontological consequence without first computing the set of all justifications. Through an empirical evaluation, we show that our approach works well in practice for expressive description logics. In particular, the union of all justifications can be computed much faster than with existing justification-enumeration approaches. We further discuss how to use these results to repair ontologies.


A formalisation of BPMN in Description Logics

arXiv.org Artificial Intelligence

In this paper we present a textual description, in terms of Description Logics, of the BPMN Ontology, which provides a clear semantic formalisation of the structural components of the Business Process Modelling Notation (BPMN), based on the latest stable BPMN specifications from OMG [BPMN Version 1.1 -- January 2008]. The development of the ontology was guided by the description of the complete set of BPMN Element Attributes and Types contained in Annex B of the BPMN specifications.


Automated and Explainable Ontology Extension Based on Deep Learning: A Case Study in the Chemical Domain

arXiv.org Artificial Intelligence

Reference ontologies provide a shared vocabulary and knowledge resource for their domain. Manual construction enables them to maintain a high quality, allowing them to be widely accepted across their community. However, the manual development process does not scale for large domains. We present a new methodology for automatic ontology extension and apply it to the ChEBI ontology, a prominent reference ontology for life sciences chemistry. We trained a Transformer-based deep learning model on the leaf node structures from the ChEBI ontology and the classes to which they belong. The model is then capable of automatically classifying previously unseen chemical structures. The proposed model achieved an overall F1 score of 0.80, an improvement of 6 percentage points over our previous results on the same dataset. Additionally, we demonstrate how visualizing the model's attention weights can help to explain the results by providing insight into how the model made its decisions.


Ontology-based n-ball Concept Embeddings Informing Few-shot Image Classification

arXiv.org Artificial Intelligence

We propose a novel framework named ViOCE that integrates ontology-based background knowledge in the form of $n$-ball concept embeddings into a neural network based vision architecture. The approach consists of two components - converting symbolic knowledge of an ontology into continuous space by learning n-ball embeddings that capture properties of subsumption and disjointness, and guiding the training and inference of a vision model using the learnt embeddings. We evaluate ViOCE using the task of few-shot image classification, where it demonstrates superior performance on two standard benchmarks.


Blockchains through ontologies: the case study of the Ethereum ERC721 standard in OASIS (Extended Version)

arXiv.org Artificial Intelligence

Blockchains are gaining momentum due to the interest of industries and people in \emph{decentralized applications} (Dapps), particularly in those for trading assets through digital certificates secured on blockchain, called tokens. As a consequence, providing a clear unambiguous description of any activities carried out on blockchains has become crucial, and we feel the urgency to achieve that description at least for trading. This paper reports on how to leverage the \emph{Ontology for Agents, Systems, and Integration of Services} ("\ONT{}") as a general means for the semantic representation of smart contracts stored on blockchain as software agents. Special attention is paid to non-fungible tokens (NFTs), whose management through the ERC721 standard is presented as a case study.


Fixpoint Semantics for Recursive SHACL

arXiv.org Artificial Intelligence

SHACL is a W3C-proposed language for expressing structural constraints on RDF graphs. The recommendation only specifies semantics for non-recursive SHACL; recently, some efforts have been made to allow recursive SHACL schemas. In this paper, we argue that for defining and studying semantics of recursive SHACL, lessons can be learned from years of research in non-monotonic reasoning. We show that from a SHACL schema, a three-valued semantic operator can directly be obtained. Building on Approximation Fixpoint Theory (AFT), this operator immediately induces a wide variety of semantics, including a supported, stable, and well-founded semantics, related in the expected ways. By building on AFT, a rich body of theoretical results becomes directly available for SHACL. As such, the main contribution of this short paper is providing theoretical foundations for the study of recursive SHACL, which can later enable an informed decision for an extension of the W3C recommendation.


An Ontology-Based Information Extraction System for Residential Land Use Suitability Analysis

arXiv.org Artificial Intelligence

We propose an Ontology-Based Information Extraction (OBIE) system to automate the extraction of the criteria and values applied in Land Use Suitability Analysis (LUSA) from bylaw and regulation documents related to the geographic area of interest. The results obtained by our proposed LUSA OBIE system (land use suitability criteria and their values) are presented as an ontology populated with instances of the extracted criteria and property values. This latter output ontology is incorporated into a Multi-Criteria Decision Making (MCDM) model applied for constructing suitability maps for different kinds of land uses. The resulting maps may be the final desired product or can be incorporated into the cellular automata urban modeling and simulation for predicting future urban growth. A case study has been conducted where the output from LUSA OBIE is applied to help produce a suitability map for the City of Regina, Saskatchewan, to assist in the identification of suitable areas for residential development. A set of Saskatchewan bylaw and regulation documents were downloaded and input to the LUSA OBIE system. We accessed the extracted information using both the populated LUSA ontology and the set of annotated documents. In this regard, the LUSA OBIE system was effective in producing a final suitability map.


Matching with Transformers in MELT

arXiv.org Artificial Intelligence

One of the strongest signals for automated matching of ontologies and knowledge graphs are the textual descriptions of the concepts. The methods that are typically applied (such as character- or token-based comparisons) are relatively simple, and therefore do not capture the actual meaning of the texts. With the rise of transformer-based language models, text comparison based on meaning (rather than lexical features) is possible. In this paper, we model the ontology matching task as classification problem and present approaches based on transformer models. We further provide an easy to use implementation in the MELT framework which is suited for ontology and knowledge graph matching. We show that a transformer-based filter helps to choose the correct correspondences given a high-recall alignment and already achieves a good result with simple alignment post-processing methods.