Ontologies


Why "Ontology" Will Be A Big Word In Your Company's Future

Forbes Technology

Who's doing this? 75% of the Fortune 500 companies have some kind of smart data or semantics program underway, most under the banner of 360 initiatives, comprehensive enterprise data systems, or machine learning/data science projects. Amazon has recently added linked data capabilities to their AWS infrastructure with the Neptune project, and social media giants have built their entire data infrastructure around smart ontological data. Moreover, China, Japan, England, the OECD, and the United States have all moved critical data resources into semantic form, and semantics has become one of the hottest areas for investment banks such as Wells Fargo, Morgan Stanley, Citigroup, Goldman Sachs and others. It even ties into such cutting-edge technologies as Blockchain and the Internet of Things.


The Shape of a Benedictine Monastery: The SaintGall Ontology (Extended Version)

arXiv.org Artificial Intelligence

We present an OWL 2 ontology representing the Saint Gall plan, one of the most ancient documents to have come down to us intact, which describes the ideal model of a Benedictine monastic complex that inspired the design of many European monasteries.
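The ontology itself is not reproduced in the abstract, but a minimal sketch of what declaring such spatial classes in OWL 2 can look like, here with rdflib in Python, may help; the namespace and class names (Building, Church, Cloister, Scriptorium) are illustrative assumptions, not terms taken from the SaintGall ontology.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Hypothetical namespace; the real SaintGall ontology defines its own IRIs.
SG = Namespace("http://example.org/saintgall#")

g = Graph()
g.bind("sg", SG)
g.bind("owl", OWL)

# A tiny, illustrative class hierarchy for monastery spaces.
for cls in (SG.Building, SG.Church, SG.Cloister, SG.Scriptorium):
    g.add((cls, RDF.type, OWL.Class))
for cls in (SG.Church, SG.Cloister, SG.Scriptorium):
    g.add((cls, RDFS.subClassOf, SG.Building))
g.add((SG.Scriptorium, RDFS.label, Literal("Scriptorium", lang="en")))

print(g.serialize(format="turtle"))
```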


A Computational Theory for Life-Long Learning of Semantics

arXiv.org Artificial Intelligence

Semantic vectors are learned from data to express semantic relationships between elements of information, for the purpose of solving and informing downstream tasks. Other models exist that learn to map and classify supervised data. However, the two worlds of learning rarely interact to inform one another dynamically, whether across types of data or levels of semantics, in order to form a unified model. We explore the research problem of learning these vectors and propose a framework for learning the semantics of knowledge incrementally and online, across multiple mediums of data, via binary vectors. We discuss the aspects of this framework to spur future research on this approach and problem.
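The paper's actual framework is only outlined in the abstract; as a rough illustration of the underlying idea of binary semantic vectors that are accumulated online from co-occurrence and compared by overlap, here is a toy sketch in Python. The dimensionality, sparsity, thresholding, and Jaccard similarity are all assumptions for illustration, not the authors' design.

```python
import numpy as np

DIM = 2048  # vector dimensionality (illustrative assumption)
rng = np.random.default_rng(0)

def random_binary(dim=DIM, density=0.02):
    """Sparse random binary index vector assigned to each new symbol."""
    v = np.zeros(dim, dtype=np.uint8)
    v[rng.choice(dim, size=int(dim * density), replace=False)] = 1
    return v

index = {}    # symbol -> fixed random index vector
context = {}  # symbol -> accumulated co-occurrence counts

def observe(symbols):
    """Online update: each symbol accumulates the index vectors of co-occurring symbols."""
    for s in symbols:
        index.setdefault(s, random_binary())
        context.setdefault(s, np.zeros(DIM, dtype=np.int32))
    for s in symbols:
        for other in symbols:
            if other != s:
                context[s] += index[other]

def semantic_vector(symbol):
    """Binarize accumulated counts (simple thresholding) into a binary semantic vector."""
    c = context[symbol]
    return (c > c.mean()).astype(np.uint8)

def similarity(a, b):
    """Jaccard overlap between two binary semantic vectors."""
    va, vb = semantic_vector(a), semantic_vector(b)
    union = np.logical_or(va, vb).sum()
    return np.logical_and(va, vb).sum() / union if union else 0.0

observe(["monastery", "church", "cloister"])
observe(["monastery", "church", "abbot"])
observe(["database", "query", "index"])
print(similarity("church", "cloister"), similarity("church", "query"))
```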


Automatic White-Box Testing of First-Order Logic Ontologies

arXiv.org Artificial Intelligence

Formal ontologies are axiomatizations in a logic-based formalism. The development of formal ontologies, and their important role in the Semantic Web area, is generating considerable research on the use of automated reasoning techniques and tools that help in ontology engineering. One of the main aims is to refine and improve axiomatizations for enabling automated reasoning tools to efficiently infer reliable information. Defects in the axiomatization can not only cause wrong inferences, but can also hinder the inference of expected information, either by increasing the computational cost of, or even preventing, the inference. In this paper, we introduce a novel, fully automatic white-box testing framework for first-order logic ontologies. Our methodology is based on the detection of inference-based redundancies in the given axiomatization. The application of the proposed testing method is fully automatic since a) the automated generation of tests is guided only by the syntax of axioms and b) the evaluation of tests is performed by automated theorem provers. Our proposal enables the detection of defects and serves to certify the degree of suitability, for reasoning purposes, of every axiom. We formally define the set of tests that are generated from any axiom and prove that every test is logically related to redundancies in the axiom from which the test has been generated. We have implemented our method and used this implementation to automatically detect several non-trivial defects that were hidden in various first-order logic ontologies. Throughout the paper we provide illustrative examples of these defects, explain how they were found, and how each proof, produced by an automated theorem prover, provides useful hints on the nature of each defect. Additionally, by correcting all the detected defects, we have obtained an improved version of one of the tested ontologies: Adimen-SUMO.
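The paper's test-generation machinery is not detailed in the abstract, but the core notion of an inference-based redundancy, a piece of the axiomatization that a theorem prover can already derive from the remaining axioms, can be sketched in a few lines. The example below uses the Z3 solver as a stand-in for an automated theorem prover; the predicates and axioms are invented for illustration and are not from the paper or Adimen-SUMO.

```python
from z3 import Solver, DeclareSort, Function, BoolSort, Const, ForAll, Implies, Not, unsat

Thing = DeclareSort("Thing")
Monk = Function("Monk", Thing, BoolSort())
Person = Function("Person", Thing, BoolSort())
Mortal = Function("Mortal", Thing, BoolSort())
x = Const("x", Thing)

ax1 = ForAll(x, Implies(Monk(x), Person(x)))
ax2 = ForAll(x, Implies(Person(x), Mortal(x)))
candidate = ForAll(x, Implies(Monk(x), Mortal(x)))  # suspected redundant axiom

s = Solver()
s.add(ax1, ax2, Not(candidate))
# If the remaining axioms together with the negated candidate are unsatisfiable,
# the candidate is entailed by the rest of the axiomatization, i.e. redundant.
print("redundant" if s.check() == unsat else "not proven redundant")
```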


Extraction of Technical Information from Normative Documents Using Automated Methods Based on Ontologies: Application to the ISO 15531 MANDATE Standard - Methodology and First Results

arXiv.org Artificial Intelligence

Problems faced by international standardization bodies become more and more crucial as the number and the size of the standards they produce increase. Sometimes, also, the lack of coordination among the committees in charge of the development of standards may lead to overlaps, mistakes or incompatibilities in the documents. The aim of this study is to present a methodology enabling an automatic extraction of the technical concepts (terms) found in normative documents, through the use of semantic tools coming from the field of language processing. The first part of the paper provides a description of the standardization world, its structure, its way of working and the problems faced; we then introduce the concepts of semantic annotation, information extraction and the software tools available in this domain. The next section explains the concept of ontology and its potential use in the field of standardization. We propose here a methodology enabling the extraction of technical information from a given normative corpus, based on a semantic annotation process performed according to reference ontologies. The application to the ISO 15531 MANDATE corpus provides a first use case of the methodology described in this paper. The paper ends with a description of the first experimental results produced by this approach, and with some issues and perspectives, notably its application to other standards and/or Technical Committees and the possibility of creating pre-defined technical dictionaries of terms.
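A rough sketch of the kind of ontology-driven semantic annotation described here, matching concept labels from a reference ontology against a normative text, might look like the following; the concepts, labels, and sample sentence are invented for illustration and are not taken from ISO 15531 MANDATE or its reference ontologies.

```python
import re

# Toy reference "ontology": concept identifier -> preferred label and synonyms.
REFERENCE_ONTOLOGY = {
    "ex:ManufacturingResource": ["manufacturing resource", "resource"],
    "ex:ProcessPlan": ["process plan", "routing"],
    "ex:FlowControl": ["flow control"],
}

def annotate(text):
    """Return (concept, matched term, character offset) triples found in the text."""
    annotations = []
    for concept, labels in REFERENCE_ONTOLOGY.items():
        for label in labels:
            for m in re.finditer(r"\b" + re.escape(label) + r"\b", text, re.IGNORECASE):
                annotations.append((concept, m.group(0), m.start()))
    return sorted(annotations, key=lambda a: a[2])

sample = ("The process plan shall identify every manufacturing resource "
          "required, and flow control data shall be recorded.")
for concept, term, offset in annotate(sample):
    print(f"{offset:4d}  {term!r:28} -> {concept}")
```

Note that overlapping labels ("resource" inside "manufacturing resource") produce multiple candidate annotations, which is exactly the kind of ambiguity a reference ontology and its term hierarchy help resolve.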


TrQuery: An Embedding-based Framework for Recommending SPARQL Queries

arXiv.org Artificial Intelligence

In this paper, we present an embedding-based framework (TrQuery) for recommending solutions of a SPARQL query, including approximate solutions when exact solutions are not available due to incompleteness or inconsistencies in real-world RDF data. Within this framework, embedding is applied to score solutions together with edit distance, so that we obtain finer-grained recommendations than edit distance alone can provide. For instance, two solution graphs with a similar structure can be distinguished in our proposed framework, whereas edit distance, which depends only on structural difference, cannot tell them apart. To this end, we propose a novel scoring model built on the vector space generated by the embedding system to compute the similarity between an approximate subgraph match and a whole-graph match. Finally, we evaluate our approach on the large RDF datasets DBpedia and YAGO, and experimental results show that TrQuery performs well in terms of both effectiveness and efficiency.
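TrQuery's actual scoring model is not specified in the abstract; the sketch below only illustrates the general idea of blending embedding similarity with edit distance when ranking candidate solutions. The linear combination, the weight alpha, and the normalization are assumptions, not the paper's model.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def edit_distance(a, b):
    """Plain Levenshtein distance between two token sequences."""
    dp = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
          for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,
                           dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return dp[len(a)][len(b)]

def score(query_emb, cand_emb, query_tokens, cand_tokens, alpha=0.5):
    """Blend embedding similarity with normalized edit similarity (illustrative)."""
    d = edit_distance(query_tokens, cand_tokens)
    edit_sim = 1.0 - d / max(len(query_tokens), len(cand_tokens), 1)
    return alpha * cosine(query_emb, cand_emb) + (1 - alpha) * edit_sim

rng = np.random.default_rng(1)
q_emb, c_emb = rng.normal(size=50), rng.normal(size=50)
print(score(q_emb, c_emb, ["?x", "dbo:team", "?y"], ["?x", "dbo:club", "?y"]))
```

The point of the blend is exactly the one made in the abstract: two candidates at the same edit distance from the query can still be ranked differently if their embeddings place them at different semantic distances.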


Using Statistical and Semantic Models for Multi-Document Summarization • r/textdatamining

#artificialintelligence

The main finding, that combining multiple summarization techniques into a single approach can produce improved results, is unsurprising. I don't see any link to software on GitHub, etc., which makes the article hard to follow up on.


Ontologies for Business Analysis (Udemy)

#artificialintelligence

The practice of Business Analysis revolves around the formation, transformation and finalisation of requirements to recommend suitable solutions to support enterprise change programmes. Practitioners working in the field of business analysis apply a wide range of modelling tools to capture the various perspectives of the enterprise, for example, business process perspective, data flow perspective, functional perspective, static structure perspective, and more. These tools aid in decision support and are especially useful in the effort towards the transformation of a business into the "intelligent enterprise", in other words, one which is to some extent "self-describing" and able to adapt to organisational change. However, a fundamental piece remains missing from the puzzle. Achieving this capability requires us to think beyond the idea of simply using the current mainstream modelling tools.


Inference -- GraphDB Free 8.5 documentation

#artificialintelligence

GraphDB supports inference out of the box and updates inferred facts automatically. Facts change all the time, and without this capability the resources needed to manually manage updates or rerun the inferencing process would be overwhelming. Automatic updates result in improved query speed, data availability and more accurate analysis. GraphDB will use the data and the rules to infer more facts and thus produce a richer data set than the one you started with. GraphDB can be configured via "rule-sets" – sets of axiomatic triples and entailment rules – that determine the applied semantics.
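GraphDB's rule-sets and materialization engine are part of the product itself, but the effect described here, rules plus data yielding extra inferred facts, can be shown in miniature with rdflib and the owlrl package (RDFS entailment) in Python; the example data is invented and stands in for a configured GraphDB repository.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS
import owlrl

EX = Namespace("http://example.org/")
g = Graph()

# Asserted facts and schema (explicit triples only).
g.add((EX.Scriptorium, RDFS.subClassOf, EX.Building))
g.add((EX.saintGallScriptorium, RDF.type, EX.Scriptorium))

# Apply RDFS entailment rules; inferred triples are added to the same graph,
# analogous in spirit to GraphDB materializing inferences under a rule-set.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

# The instance is now also typed as a Building, even though that fact was never asserted.
print((EX.saintGallScriptorium, RDF.type, EX.Building) in g)  # True
```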


NBA analytics and RDF graphs: Game, data, and metadata evolution, and Occam's razor

ZDNet

With the NBA playoffs in full swing, we are used to having statistics nuggets thrown into game coverage. While it has been argued that not every aspect of the game should be purely data-driven, sports analytics can be fun for fans as well as a useful tool for organizations. The NBA has taken to organizing analytics hackathons, asking participants to propose novel ideas on both the game itself and its business side. Projecting the impact of hypothetical rule changes or predicting the entertainment value of games are some examples of ideas investigated in this context. You don't have to be the NBA, or professional media, or a sports organization with a dedicated analytics team to do some analysis of your own.