ontology


BIM4EEB ontologies framework - BIM4EEB

#artificialintelligence

Interoperability in the construction sector is a key issue that researchers, developers, and designers have tackled since the introduction of CAD systems. Traditionally, engineers, architects, and site operators interact and track their information exchanges through paper or digitized drawings and e-mails. With the introduction of Building Information Modelling (BIM) techniques and tools, operators are adopting new solutions and methods to keep track of and exploit these data. Cover image: ifcOWL ontology (version IFC4ADD2) visualized with WebVOWL, available here. What has been described as the traditional method corresponds to Level 0 in the well-known BIM levels definition. BIM Level 1, in turn, defines the criteria needed for full collaboration across the industry.
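
For readers who want to inspect ifcOWL directly rather than through WebVOWL, a minimal sketch with rdflib follows; the download URL is an assumption and may need adjusting to wherever buildingSMART currently hosts the IFC4 ADD2 ontology file.

```python
from rdflib import Graph
from rdflib.namespace import OWL, RDF

# Assumed location of the ifcOWL (IFC4 ADD2) Turtle file; adjust if it has moved.
IFCOWL_URL = "https://standards.buildingsmart.org/IFC/DEV/IFC4/ADD2/OWL/ontology.ttl"

g = Graph()
g.parse(IFCOWL_URL, format="turtle")

# Count the OWL classes, i.e. the nodes a tool like WebVOWL would render.
classes = set(g.subjects(RDF.type, OWL.Class))
print(f"ifcOWL defines {len(classes)} classes")
```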


How to provide relevant Search Results - Paperless Lab Academy

#artificialintelligence

The relevance of search results is essential for finding information. Indeed, a user will almost never look beyond the first few results returned by a search engine. It is therefore necessary that relevant information be ranked as high as possible, so that what the user is looking for appears among the first results. The order, or "ranking", of search results is essential for search engines, which therefore use more or less complex algorithms to display first the results that users will find most relevant. The algorithms used by popular search engines are usually not publicly known.
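
As a concrete illustration of the idea, here is a minimal ranking sketch over a toy in-memory corpus, scoring documents with TF-IDF so the most relevant appear first; the documents and query are invented, and real engines combine far more signals.

```python
import math
from collections import Counter

def tf_idf_rank(query, docs):
    """Rank documents by the summed TF-IDF weight of the query terms."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # Document frequency: how many documents contain each term?
    df = Counter(term for tokens in tokenized for term in set(tokens))
    ranked = []
    for i, tokens in enumerate(tokenized):
        tf = Counter(tokens)
        score = sum(
            (tf[term] / len(tokens)) * math.log(n / df[term])
            for term in query.lower().split()
            if term in df
        )
        ranked.append((score, docs[i]))
    # Highest score first: this ordering is the "ranking" the user sees.
    return [doc for score, doc in sorted(ranked, reverse=True)]

docs = [
    "ontology matching for knowledge graphs",
    "search engines rank results by relevance",
    "relevance ranking with tf idf in search",
]
print(tf_idf_rank("relevance ranking", docs))
```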


Knowledge Graphs on the Web -- an Overview

arXiv.org Artificial Intelligence

Knowledge Graphs are an emerging form of knowledge representation. While Google coined the term Knowledge Graph and promoted it as a means to improve their search results, knowledge graphs are used in many applications today. In a knowledge graph, entities in the real world and/or a business domain (e.g., people, places, or events) are represented as nodes, which are connected by edges representing the relations between those entities. While companies such as Google, Microsoft, and Facebook have their own, non-public knowledge graphs, there is also a larger body of publicly available knowledge graphs, such as DBpedia or Wikidata. In this chapter, we provide an overview and comparison of those publicly available knowledge graphs, and give insights into their contents, size, coverage, and overlap.
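
A minimal sketch of the nodes-and-edges representation described above, using the rdflib library; the entities and relations are invented examples, not actual DBpedia or Wikidata content.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")  # hypothetical namespace for illustration

g = Graph()
# Entities (nodes): a person and a place, connected by a relation (edge).
g.add((EX.Ada_Lovelace, RDF.type, EX.Person))
g.add((EX.London, RDF.type, EX.Place))
g.add((EX.Ada_Lovelace, EX.bornIn, EX.London))
g.add((EX.Ada_Lovelace, RDFS.label, Literal("Ada Lovelace")))

# Edges can be traversed like any other triple pattern.
for person, _, place in g.triples((None, EX.bornIn, None)):
    print(person, "was born in", place)
```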


The Knowledge Graph Track at OAEI -- Gold Standards, Baselines, and the Golden Hammer Bias

arXiv.org Artificial Intelligence

The Ontology Alignment Evaluation Initiative (OAEI) is an annual evaluation of ontology matching tools. In 2018, we started the Knowledge Graph track, whose goal is to evaluate the simultaneous matching of entities and schemas of large-scale knowledge graphs. In this paper, we discuss the design of the track and two different strategies of gold standard creation. We analyze results and experiences obtained in the first editions of the track and, by revealing a hidden task, we show that all tools submitted to the track (and probably also to other tracks) suffer from a bias, which we name the golden hammer bias.
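
Baselines in matching evaluations of this kind are often simple string matchers; the following sketch, with invented entity labels and a toy gold standard, shows how such a baseline could be scored with precision and recall (it is not the track's actual code).

```python
def normalize(label):
    """Case-fold and strip punctuation so trivially different labels align."""
    return "".join(ch for ch in label.lower() if ch.isalnum() or ch.isspace()).strip()

def string_match(source, target):
    """Match entities from two KGs whose normalized labels are identical."""
    index = {normalize(label): ent for ent, label in target.items()}
    return {ent: index[normalize(label)]
            for ent, label in source.items() if normalize(label) in index}

# Toy inputs: entity id -> label (invented for illustration).
kg_a = {"a:Paris": "Paris", "a:Berlin": "Berlin!", "a:Rome": "Roma"}
kg_b = {"b:paris": "paris", "b:berlin": "berlin", "b:madrid": "Madrid"}
gold = {("a:Paris", "b:paris"), ("a:Berlin", "b:berlin"), ("a:Rome", "b:rome")}

found = set(string_match(kg_a, kg_b).items())
precision = len(found & gold) / len(found)
recall = len(found & gold) / len(gold)
print(f"precision={precision:.2f} recall={recall:.2f}")
```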


Semantic Web Environments for Multi-Agent Systems: Enabling agents to use Web of Things via semantic web

arXiv.org Artificial Intelligence

The Web is ubiquitous, increasingly populated with interconnected data, services, people, and objects. Semantic web technologies (SWT) promote uniformity of data formats, as well as modularization and reuse of specifications (e.g., ontologies), by allowing them to include and refer to information provided by other ontologies. In such a context, multi-agent system (MAS) technologies are the right abstraction for developing decentralized and open Web applications in which agents discover, reason about, and act on Web resources and cooperate with each other and with people. The aim of the project is to propose an approach for transforming the Agent and Artifact (A&A) meta-model into a Web-readable format with ontologies in line with semantic web formats, and to reuse already existing ontologies in order to provide agents with uniform access to things.
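
A minimal sketch of the general idea of exposing agent and artifact descriptions in a Web-readable RDF format, using rdflib; the class and property names are invented for illustration and are not the project's actual vocabulary.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical vocabulary; the project's real ontology terms may differ.
AA = Namespace("http://example.org/aa#")

g = Graph()
g.add((AA.Agent, RDF.type, RDFS.Class))
g.add((AA.Artifact, RDF.type, RDFS.Class))
g.add((AA.uses, RDFS.domain, AA.Agent))
g.add((AA.uses, RDFS.range, AA.Artifact))

# An agent discovering and using a Web of Things artifact.
g.add((AA.thermostat, RDF.type, AA.Artifact))
g.add((AA.comfortAgent, RDF.type, AA.Agent))
g.add((AA.comfortAgent, AA.uses, AA.thermostat))

print(g.serialize(format="turtle"))
```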


Knowledge Reconciliation of $n$-ary Relations

arXiv.org Artificial Intelligence

In the expanding Semantic Web, an increasing number of sources of data and knowledge are accessible by human and software agents. Sources may differ in granularity or completeness, and thus be complementary. Consequently, unlocking the full potential of the available knowledge requires combining them. To this end, we define the task of knowledge reconciliation, which consists in identifying, within and across sources, equivalent, more specific, or similar units. This task can be challenging since knowledge units are heterogeneously represented in sources (e.g., in terms of vocabularies). In this paper, we propose a rule-based methodology for the reconciliation of $n$-ary relations. To alleviate the heterogeneity in representation, we rely on domain knowledge expressed by ontologies. We tested our method on the biomedical domain of pharmacogenomics by reconciling 50,435 $n$-ary relations from four different real-world sources, which highlighted noteworthy agreements and discrepancies within and across sources.
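
A minimal sketch of one such reconciliation rule, with a toy subsumption hierarchy and invented pharmacogenomic relations: two n-ary relations are treated as equivalent when all their arguments match, and one counts as more specific when each of its arguments equals or specializes the other's.

```python
# Toy subsumption hierarchy (child -> parent), standing in for a domain ontology.
PARENT = {"warfarin": "anticoagulant", "CYP2C9": "gene", "bleeding": "adverse_event"}

def subsumes(general, specific):
    """True if `specific` equals `general` or sits below it in the hierarchy."""
    while specific is not None:
        if specific == general:
            return True
        specific = PARENT.get(specific)
    return False

def reconcile(rel_a, rel_b):
    """Compare two n-ary relations given as (drug, gene, phenotype) tuples."""
    if rel_a == rel_b:
        return "equivalent"
    if all(subsumes(a, b) for a, b in zip(rel_a, rel_b)):
        return "rel_b more specific"
    if all(subsumes(b, a) for a, b in zip(rel_a, rel_b)):
        return "rel_a more specific"
    return "unrelated"

print(reconcile(("anticoagulant", "gene", "adverse_event"),
                ("warfarin", "CYP2C9", "bleeding")))  # -> rel_b more specific
```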


How To Avoid Another AI Winter

#artificialintelligence

Although there has been great progress in artificial intelligence (AI) over the past few years, many of us remember the AI winter in the 1990s, which resulted from overinflated promises by developers and unnaturally high expectations from end users. Now, industry insiders, such as Facebook head of AI Jerome Pesenti, are predicting that AI will soon hit another wall, this time due to the lack of semantic understanding. "Deep learning and current AI, if you are really honest, has a lot of limitations," said Pesenti. "We are very, very far from human intelligence, and there are some criticisms that are valid: It can propagate human biases, it's not easy to explain, it doesn't have common sense, it's more on the level of pattern matching than robust semantic understanding." Other computer scientists believe that AI is currently facing a "reproducibility crisis" because many complex machine-learning algorithms are a "black box" and cannot be easily reproduced.


A Novel Kuhnian Ontology for Epistemic Classification of STM Scholarly Articles

arXiv.org Artificial Intelligence

Thomas Kuhn proposed his paradigmatic view of scientific discovery five decades ago. The concept of the paradigm has not only explained the progress of science but has also become the central epistemic concept among STM scientists. Here, we adopt the principles of Kuhnian philosophy to construct a novel ontology that aims at classifying and evaluating the impact of STM scholarly articles. First, we explain how the Kuhnian cycle of science describes research at different epistemic stages. Second, we show how the Kuhnian cycle can be reconstructed into modular ontologies which classify scholarly articles according to their contribution to paradigm-centred knowledge. The proposed ontology and its scenarios are discussed. To the best of the authors' knowledge, this is the first attempt to create an ontology for describing scholarly articles based on the Kuhnian paradigmatic view of science.
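
A minimal sketch of how such modular classes might be declared, using rdflib with invented names for the Kuhnian stages; the paper's actual ontology terms may differ.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

KUHN = Namespace("http://example.org/kuhn#")  # hypothetical namespace

g = Graph()
# Stages of the Kuhnian cycle as classes of scholarly contribution.
for stage in ("NormalScience", "Anomaly", "Crisis", "Revolution"):
    g.add((KUHN[stage], RDFS.subClassOf, KUHN.ScholarlyArticle))

# Classify an article by its epistemic contribution.
g.add((KUHN.article42, RDF.type, KUHN.Anomaly))

for article, _, stage in g.triples((None, RDF.type, None)):
    print(article, "classified as", stage)
```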


Graphs in the 2020s: Databases, Platforms and The Evolution of Knowledge

#artificialintelligence

Graphs, and knowledge graphs, are key concepts and technologies for the 2020s. What will they look like, and what will they enable going forward? We have been keeping track of the evolution of graphs since the early 2000s, and publishing the Year of the Graph newsletter since 2018. Graphs have numerous applications that span analytics, AI, and knowledge management. All of the above are built on a common substrate: data.


Overview of chemical ontologies

arXiv.org Artificial Intelligence

Ontologies order and interconnect knowledge of a certain field in a formal and semantic way so that it is machine-parsable. They try to provide universally acceptable definitions of concepts and objects, classify them, assign properties to them, and interconnect them with relations (e.g., "A is a special case of B"). More precisely, Tom Gruber defines an ontology as a "specification of a conceptualization; [...] a description (like a formal specification of a program) of the concepts and relationships that can exist for an agent or a community of agents." [1] An ontology is made of Individuals, which are organized in Classes. Both can have Attributes and Relations among themselves. Some complex ontologies define Restrictions, Rules, and Events which change attributes or relations. To be computer-accessible, ontologies are written in dedicated ontology languages, such as the OBO language or the more widely used Common Algebraic Specification Language. With the rise of a digitalized, interconnected, and globalized world, where common standards have to be found, ontologies are of great interest. [2] Yet the development of chemical ontologies is only in its beginnings. Indeed, some interesting basic approaches towards chemical ontologies can be found, but they suffer from two main flaws. Firstly, we found that most are only fragmentarily complete or are still at an architectural stage. Secondly, apparently no chemical ontology is widely accepted. Therefore, we herein describe the major ontology developments in the chemistry-related fields: ontologies about chemical analytical methods, ontologies about name reactions, and ontologies about scientific units.
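
The building blocks named above, Classes, Individuals, Attributes, and Relations, map directly onto ontology-language constructs; here is a minimal sketch in rdflib with an invented chemistry vocabulary (not one of the surveyed ontologies).

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

CHEM = Namespace("http://example.org/chem#")  # hypothetical namespace

g = Graph()
# Classes and a subsumption relation ("A is a special case of B").
g.add((CHEM.Acid, RDF.type, OWL.Class))
g.add((CHEM.CarboxylicAcid, RDFS.subClassOf, CHEM.Acid))

# An individual with an attribute and a relation to another individual.
g.add((CHEM.aceticAcid, RDF.type, CHEM.CarboxylicAcid))
g.add((CHEM.aceticAcid, CHEM.molarMass, Literal(60.05, datatype=XSD.double)))
g.add((CHEM.aceticAcid, CHEM.reactsWith, CHEM.ethanol))

print(g.serialize(format="turtle"))
```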