
The CIDOC Conceptual Reference Model: An Ontological Approach to Semantic Interoperability of Metadata

AI Magazine

This article presents the methodology that has been successfully used over the past seven years by an interdisciplinary team to create the International Committee for Documentation of the International Council of Museums (CIDOC) CONCEPTUAL REFERENCE MODEL (CRM), a high-level ontology to enable information integration for cultural heritage data and their correlation with library and archive information. The CIDOC CRM is now in the process of becoming an International Organization for Standardization (ISO) standard. This article justifies in detail the methodology and design by functional requirements and gives examples of its contents. The CIDOC CRM analyzes the common conceptualizations behind data and metadata structures to support data transformation, mediation, and merging. It is argued that such ontologies are property-centric, in contrast to terminological systems, and should be built with different methodologies.
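
To make the event-centric, property-centric style concrete, here is a minimal sketch in Python using rdflib: an artefact is not described by flat metadata fields but linked to a production event, which in turn connects it to an actor and a time-span. The namespace, instance URIs, and the plain-literal time-span are invented for illustration, and the class and property identifiers should be checked against the published CRM specification rather than taken as normative.

```python
# A minimal sketch (not normative CRM usage): the museum namespace, instance
# URIs, and the plain-literal time-span are invented for illustration.
from rdflib import Graph, Literal, Namespace

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("http://example.org/museum/")  # hypothetical museum namespace

g = Graph()
g.bind("crm", CRM)
g.bind("ex", EX)

vase = EX["object/vase-42"]
production = EX["event/production-of-vase-42"]
potter = EX["actor/unknown-potter"]

# Instead of flat fields ("creator", "date"), the artefact is linked to a
# production event, and the event carries the actor and the time-span.
g.add((production, CRM["P108_has_produced"], vase))
g.add((production, CRM["P14_carried_out_by"], potter))
g.add((production, CRM["P4_has_time-span"], Literal("ca. 500 BC")))  # simplified

print(g.serialize(format="turtle"))
```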


Introducing a Graph-based Semantic Layer in Enterprises

@machinelearnbot

Things, not strings: entity-centric views on enterprise information and all kinds of data sources provide the means to get a more meaningful picture of all sorts of business objects. This method of information processing is as relevant to customers, citizens, or patients as it is to knowledge workers like lawyers, doctors, or researchers. People do not actually search for documents, but rather for facts and other chunks of information that they can bundle up into answers to concrete questions. Strings, or names for things, are not the same as the things they refer to. Still, those two aspects of an entity regularly get mixed up, nurturing a Babylonian confusion of language.
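
As a rough illustration of the "things, not strings" idea, the sketch below separates surface strings (names and aliases) from the entity they denote, attaching facts to the entity identifier rather than to any particular name. The identifiers, aliases, and facts are invented for illustration.

```python
# A minimal sketch: entity IDs, aliases, and facts are invented for illustration.
from collections import defaultdict

# The "string" side: several surface names resolve to one entity identifier.
ALIASES = {
    "IBM": "ent:ibm",
    "International Business Machines": "ent:ibm",
    "Big Blue": "ent:ibm",
}

# The "thing" side: facts hang off the entity, not off any particular name.
FACTS = defaultdict(dict)
FACTS["ent:ibm"].update({"type": "organization", "industry": "information technology"})

def lookup(query: str) -> dict:
    """Resolve a string to the entity it names and return facts about the thing."""
    entity_id = ALIASES.get(query)
    return dict(FACTS[entity_id]) if entity_id else {}

# Different strings, same thing, same answer.
assert lookup("Big Blue") == lookup("IBM")
print(lookup("Big Blue"))
```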


Solving Semantic Problems Using Contexts Extracted from Knowledge Graphs

AAAI Conferences

This thesis seeks to address word reasoning problems from a semantic standpoint, proposing a uniform approach for generating solutions while also providing human-understandable explanations. Current state-of-the-art solvers of semantic problems rely on traditional machine learning methods; therefore, their results are not easily reusable by algorithms or interpretable by humans. We propose leveraging web-scale knowledge graphs to determine a semantic frame of interpretation. Semantic knowledge graphs are graphs in which nodes represent concepts and edges represent the relations between them. Our approach has the following advantages: (1) it reduces the space in which the problem is to be solved; (2) sparse and noisy data can be used without relying only on the relations deducible from the data itself; (3) the output of the inference algorithm is supported by an interpretable justification. We demonstrate our approach in two domains: (1) Topic Modeling: We form topics using connectivity in semantic graphs. We use the same topic models for two very different recommendation systems, one designed for high-noise interactive applications and the other for large amounts of web data. (2) Analogy Solving: For humans, analogies are a fundamental reasoning pattern, which relies on abstraction and comparative analysis. In order for an analogy to be understood, precise relations have to be identified and mapped. We introduce graph algorithms to assess the analogy strength in contexts derived from the analogy words. We demonstrate our approach by solving standardized test analogy questions.
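
As a rough, simplified stand-in for the graph-based approach described above (not the thesis's actual algorithm), the sketch below builds a tiny labelled concept graph with networkx and scores an analogy candidate by how closely the relations on its path match the relations linking the first word pair. The toy graph and relation labels are invented for illustration.

```python
# A simplified stand-in, not the thesis's algorithm: the toy graph and relation
# labels are invented for illustration.
import networkx as nx

g = nx.DiGraph()
g.add_edge("puppy", "dog", relation="young_of")
g.add_edge("kitten", "cat", relation="young_of")
g.add_edge("dog", "mammal", relation="is_a")
g.add_edge("cat", "mammal", relation="is_a")

def relation_set(graph: nx.DiGraph, source: str, target: str) -> set:
    """Edge labels along a shortest path from source to target (empty if none)."""
    try:
        path = nx.shortest_path(graph, source, target)
    except (nx.NetworkXNoPath, nx.NodeNotFound):
        return set()
    return {graph[u][v]["relation"] for u, v in zip(path, path[1:])}

def solve_analogy(graph, a, b, c, candidates):
    """'a is to b as c is to ?': pick the candidate whose relations to c best match those of (a, b)."""
    ab = relation_set(graph, a, b)
    def score(d):
        cd = relation_set(graph, c, d)
        union = ab | cd
        return len(ab & cd) / len(union) if union else 0.0  # Jaccard overlap of relation labels
    return max(candidates, key=score)

print(solve_analogy(g, "puppy", "dog", "kitten", ["mammal", "cat"]))  # -> "cat"
```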


What IBM, the Semantic Web Company, and Siemens are doing with semantic technologies

ZDNet

The Semantics conference is one of the biggest events for all things semantic. Key research and industry players gathered this week in Leipzig to showcase and discuss their work, and we were there to get that vibe. Graphs are everywhere: we have social graphs, knowledge graphs, and office graphs, and in most people's minds these are associated with Facebook, Google, and Microsoft. But the concept of knowledge graphs is broader and vendor-agnostic: any graph can be considered a knowledge graph insofar as it represents information by means of nodes and (directional) edges.


Understanding How Increased Interoperability Enables Increased Use of Artificial Intelligence and Automation

#artificialintelligence

When I think about "managing information" and using "information of many types and from many sources," I think about the different levels of interoperability of that information and the different types of AI and automation that occur at each level. In this article, I introduce 4 levels of interoperability used in industries like healthcare and the associated AI and automation that align with or are enabled by increasing levels of interoperability. These 4 levels of interoperability are critical to managing information and realizing the full potential of AI and automation for enabling a "holistic cyber defense machine". Foundational Interoperability (Level 1) establishes the interconnectivity requirements needed for one system or application to securely communicate data to and receive data from another. Foundational Interoperability lets the data transmitted by one system be received by another.
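
As a rough illustration of Level 1 as described above, the sketch below shows one system transmitting a record as opaque JSON bytes over HTTPS while the receiving system merely accepts and stores the payload without interpreting its structure; that interpretation belongs to higher levels of interoperability. The endpoint URL, record fields, and helper names are hypothetical.

```python
# A minimal sketch of Level 1 (foundational) interoperability: data is
# transmitted and received but not interpreted. The endpoint URL and record
# fields are hypothetical.
import json
import urllib.request

def send_record(record: dict, url: str = "https://receiver.example.org/inbox") -> int:
    """System A transmits a record as opaque JSON bytes over HTTPS."""
    payload = json.dumps(record).encode("utf-8")
    request = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:  # TLS covers "securely communicate"
        return response.status

def receive_record(raw_bytes: bytes, inbox: list) -> None:
    """System B accepts and stores the bytes; understanding their structure is a higher level."""
    inbox.append(raw_bytes)
```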