Toward a universal decoder of linguistic meaning from brain activation


Humans have the unique capacity to translate thoughts into words, and to infer others' thoughts from their utterances. This ability is based on mental representations of meaning that can be mapped to language, but to which we have no direct access. The approach to meaning representation that currently dominates the field of natural language processing relies on distributional semantic models, which rest on the simple yet powerful idea that words similar in meaning occur in similar linguistic contexts [1]. A word is represented as a semantic vector in a high-dimensional space, where similarity between two word vectors reflects similarity of the contexts in which those words appear in the language [2]. More recently, these models have been extended beyond single words to express meanings of phrases and sentences [5-7], and the resulting representations predict human similarity judgments for phrase- and sentence-level paraphrases [8,9].
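
To make the idea concrete, here is a minimal sketch of how vector similarity stands in for contextual similarity; the words and vectors below are toy values invented for illustration, not trained embeddings from any of the cited models.

```python
# Minimal sketch of distributional word vectors and cosine similarity.
# The vectors are hand-made toy examples, not learned from a corpus.
import numpy as np

# Hypothetical 4-dimensional semantic vectors for three words.
vectors = {
    "coffee": np.array([0.9, 0.1, 0.3, 0.0]),
    "tea":    np.array([0.8, 0.2, 0.4, 0.1]),
    "piano":  np.array([0.1, 0.9, 0.0, 0.7]),
}

def cosine_similarity(u, v):
    """Similarity of two word vectors: 1.0 = same direction, 0.0 = orthogonal."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words that occur in similar contexts get similar vectors, so "coffee" and "tea"
# should score higher than "coffee" and "piano".
print(cosine_similarity(vectors["coffee"], vectors["tea"]))    # high
print(cosine_similarity(vectors["coffee"], vectors["piano"]))  # low

# A crude phrase-level extension: average the word vectors of a phrase.
def phrase_vector(words):
    return np.mean([vectors[w] for w in words], axis=0)
```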

#IoT #Ecosystems require #Ontologies of your #Products and #Services – Paradigm Interactions


Products and services associated with the IoT currently operate in closed ecosystems such as home automation; in effect, they are simply networked products with linking software.

The problem of developing an ontology-driven architecture for intelligent software systems

arXiv.org Artificial Intelligence

The paper describes the architecture of an intelligent system for the automated design of ontological knowledge bases for domain areas, as well as the software model of its management GUI (Graphical User Interface) subsystem.
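
As a rough, hedged illustration of what a small ontological knowledge base for a domain area can look like in code, here is a minimal sketch; the class names, relations, and example domain are my own assumptions and are not taken from the paper's architecture.

```python
# A minimal, generic sketch of an ontological knowledge base for a domain area;
# concept and relation names are illustrative, not from the paper.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    parents: list = field(default_factory=list)      # "is-a" links to other concepts
    attributes: dict = field(default_factory=dict)   # typed properties of the concept

@dataclass
class Relation:
    name: str
    domain: str   # concept the relation starts from
    range: str    # concept the relation points to

class OntologyKB:
    def __init__(self):
        self.concepts, self.relations = {}, []

    def add_concept(self, concept):
        self.concepts[concept.name] = concept

    def add_relation(self, relation):
        self.relations.append(relation)

# Example domain fragment.
kb = OntologyKB()
kb.add_concept(Concept("Device"))
kb.add_concept(Concept("Sensor", parents=["Device"], attributes={"unit": str}))
kb.add_relation(Relation("measures", domain="Sensor", range="Quantity"))
```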

Datafication concept: definitions and examples - Apiumhub


Datafication is a buzzword of the last several years, used actively across the Big Data industry. Honestly, if you search for the term 'datafication' on the internet, you probably won't find much relevant information about it, yet it is a word we are hearing a lot these days. However, after analyzing the topic itself, I would say that many of us understand the meaning of the term but have probably been calling it something else.

Ontology based Scene Creation for the Development of Automated Vehicles

arXiv.org Artificial Intelligence

The introduction of automated vehicles without permanent human supervision demands a functional system description, including functional system boundaries and a comprehensive safety analysis. These inputs to the technical development can be identified and analyzed by a scenario-based approach. Furthermore, to establish an economical test and release process, a large number of scenarios must be identified to obtain meaningful test results. Experts are good at identifying scenarios that are difficult to handle or unlikely to happen. However, they are unlikely to identify all possible scenarios based on the knowledge they have at hand. Expert knowledge modeled for computer-aided processing may help to provide a wide range of scenarios. This contribution reviews ontologies as knowledge-based systems in the field of automated vehicles, and proposes the generation of traffic scenes in natural language as a basis for scenario creation.
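
As a hedged illustration of how modeled knowledge can enumerate scenes more systematically than hand-picking, the sketch below combines a few hypothetical ontology elements (road types, actors, conditions, all invented here) and renders each combination as a natural-language scene description; the paper's actual ontology and scene grammar are not reproduced.

```python
# Illustrative only: enumerate combinations of hypothetical traffic-scene
# elements and render each as a natural-language sentence.
from itertools import product

road_types = ["a straight urban road", "a T-junction", "a roundabout"]
actors     = ["a pedestrian crossing", "a cyclist merging", "an oncoming truck"]
conditions = ["in heavy rain", "at dusk", "on an icy surface"]

def generate_scenes():
    """Yield one scene description per combination of ontology elements."""
    for road, actor, condition in product(road_types, actors, conditions):
        yield f"The ego vehicle approaches {road} with {actor} {condition}."

for scene in generate_scenes():
    print(scene)  # 3 * 3 * 3 = 27 systematically derived scenes
```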

Towards a New Data Modelling Architecture - Part 2


How do we design a data model, how do we connect data, how do we represent information, and how do we store or retrieve it? These are all fundamental questions in data modelling, but there is a common key that unlocks them. You have to start by defining a primitive information resource, and then understand how complex information structures can be built on top of these fundamental units. This is because everything in nature and in engineered systems follows this kind of abstraction, from the simple to the most sophisticated. There are patterns that recur at progressively smaller scales; there are fundamental building blocks from which higher-order structures are built.
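
A small sketch of this idea, with names and fields of my own choosing rather than the article's: a single primitive resource type that composes into higher-order structures made of the same unit.

```python
# Sketch under the article's framing (identifiers are mine, not the author's):
# one primitive "resource" type from which larger structures are built by
# composition, so the same abstraction recurs at every scale.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Resource:
    """The primitive information unit: an identifier plus a value or sub-resources."""
    id: str
    value: Union[str, None] = None
    parts: List["Resource"] = field(default_factory=list)

# Primitive units...
name = Resource("person/1/name", value="Ada")
year = Resource("person/1/born", value="1815")

# ...composed into higher-order structures built from the same building block.
person = Resource("person/1", parts=[name, year])
people = Resource("people", parts=[person])
```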

Semantic Integration Through Invariants

AI Magazine

A major barrier to interoperability between software applications is semantic heterogeneity: different applications, databases, and agents may ascribe disparate meanings to the same terms or use distinct terms to convey the same meaning. A semantics-preserving exchange of information between two applications therefore requires mappings between logically equivalent concepts in the ontology of each application. The challenge of semantic integration is equivalent to the problem of generating such mappings, determining that they are correct, and providing a vehicle for executing them, thus translating terms from one ontology into another. This article presents an approach toward this goal using techniques that exploit the model-theoretic structures underlying ontologies. With these as inputs, semiautomated and automated components may be used to create mappings between ontologies and perform translations.
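
As a minimal sketch of what executing such mappings amounts to, the snippet below rewrites records from one ontology's vocabulary into another's; the term pairs are invented for illustration, and the article itself derives mappings from model-theoretic invariants rather than a hand-written lookup table.

```python
# Executing an ontology mapping: each entry links a term in ontology A to its
# logically equivalent term in ontology B. The pairs are hypothetical examples.
mapping_a_to_b = {
    "Employee": "StaffMember",     # distinct terms, same meaning
    "hireDate": "startOfContract",
    "Site":     "Facility",
}

def translate(record, mapping):
    """Rewrite a record's field names from one ontology's vocabulary to the other's."""
    return {mapping.get(term, term): value for term, value in record.items()}

record_in_a = {"Employee": "J. Smith", "hireDate": "2021-04-01", "Site": "Plant 7"}
record_in_b = translate(record_in_a, mapping_a_to_b)
print(record_in_b)  # {'StaffMember': 'J. Smith', 'startOfContract': '2021-04-01', 'Facility': 'Plant 7'}
```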