Concept-Oriented Deep Learning

arXiv.org Artificial Intelligence

Concepts are the foundation of human deep learning, understanding, and knowledge integration and transfer. We propose concept-oriented deep learning (CODL), which extends (machine) deep learning with concept representations and conceptual understanding capability. CODL addresses some of the major limitations of deep learning: interpretability, transferability, contextual adaptation, and the need for large amounts of labeled training data. We discuss the major aspects of CODL, including the concept graph, concept representations, concept exemplars, and concept representation learning systems that support incremental and continual learning.
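
To make the CODL vocabulary concrete, here is a minimal, purely illustrative sketch of a concept graph that stores typed relations between concepts and accumulates concept exemplars incrementally. All names and structures here are assumptions for illustration; the paper's actual data structures are not specified in this abstract.

```python
from collections import defaultdict

class ConceptGraph:
    """Toy concept graph: concepts as nodes, typed relations as edges,
    and exemplar feature vectors attached per concept (illustrative)."""
    def __init__(self):
        self.edges = defaultdict(set)       # (concept, relation) -> {concepts}
        self.exemplars = defaultdict(list)  # concept -> [feature vectors]

    def add_relation(self, src, rel, dst):
        self.edges[(src, rel)].add(dst)

    def add_exemplar(self, concept, vec):
        # exemplars accumulate over time, supporting incremental
        # and continual learning of concept representations
        self.exemplars[concept].append(vec)

g = ConceptGraph()
g.add_relation("dog", "is-a", "animal")
g.add_exemplar("dog", [0.2, 0.7, 0.1])
```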


Learning Conceptual Space Representations of Interrelated Concepts

arXiv.org Artificial Intelligence

Several recently proposed methods aim to learn conceptual space representations from large text collections. These learned representations associate each object from a given domain of interest with a point in a high-dimensional Euclidean space, but they do not model the concepts from this domain and thus cannot be used directly for categorization and related cognitive tasks. A natural solution is to represent concepts as Gaussians, learned from the representations of their instances, but this can only be done reliably if sufficiently many instances are given, which is often not the case. In this paper, we introduce a Bayesian model which addresses this problem by constructing informative priors from background knowledge about how the concepts of interest are interrelated with each other. We show that this leads to substantially better predictions in a knowledge base completion task.
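
The core idea, fitting a per-concept Gaussian but regularizing it with an informative prior when instances are scarce, can be sketched with a standard conjugate Normal-Inverse-Wishart update. This is a generic illustration of prior-based shrinkage, not the paper's exact Bayesian model; the function name, parameter defaults, and the way the prior is built from related concepts are assumptions.

```python
import numpy as np

def gaussian_concept_map(instances, mu0, Psi0, kappa0=1.0, nu0=None):
    """Estimate a concept Gaussian from few instance vectors, shrunk
    toward an informative prior (mu0, Psi0) built from related
    concepts, via a conjugate Normal-Inverse-Wishart update."""
    X = np.asarray(instances, dtype=float)
    n, d = X.shape
    if nu0 is None:
        nu0 = d + 2                      # weakly informative default
    xbar = X.mean(axis=0)
    S = (X - xbar).T @ (X - xbar)        # within-concept scatter
    mu_n = (kappa0 * mu0 + n * xbar) / (kappa0 + n)
    diff = (xbar - mu0).reshape(-1, 1)
    Psi_n = Psi0 + S + (kappa0 * n / (kappa0 + n)) * (diff @ diff.T)
    cov = Psi_n / (nu0 + n - d - 1)      # posterior-mean covariance
    return mu_n, cov

# Toy usage: the prior mean comes from related concepts' vectors,
# so with only three instances the estimate stays near the prior.
related = np.random.randn(5, 4)          # hypothetical related-concept points
mu0, Psi0 = related.mean(axis=0), np.eye(4)
instances = np.random.randn(3, 4)
mu, cov = gaussian_concept_map(instances, mu0, Psi0)
```

With few instances the estimate stays close to the prior derived from interrelated concepts; as the instance count grows, the data dominate.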


An extended description logic system with knowledge element based on ALC

arXiv.org Artificial Intelligence

With the rise of knowledge management and the knowledge economy, knowledge elements, which directly link and embody the knowledge system, have become a research focus in several areas. Existing knowledge element representation methods are limited in their ability to handle formality, logic, and reasoning. To describe knowledge elements, we extend the description logic ALC on the basis of the common knowledge element model. The concept is extended into two kinds (the object knowledge element concept and the attribute knowledge element concept); the relationship is extended into three kinds (the relationship between object and attribute knowledge element concepts, the relationship among object knowledge element concepts, and the relationship among attribute knowledge element concepts); and an inverse relationship constructor is added, yielding the description logic system KEDL. We prove the relevant properties of KEDL, such as soundness and completeness. Finally, an example verifies that the KEDL system has strong knowledge element description ability.

Introduction

With the rise of knowledge management and the knowledge economy, knowledge has attracted attention as an important strategic resource. The direct control and management of knowledge itself has become a focus of attention across disciplines.
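
As a rough illustration of the kind of syntax KEDL layers on top of ALC, the sketch below renders ALC concept constructors as a small Python syntax tree, tagging atomic concepts as object or attribute knowledge elements and flagging inverse roles. The class names, the `kind` tag, and the `hasAttribute` role are illustrative assumptions, not the paper's formal definitions.

```python
from dataclasses import dataclass

class Concept:
    """Base class for ALC-style concept expressions."""

@dataclass(frozen=True)
class Atomic(Concept):
    name: str
    kind: str  # "object" or "attribute" knowledge element concept

@dataclass(frozen=True)
class Not(Concept):
    c: Concept

@dataclass(frozen=True)
class And(Concept):
    left: Concept
    right: Concept

@dataclass(frozen=True)
class Exists(Concept):       # existential restriction, "exists r.C"
    role: str
    c: Concept
    inverse: bool = False    # KEDL-style inverse relationship constructor

# e.g., an object knowledge element linked to an attribute one:
paper = Atomic("Paper", kind="object")
has_title = Exists("hasAttribute", Atomic("Title", kind="attribute"))
expr = And(paper, has_title)
```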


Beyond Word Embeddings: Learning Entity and Concept Representations from Large Scale Knowledge Bases

arXiv.org Artificial Intelligence

Text representations based on neural word embeddings have proven effective in many NLP applications. Recent research adapts traditional word embedding models to learn vectors for multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating knowledge about concepts from two large-scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to learn seamlessly from the knowledge in Wikipedia text and the Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art accuracy of 91% on semantic analogies, and (2) concept categorization, where we achieve state-of-the-art performance on two benchmark datasets, with categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study evaluating our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to generalize better to out-of-vocabulary entity mentions than the tedious and error-prone methods that depend on gazetteers and regular expressions.
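
The semantic-analogy evaluation mentioned above is conventionally run with the vector-offset (3CosAdd) test; the sketch below shows that test over a toy embedding table. It assumes unit-normalized vectors and says nothing about how the Wikipedia/Probase concept embeddings themselves are trained; the toy vectors are fabricated for illustration.

```python
import numpy as np

def analogy(emb, a, b, c, topn=1):
    """3CosAdd analogy a : b :: c : ? -> argmax cos(v, b - a + c).
    `emb` maps concept/entity strings to unit-normalized vectors."""
    target = emb[b] - emb[a] + emb[c]
    target /= np.linalg.norm(target)
    scores = {w: float(v @ target)
              for w, v in emb.items() if w not in (a, b, c)}
    return sorted(scores, key=scores.get, reverse=True)[:topn]

# Toy embedding table (hypothetical vectors, unit-normalized):
emb = {w: v / np.linalg.norm(v) for w, v in {
    "king":  np.array([0.9, 0.1, 0.0]),
    "man":   np.array([0.8, 0.0, 0.1]),
    "woman": np.array([0.1, 0.8, 0.1]),
    "queen": np.array([0.2, 0.9, 0.0]),
}.items()}
print(analogy(emb, "man", "king", "woman"))  # -> ['queen']
```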


Term Subsumption Languages in Knowledge Representation

AI Magazine

The Workshop on Term Subsumption Languages in Knowledge Representation was held 18-20 October 1989 at the Inn at Thorn Hill, located in the White Mountain region of New Hampshire. The workshop was organized by Peter F. Patel-Schneider of AT&T Bell Laboratories, Murray Hill, New Jersey; Marc Vilain of MITRE, Bedford, Massachusetts; Ramesh S. Patil of the Massachusetts Institute of Technology (MIT); and Bill Mark of the Lockheed AI Center, Menlo Park, California. Support was provided by the American Association for Artificial Intelligence and AT&T Bell Laboratories. This workshop was the latest in a series in this area. Previous workshops had a slightly narrower focus, being explicitly concerned with KL-One, the first knowledge representation system based on a term subsumption language (TSL), or with its successor, NIKL.