Semantic Networks


The knowledge graph as the default data model for learning on heterogeneous knowledge - IOS Press

@machinelearnbot

In modern machine learning, raw data is the preferred input for our models. Where a decade ago data scientists were still engineering features, manually picking out the details they thought salient, they now prefer the data in its raw form. As long as we can assume that all relevant and irrelevant information is present in the input data, we can design deep models that build up intermediate representations to sift out relevant features. However, these models are often domain-specific and tailored to the task at hand, and therefore unsuited for learning on heterogeneous knowledge: information of different types and from different domains. If we can develop methods that operate on this form of knowledge, we can dispense with a great deal more ad-hoc feature engineering and train deep models end-to-end in many more domains.


Extrofitting: Enriching Word Representation and its Vector Space with Semantic Lexicons

arXiv.org Artificial Intelligence

We propose a post-processing method for enriching not only word representations but also their vector space using semantic lexicons, which we call extrofitting. The method consists of three steps: (i) expanding one or more dimensions on all the word vectors, filled with their representative value; (ii) transferring semantic knowledge by averaging the representative values of synonyms and filling them in the expanded dimension(s), which pulls the representations of synonyms close together; and (iii) projecting the vector space using Linear Discriminant Analysis, which eliminates the expanded dimension(s) carrying the semantic knowledge. Experimenting with GloVe, we find that our method outperforms Faruqui's retrofitting on some word similarity tasks. We also report further analysis of our method with respect to word vector dimension and vocabulary size, as well as on other well-known pretrained word vectors (e.g., Word2Vec, fastText).
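
As a concrete illustration of the three steps, here is a minimal sketch in Python with numpy and scikit-learn; the toy vocabulary, the synonym groups, and the choice of each vector's mean as its representative value are illustrative assumptions, not details taken from the paper's implementation.

```python
# A toy sketch of the three extrofitting steps; data and the "representative
# value" choice (the row mean) are assumptions for illustration only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
vocab = ["happy", "glad", "joyful", "sad", "unhappy", "car"]
vectors = rng.normal(size=(len(vocab), 50))            # toy word vectors
synonym_groups = [{"happy", "glad", "joyful"}, {"sad", "unhappy"}]

# (i) Expand one dimension on all word vectors, filled with a representative
# value (here: the mean of each vector).
rep = vectors.mean(axis=1, keepdims=True)
expanded = np.hstack([vectors, rep])

# (ii) For each synonym group, overwrite the new dimension with the group's
# averaged representative value, pulling synonyms closer together.
labels = np.arange(len(vocab))                         # each word its own class
for cls, group in enumerate(synonym_groups):
    idx = [vocab.index(w) for w in group]
    expanded[idx, -1] = rep[idx].mean()
    labels[idx] = len(vocab) + cls                     # synonyms share a class

# (iii) Project with LDA, using synonym groups as class labels; this spreads
# the semantic knowledge over the space and removes the expanded dimension.
n_components = min(len(set(labels)) - 1, vectors.shape[1])
lda = LinearDiscriminantAnalysis(n_components=n_components)
extrofitted = lda.fit_transform(expanded, labels)
print(extrofitted.shape)
```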


KBGAN: Adversarial Learning for Knowledge Graph Embeddings

arXiv.org Artificial Intelligence

We introduce KBGAN, an adversarial learning framework to improve the performance of a wide range of existing knowledge graph embedding models. Because knowledge graphs typically contain only positive facts, sampling useful negative training examples is a non-trivial task. Replacing the head or tail entity of a fact with a uniformly randomly selected entity is a conventional method for generating negative facts, but the majority of the generated negative facts can be easily discriminated from positive facts and contribute little to training. Inspired by generative adversarial networks (GANs), we use one knowledge graph embedding model as a negative sample generator to assist the training of our desired model, which acts as the discriminator in GANs. This framework is independent of the concrete form of generator and discriminator, and can therefore use a wide variety of knowledge graph embedding models as its building blocks. In experiments, we adversarially train two translation-based models, TransE and TransD, each with assistance from one of two probability-based models, DistMult and ComplEx. We evaluate the performance of KBGAN on the link prediction task, using three knowledge base completion datasets: FB15k-237, WN18 and WN18RR. Experimental results show that adversarial training substantially improves the performance of target embedding models under various settings.
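
As a minimal sketch of the adversarial setup, the PyTorch fragment below pairs DistMult as the negative-sample generator with TransE as the discriminator, one of the paper's configurations; the toy batch, margin, candidate-set size, and the simplified REINFORCE update without a baseline are assumptions for illustration.

```python
# Toy sketch of KBGAN-style adversarial negative sampling; shapes and
# hyperparameters are illustrative, not the paper's settings.
import torch
import torch.nn.functional as F

n_ent, n_rel, dim, n_cand = 100, 10, 32, 20
gen_e = torch.nn.Embedding(n_ent, dim)   # generator (DistMult) entities
gen_r = torch.nn.Embedding(n_rel, dim)
dis_e = torch.nn.Embedding(n_ent, dim)   # discriminator (TransE) entities
dis_r = torch.nn.Embedding(n_rel, dim)
opt_g = torch.optim.Adam(list(gen_e.parameters()) + list(gen_r.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(list(dis_e.parameters()) + list(dis_r.parameters()), lr=1e-3)

def distmult(h, r, t):                   # generator score: <h, r, t>
    return (gen_e(h) * gen_r(r) * gen_e(t)).sum(-1)

def transe(h, r, t):                     # discriminator score: -||h + r - t||_1
    return -(dis_e(h) + dis_r(r) - dis_e(t)).norm(p=1, dim=-1)

# One training step on a toy batch of positive triples (h, r, t).
h = torch.randint(0, n_ent, (8,))
r = torch.randint(0, n_rel, (8,))
t = torch.randint(0, n_ent, (8,))

# Generator samples a tail corruption from a uniformly drawn candidate set,
# weighting candidates by its own scores.
cand = torch.randint(0, n_ent, (8, n_cand))
probs = F.softmax(distmult(h.unsqueeze(1), r.unsqueeze(1), cand), dim=-1)
picked = torch.multinomial(probs, 1).squeeze(-1)
neg_t = cand.gather(1, picked.unsqueeze(1)).squeeze(1)

# Discriminator: margin ranking loss on positive vs generated negative triples.
pos, neg = transe(h, r, t), transe(h, r, neg_t)
d_loss = F.relu(1.0 + neg - pos).mean()
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator: REINFORCE, rewarding negatives the discriminator scores highly.
reward = transe(h, r, neg_t).detach()
log_p = torch.log(probs.gather(1, picked.unsqueeze(1)).squeeze(1) + 1e-9)
g_loss = -(reward * log_p).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```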


Google's Knowledge Graph Identifies your Medical Symptoms

#artificialintelligence

Google's mobile site as well as its iOS and Android apps introduced a feature that aims to track down information on medical symptoms. Instead of having to search for a condition, you can search for a certain symptom, such as "my stomach hurts."


Expeditious Generation of Knowledge Graph Embeddings

arXiv.org Artificial Intelligence

Knowledge graph embedding methods aim at representing entities and relations in a knowledge base as points or vectors in a continuous vector space. Several approaches using embeddings have shown promising results on tasks such as link prediction, entity recommendation, question answering, and triplet classification. However, only a few methods can compute low-dimensional embeddings of very large knowledge bases. In this paper, we propose KG2Vec, a novel approach to knowledge graph embedding based on the skip-gram model. Instead of using a predefined scoring function, we learn one relying on Long Short-Term Memories. We evaluate the quality of our embeddings on knowledge graph completion and show that KG2Vec is comparable in quality to scalable state-of-the-art approaches while able to process large graphs, parsing more than a hundred million triples in less than 6 hours on common hardware.
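
The skip-gram step can be sketched with gensim by treating each triple as a three-token sentence, as below; the toy triples and hyperparameters are illustrative, and the paper's LSTM-based scoring function is omitted.

```python
# Toy sketch: skip-gram embeddings over triples treated as short sentences.
from gensim.models import Word2Vec

# Illustrative triples (subject, predicate, object) standing in for a KG dump.
triples = [
    ("Berlin", "capitalOf", "Germany"),
    ("Paris", "capitalOf", "France"),
    ("Germany", "locatedIn", "Europe"),
    ("France", "locatedIn", "Europe"),
]

# Skip-gram (sg=1) embeds each entity/relation token by predicting the tokens
# it co-occurs with inside its triple.
model = Word2Vec(
    sentences=[list(t) for t in triples],
    vector_size=32,   # embedding dimensionality
    window=2,         # covers the whole triple
    min_count=1,
    sg=1,             # skip-gram rather than CBOW
    epochs=100,
)
print(model.wv.most_similar("Germany", topn=3))
```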


Why Knowledge Graphs Are Foundational to Artificial Intelligence

#artificialintelligence

AI is poised to drive the next wave of technological disruption across industries. Like previous technology revolutions in Web and mobile, however, there will be huge dividends for those organizations that can harness this technology for competitive advantage. I spend a lot of time working with customers, many of whom are investing significant time and effort in building AI applications for this very reason. From the outside, these applications couldn't be more diverse – fraud detection, retail recommendation engines, knowledge sharing – but I see a sweeping opportunity across the board: context. Without context (who the user is, what they are searching for, what similar users have searched for in the past, and how all these connections play together), these AI applications may never reach their full potential.


Ido Dagan: Open Knowledge Graphs: Consolidating and Exploring Textual Information

#artificialintelligence

SPEAKER: Ido Dagan. TITLE: Open Knowledge Graphs: Consolidating and Exploring Textual Information. ABSTRACT: How can we effectively capture the information expressed in multiple texts? How can we allow people, as well as computer applications, to easily explore it? The current semantic NLP pipeline typically ends at the single-sentence level, putting the burden on applications to consolidate related information that is spread across different texts. Further, semantic representations are often based on non-trivial pre-specified schemata, which require expert annotation and hence complicate the creation of large-scale corpora for effective training. In this talk, I will outline a proposal for a novel open representation of the information expressed jointly by multiple texts, which we term Open Knowledge Graphs (OKG).


Incorporating Literals into Knowledge Graph Embeddings

arXiv.org Machine Learning

Knowledge graphs, on top of entities and their relationships, contain another important element: literals. Literals encode interesting properties (e.g., the height) of entities that are not captured by links between entities alone. Most existing work on embedding-based (or latent feature) knowledge graph modeling focuses mainly on the relations between entities. In this work, we study the effect of incorporating literal information into existing knowledge graph models. Our approach, which we name LiteralE, is an extension that can be plugged into existing latent feature methods. LiteralE merges entity embeddings with their literal information using a learnable, parametrized function, such as a simple linear or nonlinear transformation, or a multilayer neural network. We extend several popular embedding models using LiteralE and evaluate their performance on the task of link prediction. Despite its simplicity, LiteralE proves to be an effective way to incorporate literal information into existing embedding-based models, improving their performance on different standard datasets, which we augmented with their literals and provide as a testbed for further research.
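
A minimal PyTorch sketch of a LiteralE-style merge is given below, using a gated combination as the learnable function; the class and variable names are illustrative, and the DistMult-style score at the end only shows where the merged embedding plugs into an existing model.

```python
# Toy sketch of merging entity embeddings with literal features via a
# learnable gate; names and the gated form are illustrative assumptions.
import torch

class LiteralGate(torch.nn.Module):
    """Merge an entity embedding with its numeric literal features."""
    def __init__(self, emb_dim: int, n_literals: int):
        super().__init__()
        self.gate = torch.nn.Linear(emb_dim + n_literals, emb_dim)
        self.proj = torch.nn.Linear(emb_dim + n_literals, emb_dim)

    def forward(self, e: torch.Tensor, lit: torch.Tensor) -> torch.Tensor:
        x = torch.cat([e, lit], dim=-1)
        z = torch.sigmoid(self.gate(x))   # how much literal info to mix in
        h = torch.tanh(self.proj(x))      # literal-enriched representation
        return z * h + (1 - z) * e        # gated blend with the original

# The merged vector drops into any scoring function, e.g. a DistMult score.
emb_dim, n_lit, n_ent = 32, 3, 100
entities = torch.nn.Embedding(n_ent, emb_dim)
literals = torch.randn(n_ent, n_lit)      # e.g. height, population, year
merge = LiteralGate(emb_dim, n_lit)

h, t = torch.tensor([0]), torch.tensor([1])
rel = torch.randn(1, emb_dim)
score = (merge(entities(h), literals[h]) * rel * merge(entities(t), literals[t])).sum(-1)
print(score)
```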