Semantic Networks


Knowledge Graphs: The Path to Enterprise AI - Neo4j Graph Database Platform

#artificialintelligence

Michael Moore, Ph.D. is an Executive Director in the Advisory Services practice of Ernst & Young LLP. He is the National practice lead for Enterprise Knowledge Graphs AI in EY's Data and Analytics (DnA) Group. Moore helps EY clients deploy large-scale knowledge graphs using cutting-edge technologies, real-time architectures and advanced analytics. Omar Azhar is the Manager of EY Financial Services Organization Advisory – AI Strategy and Advanced Analytics COE at EY.


A Standard to build Knowledge Graphs: 12 Facts about SKOS

@machinelearnbot

These days, many organisations have begun to develop their own knowledge graphs. One reason might be to build a solid basis for various machine learning and cognitive computing efforts. For many of them, however, it still remains unclear where to start. SKOS offers a simple way to begin and opens many doors for extending a knowledge graph over time. Using open standards for data and knowledge models also eliminates proprietary vendor lock-in.
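To picture the kind of modelling SKOS standardises, here is a minimal Python sketch of a concept hierarchy with a `skos:broader`-style relation and a transitive ancestor query. The concepts and labels are invented for illustration; real SKOS data would be expressed in RDF (e.g. Turtle), not Python dicts.

```python
# Each concept maps to its skos:broader parents (toy data).
broader = {
    "espresso": ["coffee drink"],
    "latte": ["coffee drink"],
    "coffee drink": ["beverage"],
}

def broader_transitive(concept, rel=broader):
    """Return all ancestors of a concept (akin to skos:broaderTransitive)."""
    seen = set()
    stack = [concept]
    while stack:
        for parent in rel.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(broader_transitive("espresso"))  # {'coffee drink', 'beverage'}
```

Starting from a simple broader/narrower hierarchy like this, a knowledge graph can later be extended with richer relations without breaking the original model, which is one of the appeals of the standard.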


KG^2: Learning to Reason Science Exam Questions with Contextual Knowledge Graph Embeddings

arXiv.org Machine Learning

The AI2 Reasoning Challenge (ARC), a new benchmark dataset for question answering (QA), has recently been released. ARC contains only natural-science questions authored for human exams, which are hard to answer and require advanced logical reasoning. On the ARC Challenge Set, existing state-of-the-art QA systems fail to significantly outperform a random baseline, reflecting the difficulty of the task. In this paper, we propose a novel framework for answering science exam questions that mimics the human solving process in an open-book exam. To address the reasoning challenge, we construct contextual knowledge graphs for the question itself and for the supporting sentences. Our model learns to reason with neural embeddings of both knowledge graphs. Experiments on the ARC Challenge Set show that our model outperforms the previous state-of-the-art QA systems.


The knowledge graph as the default data model for learning on heterogeneous knowledge - IOS Press

@machinelearnbot

In modern machine learning, raw data is the preferred input for our models. Where a decade ago data scientists were still engineering features, manually picking out the details they thought salient, they now prefer the data in its raw form. As long as we can assume that all relevant and irrelevant information is present in the input data, we can design deep models that build up intermediate representations to sift out the relevant features. However, these models are often domain-specific and tailored to the task at hand, and therefore unsuited for learning on heterogeneous knowledge: information of different types and from different domains. If we can develop methods that operate on this form of knowledge, we can dispense with a great deal of ad-hoc feature engineering and train deep models end-to-end in many more domains.


Extrofitting: Enriching Word Representation and its Vector Space with Semantic Lexicons

arXiv.org Artificial Intelligence

We propose a post-processing method, which we call extrofitting, for enriching not only word representations but also their vector space using semantic lexicons. The method consists of three steps: (i) expanding all word vectors by one or more dimensions, filled with a representative value of each vector; (ii) transferring semantic knowledge by averaging the representative values of synonyms and writing the averages into the expanded dimension(s). These two steps pull the representations of synonyms closer together. (iii) Projecting the vector space using Linear Discriminant Analysis, which removes the expanded dimension(s) carrying the semantic knowledge. Experimenting with GloVe, we find that our method outperforms Faruqui's retrofitting on some word similarity tasks. We also report further analysis of our method with respect to word vector dimensions, vocabulary size, and other well-known pretrained word vectors (e.g., Word2Vec, fastText).
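Steps (i) and (ii) can be sketched in a few lines of plain Python; the vectors and synonym set below are made up, and step (iii), the LDA projection back to the original dimensionality, is omitted (in practice it could use, e.g., scikit-learn's LinearDiscriminantAnalysis with synonym groups as class labels).

```python
# Toy word vectors and one synonym set (invented for illustration).
vectors = {
    "happy": [0.9, 0.1, 0.3],
    "glad":  [0.8, 0.2, 0.2],
    "sad":   [-0.7, 0.4, 0.1],
}
synonym_sets = [{"happy", "glad"}]

def mean(xs):
    return sum(xs) / len(xs)

# (i) Expand each vector by one dimension holding a representative
# value -- here, the mean of the vector's components.
expanded = {w: v + [mean(v)] for w, v in vectors.items()}

# (ii) For each synonym set, average the synonyms' representative values
# and write the average into the expanded dimension; this pulls the
# synonyms' representations closer together.
for syns in synonym_sets:
    avg = mean([expanded[w][-1] for w in syns])
    for w in syns:
        expanded[w][-1] = avg
```

After these two steps, "happy" and "glad" share an identical value in the new dimension while "sad" does not, which is the signal the final LDA projection exploits.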


KBGAN: Adversarial Learning for Knowledge Graph Embeddings

arXiv.org Artificial Intelligence

We introduce KBGAN, an adversarial learning framework that improves the performance of a wide range of existing knowledge graph embedding models. Because knowledge graphs typically contain only positive facts, sampling useful negative training examples is a non-trivial task. Replacing the head or tail entity of a fact with a uniformly sampled random entity is the conventional method for generating negative facts, but most of the generated negatives are easily discriminated from positive facts and contribute little to training. Inspired by generative adversarial networks (GANs), we use one knowledge graph embedding model as a negative sample generator to assist the training of our desired model, which acts as the discriminator in the GAN. This framework is independent of the concrete form of the generator and discriminator, and can therefore use a wide variety of knowledge graph embedding models as its building blocks. In experiments, we adversarially train two translation-based models, TransE and TransD, each with assistance from one of two probability-based models, DistMult and ComplEx. We evaluate KBGAN on the link prediction task using three knowledge base completion datasets: FB15k-237, WN18 and WN18RR. Experimental results show that adversarial training substantially improves the performance of the target embedding models under various settings.
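The uniform-corruption baseline the abstract describes can be sketched as follows; the entities and triples are invented for illustration, and KBGAN's contribution is precisely to replace this uniform sampler with a learned generator that proposes harder negatives.

```python
import random

# Toy knowledge graph: a set of positive (head, relation, tail) facts.
entities = ["Paris", "France", "Berlin", "Germany"]
positive = {("Paris", "capital_of", "France"),
            ("Berlin", "capital_of", "Germany")}

def corrupt(triple, entities, positives, rng=random):
    """Replace the head or tail with a random entity to form a negative fact."""
    h, r, t = triple
    while True:
        if rng.random() < 0.5:
            cand = (rng.choice(entities), r, t)   # corrupt the head
        else:
            cand = (h, r, rng.choice(entities))   # corrupt the tail
        if cand not in positives:                 # filter accidental positives
            return cand

neg = corrupt(("Paris", "capital_of", "France"), entities, positive)
print(neg)  # e.g. ('Berlin', 'capital_of', 'France')
```

Most triples produced this way (e.g. a city as the tail of `capital_of`) are trivially implausible, which is why they contribute little gradient signal during training.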


Google's Knowledge Graph Identifies your Medical Symptoms

#artificialintelligence

Google's mobile site as well as its iOS and Android apps introduced a feature that aims to track down information on medical symptoms. Instead of having to search for a condition, you can search for a certain symptom, such as "my stomach hurts."


Expeditious Generation of Knowledge Graph Embeddings

arXiv.org Artificial Intelligence

Knowledge Graph Embedding methods aim at representing entities and relations in a knowledge base as points or vectors in a continuous vector space. Several approaches using embeddings have shown promising results on tasks such as link prediction, entity recommendation, question answering, and triplet classification. However, only a few methods can compute low-dimensional embeddings of very large knowledge bases. In this paper, we propose KG2Vec, a novel approach to knowledge graph embedding based on the skip-gram model. Instead of using a predefined scoring function, we learn one using Long Short-Term Memory networks. We evaluate the quality of our embeddings on knowledge graph completion and show that KG2Vec is comparable in quality to the scalable state-of-the-art approaches and can process large graphs, parsing more than a hundred million triples in less than 6 hours on common hardware.
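One way to picture the skip-gram stage is to treat each triple as a three-token "sentence" and enumerate (target, context) pairs for a word2vec-style trainer. This is an illustrative sketch under that assumption, not the paper's exact pipeline; the learned LSTM scoring function is not shown, and the triples are invented.

```python
# Toy triples; each becomes a short token sequence for skip-gram training.
triples = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
]

def skipgram_pairs(triples):
    """With a window covering the whole triple, every token is a context
    for every other token in the same triple."""
    pairs = []
    for sent in triples:
        for i, target in enumerate(sent):
            for j, context in enumerate(sent):
                if i != j:
                    pairs.append((target, context))
    return pairs

print(skipgram_pairs(triples)[:3])
# [('Paris', 'capital_of'), ('Paris', 'France'), ('capital_of', 'Paris')]
```

Because this preprocessing is a single linear pass over the triples, it scales to very large graphs, which is consistent with the throughput the abstract reports.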


Why Knowledge Graphs Are Foundational to Artificial Intelligence

#artificialintelligence

AI is poised to drive the next wave of technological disruption across industries. Like previous technology revolutions in Web and mobile, however, there will be huge dividends for those organizations that can harness this technology for competitive advantage. I spend a lot of time working with customers, many of whom are investing significant time and effort in building AI applications for this very reason. From the outside, these applications couldn't be more diverse – fraud detection, retail recommendation engines, knowledge sharing – but I see a sweeping opportunity across the board: context. Without context (who the user is, what they are searching for, what similar users have searched for in the past, and how all these connections play together) these AI applications may never reach their full potential.