Semantic Networks


Knowledge Graphs & NLP @ EMNLP 2019 Part I

#artificialintelligence

Language models (LMs) are the hottest topic in NLP research right now. The most prominent examples are BERT and GPT-2, but new LMs trained on humongous volumes of text are published every month. Are LMs capable of encoding knowledge the way knowledge graphs do? Petroni et al. study this problem by comparing language models with knowledge graphs on question answering and NLG tasks where factual knowledge is required: a question is posed by inserting a MASK token in place of the answer. It turns out LMs perform comparably to KGs on very simple questions such as "Adolphe Adam died in [Paris]".
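The probing protocol can be illustrated without a real LM: mask the answer slot, score each candidate fill under the model's distribution, and take the argmax. A minimal sketch with a toy stand-in for the model (the prompts and probabilities below are invented for illustration; a real probe would query BERT's masked-token distribution):

```python
# Toy illustration of cloze-style knowledge probing of a language model.
# A hand-made dict stands in for the model's conditional fill-in
# probabilities; the numbers are made up.
TOY_LM = {
    "Adolphe Adam died in [MASK] .": {"Paris": 0.61, "London": 0.12, "Vienna": 0.08},
    "Dante was born in [MASK] .": {"Florence": 0.55, "Rome": 0.20},
}

def cloze_query(statement: str) -> str:
    """Return the model's top-ranked fill for the masked slot."""
    dist = TOY_LM[statement]
    return max(dist, key=dist.get)

print(cloze_query("Adolphe Adam died in [MASK] ."))  # Paris
```

With a real LM the dict lookup would be replaced by a forward pass over the masked sentence; the ranking step is the same.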



Knowledge Graphs Strengthen Your AI Strategy - PoolParty Semantic Suite

#artificialintelligence

Companies that build Knowledge Graphs take large amounts of data from various data silos and add value to it so it can be used in a meaningful and more intelligent way. Download this presentation to learn how our customers are using Knowledge Graphs to drive the business value of their data and strengthen their AI strategy.


Using Pairwise Occurrence Information to Improve Knowledge Graph Completion on Large-Scale Datasets

arXiv.org Machine Learning

Esma Balkır (University of Edinburgh), Masha Naslidnyk, Dave Palfrey, and Arpit Mittal (Amazon Research, Cambridge, UK). Abstract: Bilinear models such as DistMult and ComplEx are effective methods for knowledge graph (KG) completion. However, they require large batch sizes, which becomes a performance bottleneck when training on large-scale datasets due to memory constraints. In this paper we use occurrences of entity-relation pairs in the dataset to construct a joint learning model and to increase the quality of sampled negatives during training. We show on three standard datasets that when these two techniques are combined, they give a significant improvement in performance, especially when the batch size and the number of generated negative examples are low relative to the size of the dataset. We then apply our techniques to a dataset containing 2 million entities and demonstrate that our model outperforms the baseline by 2.8% absolute on hits@1. Introduction: A Knowledge Graph (KG) is a collection of facts stored as triples. Even though knowledge graphs are essential for various NLP tasks, open-domain knowledge graphs have missing facts.
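DistMult, one of the bilinear models the abstract mentions, scores a triple as the sum of the element-wise product of the head, relation, and tail embeddings, and training contrasts true triples against corrupted ("negative") ones. A minimal pure-Python sketch of the scoring function and tail-corruption negative sampling (the entity names and embedding values are invented; real models learn high-dimensional embeddings):

```python
import random

def distmult_score(h, r, t):
    """DistMult: score(h, r, t) = sum_i h_i * r_i * t_i."""
    return sum(hi * ri * ti for hi, ri, ti in zip(h, r, t))

# Toy embeddings (in practice these are learned, e.g. 200-dim vectors).
emb = {
    "adam":    [0.9, 0.1, 0.0],
    "paris":   [0.8, 0.2, 0.1],
    "london":  [0.1, 0.9, 0.3],
    "died_in": [1.0, 0.5, 0.2],
}
entities = ["adam", "paris", "london"]

def corrupt_tail(triple, entities, k=2, rng=random):
    """Generate k negatives by replacing the tail with a random other entity."""
    h, r, t = triple
    pool = [e for e in entities if e != t]
    return [(h, r, rng.choice(pool)) for _ in range(k)]

pos = distmult_score(emb["adam"], emb["died_in"], emb["paris"])
neg = distmult_score(emb["adam"], emb["died_in"], emb["london"])
# A trained model should assign the true triple the higher score.
negs = corrupt_tail(("adam", "died_in", "paris"), entities, k=2)
```

The paper's point is that the quality of these sampled negatives matters: with small batches, naive uniform corruption yields too few informative negatives, which their pairwise-occurrence technique addresses.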


Towards Combinational Relation Linking over Knowledge Graphs

arXiv.org Artificial Intelligence

Given a natural language phrase, relation linking aims to find a relation (predicate or property) from the underlying knowledge graph that matches the phrase. It is very useful in many applications, such as natural language question answering, personalized recommendation, and text summarization. However, previous relation linking algorithms usually produce a single relation for the input phrase and pay little attention to a more general and challenging problem, i.e., combinational relation linking, which extracts a subgraph pattern to match a compound phrase (e.g., mother-in-law). In this paper, we focus on the task of combinational relation linking over knowledge graphs. To resolve the problem, we design a systematic method based on a data-driven relation assembly technique, which is performed under the guidance of meta patterns. We also introduce external knowledge to enhance the system's understanding ability. Finally, we conduct extensive experiments over a real knowledge graph to study the performance of the proposed method. Introduction: Knowledge graphs have become important repositories that materialize a huge amount of structured information in the form of triples, where a triple consists of ⟨subject, predicate, object⟩ or ⟨subject, property, value⟩. There have been many such knowledge graphs, e.g., DBpedia (Auer et al. 2007), Yago (Suchanek, Kasneci, and Weikum 2007), and Freebase (Bollacker et al. 2008). In order to bridge the gap between unstructured text (including text documents and natural language questions) and structured knowledge, an important and interesting task is conducting relation linking over the knowledge graph, i.e., finding the specific predicates/properties from the knowledge graph that match the phrases detected in a sentence (which may also be a question). Relation linking can power many downstream applications.
As a friendly and intuitive way of exploring knowledge graphs, using natural language questions to query them has attracted a lot of attention in both academia and industry (Berant et al. 2013; Bao et al. 2016; Das et al. 2017; Hu et al. 2018; Huang et al. 2019). Generally, simple questions, e.g., who is the founder of Microsoft, are easy to answer, since it is straightforward to choose the predicate "founder" from the knowledge graph to match the phrase "founder" in the input question. (Figure 1 in the paper shows an example of combinational relations matching the compound phrase mother-in-law.)
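For the simple, non-combinational case the abstract contrasts against, relation linking can be approximated by matching the phrase against predicate labels. A toy token-overlap sketch (the predicate names are invented; real linkers use embeddings, lexicons, and the external knowledge this paper discusses rather than this heuristic):

```python
def link_relation(phrase, predicates):
    """Pick the predicate whose label shares the most tokens with the phrase.
    A deliberately crude heuristic standing in for learned similarity."""
    p_tokens = set(phrase.lower().split())

    def overlap(pred):
        return len(p_tokens & set(pred.lower().replace("_", " ").split()))

    return max(predicates, key=overlap)

kg_predicates = ["founder", "birth_place", "spouse", "headquarters"]
print(link_relation("founder of", kg_predicates))  # founder
```

Combinational linking is harder precisely because no single predicate label matches a compound phrase like "mother-in-law"; a subgraph pattern over several predicates is needed instead.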


Question Answering over Knowledge Graphs via Structural Query Patterns

arXiv.org Artificial Intelligence

Natural language question answering over knowledge graphs is an important and interesting task, as it enables ordinary users to obtain accurate answers in an easy and intuitive manner. However, it remains a challenge to bridge the gap between unstructured questions and structured knowledge graphs. To address the problem, a natural approach is to build a structured query that represents the input question; executing the structured query over the knowledge graph then produces answers to the question. Distinct from existing methods based on semantic parsing or templates, we propose an effective approach powered by a novel notion, the structural query pattern. Given an input question, we first generate a query sketch that is compatible with the underlying structure of the knowledge graph. Then, we complete the query graph by labeling the nodes and edges under the guidance of the structural query pattern. Finally, answers can be retrieved by executing the constructed query graph over the knowledge graph. Evaluations on three question answering benchmarks show that our proposed approach significantly outperforms state-of-the-art methods.
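The final step, executing the constructed query graph over the KG, amounts to graph pattern matching. A minimal sketch for single-triple patterns over a toy in-memory graph (entities and predicates are invented; a real system would run SPARQL over a KG such as DBpedia):

```python
def match_pattern(kg, pattern):
    """Return variable bindings (terms starting with '?') for a triple pattern."""
    results = []
    for triple in kg:
        binding = {}
        for p_term, t_term in zip(pattern, triple):
            if p_term.startswith("?"):
                binding[p_term] = t_term
            elif p_term != t_term:
                break  # constant term mismatch: triple does not match
        else:
            results.append(binding)
    return results

kg = {
    ("Bill_Gates", "founder", "Microsoft"),
    ("Paul_Allen", "founder", "Microsoft"),
    ("Bill_Gates", "birth_place", "Seattle"),
}

# "Who is the founder of Microsoft?" -> pattern (?x, founder, Microsoft)
answers = match_pattern(kg, ("?x", "founder", "Microsoft"))
print(sorted(b["?x"] for b in answers))  # ['Bill_Gates', 'Paul_Allen']
```

The paper's contribution sits upstream of this step: choosing the right query sketch and labeling its nodes and edges so the pattern being executed actually reflects the question.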


Knowledge Graph -- A Powerful Data Science Technique to Mine Information from Text (with Python code)

#artificialintelligence

Lionel Messi needs no introduction. Even folks who don't follow football have heard about the brilliance of one of the greatest players to have graced the sport. We have text, tons of hyperlinks, and even an audio clip. The possibilities of putting this into a use case are endless. However, there is a slight problem. This is not an ideal source of data to feed to our machines.


r/MachineLearning - [R] How Contextual are Contextualized Word Representations?

#artificialintelligence

Abstract: Replacing static word embeddings with contextualized word representations has yielded significant improvements on many NLP tasks. However, just how contextual are the contextualized representations produced by models such as ELMo and BERT? Are there infinitely many context-specific representations for each word, or are words essentially assigned one of a finite number of word-sense representations? For one, we find that the contextualized representations of all words are not isotropic in any layer of the contextualizing model. While representations of the same word in different contexts still have a greater cosine similarity than those of two different words, this self-similarity is much lower in upper layers.
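Self-similarity here is the average pairwise cosine similarity between one word's representations across different contexts. A minimal pure-Python sketch of that measurement (the vectors are invented; the actual analysis uses ELMo/BERT/GPT-2 layer outputs):

```python
import math
from itertools import combinations

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def self_similarity(context_vectors):
    """Mean pairwise cosine similarity of one word's vectors across contexts."""
    pairs = list(combinations(context_vectors, 2))
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)

# Invented representations of the word "bank" in three contexts.
bank = [[0.9, 0.1, 0.2], [0.8, 0.3, 0.1], [0.2, 0.9, 0.4]]
print(round(self_similarity(bank), 3))
```

The paper's finding translates to: this value stays below 1 for all words and drops in the upper layers, while remaining higher than the similarity between vectors of two different words.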


The Data Fabric for Machine Learning – Part 2: Building a Knowledge-Graph - KDnuggets

#artificialintelligence

How the new advances in semantics can help us do Machine Learning better. Deep learning on graphs is gaining importance by the day. I've been talking about the data fabric in general, and introducing some concepts of Machine Learning and Deep Learning within it. The Data Fabric is the platform that supports all the data in the company: how it's managed, described, combined, and universally accessed.


How to build a Knowledge Graph from Text Using spaCy

#artificialintelligence

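The article builds a knowledge graph by extracting (subject, relation, object) triples from sentences using spaCy's dependency parse. A library-free sketch of the core idea, using a crude pattern over simple declarative sentences (the relation patterns and example sentences are invented; a real pipeline would use spaCy's parser rather than this heuristic):

```python
import re

# A tiny closed set of relation phrases stands in for the dependency-parse
# rules the article derives with spaCy.
RELATIONS = r"plays for|was born in|captains"

def extract_triple(sentence):
    """Crude (subject, relation, object) extraction for sentences of the
    form 'X <relation phrase> Y.'; returns None when nothing matches."""
    m = re.match(
        rf"^(?P<subj>[A-Z][\w ]*?) (?P<rel>{RELATIONS}) (?P<obj>[\w ]+)\.?$",
        sentence,
    )
    return (m.group("subj"), m.group("rel"), m.group("obj")) if m else None

kg = [t for t in map(extract_triple, [
    "Lionel Messi plays for Barcelona.",
    "Messi was born in Rosario.",
]) if t]
print(kg)  # [('Lionel Messi', 'plays for', 'Barcelona'), ('Messi', 'was born in', 'Rosario')]
```

The extracted triples can then be loaded into a graph structure (the article uses networkx) with subjects and objects as nodes and relations as edge labels.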