Semantic Networks


Knowledge graphs beyond the hype: Getting knowledge in and out of graphs and databases

ZDNet

We can officially say this now, since Gartner included knowledge graphs in the 2018 hype cycle for emerging technologies. Not that we had to wait for Gartner -- declaring this the "Year of the Graph" was our opener for 2018. Like anyone active in the field, we see the opportunity as well as the threat in this: with hype comes confusion. Knowledge graphs, in their original definition and incarnation, have been about knowledge representation and reasoning, and they have been around for at least the last 20 years.


Hypernetwork Knowledge Graph Embeddings

arXiv.org Machine Learning

Knowledge graphs are large graph-structured databases of facts, which typically suffer from incompleteness. Link prediction is the task of inferring missing relations (links) between entities (nodes) in a knowledge graph. We propose to solve this task by using a hypernetwork architecture to generate convolutional layer filters specific to each relation and apply those filters to the subject entity embeddings. This architecture enables a trade-off between non-linear expressiveness and the number of parameters to learn. Our model simplifies the entity and relation embedding interactions introduced by the predecessor convolutional model, while outperforming all previous approaches to link prediction across all standard link prediction datasets.
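The core mechanism is easier to see in code. Below is a minimal PyTorch sketch of the idea, not the paper's implementation: the dimensions, the ReLU, and the dot-product scoring against all entities are illustrative assumptions, and details such as batch normalization and dropout are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperConvLinkPredictor(nn.Module):
    """Toy link predictor: a hypernetwork emits relation-specific filters."""

    def __init__(self, n_entities, n_relations, emb_dim=200,
                 n_filters=32, filter_size=9):
        super().__init__()
        self.entity_emb = nn.Embedding(n_entities, emb_dim)
        self.relation_emb = nn.Embedding(n_relations, emb_dim)
        # The hypernetwork: maps a relation embedding to a full filter set.
        self.hypernet = nn.Linear(emb_dim, n_filters * filter_size)
        self.n_filters, self.filter_size = n_filters, filter_size
        self.project = nn.Linear(n_filters * (emb_dim - filter_size + 1), emb_dim)

    def forward(self, subject_idx, relation_idx):
        B = subject_idx.size(0)
        e_s = self.entity_emb(subject_idx)              # (B, emb_dim)
        e_r = self.relation_emb(relation_idx)           # (B, emb_dim)
        # Generate each example's own convolution filters from its relation.
        w = self.hypernet(e_r).view(B * self.n_filters, 1, self.filter_size)
        # Grouped conv1d applies each example's filters to its own embedding.
        x = F.conv1d(e_s.view(1, B, -1), w, groups=B)   # (1, B*n_filters, L)
        x = self.project(torch.relu(x.view(B, -1)))     # (B, emb_dim)
        # Score every candidate object entity with a dot product.
        return x @ self.entity_emb.weight.t()           # (B, n_entities)
```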


Using fastText and Comet.ml to classify relationships in Knowledge Graphs

#artificialintelligence

An increasing number of machine learning solutions and companies are leveraging knowledge graph data to tackle industries that require deep domain expertise. In fact, knowledge graphs underpin the natural language capabilities of Alexa, Siri, Cortana and Google Now. Our users at Comet.ml are exploring applications such as semantic search, intelligent chatbots, advanced drug research and dynamic risk analysis. In this post we provide an introduction to knowledge graphs and walk through a simple model developed at Facebook that performs surprisingly well at knowledge base completion tasks.
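As a taste of the approach, here is a hedged sketch of training a fastText relation classifier. The training file name, the label format (one relation label plus the textual form of a triple per line) and the hyperparameters are illustrative assumptions, not the post's exact setup.

```python
import fasttext

# Each training line pairs a relation label with the textual form of a
# triple, for example:
#   __label__capital_of berlin germany
#   __label__author_of tolstoy war_and_peace
model = fasttext.train_supervised(
    input="kg_relations.train.txt",  # hypothetical training file
    epoch=25,
    lr=0.5,
    wordNgrams=2,
)

# Predict the most likely relation for an unseen entity pair.
labels, probs = model.predict("paris france")
print(labels[0], probs[0])
```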


AceKG: A Large-scale Knowledge Graph for Academic Data Mining

arXiv.org Artificial Intelligence

Most existing knowledge graphs (KGs) in academic domains suffer from insufficient multi-relational information, name ambiguity and improper data formats for large-scale machine processing. In this paper, we present AceKG, a new large-scale KG in the academic domain. AceKG not only provides clean academic information, but also offers a large-scale benchmark dataset for researchers to conduct challenging data mining projects, including link prediction, community detection and scholar classification. Specifically, AceKG describes 3.13 billion triples of academic facts based on a consistent ontology, including the necessary properties of papers, authors, fields of study, venues and institutes, as well as the relations among them. To enrich the proposed knowledge graph, we also perform entity alignment with existing databases and rule-based inference. Based on AceKG, we conduct experiments on three typical academic data mining tasks and evaluate several state-of-the-art knowledge embedding and network representation learning approaches on the benchmark datasets built from AceKG. Finally, we discuss several promising research directions that would benefit from AceKG.
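To make the data model concrete, here is a toy sketch of the kind of (subject, relation, object) triples such an academic KG exposes; the identifiers and relation names are invented for illustration and are not AceKG's actual ontology terms.

```python
# Invented identifiers; not AceKG's actual ontology terms.
academic_triples = [
    ("paper:1234", "field_of_study", "field:knowledge_graphs"),
    ("author:42", "wrote", "paper:1234"),
    ("paper:1234", "published_in", "venue:example_conf"),
    ("author:42", "affiliated_with", "institute:example_univ"),
]

# Link prediction asks: given ("author:42", "wrote", ?), which objects
# are plausible? A trivial lookup over the observed triples:
candidates = {o for s, r, o in academic_triples
              if s == "author:42" and r == "wrote"}
print(candidates)  # {'paper:1234'}
```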


Knowledge Graphs: The Path to Enterprise AI - Neo4j Graph Database Platform

#artificialintelligence

Michael Moore, Ph.D. is an Executive Director in the Advisory Services practice of Ernst & Young LLP. He is the National practice lead for Enterprise Knowledge Graphs AI in EY's Data and Analytics (DnA) Group. Moore helps EY clients deploy large-scale knowledge graphs using cutting-edge technologies, real-time architectures and advanced analytics. Omar Azhar is the Manager of EY Financial Services Organization Advisory – AI Strategy and Advanced Analytics COE at EY.


A Standard to build Knowledge Graphs: 12 Facts about SKOS

@machinelearnbot

These days, many organisations have begun to develop their own knowledge graphs. One reason might be to build a solid basis for various machine learning and cognitive computing efforts. For many of them, however, it is still unclear where to start. SKOS offers a simple way to get started and opens many doors for extending a knowledge graph over time. Using open standards for data and knowledge models also eliminates proprietary vendor lock-in.
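As a concrete starting point, here is a minimal sketch of building a tiny SKOS concept scheme with the rdflib Python library; the namespace and concept labels are invented examples.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/taxonomy/")
g = Graph()
g.bind("skos", SKOS)

# A tiny concept scheme: one broader/narrower pair of concepts.
g.add((EX.machineLearning, RDF.type, SKOS.Concept))
g.add((EX.machineLearning, SKOS.prefLabel, Literal("Machine Learning", lang="en")))
g.add((EX.deepLearning, RDF.type, SKOS.Concept))
g.add((EX.deepLearning, SKOS.prefLabel, Literal("Deep Learning", lang="en")))
g.add((EX.deepLearning, SKOS.broader, EX.machineLearning))
g.add((EX.machineLearning, SKOS.narrower, EX.deepLearning))

print(g.serialize(format="turtle"))
```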


KG^2: Learning to Reason Science Exam Questions with Contextual Knowledge Graph Embeddings

arXiv.org Machine Learning

The AI2 Reasoning Challenge (ARC), a new benchmark dataset for question answering (QA), has recently been released. ARC contains only natural science questions authored for human exams, which are hard to answer and require advanced logical reasoning. On the ARC Challenge Set, existing state-of-the-art QA systems fail to significantly outperform a random baseline, reflecting the difficult nature of this task. In this paper, we propose a novel framework for answering science exam questions which mimics the human solving process in an open-book exam. To address the reasoning challenge, we construct contextual knowledge graphs for the question itself and for the supporting sentences. Our model learns to reason with neural embeddings of both knowledge graphs. Experiments on the ARC Challenge Set show that our model outperforms the previous state-of-the-art QA systems.
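The following toy sketch illustrates the high-level idea only: embed a question graph and a supporting-knowledge graph, then score how well they match. The random embeddings and the cosine-similarity scorer are stand-ins for the paper's learned components.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["water", "boils", "100C", "heat", "liquid", "turns_into", "gas"]
EMB = {w: rng.normal(size=50) for w in VOCAB}  # toy node/edge embeddings

def graph_embedding(triples):
    """Average the embeddings of every node and edge label in the graph."""
    vecs = [EMB[token] for triple in triples for token in triple]
    return np.mean(vecs, axis=0)

def match_score(question_graph, support_graph):
    q = graph_embedding(question_graph)
    s = graph_embedding(support_graph)
    return float(q @ s / (np.linalg.norm(q) * np.linalg.norm(s)))

question = [("water", "turns_into", "gas")]
support = [("water", "boils", "100C"), ("liquid", "turns_into", "gas")]
print(match_score(question, support))
```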


The knowledge graph as the default data model for learning on heterogeneous knowledge - IOS Press

@machinelearnbot

In modern machine learning, raw data is the preferred input for our models. Where a decade ago data scientists were still engineering features, manually picking out the details they thought salient, they now prefer the data in raw form. As long as we can assume that all relevant and irrelevant information is present in the input data, we can design deep models that build up intermediate representations to sift out the relevant features. However, these models are often domain specific and tailored to the task at hand, and therefore unsuited for learning on heterogeneous knowledge: information of different types and from different domains. If we can develop methods that operate on this form of knowledge, we can dispense with a great deal of ad-hoc feature engineering and train deep models end-to-end in many more domains.
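One family of such methods passes messages along typed edges. Here is a minimal, hedged sketch of a single relation-typed message-passing step in the spirit of relational graph convolutional networks (R-GCN); the sizes, features and edges are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_relations, dim = 5, 2, 8
H = rng.normal(size=(n_nodes, dim))            # node feature matrix
W = rng.normal(size=(n_relations, dim, dim))   # one weight matrix per relation
edges = [(0, 0, 1), (1, 0, 2), (2, 1, 3), (3, 1, 4)]  # (src, relation, dst)

def rgcn_layer(H, W, edges):
    """One step: aggregate relation-specific messages at each destination."""
    out = np.zeros_like(H)
    deg = np.zeros(len(H))
    for src, rel, dst in edges:
        out[dst] += H[src] @ W[rel]            # message through relation weight
        deg[dst] += 1
    deg[deg == 0] = 1                          # avoid division by zero
    return np.maximum(out / deg[:, None], 0)   # mean aggregation + ReLU

print(rgcn_layer(H, W, edges).shape)           # (5, 8)
```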


Global Bigdata Conference

#artificialintelligence

As with previous technology revolutions in Web and mobile, however, there will be huge dividends for those organizations that can harness this technology for competitive advantage.


Extrofitting: Enriching Word Representation and its Vector Space with Semantic Lexicons

arXiv.org Artificial Intelligence

We propose a post-processing method for enriching not only word representations but also their vector space using semantic lexicons, which we call extrofitting. The method consists of three steps: (i) expanding one or more dimensions on all the word vectors, filled with their representative value; (ii) transferring semantic knowledge by averaging the representative values of synonyms and filling them into the expanded dimension(s) -- these two steps bring the representations of synonyms close together; (iii) projecting the vector space using Linear Discriminant Analysis, which eliminates the expanded dimension(s) carrying the semantic knowledge. When experimenting with GloVe, we find that our method outperforms Faruqui's retrofitting on some word similarity tasks. We also report further analysis of our method with respect to word vector dimensions and vocabulary size, as well as on other well-known pretrained word vectors (e.g., Word2Vec, fastText).
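The three steps map directly onto a few lines of NumPy and scikit-learn. This is a hedged toy sketch, not the authors' code: the word vectors and synonym groups are invented, the per-vector mean as the "representative value" is an assumption, and a real lexicon's many synonym classes would let LDA retain the original dimensionality.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
words = ["happy", "glad", "sad", "unhappy", "fast", "quick"]
vectors = rng.normal(size=(6, 10))             # toy 10-dim word vectors
synonym_groups = np.array([0, 0, 1, 1, 2, 2])  # classes from a lexicon

# (i) Expand one dimension, filled with a representative value per vector
# (the mean of each vector's components is an assumption made here).
rep = vectors.mean(axis=1, keepdims=True)
expanded = np.hstack([vectors, rep])

# (ii) Within each synonym group, overwrite the expanded dimension with the
# group's average representative value, pulling synonyms together.
for g in np.unique(synonym_groups):
    idx = synonym_groups == g
    expanded[idx, -1] = rep[idx].mean()

# (iii) Project with LDA, using synonym groups as class labels; this also
# drops the expanded dimension. A real lexicon has thousands of synonym
# classes, so LDA can keep the original dimensionality; this toy example
# can keep at most n_classes - 1 = 2 components.
lda = LinearDiscriminantAnalysis(n_components=2)
extrofitted = lda.fit_transform(expanded, synonym_groups)
print(extrofitted.shape)  # (6, 2)
```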