Distributional Relational Networks

AAAI Conferences

This work introduces distributional relational networks (DRNs), a knowledge representation (KR) framework which focuses on allowing semantic approximations over large-scale and heterogeneous knowledge bases. The proposed model uses the distributional semantics information embedded in large text/data corpora to provide a comprehensive and principled solution for semantic approximation. DRNs can be applied to open domain knowledge bases and can be used as a KR model for commonsense reasoning. Experimental results show the suitability of DRNs as a semantically flexible KR framework.

Identifying and Explaining Discriminative Attributes

arXiv.org Artificial Intelligence

Identifying what is at the center of the meaning of a word and what discriminates it from other words is a fundamental natural language inference task. This paper describes an explicit word vector representation model (WVM) to support the identification of discriminative attributes. A core contribution of the paper is a quantitative and qualitative comparative analysis of different types of data sources and knowledge bases in the construction of explainable and explicit WVMs: (i) knowledge graphs built from dictionary definitions, (ii) entity-attribute-relationship graphs derived from images and (iii) commonsense knowledge graphs. Using a detailed quantitative and qualitative analysis, we demonstrate that these data sources have complementary semantic aspects, supporting the creation of explicit semantic vector spaces. The explicit vector spaces are evaluated on the task of discriminative attribute identification, showing performance comparable to state-of-the-art systems on the task (F1-score = 0.69), while delivering full model transparency and explainability.
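The core idea above can be sketched in a few lines: when each vector dimension is a human-readable attribute drawn from a knowledge graph, discriminative attributes fall out of a simple vector comparison, and the decision is directly explainable. The triples and words below are toy illustrations of my own, not the paper's data or API.

```python
# Minimal sketch of an explicit (interpretable) word vector space built
# from entity-attribute triples. All triples and weights here are
# invented examples, not the sources used in the paper.

def build_explicit_vectors(triples):
    """Map each word to a sparse {attribute: weight} vector."""
    vectors = {}
    for word, attribute, weight in triples:
        vectors.setdefault(word, {})[attribute] = weight
    return vectors

def discriminative_attributes(vectors, a, b):
    """Attributes stronger for word `a` than for word `b`, ranked by the
    weight difference. Each dimension is a named attribute, so the output
    is itself the explanation."""
    va, vb = vectors[a], vectors[b]
    diffs = {attr: w - vb.get(attr, 0.0) for attr, w in va.items()}
    return sorted((attr for attr, d in diffs.items() if d > 0),
                  key=lambda attr: -diffs[attr])

# Toy knowledge-graph triples (word, attribute, weight).
triples = [
    ("banana", "yellow", 1.0), ("banana", "fruit", 1.0),
    ("apple", "red", 1.0), ("apple", "fruit", 1.0),
]
vectors = build_explicit_vectors(triples)
print(discriminative_attributes(vectors, "banana", "apple"))  # -> ['yellow']
```

Shared attributes (here, "fruit") cancel out in the difference, leaving exactly the dimensions that discriminate the two words.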

Open-Vocabulary Semantic Parsing with both Distributional Statistics and Formal Knowledge

AAAI Conferences

Traditional semantic parsers map language onto compositional, executable queries in a fixed schema. This mapping allows them to effectively leverage the information contained in large, formal knowledge bases (KBs, e.g., Freebase) to answer questions, but it is also fundamentally limiting---these semantic parsers can only assign meaning to language that falls within the KB's manually-produced schema. Recently proposed methods for open vocabulary semantic parsing overcome this limitation by learning execution models for arbitrary language, essentially using a text corpus as a kind of knowledge base. However, all prior approaches to open vocabulary semantic parsing replace a formal KB with textual information, making no use of the KB in their models. We show how to combine the disparate representations used by these two approaches, presenting for the first time a semantic parser that (1) produces compositional, executable representations of language, (2) can successfully leverage the information contained in both a formal KB and a large corpus, and (3) is not limited to the schema of the underlying KB. We demonstrate significantly improved performance over state-of-the-art baselines on an open-domain natural language question answering task.

Distributional Semantic Features as Semantic Primitives — Or Not

AAAI Conferences

We argue that distributional semantics can serve as the basis for a semantic representation of words and phrases that fulfills many of the purposes semantic primitives were designed for, without running into many of their philosophical, empirical, and practical problems.

Modelling the Meaning of Argument Constructions with Distributional Semantics

AAAI Conferences

Current computational models of argument constructions typically represent their semantic content with hand-made formal structures. Here we present a distributional model implementing the idea that the meaning of a construction is intimately related to the semantics of its typical verbs. First, we identify the typical verbs occurring with a given syntactic construction and build their distributional vectors. We then calculate the weighted centroid of these vectors in order to derive the distributional signature of a construction. In order to assess the quality of our approach, we replicated the priming effect described by Johnson and Goldberg (2013) as a function of the semantic distance between a construction and its prototypical verbs. Additional support for our view comes from a regression analysis showing that our distributional information can be used to model behavioral data collected with a crowdsourced elicitation experiment.
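The two-step procedure described above (collect typical verbs, then average their vectors) can be sketched as follows. The verb vectors and frequency weights below are toy values I have made up for illustration; the paper derives them from corpus data.

```python
# Sketch of a construction's distributional signature: the
# frequency-weighted centroid of the vectors of its typical verbs.
# Vectors and frequencies are illustrative toy values, not corpus data.
import math

def weighted_centroid(vectors, weights):
    """Frequency-weighted average of the verb vectors."""
    total = sum(weights)
    dim = len(vectors[0])
    return [sum(w * v[i] for v, w in zip(vectors, weights)) / total
            for i in range(dim)]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy distributional vectors for verbs typical of the ditransitive
# construction, weighted by (invented) corpus frequencies.
verbs = {"give": [0.9, 0.1, 0.3], "send": [0.7, 0.2, 0.4], "hand": [0.8, 0.0, 0.2]}
freqs = [120.0, 40.0, 10.0]
signature = weighted_centroid(list(verbs.values()), freqs)

# The distance between the signature and a candidate verb is the quantity
# used to model the priming effect: prototypical verbs sit close to it.
print(round(cosine(signature, verbs["give"]), 3))
```

A highly frequent, prototypical verb such as "give" dominates the centroid and so ends up very close to the construction's signature, which is the relationship the replicated priming study exploits.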