Distributional-Relational Models: Scalable Semantics for Databases

AAAI Conferences

The crisp/brittle semantic model behind databases limits the scale at which data consumers can query, explore, integrate and process structured data. Purely logic-based approaches to providing more comprehensive semantic models for databases (e.g. Semantic Web databases) have major scalability limitations in the acquisition of structured semantic and commonsense data. This work describes a complementary semantic model for databases with semantic approximation at its center. The model uses distributional semantic models (DSMs) to extend structured data semantics. DSMs support the automatic construction of semantic and commonsense models from large-scale unstructured text and provide a simple model for analyzing similarities in the structured data. The combination of distributional and structured data semantics offers a simple and promising solution to the challenges associated with interacting with and processing structured data.
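
As a minimal sketch of the semantic approximation idea at the center of this abstract: a query term can be matched to database schema elements through distributional similarity rather than exact symbolic matching. The function and variable names (`semantic_match`, `schema_terms`, `threshold`) and the toy vectors below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: semantic approximation between a query term and schema
# elements via cosine similarity over distributional word vectors.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic_match(query_term, schema_terms, vectors, threshold=0.5):
    """Rank schema elements whose distributional vectors are close enough
    to the query term (semantic approximation instead of exact match)."""
    q = vectors[query_term]
    scored = [(t, cosine(q, vectors[t])) for t in schema_terms if t in vectors]
    return sorted([(t, s) for t, s in scored if s >= threshold],
                  key=lambda x: -x[1])

# Toy example: a query for "salary" is approximated to the column "wage"
# even though the schema never mentions "salary".
vectors = {"salary": np.array([0.9, 0.1, 0.3]),
           "wage":   np.array([0.85, 0.15, 0.35]),
           "name":   np.array([0.1, 0.9, 0.2])}
print(semantic_match("salary", ["wage", "name"], vectors))
```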


Identifying and Explaining Discriminative Attributes

arXiv.org Artificial Intelligence

Identifying what is at the center of the meaning of a word and what discriminates it from other words is a fundamental natural language inference task. This paper describes an explicit word vector representation model (WVM) to support the identification of discriminative attributes. A core contribution of the paper is a quantitative and qualitative comparative analysis of different types of data sources and knowledge bases in the construction of explainable and explicit WVMs: (i) knowledge graphs built from dictionary definitions, (ii) entity-attribute-relationship graphs derived from images and (iii) commonsense knowledge graphs. Through a detailed quantitative and qualitative analysis, we demonstrate that these data sources have complementary semantic aspects, supporting the creation of explicit semantic vector spaces. The explicit vector spaces are evaluated on the task of discriminative attribute identification, showing performance comparable to state-of-the-art systems on the task (F1-score = 0.69) while delivering full model transparency and explainability.
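
To make the notion of an explicit, explainable vector space concrete, here is a hedged sketch in which each dimension is a named attribute drawn from a knowledge graph; a discriminative attribute for a word pair is then simply a dimension active for one word and not the other. The toy attribute space and the function name `discriminative_attributes` are assumptions for illustration, not the paper's actual model.

```python
# Hedged sketch: in an explicit vector space, dimensions are named
# attributes, so every decision can be traced back to a readable feature.
def discriminative_attributes(word_a, word_b, explicit_space, min_weight=0.0):
    """Attributes active (above min_weight) for word_a but not for word_b."""
    a = explicit_space.get(word_a, {})
    b = explicit_space.get(word_b, {})
    return sorted(attr for attr, w in a.items()
                  if w > min_weight and b.get(attr, 0.0) <= min_weight)

explicit_space = {
    "apple":  {"fruit": 1.0, "red": 0.8, "edible": 1.0},
    "banana": {"fruit": 1.0, "yellow": 0.9, "edible": 1.0},
}
# "red" discriminates apple from banana, and the model can state *why*.
print(discriminative_attributes("apple", "banana", explicit_space))  # ['red']
```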


Wikipedia-Based Distributional Semantics for Entity Relatedness

AAAI Conferences

Wikipedia provides an enormous amount of background knowledge for reasoning about the semantic relatedness between two entities. We propose Wikipedia-based Distributional Semantics for Entity Relatedness (DiSER), which represents the semantics of an entity by its distribution in a high-dimensional concept space derived from Wikipedia. DiSER measures the semantic relatedness between two entities by quantifying the distance between the corresponding high-dimensional vectors. DiSER builds the model from annotated entities only, and therefore improves over existing approaches, which do not distinguish between an entity and its surface form. We evaluate the approach on a benchmark that contains relative entity relatedness scores for 420 entity pairs. Our approach improves accuracy by 12% over state-of-the-art methods for computing entity relatedness. We also evaluate DiSER on the entity disambiguation task, using a dataset of 50 sentences with highly ambiguous entity mentions; it shows an improvement of 10% in precision over the best performing methods. To provide a resource for finding all entities related to a given entity, we construct a graph in which nodes represent Wikipedia entities and edges carry the relatedness scores. Wikipedia contains more than 4.1 million entities, which required efficient computation of the relatedness scores for the corresponding 17 trillion entity pairs.
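
As a hedged sketch of the core computation described above (not the authors' code): an entity can be represented as a sparse distribution over Wikipedia concepts, and relatedness scored by cosine similarity; building vectors from annotated entities only is what keeps an entity such as Apple_Inc. distinct from its ambiguous surface form "apple". All vectors below are illustrative placeholders.

```python
# Hedged sketch: sparse concept-space vectors keyed by Wikipedia article
# identifiers, with relatedness scored by cosine similarity.
import math

def sparse_cosine(u, v):
    dot = sum(w * v.get(c, 0.0) for c, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

apple_inc   = {"Steve_Jobs": 0.9, "IPhone": 0.8, "Cupertino": 0.5}
microsoft   = {"Bill_Gates": 0.9, "Windows": 0.8, "Steve_Jobs": 0.2}
apple_fruit = {"Orchard": 0.9, "Vitamin_C": 0.6}

# Annotated entities keep Apple_Inc. separate from the fruit reading.
print(sparse_cosine(apple_inc, microsoft))    # related: > 0
print(sparse_cosine(apple_inc, apple_fruit))  # unrelated: 0.0
```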


Distributional Semantic Features as Semantic Primitives — Or Not

AAAI Conferences

We argue that distributional semantics can serve as the basis for a semantic representation of words and phrases that fulfills many of the purposes semantic primitives were designed for, without running into many of their philosophical, empirical, and practical problems.


Open-Vocabulary Semantic Parsing with both Distributional Statistics and Formal Knowledge

AAAI Conferences

Traditional semantic parsers map language onto compositional, executable queries in a fixed schema. This mapping allows them to effectively leverage the information contained in large, formal knowledge bases (KBs, e.g., Freebase) to answer questions, but it is also fundamentally limiting: these semantic parsers can only assign meaning to language that falls within the KB's manually produced schema. Recently proposed methods for open-vocabulary semantic parsing overcome this limitation by learning execution models for arbitrary language, essentially using a text corpus as a kind of knowledge base. However, all prior approaches to open-vocabulary semantic parsing replace a formal KB with textual information, making no use of the KB in their models. We show how to combine the disparate representations used by these two approaches, presenting for the first time a semantic parser that (1) produces compositional, executable representations of language, (2) can successfully leverage the information contained in both a formal KB and a large corpus, and (3) is not limited to the schema of the underlying KB. We demonstrate significantly improved performance over state-of-the-art baselines on an open-domain natural language question answering task.
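
A hedged sketch of how the two signals this abstract combines might be scored together for a single candidate entity: a corpus-derived embedding similarity plus learned weights over formal KB categories. The names (`score`, `kb_weights`, the Freebase-style category strings) and all vectors are assumptions introduced for illustration, not the paper's model.

```python
# Hedged sketch: combine a distributional score (predicate-entity embedding
# similarity from a corpus) with a formal score (learned weights over the
# entity's KB categories) when evaluating a candidate answer.
import numpy as np

def score(entity, predicate_vec, entity_vecs, kb_categories, kb_weights):
    e = entity_vecs[entity]
    # Distributional part: cosine similarity of predicate and entity vectors.
    dist = float(np.dot(predicate_vec, e) /
                 (np.linalg.norm(predicate_vec) * np.linalg.norm(e)))
    # Formal part: sum of learned weights for the entity's KB categories.
    formal = sum(kb_weights.get(c, 0.0) for c in kb_categories.get(entity, []))
    return dist + formal

entity_vecs = {"Paris": np.array([0.9, 0.2]), "Rock": np.array([0.1, 0.9])}
kb_categories = {"Paris": ["/location/citytown"], "Rock": ["/music/genre"]}
kb_weights = {"/location/citytown": 0.5}   # weights learned per predicate
capital_of = np.array([0.8, 0.3])          # corpus-trained predicate vector

print(score("Paris", capital_of, entity_vecs, kb_categories, kb_weights))
print(score("Rock", capital_of, entity_vecs, kb_categories, kb_weights))
```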