




Review for NeurIPS paper: Faithful Embeddings for Knowledge Base Queries

Neural Information Processing Systems

When vacuous sketches are used in intermediate steps, e.g., in R1 of the MetaQA model, what is the intermediate output? Is it the dense-sparse representation of the entities in the top-k facts? Isn't that a problem when k is large? Won't this be an issue if a template requires an intersection in addition to unions? 3. For a given query, EmQL ranks all entities (i.e., gives a distribution over entities) instead of explicitly returning a set as the answer.
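The "dense-sparse" set representation the reviewer refers to pairs a dense embedding with a compact count-min sketch for membership. A minimal illustration of the sketch half, with illustrative width/depth values not taken from the paper:

```python
import hashlib
import numpy as np

class CountMinSketch:
    """Compact membership sketch for an entity set.

    Width and depth here are placeholders; a vacuous sketch would be
    one that admits every entity (all counts effectively nonzero).
    """
    def __init__(self, depth=3, width=64):
        self.depth, self.width = depth, width
        self.table = np.zeros((depth, width))

    def _index(self, item, row):
        # One deterministic hash per row, reduced modulo the table width.
        digest = hashlib.md5(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, item, weight=1.0):
        for row in range(self.depth):
            self.table[row, self._index(item, row)] += weight

    def score(self, item):
        # The min over rows upper-bounds the true weight;
        # a score of 0 means the item is definitely not in the set.
        return min(self.table[row, self._index(item, row)]
                   for row in range(self.depth))

sketch = CountMinSketch()
for entity in ["Paris", "Lyon", "Nice"]:
    sketch.add(entity)
print(sketch.score("Paris"))   # at least 1.0 for any added member
print(sketch.score("Berlin"))  # usually 0 at this width (false positives possible)
```

Because the sketch size is fixed, membership checks stay cheap even as k grows, though collision-induced false positives become more likely as more of the top-k entities are folded in.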


Guessing What's Plausible But Remembering What's True: Accurate Neural Reasoning for Question-Answering

Sun, Haitian, Arnold, Andrew O., Bedrax-Weiss, Tania, Pereira, Fernando, Cohen, William W.

arXiv.org Machine Learning

Neural approaches to natural language processing (NLP) often fail at the logical reasoning needed for deeper language understanding. In particular, neural approaches to reasoning that rely on embedded \emph{generalizations} of a knowledge base (KB) implicitly model which facts are \emph{plausible}, but may not model which facts are \emph{true}, according to the KB. While generalizing the facts in a KB is useful for KB completion, the inability to distinguish between plausible inferences and logically entailed conclusions can be problematic in settings like KB question answering (KBQA). We propose here a novel KB embedding scheme that supports generalization, but also allows accurate logical reasoning with a KB. Our approach introduces two new mechanisms for KB reasoning: neural retrieval over a set of embedded triples, and "memorization" of highly specific information with a compact sketch structure. Experimentally, this leads to substantial improvements over the state-of-the-art on two KBQA benchmarks.
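The first mechanism, neural retrieval over embedded triples, can be sketched as top-k maximum-inner-product search against a matrix of triple embeddings. The embeddings below are random placeholders (the paper learns them); the toy KB and dimensionality are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy KB: each (subject, relation, object) triple gets one dense embedding.
triples = [("paris", "capital_of", "france"),
           ("lyon", "city_in", "france"),
           ("berlin", "capital_of", "germany"),
           ("madrid", "capital_of", "spain")]
dim = 8
triple_emb = rng.normal(size=(len(triples), dim))
triple_emb /= np.linalg.norm(triple_emb, axis=1, keepdims=True)

def retrieve_topk(query_vec, k=2):
    """Score every embedded triple by inner product with the query
    and return the k highest-scoring triples."""
    scores = triple_emb @ query_vec
    topk = np.argsort(-scores)[:k]
    return [(triples[i], float(scores[i])) for i in topk]

# Illustrative query: the embedding of the first triple itself,
# so that triple should be retrieved first.
query = triple_emb[0]
for triple, score in retrieve_topk(query, k=2):
    print(triple, round(score, 3))
```

In a full system the retrieved triples would then be filtered against the compact sketch, so that only facts actually present in the KB (not merely plausible ones) survive into the answer set.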