Entailment Inference in a Natural Logic-like General Reasoner

AAAI Conferences

Recent work on entailment suggests that natural logics are well-suited to determining whether one sentence lexically entails another. We show how the EPILOG reasoning engine, designed for a natural language-like meaning representation (Episodic Logic, or EL), can be used to emulate natural logic inferences while also enabling more general ones, such as inferences from multiple premises or inferences based on world knowledge. Thus, to exploit the capabilities of EPILOG, we are working to populate its knowledge base with the kinds of lexical knowledge on which natural logics rely.
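
To make the monotonicity-based inference pattern behind natural logics concrete, here is a minimal Python sketch of single-premise entailment over toy "quantifier noun verb" sentences. The hypernym lexicon, sentence format, and helper names are illustrative assumptions and do not reflect EPILOG's or EL's actual representation.

```python
# A minimal sketch of natural-logic entailment via quantifier monotonicity,
# assuming a toy hypernym lexicon (not EPILOG's API or EL syntax).

HYPERNYMS = {
    "poodle": "dog",
    "dog": "animal",
}

def is_hyponym(word, candidate):
    """True if `word` is (transitively) a hyponym of `candidate`."""
    while word in HYPERNYMS:
        word = HYPERNYMS[word]
        if word == candidate:
            return True
    return False

# Monotonicity of the restrictor (first argument) of each quantifier:
# "every" is downward monotone there, "some" is upward monotone.
RESTRICTOR_MONOTONICITY = {"every": "down", "some": "up"}

def entails(premise, hypothesis):
    """Check entailment for sentences of the form '<quant> <noun> <verb>'."""
    pq, pn, pv = premise.split()
    hq, hn, hv = hypothesis.split()
    if pq != hq or pv != hv:
        return False
    if pn == hn:
        return True
    mono = RESTRICTOR_MONOTONICITY.get(pq)
    if mono == "up":      # the noun may be generalized
        return is_hyponym(pn, hn)
    if mono == "down":    # the noun may be specialized
        return is_hyponym(hn, pn)
    return False

print(entails("some poodle barks", "some animal barks"))    # True
print(entails("every animal barks", "every poodle barks"))  # True
print(entails("every poodle barks", "every animal barks"))  # False
```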


Bridging Knowledge Gaps in Neural Entailment via Symbolic Models

arXiv.org Artificial Intelligence

Most textual entailment models focus on lexical gaps between the premise text and the hypothesis, but rarely on knowledge gaps. We focus on filling these knowledge gaps in the Science Entailment task by leveraging an external structured knowledge base (KB) of science facts. Our new architecture combines standard neural entailment models with a knowledge lookup module. To facilitate this lookup, we propose decomposing the hypothesis into sub-facts and verifying each sub-fact against both the textual premise and the structured KB. Our model, NSnet, learns to aggregate predictions from these heterogeneous data formats. On the SciTail dataset, NSnet outperforms a simpler combination of the two predictions by 3% and the base entailment model by 5%.
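
The decompose-then-aggregate idea can be illustrated with a small sketch. The toy decomposition, the word-overlap scorer, and the max-then-mean aggregation below stand in for NSnet's trained neural modules and learned aggregator; they are assumptions for illustration only.

```python
# A minimal sketch of fact-level decomposition with aggregation over a
# textual premise and a structured KB; scorers are toy stand-ins for
# trained neural components.

def decompose(hypothesis):
    """Toy decomposition: one (subject, relation, object) triple."""
    subj, rel, *obj = hypothesis.lower().split()
    return [(subj, rel, " ".join(obj))]

def text_support(fact, premise):
    """Score a sub-fact against the premise by word overlap
    (a stand-in for a neural entailment model)."""
    words = set(premise.lower().split())
    hits = sum(1 for part in fact for w in part.split() if w in words)
    total = sum(len(part.split()) for part in fact)
    return hits / total

def kb_support(fact, kb):
    """Score a sub-fact by exact lookup in a KB of triples."""
    return 1.0 if fact in kb else 0.0

def entailment_score(premise, hypothesis, kb):
    """Aggregate per-fact evidence from both sources (max, then mean)."""
    facts = decompose(hypothesis)
    per_fact = [max(text_support(f, premise), kb_support(f, kb)) for f in facts]
    return sum(per_fact) / len(per_fact)

kb = {("copper", "conducts", "electricity")}
premise = "Metals such as copper are good conductors."
print(entailment_score(premise, "copper conducts electricity", kb))  # 1.0
```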


Semantic Inference at the Lexical-Syntactic Level

AAAI Conferences

Semantic inference is an important component in many natural language understanding applications. Classical approaches to semantic inference rely on complex logical representations, whereas practical applications usually adopt shallower lexical or lexical-syntactic representations that lack a principled inference framework. We propose a generic semantic inference framework that operates directly on syntactic trees. New trees are inferred by applying entailment rules, which provide a unified representation for varying types of inferences. Rules were generated by manual and automatic methods, covering generic linguistic structures as well as specific lexical-based inferences. Initial empirical evaluation in a Relation Extraction setting supports the validity of our approach.
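
A rough sketch of how an entailment rule can rewrite a syntactic tree: a left-hand-side pattern with variables is matched against the tree, and a new tree is instantiated from the right-hand side under the resulting bindings. The tuple-based tree encoding and the sample rule are illustrative assumptions, not the paper's actual formalism.

```python
# A minimal sketch of tree-level entailment rules as pattern rewrites.
# Trees: (label, child, ...); strings starting with '?' are variables.

def match(pattern, tree, binding):
    """Try to unify `pattern` with `tree`, extending `binding` in place."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        binding[pattern] = tree
        return True
    if isinstance(pattern, str) or isinstance(tree, str):
        return pattern == tree
    if len(pattern) != len(tree):
        return False
    return all(match(p, t, binding) for p, t in zip(pattern, tree))

def instantiate(template, binding):
    """Build a new tree from `template`, substituting bound variables."""
    if isinstance(template, str):
        return binding.get(template, template)
    return tuple(instantiate(part, binding) for part in template)

def apply_rule(rule, tree):
    """Apply one entailment rule (lhs -> rhs) at the tree root, if it matches."""
    lhs, rhs = rule
    binding = {}
    if match(lhs, tree, binding):
        return instantiate(rhs, binding)
    return None

# Lexical-syntactic rule: "X purchased Y" entails "X owns Y".
rule = (("purchased", "?x", "?y"), ("owns", "?x", "?y"))
tree = ("purchased", ("company", "the"), ("startup", "a"))
print(apply_rule(rule, tree))  # ('owns', ('company', 'the'), ('startup', 'a'))
```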


A Probabilistic Classification Approach for Lexical Textual Entailment

AAAI Conferences

The textual entailment task, determining whether a given text entails a given hypothesis, provides an abstraction of applied semantic inference. This paper first describes a general generative probabilistic setting for textual entailment. We then focus on the sub-task of recognizing whether the lexical concepts present in the hypothesis are entailed by the text. This problem is recast as one of text categorization in which the classes are the vocabulary words. We make novel use of Naïve Bayes to model the problem in an entirely unsupervised fashion. Empirical tests suggest that the method is effective and compares favorably with state-of-the-art heuristic scoring approaches.
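
The recasting can be sketched as follows: each hypothesis word is treated as a class whose (noisy) positive examples are the corpus documents containing it, and Naïve Bayes with Laplace smoothing scores how strongly the text's words predict that class, all without labeled data. The corpus, numbers, and helper names below are toy assumptions.

```python
# A minimal sketch of unsupervised Naive Bayes for lexical entailment:
# classes are vocabulary words, trained from co-occurrence in an
# unlabeled corpus with Laplace smoothing. Corpus is a toy illustration.
import math
from collections import Counter

corpus = [
    "the firm bought the startup last year",
    "the firm acquired a small startup",
    "investors sold their shares quickly",
]

def train(corpus, h):
    """Co-occurrence counts of words in documents containing class word h."""
    docs = [d.split() for d in corpus]
    positives = [d for d in docs if h in d]
    counts = Counter(w for d in positives for w in d)
    vocab = {w for d in docs for w in d}
    prior = len(positives) / len(docs)
    return counts, sum(counts.values()), vocab, prior

def log_score(text, h):
    """log P(h) + sum over text words w of log P(w | h), Laplace-smoothed."""
    counts, total, vocab, prior = train(corpus, h)
    if prior == 0:
        return float("-inf")
    score = math.log(prior)
    for w in text.split():
        score += math.log((counts[w] + 1) / (total + len(vocab)))
    return score

text = "the firm bought a company"
# Is the hypothesis word "acquired" more strongly entailed than "sold"?
print(log_score(text, "acquired") > log_score(text, "sold"))  # True
```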


Probabilistic Reasoning with Inconsistent Beliefs Using Inconsistency Measures

AAAI Conferences

The classical probabilistic entailment problem is to determine upper and lower bounds on the probability of formulas, given a consistent set of probabilistic assertions. We generalize this problem by omitting the consistency assumption and thus provide a general framework for probabilistic reasoning under inconsistency. To do so, we utilize inconsistency measures to determine probability functions that are closest to satisfying the knowledge base. We illustrate our approach on several examples and show that it has desirable formal and computational properties.
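
The classical problem being generalized can be sketched as a linear program over possible worlds: minimize or maximize the query's probability subject to the probabilistic assertions. The assertions below are toy numbers; note that when the knowledge base is inconsistent, this LP becomes infeasible, which is precisely the situation the inconsistency-measure approach is designed to handle.

```python
# A minimal sketch of classical probabilistic entailment as linear
# programming over possible worlds, using scipy's LP solver. The toy
# assertions are P(A) = 0.7 and P(A and B) = 0.5; we bound P(B).
from itertools import product
from scipy.optimize import linprog

atoms = ["A", "B"]
worlds = list(product([True, False], repeat=len(atoms)))

def indicator(event):
    """Coefficient vector: 1.0 for each world where `event` holds."""
    return [1.0 if event(dict(zip(atoms, w))) else 0.0 for w in worlds]

# Consistent knowledge base as equality constraints on the world distribution.
A_eq = [
    indicator(lambda v: True),               # probabilities sum to 1
    indicator(lambda v: v["A"]),             # P(A) = 0.7
    indicator(lambda v: v["A"] and v["B"]),  # P(A and B) = 0.5
]
b_eq = [1.0, 0.7, 0.5]

query = indicator(lambda v: v["B"])          # objective: P(B)

lo = linprog(query, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
hi = linprog([-c for c in query], A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(f"P(B) in [{lo.fun:.2f}, {-hi.fun:.2f}]")  # P(B) in [0.50, 0.80]
```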