A Logic-Driven Framework for Consistency of Neural Models
Tao Li, Vivek Gupta, Maitrey Mehta, Vivek Srikumar
Consequently, we have seen progressively improving performance on benchmarks such as GLUE (Wang et al., 2018). But are models really becoming better? We take the position that, while tracking performance on a leaderboard is necessary to characterize model quality, it is not sufficient. Reasoning about language requires that a system has the ability not only to draw correct inferences about textual inputs, but also to be consistent in its beliefs across various inputs. To illustrate this notion of consistency, let us consider the task of natural language inference (NLI), which seeks to identify whether a premise entails, contradicts, or is unrelated to a hypothesis (Dagan et al., 2013).
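As a minimal sketch of the consistency idea, consider one natural condition over related NLI inputs: if a model labels a premise as contradicting a hypothesis, it should also label the hypothesis as contradicting the premise. The snippet below illustrates such a check; the `predict` function is a hypothetical stand-in for any NLI model, and the symmetry-of-contradiction constraint is only one illustrative example, not the paper's full framework.

```python
from typing import Callable

Label = str  # one of "entailment", "contradiction", "neutral"


def contradiction_is_symmetric(
    predict: Callable[[str, str], Label],
    premise: str,
    hypothesis: str,
) -> bool:
    """Check a simple consistency constraint across two related inputs.

    Contradiction is a symmetric relation, so a consistent model should
    predict it in both directions or in neither.
    """
    forward = predict(premise, hypothesis)    # label for (P, H)
    backward = predict(hypothesis, premise)   # label for (H, P)
    return (forward == "contradiction") == (backward == "contradiction")
```

A model can score well on a leaderboard by getting each individual example right while still violating checks like this one across pairs of inputs, which is the gap between accuracy and consistency that the abstract highlights.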
arXiv.org Artificial Intelligence
Sep-12-2019