Don't Blame Distributional Semantics if it can't do Entailment
Westera, Matthijs; Boleda, Gemma
–arXiv.org Artificial Intelligence
Distributional semantics has emerged as a promising model of certain 'conceptual' aspects of linguistic meaning (e.g., Landauer and Dumais 1997; Turney and Pantel 2010; Baroni and Lenci 2010; Lenci 2018) and as an indispensable component of applications in Natural Language Processing (e.g., reference resolution, machine translation, image captioning; especially since Mikolov et al. 2013). Yet its theoretical status within a theory of meaning, and of language and cognition more generally, is not clear (e.g., Lenci 2008; Erk 2010; Boleda and Herbelot 2016; Lenci 2018). In particular, it is not clear whether distributional semantics can be understood as an actual model of expression meaning - what Lenci (2008) calls the 'strong' view of distributional semantics - or merely as a model of something that correlates with expression meaning in certain partial ways - the 'weak' view. In this paper we aim to resolve, in favor of the 'strong' view, the question of what exactly distributional semantics models, what its role should be in an overall theory of language and cognition, and how its contribution to state-of-the-art applications can be understood. We do so in part by clarifying its frequently discussed but still obscure relation to formal semantics. Our proposal relies crucially on the distinction between what linguistic expressions mean outside of any particular context, and what speakers mean by them in a particular context of utterance.
May-17-2019