Higher-Order DisCoCat (Peirce-Lambek-Montague semantics)

Toumi, Alexis, de Felice, Giovanni

arXiv.org Artificial Intelligence 

DisCoCat (Categorical Compositional Distributional) models [1, 2] are structure-preserving maps which send grammatical types to vector spaces and grammatical structures to linear maps. Concretely, the meaning of words is given by tensors with shapes induced by their grammatical types; the meaning of sentences is given by contracting the tensor networks induced by their grammatical structure. String diagrams provide an intuitive graphical language to visualise and reason formally about the evaluation of DisCoCat models, which can be formalised in terms of functors F: G → Vect from the category generated by a formal grammar G to the monoidal category Vect of vector spaces and linear maps with the tensor product [3, 2.5]. Although this functorial definition applies equally to any kind of formal grammar, most of the DisCoCat literature focuses on pregroup grammars and, more generally, on categorial grammars such as the Lambek calculus [4, 5] and combinatory categorial grammar (CCG) [6]. In that case, G is a closed monoidal category and the DisCoCat models F: G → Vect map grammatical structures to the closed structure of Vect in a canonical way. In practice, this means that once the meaning of each word has been computed from a dataset, the meaning of any new grammatical sentence can be computed automatically from its grammatical structure.
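To make the "tensor contraction" reading concrete, here is a minimal sketch in NumPy of evaluating a DisCoCat model on the sentence "Alice loves Bob" under a pregroup reduction n · (nʳ s nˡ) · n → s. The dimensions and the particular word tensors are illustrative assumptions, not taken from the paper; only the contraction pattern reflects the construction described above.

```python
import numpy as np

# Toy DisCoCat evaluation of "Alice loves Bob".
# Pregroup types: nouns have type n, the transitive verb has type n^r s n^l,
# so its meaning is a rank-3 tensor of shape (dim_n, dim_s, dim_n).
# These dimensions and entries are made up for illustration.
dim_n, dim_s = 2, 2

alice = np.array([1.0, 0.0])           # vector in the noun space N
bob = np.array([0.0, 1.0])             # vector in the noun space N
loves = np.zeros((dim_n, dim_s, dim_n))
loves[0, 0, 1] = 1.0                   # toy entry: Alice-loves-Bob component
loves[1, 0, 0] = 1.0                   # toy entry: Bob-loves-Alice component

# The grammatical reduction n · (n^r s n^l) · n -> s becomes a tensor
# contraction: each cup pairs a noun wire of the verb with the adjacent
# noun vector, leaving only the sentence wire s open.
sentence = np.einsum('i,isj,j->s', alice, loves, bob)
print(sentence)
```

Swapping in a different verb tensor or different noun vectors changes the result, but the contraction pattern is fixed entirely by the grammatical structure, which is exactly the functorial point made above.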