On the Computational Modeling of Meaning: Embodied Cognition Intertwined with Emotion

Kennington, Casey

arXiv.org Artificial Intelligence 

The question "How can machines understand language?" has been asked by many and represents an important facet of artificial intelligence. Large language models like ChatGPT seem to understand language, but as has been pointed out (Bender and Koller, 2020; Bisk et al., 2020), even large, powerful language models trained on huge amounts of data are likely missing key information that would allow them to reach the depth of understanding that humans have. What information are they missing, and, perhaps more importantly, what information do they have that enables them to understand to the degree that they do? Current computational models of semantic meaning can be broken down into three paradigms: distributional paradigms, where meaning is derived from how words are used in text (i.e., the notion that the meaning of a word depends on the "company it keeps," following Firth (1957)); grounded paradigms, where aspects of the physical world are linked to language on the view that the meaningfulness of language lies in the fact that it is about the world (Dahlgren, 1976), i.e., the symbol grounding problem following Harnad (1990); and formal paradigms, where meaning is a logical form (e.g., first-order logic as in L.T.F. Gamut (1991)).
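As a toy illustration of the distributional paradigm only (not drawn from the paper itself), the minimal Python sketch below builds co-occurrence count vectors from a small invented corpus and compares words by cosine similarity; words that keep similar "company" end up with similar vectors. The corpus, window size, and variable names are assumptions made for this example.

```python
from collections import Counter
from math import sqrt

# Toy corpus; real distributional models are trained on far larger text collections.
corpus = [
    "the dog chased the cat",
    "the cat chased the mouse",
    "the dog ate the bone",
    "the cat ate the fish",
]

WINDOW = 2  # neighbors within this distance count as context

# Build co-occurrence vectors: each word is represented by counts of its neighbors.
vectors = {}
for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        context = tokens[max(0, i - WINDOW):i] + tokens[i + 1:i + 1 + WINDOW]
        vectors.setdefault(word, Counter()).update(context)

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Words used in similar contexts receive higher similarity.
print(cosine(vectors["dog"], vectors["cat"]))   # relatively high
print(cosine(vectors["dog"], vectors["bone"]))  # lower
```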
