Lexical semantics - Wikipedia

#artificialintelligence

Lexical semantics (also known as lexicosemantics) is a subfield of linguistic semantics. The units of analysis in lexical semantics are lexical units, which include not only words but also sub-words or sub-units such as affixes, as well as compound words and phrases. Lexical units make up the catalogue of words in a language, the lexicon. Lexical semantics examines how the meaning of lexical units correlates with the structure of the language, or syntax; this is referred to as the syntax-semantics interface.[1] Lexical units, also referred to as syntactic atoms, can either stand alone, as root words or parts of compound words do, or necessarily attach to other units, as prefixes and suffixes do. The former are called free morphemes and the latter bound morphemes.[2]
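The free vs. bound morpheme distinction above can be sketched in a few lines of code. This is a minimal illustration, not a real morphological analyzer: the toy morpheme inventories and the hand-segmented example word are assumptions made for the sketch.

```python
# Bound morphemes (affixes) cannot stand alone; free morphemes can.
# Both inventories below are tiny, assumed examples for illustration.
BOUND_MORPHEMES = {"un-", "-ness", "re-", "-s"}
FREE_MORPHEMES = {"happy", "play", "book", "light"}

def classify(morpheme: str) -> str:
    """Label a lexical unit as a free or bound morpheme."""
    if morpheme in BOUND_MORPHEMES:
        return "bound"
    if morpheme in FREE_MORPHEMES:
        return "free"
    return "unknown"

# "unhappiness" decomposes (by hand, for this sketch) into one free
# root and two bound affixes.
segments = ["un-", "happy", "-ness"]
labels = {m: classify(m) for m in segments}
print(labels)  # {'un-': 'bound', 'happy': 'free', '-ness': 'bound'}
```

Here the root "happy" is a syntactic atom that can stand alone, while "un-" and "-ness" only occur attached to other units.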


NLP vs. NLU: from Understanding a Language to Its Processing

#artificialintelligence

By Sciforce, a provider of software solutions based on science-driven information technologies. As artificial intelligence progresses and technology becomes more sophisticated, we expect existing concepts to embrace this change -- or to change themselves. Similarly, in the domain of computer-aided processing of natural languages, should the concept of natural language processing give way to natural language understanding? Or is the relation between the two concepts more subtle and complicated than the mere linear progression of a technology? In this post, we'll scrutinize the concepts of NLP and NLU and their niches in AI-related technology.


The Biological Bases of Syntax-Semantics Interface in Natural Languages: Cognitive Modeling and Empirical Evidence

AAAI Conferences

We consider empirical evidence for an event-structural analysis of language comprehension and production in spoken and signed languages, as well as its possible biological bases. Finally, we discuss theoretical linguistic models, models of language processing, and cognitive architectures that account for such an event-structural basis of the syntax-semantics interface (and, possibly, the phonology interface in ASL) in human languages. Representation of events in human languages: linguistic universals meet language processing. The idea that human languages parse and formulate observable events in a logically restricted fashion is fairly old, dating back to Vendler's (1967) Aktionsart predicate classes (more recently developed by van Lambalgen and Hamm, 2005). Recent work by Van Valin (2007) claims that the most pervasive components of real-world events have made their way into the morphology of most of the world's languages (albeit in different forms), qualifying them for the status of linguistic universals.


What's universal grammar? Evidence rebuts Chomsky's theory of language learning

#artificialintelligence

This article was originally published by Scientific American. The idea that we have brains hardwired with a mental template for learning grammar -- famously espoused by Noam Chomsky of the Massachusetts Institute of Technology -- has dominated linguistics for almost half a century. Recently, though, cognitive scientists and linguists have abandoned Chomsky's "universal grammar" theory in droves because of new research examining many different languages -- and the way young children learn to understand and speak the tongues of their communities. That work fails to support Chomsky's assertions. The research suggests a radically different view, in which learning of a child's first language does not rely on an innate grammar module. Instead, the new research shows that young children use various types of thinking that may not be specific to language at all -- such as the ability to classify the world into categories (people or objects, for instance) and to understand the relations among things. These capabilities, coupled with a unique human ability to grasp what others intend to communicate, allow language to happen. The new findings indicate that if researchers truly want to understand how children, and others, learn languages, they need to look outside of Chomsky's theory for guidance.