The Biological Bases of Syntax-Semantics Interface in Natural Languages: Cognitive Modeling and Empirical Evidence

AAAI Conferences

We consider empirical evidence for an event-structural analysis of language comprehension and production in spoken and signed languages, as well as possible biological bases for it. Finally, we discuss theoretical linguistic models, models of language processing, and cognitive architectures that account for such an event-structural basis of the syntax-semantics interface (and, possibly, the phonology interface in ASL) in human languages. Representation of events in human languages: linguistic universals meet language processing. The idea that human languages parse and formulate observable events in a logically restricted fashion is fairly old, dating back to Vendler's (1967) Aktionsart predicate classes (more recently developed by van Lambalgen and Hamm, 2005). Recent work by Van Valin (2007) claims that the most pervasive components of real-world events have made their way into the morphology of most of the world's languages (albeit in different forms), qualifying them for the status of linguistic universals.
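The Vendler classes mentioned above can be illustrated as feature bundles. The following is a minimal sketch, not from the paper: it encodes the standard four Aktionsart classes by the binary features dynamic, durative, and telic, with illustrative verb examples in comments.

```python
# Vendler's (1967) four Aktionsart classes, encoded by the standard
# binary features (dynamic, durative, telic). Verb examples in the
# comments are illustrative, not taken from the abstract above.
VENDLER_CLASSES = {
    "state":          dict(dynamic=False, durative=True,  telic=False),  # know, love
    "activity":       dict(dynamic=True,  durative=True,  telic=False),  # run, swim
    "accomplishment": dict(dynamic=True,  durative=True,  telic=True),   # build a house
    "achievement":    dict(dynamic=True,  durative=False, telic=True),   # arrive, notice
}

def classify(dynamic, durative, telic):
    """Recover the Aktionsart class name from its feature bundle."""
    target = dict(dynamic=dynamic, durative=durative, telic=telic)
    for name, feats in VENDLER_CLASSES.items():
        if feats == target:
            return name
    return None
```

The point of the encoding is that the class system is "logically restricted": only certain feature combinations name event types that languages systematically distinguish.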


What's universal grammar? Evidence rebuts Chomsky's theory of language learning

#artificialintelligence

This article was originally published by Scientific American. The idea that we have brains hardwired with a mental template for learning grammar -- famously espoused by Noam Chomsky of the Massachusetts Institute of Technology -- has dominated linguistics for almost half a century. Recently, though, cognitive scientists and linguists have abandoned Chomsky's "universal grammar" theory in droves because of new research examining many different languages -- and the way young children learn to understand and speak the tongues of their communities. That work fails to support Chomsky's assertions. The research suggests a radically different view, in which learning of a child's first language does not rely on an innate grammar module. Instead, the new research shows that young children use various types of thinking that may not be specific to language at all -- such as the ability to classify the world into categories (people or objects, for instance) and to understand the relations among things. These capabilities, coupled with a unique human ability to grasp what others intend to communicate, allow language to happen. The new findings indicate that if researchers truly want to understand how children, and others, learn languages, they need to look outside of Chomsky's theory for guidance.


Differential Use of Implicit Negative Evidence in Generative and Discriminative Language Learning

Neural Information Processing Systems

A classic debate in cognitive science revolves around understanding how children learn complex linguistic rules, such as those governing restrictions on verb alternations, without negative evidence. Traditionally, formal learnability arguments have been used to claim that such learning is impossible without the aid of innate language-specific knowledge. However, recently, researchers have shown that statistical models are capable of learning complex rules from only positive evidence. These two kinds of learnability analyses differ in their assumptions about the role of the distribution from which linguistic input is generated. The former analyses assume that learners seek to identify grammatical sentences in a way that is robust to the distribution from which the sentences are generated, analogous to discriminative approaches in machine learning. The latter assume that learners are trying to estimate a generative model, with sentences being sampled from that model. We show that these two learning approaches differ in their use of implicit negative evidence -- the absence of a sentence -- when learning verb alternations, and demonstrate that human learners can produce results consistent with the predictions of both approaches, depending on the context in which the learning problem is presented.
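The contrast between the two learning approaches can be sketched in a few lines. This is a minimal illustration, not the paper's model: a generative learner with simple Dirichlet smoothing treats the continued absence of a candidate sentence form as implicit negative evidence (its predictive probability shrinks as positive data accumulates), while a consistency-based discriminative learner leaves any form not explicitly ruled out fully acceptable. The sentence strings and smoothing constant are illustrative assumptions.

```python
from collections import Counter

def generative_score(observations, candidate, alpha=1.0):
    """Posterior predictive probability of `candidate` under a simple
    Dirichlet-multinomial generative model of sentence forms.
    Implicit negative evidence: the more positive data we see without
    `candidate`, the smaller this probability becomes."""
    counts = Counter(observations)
    vocab = set(observations) | {candidate}
    total = sum(counts.values()) + alpha * len(vocab)
    return (counts[candidate] + alpha) / total

def discriminative_score(negatives, candidate):
    """A consistency-based discriminative learner: without explicit
    negative evidence, an unattested form stays fully acceptable,
    no matter how much positive data fails to contain it."""
    return 0.0 if candidate in negatives else 1.0

# Illustrative verb-alternation case (hypothetical sentence strings):
attested = ["The rabbit disappeared"]
candidate = "She disappeared the rabbit"

small_corpus = generative_score(attested * 5, candidate)
large_corpus = generative_score(attested * 500, candidate)
# The generative score shrinks toward 0 as positive data accumulates,
# while the discriminative score stays at 1.0 with no explicit negatives.
```

The divergence between `small_corpus` and `large_corpus` is the "suspicious coincidence" effect the generative analysis predicts; the discriminative learner shows no such effect.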


Combining Symbolic and Distributional Models of Meaning

AAAI Conferences

There are two main approaches to the representation of meaning in Computational Linguistics: a symbolic approach and a distributional approach. This paper considers the fundamental question of how these approaches might be combined. The proposal is to adapt a method from the Cognitive Science literature, in which symbolic and connectionist representations are combined using tensor products. Possible applications of this method for language processing are described. Finally, a potentially fruitful link between Quantum Mechanics, Computational Linguistics, and other related areas such as Information Retrieval and Machine Learning is proposed.