Online Semantic Extraction by Backpropagation Neural Network with Various Syntactic Structure Representations

AAAI Conferences

The sub-symbolic approach to Natural Language Processing (NLP) is one of the mainstream directions in Artificial Intelligence. Indeed, many algorithms exist in theory for NLP sub-tasks such as syntactic structure representation and lexicon classification. The goal of this research is to develop a hybrid architecture that can process natural language as humans do. Thus, we propose an online intelligent system that extracts semantics (utterance interpretation) by applying a 3-layer backpropagation neural network to classify encoded syntactic structures into corresponding semantic frame types (e.g.
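The abstract's core mechanism, a 3-layer network trained by backpropagation to map encoded syntactic vectors onto semantic frame types, can be sketched roughly as follows. All dimensions, data, and names here are illustrative placeholders, not the paper's actual encoding or frame inventory:

```python
import numpy as np

# Sketch of a 3-layer backpropagation classifier: encoded syntactic
# structures (input vectors) -> hidden layer -> semantic frame types.
# Sizes and training data are toy values, not the paper's setup.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 12, 8, 3          # input / hidden / frame-type units
W1 = rng.normal(0, 0.1, (n_in, n_hid))
W2 = rng.normal(0, 0.1, (n_hid, n_out))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(X):
    h = sigmoid(X @ W1)                # hidden activations
    y = sigmoid(h @ W2)                # frame-type scores
    return h, y

# Toy "encoded syntactic structures" with one-hot frame-type labels.
X = rng.normal(size=(30, n_in))
T = np.eye(n_out)[rng.integers(0, n_out, 30)]

def loss(y):
    return 0.5 * np.mean((y - T) ** 2)

loss_before = loss(forward(X)[1])
lr = 0.5
for _ in range(500):
    h, y = forward(X)
    # Squared-error gradients pushed back through both sigmoid layers.
    dy = (y - T) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ dy / len(X)
    W1 -= lr * X.T @ dh / len(X)
loss_after = loss(forward(X)[1])
```

Training drives the squared error down on the toy set; in the paper's setting the inputs would come from the syntactic-structure encoder rather than random vectors.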

Implications of Recursive Distributed Representations

Neural Information Processing Systems

I will describe my recent results on the automatic development of fixed-width recursive distributed representations of variable-sized hierarchical data structures. One implication of this work is that certain types of AI-style data structures can now be represented in fixed-width analog vectors. Simple inferences can be performed using the type of pattern associations that neural networks excel at. Another implication arises from noting that these representations become self-similar in the limit. Once this door to chaos is opened.
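The central idea, a single compressor that maps two fixed-width child vectors to one fixed-width parent vector so a tree of any size collapses into one analog vector, can be sketched as below. The weights here are random stand-ins; in the actual work they are trained by backpropagation so that a matching decoder can reconstruct the children:

```python
import numpy as np

# Sketch of a recursive distributed representation: one shared
# compressor turns a (left, right) pair of dim-wide vectors into a
# single dim-wide vector, applied bottom-up over a binary tree.
# Untrained random weights; illustrative only.
rng = np.random.default_rng(1)
dim = 10                                # fixed representation width
W_enc = rng.normal(0, 0.3, (2 * dim, dim))

def encode(node):
    """Recursively compress a nested tuple of leaf vectors."""
    if isinstance(node, np.ndarray):
        return node                     # leaf: already fixed-width
    left, right = node
    pair = np.concatenate([encode(left), encode(right)])
    return np.tanh(pair @ W_enc)        # 2*dim -> dim compression

leaf = lambda: rng.normal(size=dim)
tree = ((leaf(), leaf()), (leaf(), (leaf(), leaf())))  # variable-sized
vec = encode(tree)                      # always shape (dim,)
```

Whatever the tree's shape or depth, `encode` returns a vector of the same fixed width, which is what lets pattern-association networks operate on structured data.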

Distributed Representation of Syntactic Structure by Tensor Product Representation and Non-Linear Compression

AAAI Conferences

In the early days, symbolic and sub-symbolic approaches were generally treated as two separate and competing fields within Artificial Intelligence. Neither alone seemed capable of a significant breakthrough on complicated tasks such as natural language understanding. Prince (1997) suggested that integrating these two fields could yield graceful rewards when the combined effort was focused on principles of optimization involving language grammar and cognitive architectures. In this paper, our scope of study is formal English, one of the most common languages tackled in Natural Language Processing (NLP) research.
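The binding scheme named in the title, tensor product representation, can be illustrated in miniature: each (role, filler) pair is bound by an outer product and the bindings are superposed by summation, and with orthonormal role vectors a filler is recovered exactly by contracting the sum with its role. The vectors and dimensions below are illustrative, not the paper's:

```python
import numpy as np

# Sketch of a tensor product representation (Smolensky-style binding):
# bind role_i and filler_i with an outer product, superpose the
# bindings, then unbind by contracting with a role vector.
rng = np.random.default_rng(2)
d_role, d_fill = 4, 6

# QR of a square random matrix gives an orthogonal matrix, so its
# rows form an orthonormal set of role vectors.
roles = np.linalg.qr(rng.normal(size=(d_role, d_role)))[0]
fillers = rng.normal(size=(3, d_fill))      # e.g. word embeddings

# Bind three role-filler pairs and superpose into one tensor.
T = sum(np.outer(roles[i], fillers[i]) for i in range(3))

# Unbind: contracting with role 1 retrieves filler 1 exactly,
# because the cross terms vanish under orthonormality.
recovered = roles[1] @ T
```

Non-orthogonal roles would make retrieval approximate (cross terms leak in), which is one motivation for combining the representation with a compression stage.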


AI Magazine

A workshop on high-level connectionist models was held in Las Cruces, New Mexico, on 9-11 April 1988 with support from the American Association for Artificial Intelligence and the Office of Naval Research. John Barnden and Jordan Pollack organized and hosted the workshop and will edit a book containing the proceedings and commentary. The book will be published by Ablex as the first volume in a series entitled Advances in Connectionist and Neural Computation Theory. The two fields are often posed as paradigmatic enemies, and a risk of severing them exists. Few connectionist results are published in the mainstream AI journals and conference proceedings other than those sponsored by the Cognitive Science Society, and many neural-network researchers and industrialists proceed without consideration of the problems (and progress) of AI.