Natural Language



Grammar Learning by a Self-Organizing Network

Neural Information Processing Systems

Michiro Negishi
Dept. of Cognitive and Neural Systems, Boston University
111 Cummington Street, Boston, MA 02215
email: negishi@cns.bu.edu

Abstract: This paper presents the design and simulation results of a self-organizing neural network which induces a grammar from example sentences. Input sentences are generated from a simple phrase structure grammar including number agreement, verb transitivity, and recursive noun phrase construction rules. The network induces a grammar explicitly, in the form of symbol categorization rules and phrase structure rules.

1 Purpose and related works

The purpose of this research is to show that a self-organizing network with a certain structure can acquire syntactic knowledge from only positive (i.e., grammatical) example sentences. There has been research on supervised neural network models of language acquisition tasks [Elman, 1991; Miikkulainen and Dyer, 1988; St. John and McClelland, 1988]. Unlike these supervised models, the current model self-organizes word and phrasal categories and phrase construction rules through mere exposure to input sentences, without any artificially defined task goals.
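The abstract names the phenomena covered by the training grammar (number agreement, verb transitivity, recursive noun phrases) but not its exact productions. The following sketch is a hypothetical toy grammar of that kind, with made-up vocabulary and rules rather than the paper's actual grammar, illustrating how such example sentences could be generated.

```python
import random

# A minimal sketch (not the paper's grammar) of a phrase structure grammar
# with number agreement, verb transitivity, and recursive noun phrase
# construction. All productions and words are illustrative assumptions.

def noun_phrase(number, depth=0):
    noun = random.choice(["dog", "girl"]) + ("s" if number == "pl" else "")
    np = ["the", noun]
    # Recursive noun phrase construction via an optional relative clause
    # whose verb agrees with the head noun's number.
    if depth < 2 and random.random() < 0.3:
        np += ["that"] + verb_phrase(number, depth + 1)
    return np

def verb_phrase(number, depth=0):
    if random.random() < 0.5:                      # intransitive verb: no object
        verb = random.choice(["run", "sleep"])
        return [verb + ("s" if number == "sg" else "")]
    verb = random.choice(["see", "chase"])         # transitive verb: takes an object NP
    obj_number = random.choice(["sg", "pl"])
    return [verb + ("s" if number == "sg" else "")] + noun_phrase(obj_number, depth)

def sentence():
    number = random.choice(["sg", "pl"])           # subject and main verb must agree
    return " ".join(noun_phrase(number) + verb_phrase(number))

if __name__ == "__main__":
    for _ in range(5):
        print(sentence())
```

Sentences such as "the dogs that chase the girl sleep" come out of this generator; a grammar-induction model of the kind described would then have to recover the word categories and phrase structure rules from such strings alone.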




A Lexical Semantic and Statistical Approach to Lexical Collocation Extraction for Natural Language Generation

AI Magazine

The system's performance exceeded the testers' performance on a different data set, given a 99-percent confidence interval containing the true population scores for these …
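The surviving fragment describes a confidence-interval comparison between system and human scores. The sketch below illustrates that general style of test with entirely hypothetical numbers (the scores, sample size, and use of a z critical value are assumptions, not taken from the article).

```python
import statistics as st

# Sketch: build a 99-percent confidence interval around the human testers'
# mean score and check whether the system's score lies above it.
# Hypothetical data; 2.576 is the two-sided z critical value for 99 percent
# (a t critical value would be more appropriate for small samples).

tester_scores = [0.71, 0.68, 0.74, 0.69, 0.72, 0.70]        # hypothetical per-tester scores
system_score = 0.81                                          # hypothetical system score

mean = st.mean(tester_scores)
sem = st.stdev(tester_scores) / len(tester_scores) ** 0.5    # standard error of the mean
z99 = 2.576
lower, upper = mean - z99 * sem, mean + z99 * sem

print(f"99% CI for tester mean: ({lower:.3f}, {upper:.3f})")
print("System exceeds tester CI:", system_score > upper)
```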


The Seventh International Workshop on Natural Language Generation

AI Magazine

The Seventh International Workshop on Natural Language Generation was held from 21 to 24 June 1994 in Kennebunkport, Maine. Sixty-seven people from 13 countries attended this 4-day meeting on the study of natural language generation in computational linguistics and AI. The goal of the workshop was to introduce new, cutting-edge work to the community and provide an atmosphere in which discussion and exchange would flourish.


Monster Analogies

AI Magazine

Analogy has a rich history in Western civilization. Over the centuries, it has become reified in that analogical reasoning has sometimes been regarded as a fundamental cognitive process. In addition, it has become identified with a particular expressive format. The limitations of the modern view are illustrated by monster analogies, which show that analogy need not be regarded as something having a single form, format, or semantics. Analogy clearly does depend on the human ability to create and use well-defined or analytic formats for laying out propositions that express or imply meanings and perceptions. Beyond this dependence, research in cognitive science suggests that analogy relies on a number of genuinely fundamental cognitive capabilities, including semantic flexibility, the perception of resemblances and of distinctions, imagination, and metaphor. Extant symbolic models of analogical reasoning have various sorts of limitation, yet each model presents some important insights and plausible mechanisms. I argue that future efforts could be aimed at integration. This aim would include the incorporation of contextual information, the construction of semantic bases that are dynamic and knowledge rich, and the incorporation of multiple approaches to the problems of inference constraint.


The 1995 AAAI Spring Symposia Reports

AI Magazine

The Association for the Advancement of Artificial Intelligence held its 1995 Spring Symposium Series from March 27 to 29 at Stanford University. This article contains summaries of the nine symposia that were conducted: (1) Empirical Methods in Discourse Interpretation and Generation; (2) Extending Theories of Action: Formal Theory and Practical Applications; (3) Information Gathering from Heterogeneous, Distributed Environments; (4) Integrated Planning Applications; (5) Interactive Story Systems: Plot and Character; (6) Lessons Learned from Implemented Software Architectures for Physical Agents; (7) Representation and Acquisition of Lexical Knowledge: Polysemy, Ambiguity, and Generativity; (8) Representing Mental States and Mechanisms; and (9) Systematic Methods of Scientific Discovery.


The Role of Intelligent Systems in the National Information Infrastructure

AI Magazine

This report stems from a workshop that was organized by the Association for the Advancement of Artificial Intelligence (AAAI) and cosponsored by the Information Technology and Organizations Program of the National Science Foundation. The purpose of the workshop was twofold: first, to increase awareness among the artificial intelligence (AI) community of opportunities presented by the National Information Infrastructure (NII) activities, in particular, the Information Infrastructure and Technology Applications (IITA) component of the High Performance Computing and Communications Program; and second, to identify key contributions of research in AI to the NII and IITA.


Robert F. Simmons In Memoriam

AI Magazine

Simmons's dream was that a person could have "a conversation with a book": The computer would read the book, and then the user could have a conversation with the … answered from the computer's understanding of the book. He investigated … such as encyclopedia articles; higher-level grammars for expository text; and wrote programs that could summarize such text. He later became interested in logic …

… at that time. William James' Principles of Psychology seemed to me to be a highwater mark for psychologists who were interested … The fact that the whole current of psychology has turned to the more rewarding (but to me less inspiring) study of more easily …

… born in Quincy, Massachusetts. He married Patricia Enderson in 1950, and they raised five children. … produced in 1954 from the University of Southern California. His dissertation was entitled "The Prediction of Accident Rates from Basic Design Features of USAF Aircraft."