Natural Language


Incremental Parsing by Modular Recurrent Connectionist Networks

Neural Information Processing Systems

We present a novel, modular, recurrent connectionist network architecture which learns to robustly perform incremental parsing of complex sentences. From sequential input, one word at a time, our networks learn to do semantic role assignment, noun phrase attachment, and clause structure recognition for sentences with passive constructions and center-embedded clauses. The networks make syntactic and semantic predictions at every point in time, and previous predictions are revised as expectations are affirmed or violated with the arrival of new information. Our networks induce their own "grammar rules" for dynamically transforming an input sequence of words into a syntactic/semantic interpretation.
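As a point of reference, the incremental interface such a network presents can be sketched in a few lines: an Elman-style recurrent network folds in one word at a time and emits an updated prediction after every word, so earlier guesses can be revised when later evidence (such as the "was ... by" of a passive) arrives. The vocabulary, layer sizes, untrained random weights, and the single clause-voice output below are assumptions made for the example; this is not the paper's modular architecture.

import numpy as np

# Minimal sketch of word-by-word (incremental) prediction with a simple
# Elman-style recurrent network. Vocabulary, sizes, and the clause-voice
# output are illustrative assumptions; the weights are untrained.
VOCAB = {"the": 0, "man": 1, "was": 2, "bitten": 3, "by": 4, "dog": 5}
V, H = len(VOCAB), 16

rng = np.random.default_rng(0)
Wxh = rng.normal(0.0, 0.1, (H, V))   # input-to-hidden weights
Whh = rng.normal(0.0, 0.1, (H, H))   # recurrent hidden-to-hidden weights
Who = rng.normal(0.0, 0.1, (2, H))   # hidden-to-output: active vs. passive

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def incremental_predictions(words):
    """After each input word, emit the network's current belief about the
    clause structure; a trained network would revise this belief as
    disambiguating evidence arrives."""
    h = np.zeros(H)
    for w in words:
        x = np.zeros(V)
        x[VOCAB[w]] = 1.0
        h = np.tanh(Wxh @ x + Whh @ h)     # fold the new word into the state
        p_active, p_passive = softmax(Who @ h)
        yield w, p_active, p_passive

for w, pa, pp in incremental_predictions("the man was bitten by the dog".split()):
    print(f"{w:8s} P(active)={pa:.2f} P(passive)={pp:.2f}")

With trained weights, the printed probabilities would shift toward "passive" once the network has seen "was ... by"; with the random weights here, only the word-by-word prediction protocol itself is demonstrated.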


Future Directions in Natural Language Processing: The Bolt Beranek and Newman Natural Language Symposium

AI Magazine

The Workshop on Future Directions in NLP was held at Bolt Beranek and Newman, Inc. (BBN), in Cambridge, Massachusetts, from 29 November to 1 December 1989. The workshop was organized and hosted by Madeleine Bates and Ralph Weischedel of the BBN Speech and Natural Language Department and sponsored by BBN's Science Development Program.


Directions in AI Research and Applications at Siemens Corporate Research and Development

AI Magazine

Many barriers exist today that prevent effective industrial exploitation of current and future AI research. These barriers can only be removed by people who are working at the scientific forefront in AI and know potential industrial needs. The Knowledge Processing Laboratory's research and development concentrates in the following areas: (1) natural language interfaces to knowledge-based systems and databases; (2) theoretical and experimental work on qualitative modeling and nonmonotonic reasoning for future knowledge-based systems; (3) application-specific language design, in particular, Prolog extensions; and (4) design and analysis of neural networks. This article gives the reader an overview of the main topics currently being pursued in each of these areas.


Can logic programming execute as fast as imperative programming?

Classics

The output is assembly code for the Berkeley Abstract Machine (BAM). Directives hold starting from the next predicate that is input. Clauses do not have to be contiguous in the input stream; however, the whole stream is read before compilation starts. This manual is organized into ten sections.
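The input discipline just described (read the whole stream before compiling, group clauses by predicate even when they are not contiguous, apply a directive starting with the next predicate read) can be illustrated with a small sketch. The Python below is a hypothetical model of that behavior, not the compiler's actual interface; the item format, the "mode" option, and the predicate names are invented for the example.

from collections import OrderedDict

# Hypothetical model of the described input discipline, not the real
# compiler: directives take effect at the next predicate that appears,
# clauses group by predicate even when non-contiguous, and compilation
# begins only after the whole stream has been read.
def read_stream(items):
    """items: sequence of ('directive', (key, value)) or ('clause', pred, text)."""
    active = {}                      # directives currently in force
    preds = OrderedDict()            # predicate -> (directives, clauses)
    for kind, *rest in items:
        if kind == "directive":
            key, value = rest[0]
            active[key] = value      # holds from the next predicate onward
        else:
            pred, text = rest
            if pred not in preds:
                # First clause of this predicate: snapshot current directives.
                preds[pred] = (dict(active), [])
            preds[pred][1].append(text)   # non-contiguous clauses still group
    return preds

def compile_all(items):
    # Compilation starts only after the entire stream has been read.
    for pred, (opts, clauses) in read_stream(items).items():
        print(f"compiling {pred} with options {opts}: {len(clauses)} clause(s)")

compile_all([
    ("directive", ("mode", "fast")),
    ("clause", "append/3", "append([],L,L)."),
    ("clause", "member/2", "member(X,[X|_])."),
    ("clause", "append/3", "append([H|T],L,[H|R]) :- append(T,L,R)."),
])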


A Massively Parallel Self-Tuning Context-Free Parser

Neural Information Processing Systems

The Parsing and Learning System (PALS) is a massively parallel self-tuning context-free parser. It is capable of parsing sentences of unbounded length, mainly due to its parse-tree representation scheme. The system is capable of improving its parsing performance through the presentation of training examples. Recent PDP research [Rumelhart et al., 1986; Feldman and Ballard, 1982; Lippmann, 1987] involving natural language processing [Fanty, 1988; Selman, 1985; Waltz and Pollack, 1985] has unrealistically restricted sentences to a fixed length. A solution to this problem was presented in the system CONPARSE [Charniak and Santos].
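To make the "self-tuning" idea concrete in miniature: the sketch below is a weighted CKY parser over a toy grammar whose rule weights are nudged upward whenever a rule participates in the best parse of a training sentence. The grammar, the additive update, and the sequential chart computation are assumptions for illustration; PALS's massively parallel implementation and its parse-tree representation scheme are not reproduced here.

from collections import defaultdict

# Illustrative sketch only: a weighted CKY parser that "tunes" itself by
# reinforcing every rule used in the best parse of a training sentence.
# The toy grammar and the additive update are assumptions; they stand in
# for, and do not reproduce, the PALS architecture.
LEXICON = {"dogs": "NP", "men": "NP", "bite": "V", "see": "V"}
WEIGHTS = {("S", "NP", "VP"): 1.0,        # binary rules: lhs -> left right
           ("VP", "V", "NP"): 1.0}

def best_parse(words):
    """Weighted CKY; returns (score, rules used) for the best S over the input."""
    n = len(words)
    chart = defaultdict(dict)             # (i, j) -> {symbol: (score, rules)}
    for i, w in enumerate(words):
        chart[i, i + 1][LEXICON[w]] = (1.0, [])
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):     # try every split point
                for (lhs, L, R), wt in WEIGHTS.items():
                    if L in chart[i, k] and R in chart[k, j]:
                        ls, lr = chart[i, k][L]
                        rs, rr = chart[k, j][R]
                        score = wt * ls * rs
                        if score > chart[i, j].get(lhs, (0.0, None))[0]:
                            chart[i, j][lhs] = (score, lr + rr + [(lhs, L, R)])
    return chart[0, n].get("S", (0.0, []))

def tune(sentence, step=0.1):
    """Self-tuning step: reinforce every rule used in the best parse."""
    score, used = best_parse(sentence.split())
    for rule in used:
        WEIGHTS[rule] += step
    return score

for _ in range(3):
    print(tune("dogs bite men"), WEIGHTS)

Repeated presentation of the same sentence raises the scores of the rules it uses, which is the sense in which parsing performance is shaped by training examples in this toy version.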