Learning Aspect Graph Representations from View Sequences

Neural Information Processing Systems

In our effort to develop a modular neural system for invariant learning and recognition of 3D objects, we introduce here a new module architecture called an aspect network constructed around adaptive axo-axo-dendritic synapses. This builds upon our existing system (Seibert & Waxman, 1989), which processes 2D shapes and classifies them into view categories (i.e., aspects) invariant to illumination, position, orientation,


A Computational Basis for Phonology

Neural Information Processing Systems

Through a combination of linguistic analysis, we are attempting to develop a computational basis for the nature of phonology. We present a connectionist architecture that performs multiple simultaneous insertion, deletion, and mutation operations on sequences of phonemes, and introduce a novel additional primitive, clustering. Clustering provides an interesting alternative to both iterative and relaxation accounts of assimilation processes such as vowel harmony. Our resulting model is efficient because it processes utterances entirely in parallel using only feed-forward circuitry.


Incremental Parsing by Modular Recurrent Connectionist Networks

Neural Information Processing Systems

We present a novel, modular, recurrent connectionist network architecture which learns to robustly perform incremental parsing of complex sentences. From sequential input, one word at a time, our networks learn to do semantic role assignment, noun phrase attachment, and clause structure recognition for sentences with passive constructions and center-embedded clauses. The networks make syntactic and semantic predictions at every point in time, and previous predictions are revised as expectations are affirmed or violated with the arrival of new information. Our networks induce their own "grammar rules" for dynamically transforming an input sequence of words into a syntactic/semantic interpretation.


VLSI Implementation of a High-Capacity Neural Network Associative Memory

Neural Information Processing Systems

In this paper we describe the VLSI design and testing of a high-capacity associative memory which we call the exponential correlation associative memory (ECAM). The prototype 3μ-CMOS programmable chip is capable of storing 32 memory patterns of 24 bits each. The high capacity of the ECAM is partly due to the use of special exponentiation neurons, which are implemented via sub-threshold MOS transistors in this design. The prototype chip is capable of performing one associative recall in 3 μs. 1 ARCHITECTURE Previously (Chiueh, 1989), we have proposed a general model for correlation-based associative memories, which includes a variant of the Hopfield memory and high-order correlation memories as special cases. This new exponential correlation associative memory (ECAM) possesses a very large storage capacity, which scales exponentially with the length of memory patterns (Chiueh, 1988).
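The recall rule sketched in this abstract can be illustrated in software: each stored bipolar pattern is weighted by a quantity that grows exponentially with its correlation to the current state, the weighted sum of patterns is thresholded, and the update is iterated to a fixed point. The following is a minimal numerical sketch under that reading; the exponentiation base `a`, the iteration count, the pattern dimensions, and all function names are illustrative assumptions, not values taken from the chip design.

```python
import numpy as np

def ecam_recall(probe, patterns, a=2.0, iters=10):
    """Sketch of exponential-correlation associative recall.

    Each stored bipolar pattern m_k is weighted by a**<x, m_k>, so the
    pattern most correlated with the current state x dominates the
    thresholded weighted sum. Parameters are illustrative only.
    """
    x = probe.astype(float)
    for _ in range(iters):
        corr = patterns @ x                       # correlations <x, m_k>
        weights = np.power(a, corr - corr.max())  # shift exponent for numerical stability
        x_new = np.sign(weights @ patterns)       # thresholded weighted sum
        x_new[x_new == 0] = 1.0                   # break ties deterministically
        if np.array_equal(x_new, x):              # fixed point reached
            break
        x = x_new
    return x

# Tiny demo: 3 stored 24-bit bipolar patterns, probe = first pattern with 4 bits flipped.
rng = np.random.default_rng(0)
M = rng.choice([-1, 1], size=(3, 24))
probe = M[0].copy()
probe[:4] *= -1
recalled = ecam_recall(probe, M)
```

Because the weights scale exponentially with correlation, a moderately noisy probe is pulled back to its nearest stored pattern in very few iterations, which is consistent with the large capacity the abstract attributes to exponentiation neurons.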


Modeling Design Processes

AI Magazine

This article discusses building a computable design process model, which is a prerequisite for realizing intelligent computer-aided design systems. First, we introduce general design theory, from which a descriptive model of design processes is derived. In this model, the concept of metamodels plays a crucial role in describing the evolutionary nature of design. Second, we show a cognitive design process model obtained by observing design processes using a protocol analysis method. We then discuss a computable model that can explain most parts of the cognitive model and also interpret the descriptive model. In the computable model, a design process is regarded as an iterative logical process realized by abduction, deduction, and circumscription. We implemented a design simulator that can trace design processes in which design specifications and design solutions are gradually revised as the design proceeds.


Guest Editorial: Design for AI Researchers

AI Magazine

Design has long been an area of particular interest for AI researchers. Herbert Simon's 1968 Karl Taylor Compton lectures on the sciences of the artificial included substantial material on design. However, only recently have design researchers embraced paradigms from AI and AI researchers chosen design as a domain to study.


Review of Simple Minds

AI Magazine

Of what are minds made? Internal mental representations? Matter? In this provocative and engaging work (Simple Minds, Cambridge, Mass.: The MIT Press, 1989, 266 pages, $25.00, ISBN 0-262-12140-9), Dan Lloyd seeks to provide answers that will bridge the gap between computational and connectionist models of the mind.


Letters to the Editor

AI Magazine

I appreciated very much the Spring 1990 issue of the AI Magazine on Robotic Assembly and Task Planning. It seems to me, however, that some good work that has been carried out on this subject in Europe during recent years has not been covered very much. Also included are comments on the low participation levels of women in the computer industry, suggestions for the inclusion of dissertation abstracts, comments on the Feldman article in the Fall 1990 issue, and a note about the discontinuance of plastic coverings on AI Magazine.



Creating a Scientific Community at the Interface Between Engineering Design and AI

AI Magazine

On January 13-14, 1990, a workshop organized by EDRC was held to discuss the topic of creating a scientific community at the interface between engineering design and AI, in order to identify problems and methods in the area that would facilitate the transfer and reuse of results. This report summarizes the workshop and follow-up sessions and identifies major trends in the field.