Learning Aspect Graph Representations from View Sequences

Neural Information Processing Systems

In our effort to develop a modular neural system for invariant learning and recognition of 3D objects, we introduce here a new module architecture called an aspect network constructed around adaptive axo-axo-dendritic synapses. This builds upon our existing system (Seibert & Waxman, 1989), which processes 2D shapes and classifies them into view categories (i.e., aspects) invariant to illumination, position, orientation, …


Development and Regeneration of Eye-Brain Maps: A Computational Model

Neural Information Processing Systems

We outline a computational model of the development and regeneration of specific eye-brain circuits. The model comprises a self-organizing map-forming network which uses local Hebb rules.
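
As a rough sketch of the ingredients such a map-forming network needs (locally correlated activity, a local Hebb rule, weight normalization, and lateral cooperation among neighboring target cells), the following Python snippet grows a one-dimensional retino-tectal projection. All sizes, the learning rate eta, and the kernel widths are illustrative assumptions, and the lateral-cooperation step is a standard ingredient of such models rather than a detail taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_retina, n_tectum = 20, 20                        # illustrative 1-D sheet sizes
W = rng.uniform(0.0, 0.1, (n_tectum, n_retina))    # retino-tectal synaptic weights

def gaussian_bump(n, center, sigma):
    """A localized patch of activity centered on one cell."""
    x = np.arange(n)
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

lateral = gaussian_bump(n_tectum, n_tectum // 2, 1.0)  # cooperative tectal kernel
eta = 0.05
for _ in range(5000):
    r = gaussian_bump(n_retina, rng.integers(n_retina), 1.5)  # correlated retinal input
    t = np.convolve(W @ r, lateral, mode="same")              # locally smoothed tectal response
    W += eta * np.outer(t, r)                                 # local Hebb rule
    W /= W.sum(axis=1, keepdims=True)                         # normalization bounds weight growth
```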


Connectionist Architectures for Multi-Speaker Phoneme Recognition

Neural Information Processing Systems

We present a number of Time-Delay Neural Network (TDNN) based architectures for multi-speaker phoneme recognition (/b,d,g/ task). We use speech of two females and four males to compare the performance of the various architectures against a baseline recognition rate of 95.9% for a single TDNN on the six-speaker /b,d,g/ task. This series of modular designs leads to a highly modular multi-network architecture capable of performing the six-speaker recognition task at the speaker-dependent rate of 98.4%. In addition to its high recognition rate, the so-called "Meta-Pi" architecture learns, without direct supervision, to recognize the speech of one particular male speaker using internal models of other male speakers exclusively.
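
A minimal sketch of the combination step a "Meta-Pi" style architecture implies: speaker-specific modules each emit phoneme scores, a gating network emits one mixing coefficient per module, and the final output is the gate-weighted mixture. The module count, scores, and gate values below are hypothetical, and the TDNN modules and backpropagation training of the gate are omitted.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def meta_pi_combine(module_outputs, gate_scores):
    """Gate-weighted mixture of per-module phoneme scores.

    module_outputs : (n_modules, n_classes) outputs of pre-trained source modules
    gate_scores    : (n_modules,) raw outputs of the gating (Meta-Pi) network
    """
    g = softmax(gate_scores)      # normalized mixing coefficients
    return g @ module_outputs     # weighted sum over modules

# Hypothetical example: three /b,d,g/ modules, gate favoring module 0
outputs = np.array([[0.7, 0.2, 0.1],
                    [0.3, 0.5, 0.2],
                    [0.2, 0.3, 0.5]])
print(meta_pi_combine(outputs, np.array([2.0, 0.5, 0.1])))
```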


Maximum Likelihood Competitive Learning

Neural Information Processing Systems

One popular class of unsupervised algorithms is that of competitive algorithms. In the traditional view of competition, only one competitor, the winner, adapts for any given case. I propose to view competitive adaptation as attempting to fit a blend of simple probability generators (such as Gaussians) to a set of data points. The maximum likelihood fit of a model of this type suggests a "softer" form of competition, in which all competitors adapt in proportion to the relative probability that the input came from each competitor. I investigate one application of the soft competitive model, placement of radial basis function centers for function interpolation, and show that the soft model can give better performance with little additional computational cost.

1 INTRODUCTION

Interest in unsupervised learning has increased recently due to the application of more sophisticated mathematical tools (Linsker, 1988; Plumbley and Fallside, 1988; Sanger, 1989) and the success of several elegant simulations of large-scale self-organization (Linsker, 1986; Kohonen, 1982). One popular class of unsupervised algorithms is that of competitive algorithms, which have appeared as components in a variety of systems (Von der Malsburg, 1973; Fukushima, 1975; Grossberg, 1978). Generalizing the definition of Rumelhart and Zipser (1986), a competitive adaptive system consists of a collection of modules which are structurally identical except, possibly, for random initial parameter variation.
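
The "softer" competition described above can be sketched as responsibility-weighted updates: each data point pulls every competitor toward it in proportion to the relative probability that the point came from that competitor's Gaussian, rather than moving the winner alone. The learning rate eta, the shared variance sigma, and the toy data are assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_competitive_step(X, centers, eta=0.05, sigma=1.0):
    """One pass of soft competition: all centers adapt, weighted by the
    relative probability (responsibility) that each point came from them."""
    for x in X:
        d2 = ((centers - x) ** 2).sum(axis=1)        # squared distances to each center
        p = np.exp(-d2 / (2 * sigma ** 2))           # unnormalized Gaussian likelihoods
        r = p / p.sum()                              # responsibilities (sum to 1)
        centers += eta * r[:, None] * (x - centers)  # every competitor moves a little
    return centers

# Toy data drawn from two clumps; four competitors share the fit
X = rng.normal(size=(200, 2)) + np.array([[3.0, 0.0]]) * rng.integers(0, 2, (200, 1))
centers = rng.normal(size=(4, 2))
for _ in range(20):
    centers = soft_competitive_step(X, centers)
```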


Effects of Firing Synchrony on Signal Propagation in Layered Networks

Neural Information Processing Systems

Spiking neurons which integrate to threshold and fire were used to study the transmission of frequency-modulated (FM) signals through layered networks. Firing correlations between cells in the input layer were found to modulate the transmission of FM signals under certain dynamical conditions. A tonic level of activity was maintained by providing each cell with a source of Poisson-distributed synaptic input. When the average membrane depolarization produced by the synaptic input was sufficiently below threshold, the firing correlations between cells in the input layer could greatly amplify the signal present in subsequent layers. When the depolarization was sufficiently close to threshold, however, the firing synchrony between cells in the initial layers could no longer affect the propagation of FM signals. In this latter case, integrate-and-fire neurons could be effectively modeled by simpler analog elements governed by a linear input-output relation.
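
Below is a minimal leaky integrate-and-fire cell driven by Poisson-distributed synaptic input, in the spirit of the model described above. The time constants, threshold, input rate, and synaptic weight are assumed values, chosen so that the mean depolarization sits below threshold and firing is fluctuation-driven, which is the regime in which the abstract reports correlation effects.

```python
import numpy as np

rng = np.random.default_rng(0)

dt, tau_m = 0.1, 10.0        # ms: integration step and membrane time constant
v_th, v_reset = 1.0, 0.0     # firing threshold and post-spike reset (arbitrary units)
rate, w_syn = 0.15, 0.05     # Poisson events per step and synaptic weight;
                             # mean depolarization ~0.75, i.e. below threshold

v, spikes = 0.0, []
for step in range(100_000):
    n_in = rng.poisson(rate)                 # Poisson-distributed synaptic events
    v += dt * (-v / tau_m) + w_syn * n_in    # leak plus synaptic depolarization
    if v >= v_th:                            # integrate to threshold and fire
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes in {100_000 * dt / 1000:.0f} s of simulated time")
```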



Model Based Image Compression and Adaptive Data Representation by Interacting Filter Banks

Neural Information Processing Systems

To achieve high-rate image data compression while maintaining a high-quality reconstructed image, a good image model and an efficient way to represent the specific data of each image must be introduced. Based on physiological knowledge of the multi-channel characteristics of the human visual system and the inhibitory interactions among those channels, a mathematically coherent parallel architecture for image data compression, which utilizes a Markov random field image model and interactions among a vast number of filter banks, is proposed.
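
To make the two ingredients concrete, here is a rough sketch of multi-channel analysis followed by cross-channel inhibition: a signal is decomposed by a small filter bank, and each channel's response is then shrunk according to the strength of the competing channels at the same location, sparsifying the joint representation before coding. The kernels and the inhibition strength alpha are illustrative assumptions; the paper's Markov random field model and the coding stage itself are not reproduced.

```python
import numpy as np

def filter_bank(signal, kernels):
    """Multi-channel analysis: convolve the signal with each channel's kernel."""
    return np.stack([np.convolve(signal, k, mode="same") for k in kernels])

def inhibit_across_channels(resp, alpha=0.5):
    """Cross-channel inhibition: shrink each response toward zero by the mean
    magnitude of the competing channels at the same location."""
    n = resp.shape[0]
    others = (np.abs(resp).sum(axis=0, keepdims=True) - np.abs(resp)) / (n - 1)
    return np.sign(resp) * np.maximum(np.abs(resp) - alpha * others, 0.0)

# Hypothetical bank: difference filters at three scales, applied to a step edge
kernels = [np.array([1.0, -1.0]),
           np.array([1.0, 0.0, -1.0]) / 2,
           np.array([1.0, 0.0, 0.0, 0.0, -1.0]) / 4]
signal = np.concatenate([np.zeros(32), np.ones(32)])
sparse_code = inhibit_across_channels(filter_bank(signal, kernels))
```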


Computational Efficiency: A Common Organizing Principle for Parallel Computer Maps and Brain Maps?

Neural Information Processing Systems

It is well known that neural responses in particular brain regions are spatially organized, but no general principles have been developed that relate the structure of a brain map to the nature of the associated computation. On parallel computers, maps of a sort quite similar to brain maps arise when a computation is distributed across multiple processors. In this paper we will discuss the relationship between maps and computations on these computers and suggest how similar considerations might also apply to maps in the brain.

1 INTRODUCTION

A great deal of effort in experimental and theoretical neuroscience is devoted to recording and interpreting spatial patterns of neural activity. A variety of map patterns have been observed in different brain regions and, presumably, these patterns reflect something about the nature of the neural computations being carried out in these regions. To date, however, there have been no general principles for interpreting the structure of a brain map in terms of properties of the associated computation.


Neural Network Visualization

Neural Information Processing Systems

We have developed graphics to visualize static and dynamic information in layered neural network learning systems. Emphasis was placed on creating new visuals that make use of spatial arrangements, size information, animation and color. We applied these tools to the study of back-propagation learning of simple Boolean predicates, and have obtained new insights into the dynamics of the learning process.


Generalized Hopfield Networks and Nonlinear Optimization

Neural Information Processing Systems

A nonlinear neural framework, called the Generalized Hopfield Network (GHN), is proposed, which is able to solve in a parallel distributed manner systems of nonlinear equations. The method is applied to the general nonlinear optimization problem. We demonstrate GHNs implementing the three most important optimization algorithms, namely the Augmented Lagrangian, Generalized Reduced Gradient, and Successive Quadratic Programming methods. The study results in a dynamic view of the optimization problem and offers a straightforward model for the parallelization of optimization computations, thus significantly extending the practical limits of problems that can be formulated as optimization problems and that can gain from the introduction of nonlinearities in their structure (e.g., …). The ability of networks of highly interconnected simple nonlinear analog processors (neurons) to solve complicated optimization problems was demonstrated in a series of papers by Hopfield and Tank (Hopfield, 1984; Tank, 1986).
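
As a sketch of the dynamical view the abstract takes (not the paper's exact GHN equations), the snippet below integrates augmented-Lagrangian gradient-flow dynamics for a toy equality-constrained problem: the variables descend the augmented Lagrangian L_A(x, lam) = f(x) + lam*h(x) + (c/2)*h(x)^2 while the multiplier ascends it. The objective, constraint, penalty weight c, and step size dt are all illustrative.

```python
import numpy as np

f  = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2        # illustrative objective
gf = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] + 2)])
h  = lambda x: x[0] + x[1] - 1.0                        # illustrative constraint h(x) = 0
gh = lambda x: np.array([1.0, 1.0])

x, lam, c, dt = np.zeros(2), 0.0, 10.0, 0.01
for _ in range(20_000):
    x   -= dt * (gf(x) + (lam + c * h(x)) * gh(x))      # "neuron" dynamics: descend L_A
    lam += dt * h(x)                                    # multiplier dynamics: ascend L_A

print(x, lam)   # approaches the constrained optimum x = (2, -1), lam = -2
```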