Goto



An Auditory Localization and Coordinate Transform Chip

Neural Information Processing Systems

Localization of, and orientation to, novel or interesting events in the environment is a critical sensorimotor ability in all animals, predator or prey. In mammals, the superior colliculus (SC) plays a major role in this behavior, its deeper layers exhibiting topographically mapped responses to visual, auditory, and somatosensory stimuli. Sensory information arriving from different modalities should therefore be represented in the same coordinate frame. Auditory cues, in particular, are thought to be computed in head-based coordinates, which must then be transformed to retinal coordinates. In this paper, an analog VLSI implementation for auditory localization in the azimuthal plane is described which extends the architecture proposed for the barn owl to a primate eye movement system, where a further transformation is required. This transformation is intended to model the projection in primates from auditory cortical areas to the deeper layers of the superior colliculus. The system is interfaced with an analog VLSI-based saccadic eye movement system also being constructed in our laboratory.
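
The head-to-eye coordinate transform described above can be stated very simply in the azimuthal plane. As a rough illustration (a numerical sketch only, not the analog VLSI circuitry, and with a hypothetical function name), the retinal azimuth of a sound source is its head-centered azimuth discounted by the current eye position:

```python
# Minimal numerical sketch (not the analog VLSI circuit) of the
# head-to-retinal coordinate transform described above.  The function name
# and the simple subtraction model are illustrative assumptions.

def head_to_retinal_azimuth(sound_azimuth_deg, eye_position_deg):
    """Map a head-centered auditory azimuth into eye-centered (retinal)
    coordinates by discounting the current horizontal eye position."""
    return sound_azimuth_deg - eye_position_deg

# A sound 20 degrees to the right of the head, with the eyes already rotated
# 5 degrees rightward, lands 15 degrees into the retinal periphery.
print(head_to_retinal_azimuth(20.0, 5.0))   # 15.0
```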


Coarse-to-Fine Image Search Using Neural Networks

Neural Information Processing Systems

The efficiency of image search can be greatly improved by using a coarse-to-fine search strategy with a multi-resolution image representation. However, if the resolution is so low that the objects have few distinguishing features, search becomes difficult. We show that the performance of search at such low resolutions can be improved by using context information, i.e., objects visible at low resolution which are not the objects of interest but are associated with them. The networks can be given explicit context information as inputs, or they can learn to detect the context objects, in which case the user does not have to be aware of their existence. We also use Integrated Feature Pyramids, which represent high-frequency information at low resolutions. The use of multi-resolution search techniques allows us to combine information about the appearance of the objects on many scales in an efficient way. A natural form of exemplar selection also arises from these techniques. We illustrate these ideas by training hierarchical systems of neural networks to find clusters of buildings in aerial photographs of farmland.
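
As a rough illustration of the coarse-to-fine strategy (a minimal sketch with a placeholder scoring function, not the paper's trained networks or its Integrated Feature Pyramids), one can score every cell at the coarsest pyramid level and re-examine only the promising regions at each finer level:

```python
import numpy as np

def build_pyramid(image, levels=3):
    """Half-resolution pyramid by 2x2 block averaging (coarsest level last)."""
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(img)
    return pyramid

def coarse_to_fine_search(pyramid, score_fn, threshold=0.5):
    """Score every location at the coarsest level, then re-score only the
    children of above-threshold locations at each successively finer level."""
    candidates = {(r, c) for r in range(pyramid[-1].shape[0])
                         for c in range(pyramid[-1].shape[1])}
    for level in reversed(range(len(pyramid))):
        img = pyramid[level]
        candidates = {(r, c) for (r, c) in candidates
                      if score_fn(img, r, c) > threshold}
        if level > 0:  # expand survivors to their four children one level down
            candidates = {(2 * r + dr, 2 * c + dc)
                          for (r, c) in candidates
                          for dr in (0, 1) for dc in (0, 1)}
    return candidates
```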


A Neural Model of Delusions and Hallucinations in Schizophrenia

Neural Information Processing Systems

We implement and study a computational model of Stevens' [1992] theory of the pathogenesis of schizophrenia. This theory hypothesizes that the onset of schizophrenia is associated with reactive synaptic regeneration occurring in brain regions receiving degenerating temporal lobe projections. Concentrating on one such area, the frontal cortex, we model a frontal module as an associative memory neural network whose input synapses represent incoming temporal projections. We analyze how, in the face of weakened external input projections, compensatory strengthening of internal synaptic connections and increased noise levels can maintain memory capacities (which are generally preserved in schizophrenia). However, these compensatory changes adversely lead to spontaneous, biased retrieval of stored memories, which corresponds to the occurrence of schizophrenic delusions and hallucinations without any apparent external trigger and accounts for their tendency to concentrate on just a few central themes. Our results explain why these symptoms tend to wane as schizophrenia progresses, and why delayed therapeutic intervention leads to a much slower response.
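
The compensation mechanism can be pictured with a toy Hopfield-style associative memory. This is a hedged sketch only: the gains, noise level, and update rule are illustrative assumptions, not the model's actual parameters. Scaling down the external projection while scaling up the internal weights and noise preserves cued retrieval, but can also let the network settle into a stored pattern with no external cue at all:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 200, 5
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n))
W = patterns.T @ patterns / n            # internal (Hebbian) connections
np.fill_diagonal(W, 0.0)

def retrieve(external_input, ext_gain, int_gain, noise, steps=30):
    """Retrieval driven by a (possibly weakened) external projection,
    compensated by stronger internal weights and extra noise."""
    s = np.sign(external_input + rng.normal(0.0, noise, n))
    for _ in range(steps):
        field = int_gain * (W @ s) + ext_gain * external_input
        s = np.sign(field + rng.normal(0.0, noise, n))
    return s

cue = patterns[0]
healthy = retrieve(cue, ext_gain=1.0, int_gain=1.0, noise=0.1)
# Weakened external projection, compensated internally: cued recall is kept,
# but even a zero cue can yield strong overlap with some stored memory,
# loosely mirroring the "spontaneous, biased retrieval" described above.
spontaneous = retrieve(np.zeros(n), ext_gain=0.2, int_gain=1.6, noise=0.4)
print((healthy == cue).mean(),
      max(abs(spontaneous @ p) / n for p in patterns))
```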




Template-Based Algorithms for Connectionist Rule Extraction

Neural Information Processing Systems

Casting neural network weights in symbolic terms is crucial for interpreting and explaining the behavior of a network. Additionally, in some domains, a symbolic description may lead to more robust generalization. We present a principled approach to symbolic rule extraction based on the notion of weight templates, parameterized regions of weight space corresponding to specific symbolic expressions. With an appropriate choice of representation, we show how template parameters may be efficiently identified and instantiated to yield the optimal match to a unit's actual weights. Depending on the requirements of the application domain, our method can accommodate arbitrary disjunctions and conjunctions with O(k) complexity, and simple n-of-m expressions with O(k!) complexity.
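
As an illustration of the weight-template idea (a minimal sketch under the assumption that a template is a sign pattern over a unit's inputs with one shared magnitude parameter fit by least squares; this is not the paper's exact procedure), one can instantiate each candidate template's parameter and report the symbolic expression that lies closest to the unit's actual weights:

```python
import numpy as np

def fit_template(weights, sign_pattern):
    """Least-squares fit of a single magnitude p so that p * sign_pattern
    best matches the unit's actual weight vector; returns (p, squared_error)."""
    s = np.asarray(sign_pattern, dtype=float)
    p = float(weights @ s) / float(s @ s)          # optimal shared magnitude
    err = float(np.sum((weights - p * s) ** 2))
    return p, err

def closest_template(weights, templates):
    """Pick the symbolic template whose parameterized region of weight space
    lies nearest (in squared error) to the actual weights."""
    scored = [(name, *fit_template(weights, s)) for name, s in templates.items()]
    return min(scored, key=lambda t: t[2])

# Toy unit with three inputs that roughly computes "x1 AND NOT x2" (x3 unused).
w = np.array([4.1, -3.8, 0.2])
templates = {"x1 AND NOT x2": [1, -1, 0],
             "x1 OR x2":      [1, 1, 0],
             "NOT x1":        [-1, 0, 0]}
print(closest_template(w, templates))
```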


Using a Saliency Map for Active Spatial Selective Attention: Implementation & Initial Results

Neural Information Processing Systems

In many vision-based tasks, the ability to focus attention on the important portions of a scene is crucial for good performance. In this paper we present a simple method of achieving spatial selective attention through the use of a saliency map. The saliency map indicates which regions of the input retina are important for performing the task, and is created through predictive auto-encoding. The performance of this method is demonstrated on two simple tasks which have multiple, very strong distracting features in the input retina. Architectural extensions and application directions for this model are presented. On some tasks irrelevant input can easily be ignored; often, however, the similarity between the important input features and the irrelevant features is great enough to interfere with task performance.
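
One simple way to picture how a saliency map can suppress distracting features is to treat it as a multiplicative gate on the input retina. The sketch below assumes that reading and omits the predictive auto-encoder that actually produces the map:

```python
import numpy as np

def apply_saliency(retina, saliency):
    """Multiplicatively gate the input retina with a saliency map in [0, 1],
    suppressing regions the map marks as unimportant for the task."""
    saliency = np.clip(saliency, 0.0, 1.0)
    return retina * saliency

# Toy 4x4 retina with a strong distractor in the top-left corner.
retina = np.array([[9, 9, 0, 0],
                   [9, 9, 0, 0],
                   [0, 0, 5, 0],
                   [0, 0, 0, 5]], dtype=float)
saliency = np.zeros_like(retina)
saliency[2:, 2:] = 1.0          # only the lower-right region matters here
print(apply_saliency(retina, saliency))
```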


From Data Distributions to Regularization in Invariant Learning

Neural Information Processing Systems

For unbiased models the regularizer reduces to the intuitive form that penalizes the mean squared difference between the network output for transformed and untransformed inputs, i.e., the error in satisfying the desired invariance. In general the regularizer includes a term that measures correlations between the error in fitting the data and the error in satisfying the desired invariance. For infinitesimal transformations, the regularizer is equivalent (up to terms linear in the variance of the transformation parameters) to the tangent prop form given by Simard et al.
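
For concreteness, the unbiased-model case described above can be written schematically as follows (the notation is assumed here, not the paper's exact symbols):

```latex
% Regularizer for the unbiased case: mean squared difference between the
% network output f for an input x and for its transformed version t_theta(x),
% averaged over the data and over the random transformation parameter theta.
\[
  R(f) \;=\; \mathbb{E}_{x}\,\mathbb{E}_{\theta}
  \Bigl[\bigl(f(x) - f(t_{\theta}(x))\bigr)^{2}\Bigr].
\]
% For infinitesimal transformations with \mathbb{E}[\theta] = 0, a first-order
% expansion in \theta gives, loosely,
% R(f) \approx \operatorname{Var}(\theta)\,
%   \mathbb{E}_{x}\Bigl[\bigl(\partial f(t_{\theta}(x))/\partial\theta\big|_{\theta=0}\bigr)^{2}\Bigr],
% a tangent-prop style penalty, consistent with the equivalence stated above.
```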


Single Transistor Learning Synapses

Neural Information Processing Systems

The past few years have produced a number of efforts to design VLSI chips which "learn from experience." The first step toward this goal is developing a silicon analog for a synapse. We have successfully developed such a synapse using only a single transistor.


Learning Saccadic Eye Movements Using Multiscale Spatial Filters

Neural Information Processing Systems

Such sensors realize the simultaneous need for a wide field of view and good visual acuity. One popular class of space-variant sensors is formed by log-polar sensors, which have a small area of greatly increased resolution near the optical axis (the fovea) coupled with a peripheral region whose resolution falls off logarithmically as one moves radially outward. These sensors are inspired by similar structures found in the primate retina, where one finds both a peripheral region of gradually decreasing acuity and a circularly symmetric area centralis characterized by a greater density of receptors and a disproportionate representation in the optic nerve [3]. The peripheral region, though of low visual acuity, is more sensitive to light intensity and movement. The existence of a region optimized for discrimination and recognition surrounded by a region geared towards detection thus allows the image of an object of interest detected in the outer region to be placed on the more analytic center for closer scrutiny. Such a strategy, however, necessitates the existence of (a) methods to determine which location in the periphery to foveate next, and (b) fast gaze-shifting mechanisms to achieve this.
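
To make the resolution falloff concrete, here is a minimal numerical sketch of the log-polar idea (the constants and function name are illustrative, not the sensor geometry used in the paper): rings are spaced uniformly inside the fovea and grow geometrically outside it, so peripheral resolution drops logarithmically with eccentricity:

```python
import numpy as np

def log_polar_ring(eccentricity, fovea_radius=5.0, growth=1.2):
    """Map radial distance from the optical axis to a sensor ring index:
    roughly one ring per unit radius inside the fovea, then rings that grow
    geometrically (logarithmic resolution falloff) in the periphery.
    All constants are illustrative assumptions."""
    e = np.asarray(eccentricity, dtype=float)
    fovea_rings = np.minimum(e, fovea_radius)
    periphery = np.log(np.maximum(e / fovea_radius, 1.0)) / np.log(growth)
    return fovea_rings + periphery

# Doubling the eccentricity in the periphery adds only a constant number of
# rings, capturing the wide-field-of-view / high-foveal-acuity trade-off.
print(log_polar_ring([2.0, 5.0, 10.0, 20.0, 40.0]).round(1))
```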