Theory of Self-Organization of Cortical Maps

Neural Information Processing Systems

We have shown mathematically that cortical maps in the primary sensory cortices can be reproduced using three hypotheses that have a physiological basis and meaning. Here, our main focus is on ocular dominance.


Associative Learning via Inhibitory Search

Neural Information Processing Systems

ALVIS is a reinforcement-based connectionist architecture that learns associative maps in continuous multidimensional environments. The discovered locations of positive and negative reinforcements are recorded in "do be" and "don't be" subnetworks, respectively. The outputs of the subnetworks relevant to the current goal are combined and compared with the current location to produce an error vector. This vector is backpropagated through a motor-perceptual mapping network.
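The error-vector idea in this abstract can be sketched in a few lines. This is a hypothetical illustration only (the function and variable names are not from the paper): "do be" locations attract the agent and "don't be" locations repel it, and the combined result is compared with the current location.

```python
import numpy as np

# Illustrative sketch of an ALVIS-style error vector (names are assumptions,
# not the paper's API): recorded "do be" locations pull toward them,
# "don't be" locations push away.

def error_vector(current, do_be_targets, dont_be_targets):
    """Combine goal-relevant subnetwork outputs and compare them with the
    current location to produce an error vector (illustrative only)."""
    err = np.zeros_like(current)
    for t in do_be_targets:       # move toward recorded positive reinforcements
        err += t - current
    for t in dont_be_targets:     # move away from recorded negative reinforcements
        err -= t - current
    return err

e = error_vector(np.array([0.0, 0.0]),
                 [np.array([1.0, 0.0])],   # "do be" location
                 [np.array([0.0, 1.0])])   # "don't be" location
# e points toward the positive location and away from the negative one
```

In the architecture described above, such an error vector would then be backpropagated through the motor-perceptual mapping network rather than applied directly.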



Simulation and Measurement of the Electric Fields Generated by Weakly Electric Fish

Neural Information Processing Systems

The weakly electric fish, Gnathonemus petersii, explores its environment by generating pulsed electric fields and detecting small perturbations in the fields resulting from nearby objects. Accordingly, the fish detects and discriminates objects on the basis of a sequence of electric "images" whose temporal and spatial properties depend on the timing of the fish's electric organ discharge and its body position relative to objects in its environment. We are interested in investigating how these fish utilize timing and body position during exploration to aid in object discrimination. We have developed a finite-element simulation of the fish's self-generated electric fields so as to reconstruct the electrosensory consequences of body position and electric organ discharge timing in the fish. This paper describes this finite-element simulation system and presents preliminary electric field measurements which are being used to tune the simulation.


Storing Covariance by the Associative Long-Term Potentiation and Depression of Synaptic Strengths in the Hippocampus

Neural Information Processing Systems

We have tested this assumption in the hippocampus, a cortical structure of the brain that is involved in long-term memory. A brief, high-frequency activation of excitatory synapses in the hippocampus produces an increase in synaptic strength known as long-term potentiation, or LTP (Bliss and Lomo, 1973), that can last for many days. LTP is known to be Hebbian since it requires the simultaneous release of neurotransmitter from presynaptic terminals coupled with postsynaptic depolarization (Kelso et al., 1986; Malinow and Miller, 1986; Gustafsson et al., 1987). However, a mechanism for the persistent reduction of synaptic strength that could balance LTP has not yet been demonstrated. We studied the associative interactions between separate inputs onto the same dendritic trees of hippocampal pyramidal cells of field CA1, and found that a low-frequency input which, by itself, does not persistently change synaptic strength, can either increase (associative LTP) or decrease in strength (associative long-term depression or LTD) depending upon whether it is positively or negatively correlated in time with a second, high-frequency bursting input. LTP of synaptic strength is Hebbian, and LTD is anti-Hebbian since it is elicited by pairing presynaptic firing with postsynaptic hyperpolarization sufficient to block postsynaptic activity.
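The title's notion of "storing covariance" can be illustrated with a minimal covariance-style learning rule, consistent with the abstract's pairing results: positively correlated pre- and postsynaptic activity strengthens a synapse (LTP), negatively correlated activity weakens it (LTD). This is a textbook-style sketch, not the paper's model; all names and parameters are illustrative.

```python
import numpy as np

# Minimal covariance learning rule (illustrative sketch, not the paper's
# model): the weight change is proportional to the covariance between
# pre- and postsynaptic activity across trials.

def covariance_update(w, pre, post, lr=0.1):
    """Return the updated weight; positive covariance -> LTP, negative -> LTD."""
    dpre = pre - pre.mean()       # deviation from mean presynaptic activity
    dpost = post - post.mean()    # deviation from mean postsynaptic activity
    return w + lr * np.mean(dpre * dpost)

pre  = np.array([1.0, 0.0, 1.0, 0.0])   # presynaptic firing over trials
post = np.array([1.0, 0.0, 1.0, 0.0])   # positively correlated activity
anti = np.array([0.0, 1.0, 0.0, 1.0])   # negatively correlated activity

w_ltp = covariance_update(0.5, pre, post)   # strengthened (LTP-like)
w_ltd = covariance_update(0.5, pre, anti)   # weakened (LTD-like)
```

The sign of the update mirrors the abstract's finding: pairing with correlated high-frequency input strengthens, pairing with anticorrelated input weakens.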


An Analog VLSI Chip for Thin-Plate Surface Interpolation

Neural Information Processing Systems

Reconstructing a surface from sparse sensory data is a well-known problem in computer vision. This paper describes an experimental analog VLSI chip for smooth surface interpolation from sparse depth data. An eight-node 1D network was designed in 3 μm CMOS and successfully tested.
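A rough software analogue of what such a network computes: 1D thin-plate interpolation minimizes a bending energy (squared second differences) while softly fitting sparse depth samples. The sketch below uses plain gradient descent; the parameters and structure are assumptions for illustration, not the chip's circuit.

```python
import numpy as np

# Software sketch of 1D thin-plate interpolation (illustrative, not the
# chip's implementation): minimize sum of squared second differences plus
# a soft data-fitting penalty at the sparse sample nodes.

def thin_plate_1d(n, data, lam=1.0, lr=0.05, steps=20000):
    """data maps node index -> depth sample; returns the relaxed surface."""
    x = np.zeros(n)
    for _ in range(steps):
        s = np.zeros(n)
        s[1:-1] = x[:-2] - 2 * x[1:-1] + x[2:]   # second differences
        g = np.zeros(n)                          # gradient of bending energy
        g[:-2] += 2 * s[1:-1]
        g[1:-1] += -4 * s[1:-1]
        g[2:] += 2 * s[1:-1]
        for i, d in data.items():                # pull toward depth samples
            g[i] += 2 * lam * (x[i] - d)
        x -= lr * g
    return x

# eight nodes, two depth samples: the minimum-bending surface is a line
surface = thin_plate_1d(8, {0: 0.0, 7: 7.0})
```

With only two samples the bending energy is zero for any straight line, so the relaxation settles on the line through the samples; denser or noisier data would trade smoothness against fit via `lam`.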


A Network for Image Segmentation Using Color

Neural Information Processing Systems

Otherwise it might ascribe different characteristics to the same object under different lights. But the first step in using color for recognition, segmenting the scene into regions of different colors, does not require color constancy.


Learning Sequential Structure in Simple Recurrent Networks

Neural Information Processing Systems

The network uses the pattern of activation over a set of hidden units from time-step t-1, together with element t, to predict element t+1. When the network is trained with strings from a particular finite-state grammar, it can learn to be a perfect finite-state recognizer for the grammar. Cluster analyses of the hidden-layer patterns of activation showed that they encode prediction-relevant information about the entire path traversed through the network. We illustrate the phases of learning with cluster analyses performed at different points during training. Several connectionist architectures that are explicitly constrained to capture sequential information have been developed. Examples are Time Delay Networks (e.g.
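The recurrence described here (hidden state from t-1 fed back alongside element t to predict element t+1) can be sketched as a single Elman-style step. The weights below are random and the dimensions are illustrative assumptions; this shows the data flow, not a trained recognizer.

```python
import numpy as np

# One step of a simple recurrent network (illustrative sketch): the
# previous hidden state acts as context for predicting the next element.

rng = np.random.default_rng(0)
n_in, n_hid = 5, 3                      # one-hot grammar symbols, hidden units

W_ih = rng.normal(size=(n_hid, n_in))   # input -> hidden
W_hh = rng.normal(size=(n_hid, n_hid))  # context (hidden at t-1) -> hidden
W_ho = rng.normal(size=(n_in, n_hid))   # hidden -> prediction of element t+1

def step(x_t, h_prev):
    """Consume element t with context h_prev; return new state and prediction."""
    h = np.tanh(W_ih @ x_t + W_hh @ h_prev)
    p = np.exp(W_ho @ h)
    p /= p.sum()                        # softmax over possible next elements
    return h, p

h = np.zeros(n_hid)
for sym in [0, 2, 1]:                   # a short symbol sequence
    x = np.zeros(n_in)
    x[sym] = 1.0
    h, p = step(x, h)                   # p predicts the next element
```

Because the hidden state is a function of the whole preceding sequence, it can come to encode the path-dependent information the cluster analyses reveal.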


Neural Networks for Model Matching and Perceptual Organization

Neural Information Processing Systems

We introduce an optimization approach for solving problems in computer vision that involve multiple levels of abstraction. Our objective functions include compositional and specialization hierarchies. We cast vision problems as inexact graph matching problems, formulate graph matching in terms of constrained optimization, and use analog neural networks to perform the optimization. The method is applicable to perceptual grouping and model matching. Preliminary experimental results are shown.
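Graph matching as constrained optimization can be sketched in the relaxation style this line of work uses: replace the hard permutation constraint with a doubly stochastic match matrix, ascend a compatibility objective, and renormalize rows and columns. This is a generic softassign-flavored sketch under assumed details, not the paper's network.

```python
import numpy as np

# Illustrative graph-matching relaxation (not the paper's formulation):
# maximize sum_ij M_ij (A M B)_ij over doubly stochastic M via
# exponentiated-gradient updates plus Sinkhorn row/column normalization.

def match_graphs(A, B, steps=200, lr=0.5):
    n = len(A)
    M = np.full((n, n), 1.0 / n)            # uniform initial match matrix
    for _ in range(steps):
        M = M * np.exp(lr * (A @ M @ B))    # ascend edge-compatibility objective
        for _ in range(20):                 # Sinkhorn: rows/cols sum to 1
            M /= M.sum(axis=1, keepdims=True)
            M /= M.sum(axis=0, keepdims=True)
    return M

# triangle 0-1-2 with pendant node 3, and the same graph relabeled
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], float)
B = np.array([[0, 0, 1, 1],
              [0, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], float)   # relabeling: 0->2, 1->0, 2->3, 3->1

M = match_graphs(A, B)
# the degree-3 and degree-1 nodes are matched unambiguously
```

The doubly stochastic relaxation makes the matching differentiable, which is what lets an analog network settle toward a good correspondence rather than searching permutations discretely.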


Electronic Receptors for Tactile/Haptic Sensing

Neural Information Processing Systems

We discuss synthetic receptors for haptic sensing. These are based on magnetic field sensors (Hall effect structures) fabricated using standard CMOS technologies.