A Network Mechanism for the Determination of Shape-From-Texture

Neural Information Processing Systems

We propose a computational model for how the cortex discriminates shape and depth from texture. The model consists of four stages: (1) extraction of local spatial frequency, (2) frequency characterization, (3) detection of texture compression by normalization, and (4) integration of the normalized frequency over space. The model accounts for a number of psychophysical observations, including experiments based on novel random textures. These textures are generated from white noise and manipulated in the Fourier domain to produce specific frequency spectra. Simulations with a range of stimuli, including real images, show qualitative and quantitative agreement with human perception.

1 INTRODUCTION

There are several physical cues to shape and depth which arise from changes in projection as a surface curves away from view or recedes in perspective.
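To make the four-stage pipeline concrete, here is a minimal software sketch of it: local spatial-frequency extraction from a windowed power spectrum, normalization of the local frequency against the image mean as a texture-compression cue, and spatial integration into a relative-depth map. The function names, window sizes, and the cumulative-sum integration are illustrative assumptions, not the authors' network implementation.

```python
import numpy as np

def dominant_frequency(patch):
    """Stages 1-2 (sketch): dominant radial spatial frequency of a patch,
    estimated as the peak of its power spectrum (DC excluded)."""
    patch = patch - patch.mean()
    power = np.abs(np.fft.fft2(patch)) ** 2
    power[0, 0] = 0.0                                  # drop any residual DC
    fy = np.fft.fftfreq(patch.shape[0])[:, None]
    fx = np.fft.fftfreq(patch.shape[1])[None, :]
    radius = np.hypot(fy, fx)                          # radial frequency grid
    iy, ix = np.unravel_index(np.argmax(power), power.shape)
    return radius[iy, ix]

def shape_from_texture(image, window=32, step=16):
    """Stages 3-4 (sketch): normalize the local frequency by its image-wide
    mean (texture compression) and integrate it over space into a
    relative-depth map."""
    h, w = image.shape
    fmap = np.array([[dominant_frequency(image[y:y + window, x:x + window])
                      for x in range(0, w - window + 1, step)]
                     for y in range(0, h - window + 1, step)])
    compression = fmap / (fmap.mean() + 1e-12)         # stage 3: normalization
    return np.cumsum(compression - 1.0, axis=0)        # stage 4: integration

# Toy usage: a texture whose spatial frequency increases down the image
# reads as a surface receding in depth.
yy = np.linspace(0, 1, 256)[:, None]
xx = np.linspace(0, 1, 256)[None, :]
texture = np.sin(2 * np.pi * (10 + 30 * yy) * xx)
print(shape_from_texture(texture).shape)
```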


Implementing Intelligence on Silicon Using Neuron-Like Functional MOS Transistors

Neural Information Processing Systems

We present the implementation of intelligent electronic circuits realized for the first time using a new functional device, the Neuron MOS Transistor (neuMOS or vMOS for short), which simulates the behavior of biological neurons at the single-transistor level. A search for the data most closely resembling a given input in the memory cell array, for instance, can be carried out automatically in hardware without any software manipulation. Soft Hardware, as we have named it, can change its logic function arbitrarily in real time under external control signals, without any hardware modification. Implementation of a neural network equipped with an on-chip self-learning capability is also described. Through these studies of vMOS intelligent circuit implementation, we noticed an interesting similarity between the architectures of vMOS logic circuitry and biological systems.
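The associative "most resembling data" search described above can be summarized functionally in a few lines of software. The sketch below assumes binary stored words and a Hamming-distance winner-take-all; on the actual chip this selection is performed directly by the vMOS analog hardware, not by code like this.

```python
import numpy as np

def best_match(memory_words, query):
    """Functional sketch of an associative best-match search: return the
    stored word most resembling the query. Hamming distance over bit
    vectors is an assumption made for illustration."""
    memory_words = np.asarray(memory_words, dtype=np.uint8)
    query = np.asarray(query, dtype=np.uint8)
    distances = np.count_nonzero(memory_words != query, axis=1)
    winner = int(np.argmin(distances))
    return winner, memory_words[winner]

# Toy usage: four 8-bit words, find the one closest to a noisy query.
words = [[1, 0, 1, 1, 0, 0, 1, 0],
         [0, 1, 1, 0, 1, 0, 0, 1],
         [1, 1, 0, 0, 1, 1, 0, 0],
         [0, 0, 0, 1, 1, 1, 1, 1]]
print(best_match(words, [1, 0, 1, 1, 0, 1, 1, 0]))
```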


Directional Hearing by the Mauthner System

Neural Information Processing Systems

We provide a computational description of the function of the Mauthner system. This is the brainstem circuit that initiates fast-start escapes in teleost fish in response to sounds. Our simulations, using backpropagation in a realistically constrained feedforward network, have generated hypotheses which are directly interpretable in terms of the activity of the auditory nerve fibers, the principal cells of the system, and their associated inhibitory neurons.

1 INTRODUCTION

1.1 THE MAUTHNER SYSTEM

Much is known about the brainstem system that controls fast-start escapes in teleost fish. The most prominent feature of this network is the pair of large Mauthner cells, whose axons cross the midline and descend down the spinal cord to synapse on primary motoneurons. The Mauthner system also includes inhibitory neurons, the PHP cells, which have a unique and intense field-effect inhibition at the spike-initiating zone of the Mauthner cells (Faber and Korn, 1978). The Mauthner system is part of the full brainstem escape network, which also includes two pairs of cells homologous to the Mauthner cell and other populations of reticulospinal neurons. With this network, fish initiate escapes only from appropriate stimuli, turn away from the offending stimulus, and do so very rapidly, with a latency around 15 msec in goldfish.
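As a rough illustration of the kind of simulation described in the abstract, the sketch below trains a tiny feedforward network with plain backpropagation to map bilateral "auditory nerve" activity onto a left/right escape command. The layer sizes, input coding, and targets are illustrative assumptions and do not reproduce the realistically constrained architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 inputs (2 per ear), 8 hidden units, 2 output units.
W1 = rng.normal(0, 0.5, (4, 8))
W2 = rng.normal(0, 0.5, (8, 2))

def forward(x):
    h = np.tanh(x @ W1)                                # hidden layer
    y = 1.0 / (1.0 + np.exp(-(h @ W2)))                # sigmoid outputs
    return h, y

# Synthetic training set: whichever side receives the stronger summed drive
# should activate the output unit coding a turn away from that side
# (an illustrative coding, not the paper's).
X = rng.uniform(0, 1, (200, 4))
louder_left = X[:, :2].sum(axis=1) > X[:, 2:].sum(axis=1)
T = np.stack([~louder_left, louder_left], axis=1).astype(float)

lr = 0.5
for _ in range(2000):
    h, y = forward(X)
    delta_out = (y - T) * y * (1 - y)                  # output error signal
    delta_hid = delta_out @ W2.T * (1 - h ** 2)        # backpropagated error
    W2 -= lr * h.T @ delta_out / len(X)
    W1 -= lr * X.T @ delta_hid / len(X)

_, y = forward(X)
print("training accuracy:", (y.argmax(axis=1) == T.argmax(axis=1)).mean())
```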


Processing of Visual and Auditory Space and Its Modification by Experience

Neural Information Processing Systems

Visual spatial information is projected from the retina to the brain in a highly topographic fashion, so that 2-D visual space is represented in a simple retinotopic map. Auditory spatial information, by contrast, has to be computed from binaural time and intensity differences as well as from monaural spectral cues produced by the head and ears. Evaluation of these cues in the central nervous system leads to the generation of neurons that are sensitive to the location of a sound source in space ("spatial tuning") and, in some animal species, to auditory space maps in which spatial location is encoded as a 2-D map, just as in the visual system. The brain structures thought to be involved in the multimodal integration of visual and auditory spatial information are the superior colliculus in the midbrain and the inferior parietal lobe in the cerebral cortex. It has been suggested for the owl that the visual system participates in setting up the auditory space map in the superior colliculus.
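As a toy illustration of the binaural cues mentioned above, the sketch below estimates a sound source's azimuth from the interaural time difference recovered by cross-correlating the two ear signals. The head radius, the ITD-to-angle formula, and the synthetic click are textbook simplifications, not part of the work summarized here.

```python
import numpy as np

def azimuth_from_itd(left, right, fs, head_radius=0.09, c=343.0):
    """Estimate azimuth from the interaural time difference (ITD): find the
    lag that best aligns the two ear signals, then invert the simple
    ITD = (d / c) * sin(theta) approximation (d = interaural distance)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)           # samples; > 0: right ear leads
    itd = lag / fs
    d = 2 * head_radius
    return np.degrees(np.arcsin(np.clip(itd * c / d, -1.0, 1.0)))

# Usage: a click from the right reaches the right ear ~0.3 ms before the left.
fs = 44100
t = np.arange(0, 0.02, 1 / fs)
click = np.exp(-1e4 * t) * np.sin(2 * np.pi * 2000 * t)
delay = int(0.0003 * fs)
right = np.concatenate([click, np.zeros(delay)])
left = np.concatenate([np.zeros(delay), click])
print(round(azimuth_from_itd(left, right, fs), 1), "degrees to the right")
```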


Connectionist Modeling and Parallel Architectures

Neural Information Processing Systems

Simulators such as the Rochester Connectionist Simulator (University of Rochester) and ICSIM (ICSI Berkeley) allow the definition of unit types and complex connectivity patterns. On a very high level of abstraction, simulators like tlearn (UCSD) allow the easy realization of predefined network architectures (feedforward networks) and learning algorithms such as backpropagation. Ben Gomes, International Computer Science Institute (Berkeley), introduced the Connectionist Network Supercomputer, CNS-1. The CNS-1 is a multiprocessor system designed for the moderate-precision fixed-point operations used extensively in connectionist network calculations. Custom VLSI digital processors employ an on-chip vector coprocessor unit tailored for neural network calculations and controlled by a RISC scalar CPU. One processor and its associated commercial DRAM comprise a node, which is connected in a mesh topology with other nodes to form a MIMD array. One edge of the communications mesh is reserved for attaching various I/O devices, which connect via a custom network adaptor chip. The CNS-1 operates as a compute server, and one I/O port is used for connecting to a host workstation. Users with mainstream connectionist applications can use CNSim, an object-oriented, graphical high-level interface to the CNS-1 environment.


A Local Algorithm to Learn Trajectories with Stochastic Neural Networks

Neural Information Processing Systems

This paper presents a simple algorithm for learning trajectories with a continuous-time, continuous-activation version of the Boltzmann machine. The algorithm takes advantage of the intrinsic Brownian noise in the network to compute gradients using entirely local computations, which may make it ideal for parallel hardware implementations. The algorithm trains continuous stochastic networks to respond with desired trajectories in the output units to environmental input trajectories, a task with potential applications to a variety of problems such as stochastic modeling of neural processes, artificial motor control, and continuous speech recognition.
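The sketch below illustrates the flavor of such a local, noise-driven learning rule: a continuous-time noisy network whose weight update correlates each unit's own noise with the output-trajectory error. The dynamics, noise model, and update rule are simplifying assumptions for illustration only and are not the paper's algorithm; the point is only that the update uses each unit's own noise and activity together with the output error.

```python
import numpy as np

rng = np.random.default_rng(1)

n_units, T, dt, sigma, lr = 3, 200, 0.05, 0.1, 0.02
W = rng.normal(0, 0.3, (n_units, n_units))
target = np.sin(np.linspace(0, 4 * np.pi, T))          # desired output trajectory

for epoch in range(300):
    x = np.zeros(n_units)
    dW = np.zeros_like(W)
    mse = 0.0
    for t in range(T):
        noise = rng.normal(0, sigma, n_units)
        x = x + dt * (-x + np.tanh(W @ x)) + np.sqrt(dt) * noise   # noisy dynamics
        err = x[0] - target[t]                         # unit 0 is the output unit
        mse += err ** 2 / T
        dW += -err * np.outer(noise, x)                # noise-error correlation
    W += lr * dW / T

print("last-epoch MSE:", mse)
```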


Signature Verification using a "Siamese" Time Delay Neural Network

Neural Information Processing Systems

The aim of the project was to build a signature verification system based on the NCR 5990 Signature Capture Device (a pen-input tablet) that uses 80 bytes or less for signature feature storage, so that the features can be stored on the magnetic stripe of a credit card. Verification using a digitizer such as the 5990, which generates spatial coordinates as a function of time, is known as dynamic verification. Much research has been carried out on signature verification.
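A minimal software sketch of the "Siamese" comparison is given below: one shared-weight time-delay (1-D convolutional) feature extractor is applied to both the enrolled and the presented signature, and verification thresholds the distance between the two resulting feature vectors. The random untrained weights, the sizes, and the cosine distance are illustrative assumptions, not the paper's trained network or feature format.

```python
import numpy as np

rng = np.random.default_rng(2)

KERNEL = rng.normal(0, 1, (8, 2, 5))   # 8 feature maps, 2 channels (x, y), width 5

def tdnn_features(strokes):
    """strokes: (T, 2) array of pen (x, y) samples -> fixed-size feature vector."""
    x = np.asarray(strokes, float).T                   # (2, T)
    maps = []
    for k in KERNEL:                                   # time-delay (1-D conv) layer
        out = sum(np.convolve(x[c], k[c], mode="valid") for c in range(2))
        maps.append(np.tanh(out).mean())               # average pool over time
    f = np.array(maps)
    return f / (np.linalg.norm(f) + 1e-12)

def verify(enrolled, presented, threshold=0.2):
    """Accept if the cosine distance between the two feature vectors is small."""
    d = 1.0 - tdnn_features(enrolled) @ tdnn_features(presented)
    return d, d < threshold

# Usage with synthetic pen trajectories.
t = np.linspace(0, 1, 300)
genuine = np.stack([np.sin(6 * t), np.cos(4 * t)], axis=1)
forgery = np.stack([np.sin(6 * t + 0.8), np.cos(7 * t)], axis=1)
print(verify(genuine, genuine + 0.01 * rng.normal(size=genuine.shape)))
print(verify(genuine, forgery))
```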


Optimal Signalling in Attractor Neural Networks

Neural Information Processing Systems

It is well known that a given cortical neuron can respond with a different firing pattern to the same synaptic input, depending on its firing history and on the effects of modulatory transmitters (see [Connors and Gutnick, 1990] for a review). The time span of different channel conductances is very broad, and the influence of some ionic currents varies with the history of the membrane potential [Lytton, 1991]. Motivated by the history-dependent nature of neuronal firing, we continue our ...


Feature Densities are Required for Computing Feature Correspondences

Neural Information Processing Systems

The feature correspondence problem is a classic hurdle in visual object recognition: determining the correct mapping between the features measured from the image and the features expected by the model. In this paper we show that determining good correspondences requires information about the joint probability density over the image features. We propose "likelihood-based correspondence matching" as a general principle for selecting optimal correspondences. The approach is applicable to nonrigid models, allows nonlinear perspective transformations, and can optimally deal with occlusions and missing features.
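A minimal sketch of the idea, under an assumed Gaussian feature density and brute-force enumeration of candidate assignments (neither of which is claimed to be the paper's procedure), is shown below: each candidate correspondence is scored by the joint log-likelihood of the matched image features, and the highest-scoring one is kept.

```python
import numpy as np
from itertools import permutations

def log_gaussian(x, mean, cov):
    """Log density of a multivariate Gaussian (the assumed feature density)."""
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff)
                   + logdet + len(mean) * np.log(2 * np.pi))

def best_correspondence(image_feats, model_means, model_cov):
    """Enumerate candidate assignments of image features to model features
    and keep the one with the highest joint log-likelihood. Independence
    across features and brute-force search are simplifying assumptions."""
    n = len(model_means)
    best, best_ll = None, -np.inf
    for perm in permutations(range(len(image_feats)), n):
        ll = sum(log_gaussian(image_feats[i], model_means[j], model_cov)
                 for j, i in enumerate(perm))
        if ll > best_ll:
            best, best_ll = perm, ll
    return best, best_ll

# Toy usage: three model features in 2-D, four measured image features
# (one of them clutter); the matcher picks the most likely assignment.
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
cov = 0.05 * np.eye(2)
image = np.array([[0.05, -0.02], [0.9, 0.1], [5.0, 5.0], [-0.1, 1.05]])
print(best_correspondence(image, model, cov))
```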


An Analog VLSI Model of Central Pattern Generation in the Leech

Neural Information Processing Systems

The biological network is small and relatively well understood, and the silicon model can therefore span three levels of organization in the leech nervous system (neuron, ganglion, system); it represents one of the first comprehensive models of leech swimming operating in real time. The circuit employs biophysically motivated analog neurons networked to form multiple biologically inspired silicon ganglia. These ganglia are coupled using known interganglionic connections. Thus the model retains the flavor of its biological counterpart, and though simplified, the output of the silicon circuit is similar to the output of the leech swim central pattern generator. The model operates on the same time and spatial scales as the leech nervous system and will provide an excellent platform with which to explore real-time adaptive locomotion in the leech and other "simple" invertebrate nervous systems.
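For intuition about what the coupled ganglia produce, the sketch below uses a chain of phase oscillators with nearest-neighbor coupling and a preferred phase lag, a standard software abstraction of a swim central pattern generator. It generates a head-to-tail traveling wave of activity; it is not a model of the analog VLSI circuit itself, and all constants are assumptions.

```python
import numpy as np

def swim_cpg(n_ganglia=8, steps=4000, dt=0.001,
             omega=2 * np.pi * 1.5, coupling=4.0, lag=np.pi / 6):
    """Chain of phase oscillators standing in for coupled ganglia: nearest-
    neighbor coupling with a preferred lag yields a head-to-tail traveling
    wave, the hallmark of the swim rhythm."""
    phases = np.random.default_rng(3).uniform(0, 2 * np.pi, n_ganglia)
    history = np.empty((steps, n_ganglia))
    for t in range(steps):
        dphi = np.full(n_ganglia, omega)
        dphi[1:] += coupling * np.sin(phases[:-1] - phases[1:] - lag)   # descending
        dphi[:-1] += coupling * np.sin(phases[1:] - phases[:-1] + lag)  # ascending
        phases = phases + dt * dphi
        history[t] = np.sin(phases)        # motor-like output of each "ganglion"
    return history

activity = swim_cpg()
# Peak times in the final second of simulation; successive ganglia peak
# progressively later (modulo one cycle), i.e. the rhythm travels down the chain.
print(activity[-1000:].argmax(axis=0))
```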