A Network Mechanism for the Determination of Shape-From-Texture
We propose a computational model for how the cortex discriminates shape and depth from texture. The model consists of four stages: (1) extraction of local spatial frequency, (2) frequency characterization, (3) detection of texture compression by normalization, and (4) integration of the normalized frequency over space. The model accounts for a number of psychophysical observations, including experiments based on novel random textures. These textures are generated from white noise and manipulated in the Fourier domain to produce specific frequency spectra. Simulations with a range of stimuli, including real images, show qualitative and quantitative agreement with human perception.
1 INTRODUCTION
There are several physical cues to shape and depth which arise from changes in projection as a surface curves away from view, or recedes in perspective.
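As a rough illustration of the kind of stimuli described in the abstract above, the sketch below generates a random texture by shaping the spectrum of white noise with a radial Gaussian band-pass filter in the Fourier domain. The filter parameters and image size are arbitrary choices, not values from the paper.

```python
import numpy as np

def bandpass_texture(size=256, f_center=0.12, bandwidth=0.04, seed=0):
    """Shape the spectrum of white noise with a radial Gaussian band-pass
    filter and return the resulting random texture."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))

    # Radial spatial-frequency coordinate (cycles per pixel).
    fx = np.fft.fftfreq(size)
    fy = np.fft.fftfreq(size)
    radius = np.hypot(*np.meshgrid(fx, fy, indexing="ij"))

    # Gaussian annulus centred on the desired frequency band.
    band = np.exp(-0.5 * ((radius - f_center) / bandwidth) ** 2)

    spectrum = np.fft.fft2(noise) * band
    texture = np.real(np.fft.ifft2(spectrum))
    return texture / texture.std()               # normalise contrast

texture = bandpass_texture()
```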
Directional Hearing by the Mauthner System
Guzik, Audrey L., Eaton, Robert C.
Eaton, E. P. O. Biology, University of Colorado, Boulder, CO 80309. Abstract: We provide a computational description of the function of the Mauthner system. This is the brainstem circuit which initiates fast-start escapes in teleost fish in response to sounds. Our simulations, using backpropagation in a realistically constrained feedforward network, have generated hypotheses which are directly interpretable in terms of the activity of the auditory nerve fibers, the principal cells of the system, and their associated inhibitory neurons.
1 INTRODUCTION
1.1 THE MAUTHNER SYSTEM
Much is known about the brainstem system that controls fast-start escapes in teleost fish. The most prominent feature of this network is the pair of large Mauthner cells, whose axons cross the midline and descend down the spinal cord to synapse on primary motoneurons. The Mauthner system also includes inhibitory neurons, the PHP cells, which have a unique and intense field-effect inhibition at the spike-initiating zone of the Mauthner cells (Faber and Korn, 1978). The Mauthner system is part of the full brainstem escape network, which also includes two pairs of cells homologous to the Mauthner cell and other populations of reticulospinal neurons. With this network, fish initiate escapes only from appropriate stimuli, turn away from the offending stimulus, and do so very rapidly, with a latency around 15 msec in goldfish.
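The abstract does not specify the architecture or training setup, so the following is only a generic sketch of the ingredient it names: backpropagation in a small feedforward network on a toy left/right sound-direction task. The input encoding, layer sizes, and labelling rule are all hypothetical, not those of the Mauthner-system model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy task: 4 inputs standing in for left/right auditory-nerve
# cues, 2 outputs standing in for the left and right Mauthner cells.
X = rng.uniform(-1, 1, size=(200, 4))
louder_left = X[:, 0] > X[:, 1]                  # toy rule, not the physiology
Y = np.stack([louder_left, ~louder_left], axis=1).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0, 0.5, size=(4, 8))             # input -> hidden weights
W2 = rng.normal(0, 0.5, size=(8, 2))             # hidden -> output weights
lr = 0.5

for epoch in range(2000):
    H = sigmoid(X @ W1)                          # hidden layer
    O = sigmoid(H @ W2)                          # output layer
    err = O - Y
    # Backpropagation of the squared error through both layers.
    dO = err * O * (1 - O)
    dH = (dO @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dO / len(X)
    W1 -= lr * X.T @ dH / len(X)
```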
Processing of Visual and Auditory Space and Its Modification by Experience
Rauschecker, Josef P., Sejnowski, Terrence J.
Sejnowski, Computational Neurobiology Lab, The Salk Institute, San Diego, CA 92138. Visual spatial information is projected from the retina to the brain in a highly topographic fashion, so that 2-D visual space is represented in a simple retinotopic map. Auditory spatial information, by contrast, has to be computed from binaural time and intensity differences as well as from monaural spectral cues produced by the head and ears. Evaluation of these cues in the central nervous system leads to the generation of neurons that are sensitive to the location of a sound source in space ("spatial tuning") and, in some animal species, to auditory space maps in which spatial location is encoded as a 2-D map, just as in the visual system. The brain structures thought to be involved in the multimodal integration of visual and auditory spatial information are the superior colliculus in the midbrain and the inferior parietal lobe in the cerebral cortex. It has been suggested for the owl that the visual system participates in setting up the auditory space map in the superior colliculus.
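As a concrete illustration of one of the binaural cues mentioned above, the sketch below estimates an interaural time difference by cross-correlating the two ear signals. This is the generic textbook method, not a mechanism proposed in the paper, and the sampling rate and delay are arbitrary.

```python
import numpy as np

def interaural_time_difference(left, right, fs):
    """Return the lag (in seconds) by which the right-ear signal lags the
    left-ear signal, found at the peak of the cross-correlation."""
    corr = np.correlate(right, left, mode="full")
    lags = np.arange(-len(left) + 1, len(right))
    return lags[np.argmax(corr)] / fs

# Toy usage: a broadband sound reaching the right ear ~0.3 ms later.
rng = np.random.default_rng(0)
fs = 44100
d = int(0.0003 * fs)                       # ~13 samples of delay
src = rng.standard_normal(4000)
left, right = src[d:], src[:len(src) - d]
print(interaural_time_difference(left, right, fs))   # approx. 0.0003 s
```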
A Local Algorithm to Learn Trajectories with Stochastic Neural Networks
This paper presents a simple algorithm to learn trajectories with a continuous-time, continuous-activation version of the Boltzmann machine. The algorithm takes advantage of intrinsic Brownian noise in the network to compute gradients using entirely local computations, and may therefore be well suited to parallel hardware implementations. This paper presents a learning algorithm to train continuous stochastic networks to respond with desired trajectories in the output units to environmental input trajectories. This task has potential applications to a variety of problems, such as stochastic modeling of neural processes, artificial motor control, and continuous speech recognition.
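The abstract does not spell out the learning rule itself, so the sketch below only illustrates the kind of dynamics involved: a continuous-time network of sigmoidal units driven by Brownian noise, integrated with the Euler-Maruyama method while an input trajectory drives one unit. The gradient computation is omitted, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units, dt, T = 6, 0.01, 5.0
steps = int(T / dt)
W = rng.normal(0, 0.5, size=(n_units, n_units))  # recurrent weights
tau = 0.1                                        # unit time constant
sigma = 0.2                                      # Brownian-noise amplitude

x = np.zeros(n_units)                            # internal states
trajectory = np.zeros((steps, n_units))

for t in range(steps):
    u = np.tanh(x)                               # continuous activations
    inp = np.sin(2 * np.pi * 0.5 * t * dt)       # toy environmental input trajectory
    drift = (-x + W @ u) / tau
    drift[0] += inp / tau                        # the input drives the first unit
    # Euler-Maruyama step: deterministic drift plus a Brownian increment.
    x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_units)
    trajectory[t] = np.tanh(x)
```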
Signature Verification using a "Siamese" Time Delay Neural Network
Bromley, Jane, Guyon, Isabelle, LeCun, Yann, Säckinger, Eduard, Shah, Roopak
The aim of the project was to build a signature verification system based on the NCR 5990 Signature Capture Device (a pen-input tablet), using 80 bytes or less for signature feature storage so that the features can be stored on the magnetic strip of a credit card. Verification using a digitizer such as the 5990, which generates spatial coordinates as a function of time, is known as dynamic verification. Much research has been carried out on signature verification.
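The paper's architecture and features are not reproduced here; the sketch below only shows the Siamese idea in miniature: the same shared-weight encoder maps each pen trajectory to a fixed-length feature vector, and verification thresholds the distance between the two vectors. The encoder, feature length, and threshold are placeholders, not the paper's time-delay network.

```python
import numpy as np

rng = np.random.default_rng(2)

# Shared weights: both signatures are encoded by the *same* mapping.
W = rng.normal(0, 0.1, size=(64, 8))      # 64 resampled inputs -> 8 features

def encode(signature, W):
    """Map a (T, 2) array of pen (x, y) samples to a fixed-length feature
    vector with a shared linear layer and tanh (illustrative only)."""
    idx = np.linspace(0, len(signature) - 1, 32).astype(int)
    raw = signature[idx].ravel()           # 32 points x 2 coords = 64 numbers
    return np.tanh(raw @ W)

def verify(sig_a, sig_b, W, threshold=0.5):
    """Accept the pair if the cosine distance between the codes is small."""
    a, b = encode(sig_a, W), encode(sig_b, W)
    cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return (1.0 - cosine) < threshold

# Toy usage with random pen trajectories standing in for signatures.
genuine = rng.standard_normal((200, 2)).cumsum(axis=0)
probe = genuine + 0.05 * rng.standard_normal(genuine.shape)
print(verify(genuine, probe, W))
```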
Optimal Signalling in Attractor Neural Networks
Meilijson, Isaac, Ruppin, Eytan
It is well known that a given cortical neuron can respond with a different firing pattern for the same synaptic input, depending on its firing history and on the effects of modulatory transmitters (see [Connors and Gutnick, 1990] for a review). The time span of different channel conductances is very broad, and the influence of some ionic currents varies with the history of the membrane potential [Lytton, 1991]. Motivated by the history-dependent nature of neuronal firing, we continue our
Feature Densities are Required for Computing Feature Correspondences
The feature correspondence problem is a classic hurdle in visual object recognition: determining the correct mapping between the features measured from the image and the features expected by the model. In this paper we show that determining good correspondences requires information about the joint probability density over the image features. We propose "likelihood-based correspondence matching" as a general principle for selecting optimal correspondences. The approach is applicable to nonrigid models, allows nonlinear perspective transformations, and can optimally deal with occlusions and missing features.
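As a toy illustration of the principle (not the paper's algorithm), the sketch below assumes a Gaussian density over each model feature's image position, scores every candidate image-to-model assignment by its log-likelihood, and keeps the most likely one. The features, density, and brute-force search are all illustrative.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(3)

# Hypothetical model: 4 features, each a 2-D position with Gaussian uncertainty.
means = rng.uniform(0, 10, size=(4, 2))
cov = 0.3 * np.eye(2)
cov_inv, cov_logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]

def log_gaussian(x, mu):
    d = x - mu
    return -0.5 * (d @ cov_inv @ d + cov_logdet + 2 * np.log(2 * np.pi))

# Image features: a noisy, shuffled copy of the model features.
perm_true = rng.permutation(4)
image = means[perm_true] + 0.2 * rng.standard_normal((4, 2))

# Likelihood-based matching: enumerate assignments, keep the most likely.
best_perm, best_ll = None, -np.inf
for perm in permutations(range(4)):
    ll = sum(log_gaussian(image[i], means[perm[i]]) for i in range(4))
    if ll > best_ll:
        best_perm, best_ll = perm, ll

print(best_perm, tuple(perm_true))   # recovered vs. true assignment
```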
An Analog VLSI Model of Central Pattern Generation in the Leech
The biological network is small and relatively well understood, and the silicon model can therefore span three levels of organization in the leech nervous system (neuron, ganglion, system); it represents one of the first comprehensive models of leech swimming operating in real time. The circuit employs biophysically motivated analog neurons networked to form multiple biologically inspired silicon ganglia. These ganglia are coupled using known interganglionic connections. Thus the model retains the flavor of its biological counterpart, and though simplified, the output of the silicon circuit is similar to the output of the leech swim central pattern generator. The model operates on the same time and spatial scales as the leech nervous system and will provide an excellent platform with which to explore real-time adaptive locomotion in the leech and other "simple" invertebrate nervous systems.
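The analog circuit itself cannot be reproduced in a few lines; purely as a conceptual stand-in, the sketch below couples a chain of phase oscillators (one per "ganglion") with nearest-neighbour connections so that a travelling wave of activity emerges, loosely analogous to the rostro-caudal swim wave. The oscillator model and all parameters are illustrative, not taken from the paper.

```python
import numpy as np

n_ganglia, dt, steps = 10, 0.001, 5000
omega = 2 * np.pi * 1.5     # intrinsic frequency (illustrative swim rhythm)
coupling = 5.0              # nearest-neighbour coupling strength
lag = 0.4                   # preferred phase lag between neighbours (radians)

phase = np.zeros(n_ganglia)
history = np.zeros((steps, n_ganglia))

for t in range(steps):
    dphi = np.full(n_ganglia, omega)
    # Each ganglion is pulled toward a fixed phase lag behind its rostral
    # neighbour, so a travelling wave develops along the chain.
    dphi[1:] += coupling * np.sin(phase[:-1] - phase[1:] - lag)
    dphi[:-1] += coupling * np.sin(phase[1:] - phase[:-1] + lag)
    phase = phase + dphi * dt
    history[t] = np.sin(phase)   # motor-output proxy for each segment
```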
Event-Driven Simulation of Networks of Spiking Neurons
A fast event-driven software simulator has been developed for simulating large networks of spiking neurons and synapses. The primitive network elements are designed to exhibit biologically realistic behaviors, such as spiking, refractoriness, adaptation, axonal delays, summation of post-synaptic current pulses, and tonic current inputs. The efficient event-driven representation allows large networks to be simulated in a fraction of the time that would be required for a full compartmental-model simulation. Corresponding analog CMOS VLSI circuit primitives have been designed and characterized, so that large-scale circuits may be simulated prior to fabrication.
1 Introduction
Artificial neural networks typically use an abstraction of real neuron behaviour, in which the continuously varying mean firing rate of the neuron is presumed to carry the information about the neuron's time-varying state of excitation [1]. This useful simplification allows the neuron's state to be represented as a time-varying continuous-amplitude quantity.
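As a minimal sketch of the event-driven idea (not the simulator described in the paper), the code below keeps pending spikes in a priority queue ordered by delivery time; delivering an event decays the target's membrane potential analytically, adds the synaptic weight, and a threshold crossing schedules new events after an axonal delay. Adaptation, current-pulse summation, and tonic inputs are omitted, and all parameters are arbitrary.

```python
import heapq
import math

class Neuron:
    """Leaky integrate-and-fire unit updated only when an event arrives."""
    def __init__(self, tau=20.0, threshold=1.0, refractory=2.0):
        self.tau, self.threshold, self.refractory = tau, threshold, refractory
        self.v, self.last_update, self.last_spike = 0.0, 0.0, -1e9
        self.targets = []                # list of (target index, weight, delay)

def deliver(neurons, events, spike_times, t, post, weight):
    n = neurons[post]
    # Decay the membrane potential analytically since the last event.
    n.v *= math.exp(-(t - n.last_update) / n.tau)
    n.last_update = t
    if t - n.last_spike < n.refractory:
        return                           # still refractory: input is ignored
    n.v += weight
    if n.v >= n.threshold:
        n.v = 0.0
        n.last_spike = t
        spike_times.append((t, post))
        for tgt, w, delay in n.targets:
            heapq.heappush(events, (t + delay, tgt, w))

# Toy chain: neuron 0 drives neuron 1, which drives neuron 2.
neurons = [Neuron() for _ in range(3)]
neurons[0].targets = [(1, 1.2, 1.5)]
neurons[1].targets = [(2, 1.2, 1.5)]

events = [(0.0, 0, 1.5)]                 # (delivery time, target, weight)
spike_times = []
while events:
    t, post, w = heapq.heappop(events)
    deliver(neurons, events, spike_times, t, post, w)

print(spike_times)                       # [(0.0, 0), (1.5, 1), (3.0, 2)]
```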