Neural Dynamics of Motion Segmentation and Grouping
A neural network model of motion segmentation by visual cortex is described. The model clarifies how preprocessing of motion signals by a Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative motion mechanisms in a motion Cooperative Competitive Loop (CC Loop) to control phenomena such as induced motion, motion capture, and motion aftereffects. The total model system is a motion Boundary Contour System (BCS) that is computed in parallel with a static BCS before both systems cooperate to generate a boundary representation for three-dimensional visual form perception. The present investigations clarify how the static BCS can be modified for use in motion segmentation problems, notably for analyzing how ambiguous local movements (the aperture problem) on a complex moving shape are suppressed and actively reorganized into a coherent global motion signal. 1 INTRODUCTION: WHY ARE STATIC AND MOTION BOUNDARY CONTOUR SYSTEMS NEEDED? Some regions of visual cortex, notably MT, are specialized for motion processing.
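The aperture problem mentioned above can be stated compactly: a local detector viewing a moving edge through a small aperture measures only the velocity component normal to the edge, so different parts of the same moving shape report different motions that must be reconciled by a global mechanism. The sketch below (plain NumPy, with a hypothetical `normal_flow` helper; it is not the paper's MOC Filter or CC Loop) illustrates that ambiguity.

```python
import numpy as np

def normal_flow(true_velocity, edge_normal):
    """Velocity component a local detector can measure through an aperture.

    Only the projection of the true image velocity onto the edge normal is
    visible; the tangential component is lost (the aperture problem).
    """
    n = np.asarray(edge_normal, dtype=float)
    n /= np.linalg.norm(n)
    return np.dot(true_velocity, n) * n

# A contour translating up and to the right, sampled through apertures on
# edges of different orientations: each aperture reports a different motion.
v_true = np.array([1.0, 1.0])
for normal in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0]):
    print(normal, "->", normal_flow(v_true, normal))
```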
Further Studies of a Model for the Development and Regeneration of Eye-Brain Maps
Cowan, Jack D., Friedman, A. E.
We describe a computational model of the development and regeneration of specific eye-brain circuits. The model comprises a self-organizing map-forming network which uses local Hebb rules, constrained by (genetically determined) molecular markers. Various simulations of the development and regeneration of eye-brain maps in fish and frogs are described, in particular successful simulations of experiments by Schmidt-Cicerone-Easter; Meyer; and Yoon. 1 INTRODUCTION In a previous paper published in last year's proceedings (Cowan & Friedman 1990) we outlined a new computational model for the development and regeneration of eye-brain maps. We indicated that such a model can simulate the results of a number of the more complicated surgical manipulations carried out on the visual pathways of goldfish and frogs. In this paper we describe in more detail some of these experiments, and our simulations of them.
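As a rough illustration of a map-forming network that uses local Hebb rules constrained by molecular markers, the following sketch pairs a one-dimensional "retina" and "tectum" whose cells carry graded marker values; the affinity matrix, learning rate, and normalization scheme are all assumptions made for the example, not the model's actual equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_retina, n_tectum = 20, 20

# Hypothetical graded markers on retinal and tectal cells; matching values
# bias which retino-tectal connections a local Hebb rule reinforces.
retina_marker = np.linspace(0.0, 1.0, n_retina)
tectum_marker = np.linspace(0.0, 1.0, n_tectum)
affinity = np.exp(-(retina_marker[:, None] - tectum_marker[None, :]) ** 2 / 0.02)

W = rng.random((n_retina, n_tectum)) * 0.1      # initially diffuse connections

for _ in range(2000):
    r = rng.integers(n_retina)                  # a single active retinal cell
    activity = W[r] * affinity[r]               # marker-biased tectal response
    W[r] += 0.05 * activity                     # local Hebbian reinforcement
    W[r] /= W[r].sum()                          # normalization bounds total strength

# Each retinal cell's strongest connection should end up roughly topographic.
print(np.argmax(W, axis=1))
```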
Cholinergic Modulation May Enhance Cortical Associative Memory Function
Hasselmo, Michael E., Anderson, Brooke P., Bower, James M.
Combining neuropharmacological experiments with computational modeling, we have shown that cholinergic modulation may enhance associative memory function in piriform (olfactory) cortex. We have shown that the acetylcholine analogue carbachol selectively suppresses synaptic transmission between cells within piriform cortex, while leaving input connections unaffected. When tested in a computational model of piriform cortex, this selective suppression, applied during learning, enhances associative memory performance.
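The following toy autoassociative memory suggests why suppressing intrinsic (recurrent) transmission during learning can help: with suppression, Hebbian weights are formed from the afferent pattern alone rather than from a mixture of the new pattern and recalled traces of old ones. The network, storage rule, and recall threshold here are simple stand-ins, not the biophysical piriform cortex model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
patterns = (rng.random((5, n)) < 0.2).astype(float)      # sparse binary patterns

def store(patterns, suppress_intrinsic):
    """Hebbian autoassociative storage.

    With suppress_intrinsic=True, recurrent spread of activity is blocked
    while learning (a crude stand-in for cholinergic suppression of
    intrinsic synapses), so each outer product is formed from the afferent
    pattern alone rather than from the pattern plus recalled old memories.
    """
    W = np.zeros((n, n))
    for p in patterns:
        act = p if suppress_intrinsic else np.clip(p + W @ p, 0.0, 1.0)
        W += np.outer(act, act)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue):
    x = W @ cue
    return (x > 0.5 * x.max()).astype(float) if x.max() > 0 else x

for flag in (False, True):
    W = store(patterns, suppress_intrinsic=flag)
    completion = [recall(W, p * (rng.random(n) < 0.8)).dot(p) / p.sum()
                  for p in patterns]
    print("suppression during learning:", flag,
          "mean pattern completion:", round(float(np.mean(completion)), 2))
```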
Qualitative Structure from Motion
I have presented a qualitative approach to the problem of recovering object structure from motion information and discussed some of its computational, psychophysical and implementational aspects. The computation of qualitative shape, as represented by the sign of the Gaussian curvature, can be performed by a field of simple operators, in parallel over the entire image. The performance of a qualitative shape detection module, implemented by an artificial neural network, appears to be similar to the performance of human subjects in an identical task.
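The sign of the Gaussian curvature can indeed be recovered by a field of simple local operators. Assuming the available input is a surface height map z(x, y) (an assumption for this sketch; the paper works from motion information), the sign of the Hessian determinant computed by finite differences gives the qualitative shape label at every pixel:

```python
import numpy as np

def gaussian_curvature_sign(z):
    """Sign of the Gaussian curvature of a surface z(x, y), estimated with
    local finite-difference operators applied at every pixel in parallel.

    For a Monge patch, sign(K) equals the sign of the Hessian determinant
    z_xx * z_yy - z_xy**2 (the denominator of K is always positive).
    """
    zy, zx = np.gradient(z)          # derivatives along rows (y) and columns (x)
    zxy, zxx = np.gradient(zx)
    zyy, _ = np.gradient(zy)
    return np.sign(zxx * zyy - zxy ** 2)

# Elliptic (dome), hyperbolic (saddle), and parabolic (cylinder) test patches.
x, y = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41))
for name, z in [("dome", 1 - x**2 - y**2),
                ("saddle", x**2 - y**2),
                ("cylinder", 1 - x**2)]:
    print(name, gaussian_curvature_sign(z)[20, 20])   # sign at the patch center
```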
RecNorm: Simultaneous Normalisation and Classification applied to Speech Recognition
Bridle, John S., Cox, Stephen J.
A particular form of neural network is described, which has terminals for acoustic patterns, class labels and speaker parameters. A method of training this network to "tune in" the speaker parameters to a particular speaker is outlined, based on a trick for converting a supervised network to an unsupervised mode. We describe experiments using this approach in isolated word recognition based on whole-word hidden Markov models. The results indicate an improvement over speaker-independent performance and, for unlabelled data, a performance close to that achieved on labelled data. 1 INTRODUCTION We are concerned with emulating some aspects of perception. In particular, the way that a stimulus which is ambiguous, perhaps because of unknown lighting conditions, can become unambiguous in the context of other such stimuli: the fact that they are subject to the same unknown conditions gives our perceptual apparatus enough constraints to solve the problem.
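One way to picture the supervised-to-unsupervised trick is to hold a trained classifier fixed and adjust only a per-speaker normalisation of its input so that the classifier becomes confident on unlabelled utterances. The sketch below does this by minimising output entropy with a finite-difference gradient; the linear classifier, the additive shift, and the entropy objective are all assumptions made for the example and differ from the actual RecNorm construction.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# A fixed, already-trained classifier over acoustic features (random weights
# stand in for a trained model in this sketch).
W, b = rng.normal(size=(8, 4)), np.zeros(4)

def entropy(X, shift):
    """Mean output entropy after applying a per-speaker input shift."""
    p = softmax((X + shift) @ W + b)
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))

def tune_shift(X, lr=0.2, steps=200, eps=1e-3):
    """Adjust only the speaker shift, on unlabelled data, so the fixed
    classifier becomes confident (low output entropy)."""
    shift = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = np.zeros_like(shift)
        for i in range(len(shift)):              # finite-difference gradient
            d = np.zeros_like(shift); d[i] = eps
            grad[i] = (entropy(X, shift + d) - entropy(X, shift - d)) / (2 * eps)
        shift -= lr * grad
    return shift

X_unlabelled = rng.normal(size=(32, 8)) + 1.5    # utterances with a speaker offset
print("entropy before:", round(entropy(X_unlabelled, np.zeros(8)), 3))
print("entropy after: ", round(entropy(X_unlabelled, tune_shift(X_unlabelled)), 3))
```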
VLSI Implementation of TInMANN
Melton, Matt, Phan, Tan, Reeves, Doug, Van den Bout, Dave
A massively parallel, all-digital, stochastic architecture, TInMANN, is described which performs competitive and Kohonen types of learning. A VLSI design is shown for a TInMANN neuron which fits within a small, inexpensive MOSIS TinyChip frame, yet which can be used to build larger networks of several hundred neurons. The neuron operates at a speed of 15 MHz, which allows the network to process 290,000 training examples per second. Use of level-sensitive scan logic provides the chip with 100% fault coverage, permitting very reliable neural systems to be built.
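To give a flavor of competitive learning carried out entirely in integer arithmetic with stochastic rounding, the sketch below performs a winner-take-all update on small-integer weights; the distance metric, shift amount, and rounding scheme are assumptions for illustration and are not taken from the TInMANN design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Winner-take-all competitive learning in integer arithmetic with stochastic
# rounding, in the spirit of an all-digital, stochastic neuron.
n_neurons, dim = 8, 3
W = rng.integers(0, 256, size=(n_neurons, dim))      # small-integer weights

def train_step(x, shift=2):
    dist = np.abs(W - x).sum(axis=1)                 # Manhattan distance, integers only
    winner = int(np.argmin(dist))
    diff = x - W[winner]
    step = diff >> shift                             # move winner 1/4 of the way (floor)
    frac = diff - (step << shift)                    # remainder lost to the shift, 0..3
    step += rng.integers(0, 1 << shift, size=dim) < frac   # stochastic rounding
    W[winner] += step
    return winner

for _ in range(1000):
    train_step(rng.integers(0, 256, size=dim))
print(W)                                             # final integer weight vectors
```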
A Model of Distributed Sensorimotor Control in the Cockroach Escape Turn
Beer, R.D., Kacmarcik, G. J., Ritzmann, R.E., Chiel, H.J.
In response to a puff of wind, the American cockroach turns away and runs. The circuit underlying the initial turn of this escape response consists of three populations of individually identifiable nerve cells and appears to employ distributed representations in its operation. We have reconstructed several neuronal and behavioral properties of this system using simplified neural network models and the backpropagation learning algorithm constrained by known structural characteristics of the circuitry. In order to test and refine the model, we have also compared the model's responses to various lesions with the insect's responses to similar lesions.
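As a schematic of training with backpropagation while respecting known connectivity, the sketch below applies a fixed binary mask to one weight matrix during gradient descent and then "lesions" hidden units by silencing them at test time; the network size, mask, and target mapping are invented for the example and do not reproduce the escape-circuit model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Gradient-descent training of a small network whose input-to-hidden wiring is
# fixed by a binary connectivity mask (standing in for known circuit structure),
# followed by a "lesion" test that silences selected hidden units.
n_in, n_hidden, n_out = 4, 6, 2
mask_ih = (rng.random((n_in, n_hidden)) < 0.5).astype(float)     # allowed connections
W_ih = rng.normal(scale=0.5, size=(n_in, n_hidden)) * mask_ih
W_ho = rng.normal(scale=0.5, size=(n_hidden, n_out))

def forward(x, lesion=None):
    h = np.tanh(x @ W_ih)
    if lesion is not None:
        h[:, lesion] = 0.0                                       # silence lesioned units
    return np.tanh(h @ W_ho)

X = rng.normal(size=(200, n_in))
Y = np.stack([np.tanh(X[:, 0] - X[:, 1]), np.tanh(X[:, 2])], axis=1)

for _ in range(2000):                       # plain batch backpropagation
    h = np.tanh(X @ W_ih)
    out = np.tanh(h @ W_ho)
    d_out = (out - Y) * (1 - out ** 2) / len(X)
    d_h = (d_out @ W_ho.T) * (1 - h ** 2)
    W_ho -= 0.5 * h.T @ d_out
    W_ih -= 0.5 * (X.T @ d_h) * mask_ih     # the mask keeps absent connections at zero

print("intact error  :", round(float(np.mean((forward(X) - Y) ** 2)), 4))
print("lesioned error:", round(float(np.mean((forward(X, lesion=[0, 1]) - Y) ** 2)), 4))
```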
How Receptive Field Parameters Affect Neural Learning
Mel, Bartlett W., Omohundro, Stephen M.
We identify the three principal factors affecting the performance of learning by networks with localized units: unit noise, sample density, and the structure of the target function. We then analyze the effect of unit receptive field parameters on these factors and use this analysis to propose a new learning algorithm which dynamically alters receptive field properties during learning.
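A small experiment with Gaussian (localized) units makes the trade-off concrete: receptive fields that are too narrow fit the sample noise and leave gaps between samples, while fields that are too wide cannot follow the target's structure. Everything in the sketch (the 1-D target, noise level, number of units) is an assumption chosen for illustration rather than anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def rbf_fit_error(width, n_samples=60, n_units=15):
    """Fit a network of Gaussian units to noisy samples of a 1-D target and
    return the test error, as a function of the units' receptive-field width."""
    centers = np.linspace(0, 1, n_units)
    x = rng.random(n_samples)
    y = np.sin(6 * np.pi * x) + 0.3 * rng.normal(size=n_samples)   # noisy target
    Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)                    # fit unit amplitudes
    x_test = np.linspace(0, 1, 400)
    Phi_test = np.exp(-(x_test[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
    return float(np.mean((Phi_test @ w - np.sin(6 * np.pi * x_test)) ** 2))

# Too narrow overfits the noise, too wide blurs the target's structure.
for width in (0.01, 0.05, 0.2):
    print("width", width, "test error", round(rbf_fit_error(width), 3))
```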