
Storing Covariance by the Associative Long-Term Potentiation and Depression of Synaptic Strengths in the Hippocampus

Neural Information Processing Systems

We have tested this assumption in the hippocampus, a cortical structure of the brain that is involved in long-term memory. A brief, high-frequency activation of excitatory synapses in the hippocampus produces an increase in synaptic strength known as long-term potentiation, or LTP (Bliss and Lomo, 1973), that can last for many days. LTP is known to be Hebbian since it requires the simultaneous release of neurotransmitter from presynaptic terminals coupled with postsynaptic depolarization (Kelso et al., 1986; Malinow and Miller, 1986; Gustafsson et al., 1987). However, a mechanism for the persistent reduction of synaptic strength that could balance LTP has not yet been demonstrated. We studied the associative interactions between separate inputs onto the same dendritic trees of hippocampal pyramidal cells of field CA1, and found that a low-frequency input which, by itself, does not persistently change synaptic strength, can either increase (associative LTP) or decrease in strength (associative long-term depression or LTD) depending upon whether it is positively or negatively correlated in time with a second, high-frequency bursting input. LTP of synaptic strength is Hebbian, and LTD is anti-Hebbian since it is elicited by pairing presynaptic firing with postsynaptic hyperpolarization sufficient to block postsynaptic activity.
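
To make the covariance idea concrete, the following is a minimal numerical sketch (my own illustration, not the paper's experimental protocol): a synapse strengthens when pre- and postsynaptic activity are positively correlated (LTP-like) and weakens when they are negatively correlated (LTD-like). The function name and toy inputs are illustrative.

```python
import numpy as np

def covariance_update(w, pre, post, lr=0.01):
    """Illustrative covariance learning rule (not the paper's protocol):
    the weight grows when pre- and postsynaptic activity are positively
    correlated and shrinks when they are negatively correlated."""
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    dpre = pre - pre.mean()          # deviation of presynaptic firing from its mean
    dpost = post - post.mean()       # deviation of postsynaptic firing from its mean
    dw = lr * np.mean(dpre * dpost)  # positive covariance -> LTP-like, negative -> LTD-like
    return w + dw

# Example: a low-frequency input paired in phase vs. out of phase with a bursting input
t = np.arange(100)
burst = (t % 10 == 0).astype(float)          # schematic high-frequency bursting input
in_phase = burst                              # positively correlated low-frequency input
out_of_phase = np.roll(burst, 5)              # negatively correlated input
print(covariance_update(1.0, in_phase, burst))      # weight increases
print(covariance_update(1.0, out_of_phase, burst))  # weight decreases slightly
```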


An Analog VLSI Chip for Thin-Plate Surface Interpolation

Neural Information Processing Systems

Reconstructing a surface from sparse sensory data is a well-known problem in computer vision. This paper describes an experimental analog VLSI chip for smooth surface interpolation from sparse depth data. An eight-node 1D network was designed in 3 μm CMOS and successfully tested.
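
For intuition about what such a network computes, here is a minimal numerical sketch of 1D thin-plate interpolation by gradient relaxation (a software analogy, not the chip's circuit; node count, data points, and step sizes are illustrative).

```python
import numpy as np

# A minimal numerical sketch: 1-D thin-plate interpolation of sparse depth data
# by gradient relaxation.  The energy being minimized is
#   E = sum_i (u[i-1] - 2*u[i] + u[i+1])^2  +  lam * sum_k (u[k] - d[k])^2
# where d holds the sparse depth samples.  All parameters are illustrative.
n = 8                                   # eight nodes, matching the paper's 1-D network size
data = {1: 0.2, 4: 1.0, 6: 0.5}         # sparse depth measurements (node -> depth)
lam, step, iters = 10.0, 0.02, 5000

u = np.zeros(n)
for _ in range(iters):
    grad = np.zeros(n)
    # smoothness term: derivative of the squared second differences
    for i in range(1, n - 1):
        c = u[i - 1] - 2 * u[i] + u[i + 1]
        grad[i - 1] += 2 * c
        grad[i]     += -4 * c
        grad[i + 1] += 2 * c
    # data term: pull the interpolated surface toward the sparse samples
    for k, d in data.items():
        grad[k] += 2 * lam * (u[k] - d)
    u -= step * grad

print(np.round(u, 3))                   # smooth curve passing near the data points
```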


A Network for Image Segmentation Using Color

Neural Information Processing Systems

Otherwise it might ascribe different characteristics to the same object under different lights. But the first step in using color for recognition, segmenting the scene into regions of different colors, does not require color constancy.
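
As a toy illustration of color-based segmentation without color constancy (my own sketch, not the paper's network), pixels can be grouped by chromaticity, which discards overall intensity:

```python
import numpy as np

# Illustrative sketch only: grouping pixels by chromaticity (r, g) = (R, G) / (R + G + B),
# so regions of similar color are recovered even under an intensity gradient.
def chromaticity(rgb):
    s = rgb.sum(axis=-1, keepdims=True) + 1e-8
    return rgb[..., :2] / s

def segment_by_color(rgb, n_regions=2, iters=20):
    """Cluster pixels by chromaticity with plain k-means (toy initialization)."""
    chroma = chromaticity(rgb).reshape(-1, 2)
    idx = np.linspace(0, len(chroma) - 1, n_regions).astype(int)
    centers = chroma[idx].copy()
    for _ in range(iters):
        d = ((chroma[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for k in range(n_regions):
            if (labels == k).any():
                centers[k] = chroma[labels == k].mean(axis=0)
    return labels.reshape(rgb.shape[:2])

# Toy image: left half reddish, right half greenish, with a lighting gradient
img = np.zeros((4, 8, 3))
img[:, :4] = [0.8, 0.2, 0.1]
img[:, 4:] = [0.1, 0.7, 0.2]
img *= np.linspace(0.5, 1.0, 8)[None, :, None]   # intensity changes, colors do not
print(segment_by_color(img, n_regions=2))
```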


Learning Sequential Structure in Simple Recurrent Networks

Neural Information Processing Systems

The network uses the pattern of activation over a set of hidden units from time-step t-1, together with element t, to predict element t+1. When the network is trained with strings from a particular finite-state grammar, it can learn to be a perfect finite-state recognizer for the grammar. Cluster analyses of the hidden-layer patterns of activation showed that they encode prediction-relevant information about the entire path traversed through the network. We illustrate the phases of learning with cluster analyses performed at different points during training. Several connectionist architectures that are explicitly constrained to capture sequential information have been developed. Examples are Time Delay Networks (e.g.
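
A minimal simple-recurrent-network sketch of this prediction scheme is given below (illustrative only; the toy symbol string, one-step truncated gradients, and hyperparameters are mine, not the paper's setup).

```python
import numpy as np

# Sketch: the hidden state from step t-1 is combined with the current element t
# to predict element t+1.  Toy data and sizes are illustrative.
rng = np.random.default_rng(0)
symbols = "BTPXSE"                         # toy symbol set
n_sym, n_hid = len(symbols), 8
one_hot = np.eye(n_sym)

W_xh = rng.normal(0, 0.5, (n_sym, n_hid))  # input -> hidden
W_hh = rng.normal(0, 0.5, (n_hid, n_hid))  # context (previous hidden) -> hidden
W_hy = rng.normal(0, 0.5, (n_hid, n_sym))  # hidden -> predicted next symbol

def step(x, h_prev):
    h = np.tanh(x @ W_xh + h_prev @ W_hh)  # new hidden state
    p = np.exp(h @ W_hy)
    return h, p / p.sum()                  # softmax over possible next symbols

lr = 0.1
string = [symbols.index(c) for c in "BTSSXSE"]   # one toy training string
for epoch in range(500):
    h = np.zeros(n_hid)
    for t in range(len(string) - 1):
        x, target = one_hot[string[t]], string[t + 1]
        h_new, p = step(x, h)
        # one-step (truncated) gradient of the cross-entropy loss, for brevity
        dy = p.copy()
        dy[target] -= 1.0
        dh = (dy @ W_hy.T) * (1 - h_new ** 2)
        W_hy -= lr * np.outer(h_new, dy)
        W_xh -= lr * np.outer(x, dh)
        W_hh -= lr * np.outer(h, dh)
        h = h_new

# Inspect the learned next-symbol predictions along the training string
h = np.zeros(n_hid)
for t in range(len(string) - 1):
    h, p = step(one_hot[string[t]], h)
    print(symbols[string[t]], "->", symbols[int(p.argmax())])
```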


Neural Networks for Model Matching and Perceptual Organization

Neural Information Processing Systems

We introduce an optimization approach for solving problems in computer vision that involve multiple levels of abstraction. Our objective functions include compositional and specialization hierarchies. We cast vision problems as inexact graph matching problems, formulate graph matching in terms of constrained optimization, and use analog neural networks to perform the optimization. The method is applicable to perceptual grouping and model matching. Preliminary experimental results are shown.
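
The following is a much-simplified sketch of inexact graph matching by continuous optimization (a softassign-style relaxation of my own, not the paper's objective functions or analog network):

```python
import numpy as np

# Sketch: optimize a soft match matrix M, where M[i, j] ~ "model node i matches
# data node j".  The update rewards node-attribute similarity and edge
# consistency, then renormalizes M toward a doubly stochastic matrix.
def soft_graph_match(A_model, A_data, node_sim, iters=100, lr=1.0):
    n, m = len(A_model), len(A_data)
    M = np.full((n, m), 1.0 / m)                 # uniform initial match
    for _ in range(iters):
        grad = A_model @ M @ A_data + node_sim   # edge consistency + node similarity
        M = M * np.exp(lr * grad)                # multiplicative update
        for _ in range(5):                       # Sinkhorn-style row/column balancing
            M = M / M.sum(axis=1, keepdims=True)
            M = M / M.sum(axis=0, keepdims=True)
    return M

# Model graph: path 0-1-2-3 with scalar node attributes; the data graph is the
# same graph relabeled by `perm`, with slightly noisy attributes.
A_model = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
attr_model = np.array([0.0, 0.3, 0.6, 0.9])
perm = [2, 0, 3, 1]                              # data node a is model node perm[a]
A_data = A_model[np.ix_(perm, perm)]
attr_data = attr_model[perm] + 0.01
node_sim = -np.square(attr_model[:, None] - attr_data[None, :])

M = soft_graph_match(A_model, A_data, node_sim)
print(M.argmax(axis=1))    # recovered correspondence (inverse of perm: [1 3 0 2])
```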


Electronic Receptors for Tactile/Haptic Sensing

Neural Information Processing Systems

We discuss synthetic receptors for haptic sensing. These are based on magnetic field sensors (Hall effect structures) fabricated using standard CMOS technologies.



What Size Net Gives Valid Generalization?

Neural Information Processing Systems

We address the question of when a network can be expected to generalize from m random training examples chosen from some arbitrary probability distribution, assuming that future test examples are drawn from the same distribution. Among our results are the following bounds on appropriate sample vs. network size.
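
To illustrate how such sample-size bounds scale, the snippet below evaluates only an O((W/ε) log(N/ε)) form with constants omitted; it indicates scaling behavior, not the paper's exact guarantee.

```python
import math

# Rough scaling illustration for a net with W weights and N units trained to
# generalization error ~eps.  Constants are deliberately omitted.
def sample_size_order(W, N, eps):
    """Order-of-magnitude estimate of the number of training examples needed."""
    return (W / eps) * math.log(N / eps)

for W, N in [(100, 20), (1000, 50), (10000, 200)]:
    for eps in (0.1, 0.01):
        m = sample_size_order(W, N, eps)
        print(f"W={W:6d}  N={N:4d}  eps={eps:5.2f}  ->  m ~ {m:12.0f}")
```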


Statistical Prediction with Kanerva's Sparse Distributed Memory

Neural Information Processing Systems

A new viewpoint of the processing performed by Kanerva's sparse distributed memory (SDM) is presented. In conditions of near- or over-capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and for which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for improving the predictiveness of the system based on Holland's work with 'Genetic Algorithms', and a method for improving the capacity of SDM even when used as an associative memory. This work is the result of studies involving two seemingly separate topics that proved to share a common framework.
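
For readers unfamiliar with SDM, here is a compact sketch of the standard mechanics (random hard addresses, Hamming-radius activation, counter updates). It shows the baseline associative-memory formulation, not the statistical reformulation developed in the paper, and all parameters are illustrative.

```python
import numpy as np

# Minimal sparse distributed memory sketch with illustrative sizes.
rng = np.random.default_rng(0)
n_bits, n_locations, radius = 64, 2000, 25

hard_addresses = rng.integers(0, 2, (n_locations, n_bits))   # fixed random hard addresses
counters = np.zeros((n_locations, n_bits), dtype=int)        # data counters per location

def activated(address):
    """Locations whose hard address is within Hamming distance `radius` of `address`."""
    return (hard_addresses != address).sum(axis=1) <= radius

def write(address, data):
    sel = activated(address)
    counters[sel] += np.where(data == 1, 1, -1)              # increment for 1 bits, decrement for 0 bits

def read(address):
    sel = activated(address)
    return (counters[sel].sum(axis=0) >= 0).astype(int)      # majority vote over the summed counters

pattern = rng.integers(0, 2, n_bits)
write(pattern, pattern)                                       # autoassociative store
noisy = pattern.copy()
noisy[:5] ^= 1                                                # corrupt 5 address bits
print("recovered bits correct:", (read(noisy) == pattern).sum(), "/", n_bits)
```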


Scaling and Generalization in Neural Networks: A Case Study

Neural Information Processing Systems

The issues of scaling and generalization have emerged as key issues in current studies of supervised learning from examples in neural networks. Questions such as how many training patterns and training cycles are needed for a problem of a given size and difficulty, how to represent the input, and how to choose useful training exemplars, are of considerable theoretical and practical importance. Several intuitive rules of thumb have been obtained from empirical studies, but as yet there are few rigorous results. In this paper we summarize a study of generalization in the simplest possible case: perceptron networks learning linearly separable functions. The task chosen was the majority function (i.e. return a 1 if a majority of the input units are on), a predicate with a number of useful properties. We find that many aspects of generalization in multilayer networks learning large, difficult tasks are reproduced in this simple domain, in which concrete numerical results and even some analytic understanding can be achieved.
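
A small empirical sketch in this spirit (my own parameters, not the paper's experiments): train a perceptron on the majority function from m random examples and measure how test accuracy grows with m.

```python
import numpy as np

# Perceptron learning the majority function from m random binary examples,
# evaluated on fresh random inputs.  Input size and epoch count are illustrative.
rng = np.random.default_rng(0)
n_inputs = 25                                   # odd, so the majority is never tied

def majority(x):
    return (x.sum(axis=-1) > n_inputs / 2).astype(int)

def train_perceptron(m, epochs=50):
    X = rng.integers(0, 2, (m, n_inputs))
    y = majority(X)
    w, b = np.zeros(n_inputs), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = int(xi @ w + b > 0)
            w += (yi - pred) * xi               # classic perceptron update
            b += (yi - pred)
    return w, b

def generalization(w, b, n_test=5000):
    X = rng.integers(0, 2, (n_test, n_inputs))
    return np.mean((X @ w + b > 0).astype(int) == majority(X))

for m in (10, 50, 200, 1000):
    w, b = train_perceptron(m)
    print(f"m={m:5d}  test accuracy = {generalization(w, b):.3f}")
```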