
From Data Distributions to Regularization in Invariant Learning

Neural Information Processing Systems

For unbiased models the regularizer reduces to the intuitive form that penalizes the mean squared difference between the network output for transformed and untransformed inputs, i.e. the error in satisfying the desired invariance. In general the regularizer includes a term that measures correlations between the error in fitting the data and the error in satisfying the desired invariance. For infinitesimal transformations, the regularizer is equivalent (up to terms linear in the variance of the transformation parameters) to the tangent prop form given by Simard et al.
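As a concrete illustration, here is a minimal numpy sketch of the unbiased-model form of the regularizer described above: it penalizes the mean squared difference between the network output for transformed and untransformed inputs, averaged over random draws of the transformation parameters. The network `f`, the `scale` transformation, and all parameter values are hypothetical stand-ins, not the paper's setup.

```python
import numpy as np

def invariance_penalty(f, x_batch, transform, rng, n_samples=8):
    """Mean squared difference between f(x) and f(t(x)), averaged over
    random transformation parameters: the 'unbiased model' form of the
    invariance regularizer."""
    y = f(x_batch)
    penalty = 0.0
    for _ in range(n_samples):
        theta = rng.normal(scale=0.1, size=len(x_batch))  # small random parameters
        y_t = f(transform(x_batch, theta))
        penalty += np.mean((y_t - y) ** 2)
    return penalty / n_samples

# Toy example: penalize sensitivity to small random rescalings.
rng = np.random.default_rng(0)
f = lambda x: x.sum(axis=1, keepdims=True)        # stand-in 'network'
scale = lambda x, th: x * (1.0 + th[:, None])     # stand-in transformation
x = rng.normal(size=(4, 16))
print(invariance_penalty(f, x, scale, rng))
```

In practice such a penalty would be added, with a weight, to the usual data-fitting loss.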


Single Transistor Learning Synapses

Neural Information Processing Systems

The past few years have produced a number of efforts to design VLSI chips which "learn from experience." The first step toward this goal is developing a silicon analog for a synapse. We have successfully developed such a synapse using only a single transistor.


Learning Saccadic Eye Movements Using Multiscale Spatial Filters

Neural Information Processing Systems

Such sensors address the simultaneous need for a wide field of view and good visual acuity. One popular class of space-variant sensors is formed by log-polar sensors, which have a small area of greatly increased resolution near the optical axis (the fovea) coupled with a peripheral region that exhibits a gradual logarithmic falloff in resolution as one moves radially outward. These sensors are inspired by similar structures found in the primate retina, where one finds both a peripheral region of gradually decreasing acuity and a circularly symmetric area centralis characterized by a greater density of receptors and a disproportionate representation in the optic nerve [3]. The peripheral region, though of low visual acuity, is more sensitive to light intensity and movement. The existence of a region optimized for discrimination and recognition surrounded by a region geared towards detection thus allows the image of an object of interest detected in the outer region to be placed on the more analytic center for closer scrutiny. Such a strategy, however, necessitates the existence of (a) methods to determine which location in the periphery to foveate next, and (b) fast gaze-shifting mechanisms to achieve this.
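To make the geometry concrete, the following is a small sketch of log-polar sampling, assuming a square image; the grid construction and parameter values are illustrative, not the sensor model from the paper.

```python
import numpy as np

def log_polar_samples(image, n_rings=16, n_wedges=32, r0=2.0):
    """Sample an image on a log-polar grid: ring radii grow
    geometrically, so resolution falls off logarithmically with
    distance from the optical axis, as in a foveated sensor."""
    h, w = image.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx) - 1.0
    radii = r0 * (r_max / r0) ** (np.arange(n_rings) / (n_rings - 1))
    angles = 2.0 * np.pi * np.arange(n_wedges) / n_wedges
    ys = (cy + radii[:, None] * np.sin(angles)).round().astype(int)
    xs = (cx + radii[:, None] * np.cos(angles)).round().astype(int)
    return image[ys, xs]                 # (n_rings, n_wedges) 'retinal' samples

img = np.random.rand(128, 128)
print(log_polar_samples(img).shape)      # (16, 32)
```

The inner rings sit a pixel or two apart while the outer rings are tens of pixels apart, which is exactly the acuity falloff described above.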



A Model of the Neural Basis of the Rat's Sense of Direction

Neural Information Processing Systems

Several investigations have shed light on the effects of vestibular input and visual input on the head direction representation. In this paper, a model is formulated of the neural mechanisms underlying the head direction system. The model is built out of simple ingredients, depending on nothing more complicated than connectional specificity, attractor dynamics, Hebbian learning, and sigmoidal nonlinearities, but it behaves in a sophisticated way and is consistent with most of the observed properties of real head direction cells. In addition, it makes a number of predictions that ought to be testable by reasonably straightforward experiments.
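The attractor-dynamics ingredient can be illustrated with a toy ring network; the connectivity profile, gain, and threshold below are hypothetical choices, not the parameters of the model in the paper.

```python
import numpy as np

N = 64                                    # head direction cells on a ring
theta = 2.0 * np.pi * np.arange(N) / N    # preferred directions
# Connectional specificity: excitation between cells with similar
# preferred directions, uniform inhibition elsewhere.
W = np.exp(np.cos(theta[:, None] - theta[None, :]) / 0.3)
W = W / W.sum(axis=1, keepdims=True) - 1.0 / N

rate = 0.1 * np.random.rand(N)
rate[10] = 1.0                            # brief cue near one direction
for _ in range(200):                      # let the dynamics settle
    drive = W @ rate
    rate = 1.0 / (1.0 + np.exp(-8.0 * (drive - 0.05)))  # sigmoidal nonlinearity

print(theta[np.argmax(rate)])             # a self-sustained bump encodes heading
```

The stable bump of activity persists after the cue is removed, which is the attractor behavior that lets such a system hold a direction estimate between updates.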



Interference in Learning Internal Models of Inverse Dynamics in Humans

Neural Information Processing Systems

Experiments were performed to reveal some of the computational properties of the human motor memory system. We show that as humans practice reaching movements while interacting with a novel mechanical environment, they learn an internal model of the inverse dynamics of that environment. The representation of the internal model in memory is such that there is interference when there is an attempt to learn a new inverse dynamics map immediately after an anticorrelated mapping was learned. We suggest that this interference is an indication that the same computational elements used to encode the first inverse dynamics map are being used to learn the second mapping. We predict that this leads to a forgetting of the initially learned skill.

1 Introduction

In tasks where we use our hands to interact with a tool, our motor system develops a model of the dynamics of that tool and uses this model to control the coupled dynamics of our arm and the tool (Shadmehr and Mussa-Ivaldi 1994).
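A toy illustration of the suggested mechanism (shared computational elements, hence interference) can be given with a single linear model trained sequentially on a map and its anticorrelated counterpart; this is a sketch of the idea, not the authors' experiment or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))               # first inverse-dynamics map: y = A x
X = rng.normal(size=(200, 2))

def train(W, X, Y, lr=0.05, epochs=200):
    """Plain LMS on one shared weight matrix -- the single set of
    'computational elements' in this toy model."""
    for _ in range(epochs):
        err = X @ W.T - Y
        W = W - lr * err.T @ X / len(X)
    return W

W = train(np.zeros((2, 2)), X, X @ A.T)   # learn map A
print("error on A after learning A:", np.mean((X @ W.T - X @ A.T) ** 2))

W = train(W, X, X @ (-A).T)               # then learn the anticorrelated map -A
print("error on A after learning -A:", np.mean((X @ W.T - X @ A.T) ** 2))
```

Because both maps must be stored in the same weights, learning -A drives the weights away from A, and performance on the first map is forgotten.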


Grouping Components of Three-Dimensional Moving Objects in Area MST of Visual Cortex

Neural Information Processing Systems

A number of studies have described neurons in the dorsal part of the medial superior temporal (MSTd) monkey cortex that respond best to large expanding/contracting, rotating, or shifting patterns (Tanaka et al., 1986; Duffy & Wurtz, 1991a). Recently Graziano et al. (1994) found that MSTd cell responses correspond to a point in a multidimensional space of spiral motions, where the dimensions are these motion types. Combinations of these motions are generated as an animal moves through its environment, which suggests that area MSTd could play a role in optical flow analysis. When an observer moves through a static environment, a singularity in the flow field known as the focus of expansion may be used to determine the direction of heading (Gibson, 1950; Warren & Hannon, 1988). Previous computational models of MSTd (Lappe & Rauschecker, 1993; Perrone & Stone, 1994) have shown how navigational information related to heading may be encoded by these cells.
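As a concrete example of using the focus of expansion for heading, the sketch below recovers it from a synthetic expanding flow field by least squares; the setup is illustrative and is not one of the MSTd models cited above.

```python
import numpy as np

rng = np.random.default_rng(1)
foe = np.array([30.0, -12.0])             # true focus of expansion
pts = rng.uniform(-100, 100, size=(50, 2))
flow = 0.02 * (pts - foe)                 # pure expansion: v = k (p - foe)
flow += rng.normal(scale=0.01, size=flow.shape)   # measurement noise

# v = k*p - b with b = k*foe; solve for (k, bx, by) by least squares.
A = np.zeros((2 * len(pts), 3))
A[0::2, 0] = pts[:, 0]; A[0::2, 1] = -1.0    # x-component equations
A[1::2, 0] = pts[:, 1]; A[1::2, 2] = -1.0    # y-component equations
k, bx, by = np.linalg.lstsq(A, flow.reshape(-1), rcond=None)[0]
print("estimated FOE:", bx / k, by / k)      # direction of heading
```

The singularity where the flow vanishes is the point the observer is heading toward, which is why a cell population encoding it carries navigational information.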


Financial Applications of Learning from Hints

Neural Information Processing Systems

In financial market applications, it is typical to have a limited amount of relevant training data, with high noise levels in the data. The information content of such data is modest, and while the learning process can try to make the most of what it has, it cannot create new information on its own. This poses a fundamental limitation.


Transformation Invariant Autoassociation with Application to Handwritten Character Recognition

Neural Information Processing Systems

When training neural networks with the classical backpropagation algorithm, the whole problem to be learned must be expressed by a set of inputs and desired outputs. However, we often have high-level knowledge about the learning problem. In optical character recognition (OCR), for instance, we know that the classification should be invariant under a set of transformations like rotation or translation. We propose a new modular classification system based on several autoassociative multilayer perceptrons which allows the efficient incorporation of such knowledge. Results are reported on the NIST database of upper case handwritten letters and compared to other approaches to the invariance problem.

1 INCORPORATION OF EXPLICIT KNOWLEDGE

The aim of supervised learning is to learn a mapping between the input and the output space from a set of example pairs (input, desired output). The classical implementation in the domain of neural networks is the backpropagation algorithm. If this learning set is sufficiently representative of the underlying data distributions, one hopes that after learning, the system is able to generalize correctly to other inputs of the same distribution.
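A minimal sketch of the classification-by-reconstruction idea follows, with one autoassociator per class. For brevity it uses linear PCA autoassociators rather than the paper's multilayer perceptrons, and the data and dimensions are made up.

```python
import numpy as np

class PCAAutoassociator:
    """Linear stand-in for one class's autoassociative network:
    reconstruct inputs through a low-dimensional bottleneck."""
    def __init__(self, X, n_components=8):
        self.mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = vt[:n_components]            # bottleneck directions

    def reconstruction_error(self, x):
        z = (x - self.mean) @ self.basis.T        # encode
        x_hat = self.mean + z @ self.basis        # decode
        return float(np.sum((x - x_hat) ** 2))

# Train one model per class; classify by smallest reconstruction error.
rng = np.random.default_rng(0)
data = {c: rng.normal(loc=c, size=(100, 64)) for c in (0.0, 1.0)}
models = {c: PCAAutoassociator(X) for c, X in data.items()}

x = rng.normal(loc=1.0, size=64)                  # a sample from 'class 1'
pred = min(models, key=lambda c: models[c].reconstruction_error(x))
print(pred)                                       # -> 1.0
```

An input is assigned to the class whose autoassociator reconstructs it best, and invariance knowledge can be folded in by evaluating the reconstruction error over transformed versions of the input.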