Multi-Digit Recognition Using a Space Displacement Neural Network
Matan, Ofer, Burges, Christopher J. C., LeCun, Yann, Denker, John S.
We present a feed-forward network architecture for recognizing an unconstrained handwritten multi-digit string. This is an extension of previous work on recognizing isolated digits. In this architecture a single digit recognizer is replicated over the input. The output layer of the network is coupled to a Viterbi alignment module that chooses the best interpretation of the input. Training errors are propagated through the Viterbi module.
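The abstract gives no implementation details; the following NumPy sketch (variable names and the generic transition model are our assumptions, not the paper's) illustrates only the alignment step: a table of per-position log-scores from the replicated recognizer is decoded by dynamic programming.

import numpy as np

def viterbi(emissions, transitions):
    # emissions: (T, S) log-scores, one row per input position from the
    # replicated digit recognizer, one column per state (e.g. ten digits
    # plus a "no digit" state). transitions: (S, S) log transition scores.
    T, S = emissions.shape
    dp = emissions[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = dp[:, None] + transitions      # score of prev state -> cur state
        back[t] = cand.argmax(axis=0)
        dp = cand.max(axis=0) + emissions[t]
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                         # best state sequence

In a differentiable realization of the training scheme the abstract describes, one simple option is to let gradients flow only through the emissions on the winning path.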
A comparison between a neural network model for the formation of brain maps and experimental data
Obermayer, K., Schulten, K., Blasdel, G. G.
Recently, high-resolution images of the simultaneous representation of orientation preference, orientation selectivity and ocular dominance have been obtained for large areas in monkey striate cortex by optical imaging [1-3]. These data allow for the first time a "local" as well as "global" description of the spatial patterns and provide strong evidence for correlations between orientation selectivity and ocular dominance. A quantitative analysis reveals that these correlations arise when a five-dimensional feature space (two dimensions for retinotopic space, one each for orientation preference, orientation specificity, and ocular dominance) is mapped into the two available dimensions of cortex while locally preserving topology. These results provide strong evidence for the concept of topology-preserving maps, which have been suggested as a basic design principle of striate cortex [4-7].

Monkey striate cortex contains a retinotopic map in which the highly repetitive patterns of orientation selectivity and ocular dominance are embedded. The retinotopic projection establishes a "global" order, while maps of variables describing other stimulus features, in particular line orientation and ocularity, dominate cortical organization locally. A large number of pattern models [8-12] as well as models of development [6,7,13-21] have been proposed to describe the spatial structure of these patterns and their development during ontogenesis. However, most models have not been compared with experimental data in detail, for two reasons: (i) many model studies were not elaborated enough to be experimentally testable, and (ii) a sufficient amount of experimental data obtained from large areas of striate cortex was not available.
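The topology-preserving mapping discussed here is the kind computed by a Kohonen self-organizing map. A minimal sketch (parameter choices are ours, and random vectors stand in for the five stimulus features):

import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 20, 20, 5          # 2-D "cortex", 5-D feature space
W = rng.normal(size=(grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def som_step(x, W, lr=0.1, sigma=2.0):
    # Winner-take-all plus a Gaussian neighborhood update: nearby grid
    # units move together in feature space, which is what preserves
    # topology locally.
    d = np.linalg.norm(W - x, axis=-1)
    win = np.unravel_index(d.argmin(), d.shape)
    g = np.exp(-((coords - np.array(win)) ** 2).sum(-1) / (2 * sigma ** 2))
    return W + lr * g[..., None] * (x - W)

for _ in range(10000):
    x = rng.normal(size=dim)             # stand-in for (x, y, orientation preference, specificity, ocularity)
    W = som_step(x, W)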
Information Processing to Create Eye Movements
Because eye muscles never co-contract and do not deal with external loads, one can write an equation that relates motoneuron firing rate to eye position and velocity - a very uncommon situation in the CNS. The semicircular canals transduce head velocity in a linear manner by using a high background discharge rate, imparting linearity to the premotor circuits that generate eye movements. This has made it possible to deduce some of the signal processing involved, including a neural network that integrates. These ideas are often summarized by block diagrams. Unfortunately, they are of little value in describing the behavior of single neurons - a finding supported by neural network models.
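The equation alluded to is usually written (our notation, following the standard oculomotor-plant formulation) as

    R = R_0 + k E + r \dot{E}

where R is the motoneuron firing rate, E the eye position, \dot{E} the eye velocity, R_0 the background discharge rate, and k and r the position and velocity sensitivities.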
Tangent Prop - A formalism for specifying selected invariances in an adaptive network
Simard, Patrice, Victorri, Bernard, LeCun, Yann, Denker, John
In many machine learning applications, one has access not only to training data but also to some high-level a priori knowledge about the desired behavior of the system. For example, it is known in advance that the output of a character recognizer should be invariant with respect to small spatial distortions of the input images (translations, rotations, scale changes, etc.). We have implemented a scheme that allows a network to learn the derivative of its outputs with respect to distortion operators of our choosing. This not only reduces the learning time and the amount of training data, but also provides a powerful language for specifying what generalizations we wish the network to perform.

1 INTRODUCTION

In machine learning, one very often knows more about the function to be learned than just the training data. An interesting case is when certain directional derivatives of the desired function are known at certain points.
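As a concrete (if schematic) reading of that idea: if t is a tangent vector generated by a small distortion of the input, the network can be penalized for having a nonzero directional derivative along t. A finite-difference sketch in NumPy (function and parameter names are ours):

import numpy as np

def translation_tangent(img, axis=1):
    # Tangent vector of a horizontal translation: the forward difference
    # of the image approximates d(shifted image)/d(shift) at shift = 0.
    return np.roll(img, 1, axis=axis) - img

def tangent_prop_penalty(f, x, tangent, eps=1e-4):
    # Squared directional derivative of the network output f along the
    # distortion direction; adding this to the usual loss asks f to be
    # locally invariant to that distortion.
    df = (f(x + eps * tangent) - f(x - eps * tangent)) / (2 * eps)
    return float(np.sum(df ** 2))

The paper computes this derivative analytically by a forward propagation rather than by finite differences; the sketch only conveys which quantity is being regularized.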
Kernel Regression and Backpropagation Training With Noise
Koistinen, Petri, Holmström, Lasse
One method proposed for improving the generalization capability of a feedforward network trained with the backpropagation algorithm is to use artificial training vectors obtained by adding noise to the original training vectors. We discuss the connection of such backpropagation training with noise to kernel density and kernel regression estimation. Using simulated examples, we compare (1) backpropagation, (2) backpropagation with noise, and (3) kernel regression in mapping-estimation and pattern-classification contexts.
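A sketch of the two objects being compared, in NumPy (names and parameter values are our assumptions): jittering the training set with Gaussian noise, and the Nadaraya-Watson kernel-regression estimate that the analysis relates it to.

import numpy as np

def jittered_batch(X, y, sigma=0.1, copies=4, seed=0):
    # "Training with noise": replicate each training vector with additive
    # Gaussian noise of scale sigma, which plays the role of a kernel
    # bandwidth in the kernel-estimation view.
    rng = np.random.default_rng(seed)
    Xn = np.repeat(X, copies, axis=0)
    Xn = Xn + sigma * rng.normal(size=Xn.shape)
    return Xn, np.repeat(y, copies, axis=0)

def kernel_regression(X, y, x, h=0.3):
    # Nadaraya-Watson estimate with a Gaussian kernel, the comparison point.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * h ** 2))
    return w @ y / w.sum()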
Hierarchical Transformation of Space in the Visual System
Pouget, Alexandre, Fisher, Stephen A., Sejnowski, Terrence J.
Neurons encoding simple visual features in area V1, such as orientation, direction of motion and color, are organized in retinotopic maps. However, recent physiological experiments have shown that the responses of many neurons in V1 and other cortical areas are modulated by the direction of gaze. We have developed a neural network model of the visual cortex to explore the hypothesis that visual features are encoded in head-centered coordinates at early stages of visual processing. New experiments are suggested for testing this hypothesis using electrical stimulation and psychophysical observations.
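One common way such gaze modulation is modeled (a generic "gain field" unit, not necessarily the paper's network) is a retinotopic tuning curve multiplied by a function of gaze; all names and parameters below are illustrative:

import numpy as np

def gain_field_response(retinal_pos, gaze, pref_ret, gaze_slope, sigma=1.0):
    # Gaussian retinotopic tuning multiplied by a rectified planar gain in
    # gaze direction; a population of such units can carry stimulus
    # position in head-centered coordinates (retinal position + gaze).
    tuning = np.exp(-np.sum((np.asarray(retinal_pos) - pref_ret) ** 2)
                    / (2 * sigma ** 2))
    gain = max(0.0, 1.0 + float(np.dot(gaze_slope, gaze)))
    return tuning * gain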
Refining PID Controllers using Neural Networks
Scott, Gary M., Shavlik, Jude W., Ray, W. Harmon
We apply a method that refines a PID controller with a neural network to the task of controlling the outflow and temperature of a water tank, producing statistically significant gains in accuracy over both a standard neural network approach and a non-learning PID controller. Furthermore, using the PID knowledge to initialize the weights of the network produces statistically less variation in test-set accuracy when compared to networks initialized with small random numbers.
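For reference, the PID law being refined, in textbook discrete form (gains and names are ours); the initialization idea amounts to seeding network weights from gains like these rather than from small random numbers:

class PID:
    # Discrete PID control law: u = Kp*e + Ki*integral(e) + Kd*de/dt.
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, None

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv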