Simplifying Neural Nets by Discovering Flat Minima

Neural Information Processing Systems

We present a new algorithm for finding low-complexity networks with high generalization capability. The algorithm searches for large connected regions of so-called "flat" minima of the error function. In the weight-space environment of a "flat" minimum, the error remains approximately constant. Using an MDL-based argument, flat minima can be shown to correspond to low expected overfitting. Although our algorithm requires the computation of second-order derivatives, it has backprop's order of complexity. Experiments with feedforward and recurrent nets are described. In an application to stock market prediction, the method outperforms conventional backprop, weight decay, and "optimal brain surgeon".
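
A minimal sketch of the notion of flatness being exploited here, not the authors' algorithm: a minimum is flat if the error stays roughly constant when the weights are perturbed within a small region. All names and the toy losses below are illustrative assumptions.

```python
# Probe the "flatness" of a minimum by measuring how much the loss rises
# under random weight perturbations within a small box; flat minima
# tolerate larger perturbations, which is what the MDL argument rewards.
import numpy as np

def flatness_score(loss_fn, w_star, radius=0.05, n_samples=100, seed=0):
    """Largest observed loss increase under perturbations of the given
    radius; smaller values mean a flatter minimum."""
    rng = np.random.default_rng(seed)
    base = loss_fn(w_star)
    worst = 0.0
    for _ in range(n_samples):
        delta = rng.uniform(-radius, radius, size=w_star.shape)
        worst = max(worst, loss_fn(w_star + delta) - base)
    return worst

# Toy example: a sharp quadratic bowl vs. a flat one.
sharp = lambda w: 50.0 * np.sum(w ** 2)
flat = lambda w: 0.5 * np.sum(w ** 2)
w0 = np.zeros(10)
print(flatness_score(sharp, w0), ">", flatness_score(flat, w0))
```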


The Electrotonic Transformation: a Tool for Relating Neuronal Form to Function

Neural Information Processing Systems

The spatial distribution and time course of electrical signals in neurons have important theoretical and practical consequences. Because it is difficult to infer how neuronal form affects electrical signaling, we have developed a quantitative yet intuitive approach to the analysis of electrotonus. This approach transforms the architecture of the cell from anatomical to electrotonic space, using the logarithm of voltage attenuation as the distance metric. We describe the theory behind this approach and illustrate its use.

1 INTRODUCTION

The fields of computational neuroscience and artificial neural nets have enjoyed a mutually beneficial exchange of ideas. This has been most evident at the network level, where concepts such as massive parallelism, lateral inhibition, and recurrent excitation have inspired both the analysis of brain circuits and the design of artificial neural net architectures. Less attention has been given to how properties of the individual neurons or processing elements contribute to network function.
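
A small illustration of why the logarithm of voltage attenuation works as a distance metric: attenuations multiply along a neurite, so their logarithms add, giving an additive path length. The per-segment attenuation values below are assumed for the example; this is not the authors' code.

```python
# Electrotonic distance as the sum of log attenuations along a path.
import math

def electrotonic_distance(attenuations):
    """attenuations: per-segment voltage ratios V_in / V_out, each >= 1.
    Returns the electrotonic length of the whole path."""
    return sum(math.log(a) for a in attenuations)

# Two segments that each attenuate the signal to half its amplitude:
path = [2.0, 2.0]
print(electrotonic_distance(path))  # == log(4), the log of the total attenuation
```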


Learning Stochastic Perceptrons Under k-Blocking Distributions

Neural Information Processing Systems

Such distributions represent an important step beyond the case where each input variable is statistically independent, since the 2k-blocking family contains all the Markov distributions of order k. By stochastic perceptron we mean a perceptron which, upon presentation of input vector x, outputs 1 with probability $f\left(\sum_i w_i x_i - \theta\right)$.
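
A minimal sketch of the stochastic perceptron's output rule. The logistic sigmoid used for f below is an assumption made purely for illustration; the abstract only specifies the general form.

```python
# Stochastic perceptron: output 1 with probability f(sum_i w_i * x_i - theta).
import numpy as np

def stochastic_perceptron(x, w, theta, rng):
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) - theta)))  # f = logistic sigmoid
    return int(rng.random() < p)

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 1.0])
w = np.array([0.5, -0.2, 0.8])
print(stochastic_perceptron(x, w, theta=0.3, rng=rng))
```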


A Comparison of Discrete-Time Operator Models for Nonlinear System Identification

Neural Information Processing Systems

We present a unifying view of discrete-time operator models used in the context of finite word length linear signal processing. Comparisons are made between the recently presented gamma operator model and the delta and rho operator models for performing nonlinear system identification and prediction using neural networks. A new model based on an adaptive bilinear transformation which generalizes all of the above models is presented.
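
As a hedged sketch of one of the models being compared: the gamma operator is built on the gamma memory of de Vries and Principe, a cascade of leaky delay stages with a single adaptable parameter mu. The recursion below follows that formulation; stage counts and signal names are illustrative.

```python
# Gamma memory: x_0[n] = u[n]; x_k[n] = (1-mu)*x_k[n-1] + mu*x_{k-1}[n-1].
# With mu = 1 each stage reduces to a plain unit delay (a tapped delay line).
import numpy as np

def gamma_memory(u, order, mu):
    """Run input sequence u through a cascade of `order` gamma stages.
    Returns an array of shape (len(u), order + 1); tap 0 is the input."""
    taps = np.zeros(order + 1)
    out = np.zeros((len(u), order + 1))
    for n, u_n in enumerate(u):
        prev = taps.copy()          # taps at time n-1
        taps[0] = u_n
        for k in range(1, order + 1):
            taps[k] = (1.0 - mu) * prev[k] + mu * prev[k - 1]
        out[n] = taps
    return out

signal = np.sin(np.linspace(0.0, 6.0, 50))
taps = gamma_memory(signal, order=3, mu=0.5)  # smoothed, progressively delayed copies
```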


Learning direction in global motion: two classes of psychophysically-motivated models

Neural Information Processing Systems

Perceptual learning is defined as fast improvement in performance and retention of the learned ability over a period of time. In a set of psychophysical experiments we demonstrated that perceptual learning occurs for the discrimination of direction in stochastic motion stimuli. Here we model this learning using two approaches: a clustering model that learns to accommodate the motion noise, and an averaging model that learns to ignore the noise. Simulations of the models show performance similar to the psychophysical results.

1 Introduction

Global motion perception is critical to many visual tasks: to perceive self-motion, to identify objects in motion, to determine the structure of the environment, and to make judgements for safe navigation. In the presence of noise, as in random dot kinematograms, efficient extraction of global motion involves considerable spatial integration. Newsome and colleagues (1989) showed that neurons in the macaque middle temporal area (MT) are motion direction-selective, and perform global integration of motion in their large receptive fields. Psychophysical studies in humans have characterized the limits of spatial and temporal integration in motion (Watamaniuk et al., 1984) and the nature of the underlying motion computations (Vaina et al.).


Connectionist Speaker Normalization with Generalized Resource Allocating Networks

Neural Information Processing Systems

The paper presents a rapid speaker-normalization technique based on neural network spectral mapping. The neural network is used as a front end of a continuous speech recognition system (speaker-dependent, HMM-based) to normalize the input acoustic data from a new speaker. The spectral difference between speakers can be reduced using a limited amount of new acoustic data (40 phonetically rich sentences). Recognition error on phone units from the acoustic-phonetic continuous speech corpus APASCI is decreased, with an adaptability ratio of 25%. We used local basis networks of elliptical Gaussian kernels, with recursive allocation of units and online optimization of parameters (the GRAN model). For this application, the model included a linear term. The results compare favorably with multivariate linear mapping based on constrained orthonormal transformations.
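
A hedged sketch of the form such a spectral mapping can take: a sum of elliptical (per-dimension width) Gaussian basis functions plus the linear term mentioned in the abstract. All shapes, names, and parameter values below are illustrative assumptions, not the paper's implementation.

```python
# Spectral mapping as elliptical Gaussian kernels plus a linear term.
import numpy as np

def gran_map(x, centers, widths, coeffs, A, b):
    """x: (d,) input spectrum; centers, widths: (m, d); coeffs: (m, d_out);
    A: (d_out, d); b: (d_out,). Returns the normalized (d_out,) spectrum."""
    z = (x - centers) / widths
    phi = np.exp(-0.5 * np.sum(z ** 2, axis=1))  # (m,) kernel activations
    return phi @ coeffs + A @ x + b              # kernel part + linear term

d, d_out, m = 8, 8, 4
rng = np.random.default_rng(0)
y = gran_map(rng.normal(size=d), rng.normal(size=(m, d)),
             np.ones((m, d)), rng.normal(size=(m, d_out)) * 0.1,
             np.eye(d_out, d), np.zeros(d_out))
```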


Real-Time Control of a Tokamak Plasma Using Neural Networks

Neural Information Processing Systems

This paper presents results from the first use of neural networks for the real-time feedback control of high-temperature plasmas in a tokamak fusion experiment. The tokamak is currently the principal experimental device for research into the magnetic confinement approach to controlled fusion. In the tokamak, hydrogen plasmas, at temperatures of up to 100 million K, are confined by strong magnetic fields. Accurate control of the position and shape of the plasma boundary requires real-time feedback control of the magnetic field structure on a timescale of a few tens of microseconds. Software simulations have demonstrated that a neural network approach can give significantly better performance than the linear technique currently used on most tokamak experiments. The practical application of the neural network approach requires high-speed hardware, for which a fully parallel implementation of the multilayer perceptron, using a hybrid of digital and analogue technology, has been developed.
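
A minimal sketch of the computation the parallel hardware implements: one forward pass of a multilayer perceptron mapping magnetic measurements to control signals. Layer sizes and the tanh nonlinearity below are illustrative assumptions.

```python
# One forward pass of a two-layer perceptron: measurements in, controls out.
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    h = np.tanh(W1 @ x + b1)  # hidden layer (evaluated fully in parallel in hardware)
    return W2 @ h + b2        # linear output layer

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 16, 4, 4
y = mlp_forward(rng.normal(size=n_in),
                rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden),
                rng.normal(size=(n_out, n_hidden)), np.zeros(n_out))
```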


Learning Prototype Models for Tangent Distance

Neural Information Processing Systems

Local algorithms such as K-nearest neighbor (NN) perform well in pattern recognition, even though they often assume the simplest distance on the pattern space. It has recently been shown (Simard et al., 1993) that the performance can be further improved by incorporating invariance to specific transformations in the underlying distance metric, the so-called tangent distance. The resulting classifier, however, can be prohibitively slow and memory-intensive due to the large number of prototypes that need to be stored and used in the distance comparisons. In this paper we address this problem for the tangent distance algorithm by developing rich models for representing large subsets of the prototypes. Our leading example of a prototype model is a low-dimensional (12) hyperplane defined by a point and a set of basis or tangent vectors.
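
A sketch of the distance computation such a prototype model implies: the distance from a pattern x to the affine subspace defined by a point p and tangent vectors T is a small least-squares problem. Dimensions and names below are illustrative.

```python
# One-sided distance from x to the hyperplane p + span(T):
# minimize ||x - (p + T a)|| over the coefficient vector a.
import numpy as np

def hyperplane_distance(x, p, T):
    """T has one tangent vector per column."""
    a, *_ = np.linalg.lstsq(T, x - p, rcond=None)  # best coefficients
    return np.linalg.norm(x - (p + T @ a))

rng = np.random.default_rng(0)
p = rng.normal(size=256)            # prototype "point" (e.g. a 16x16 image)
T = rng.normal(size=(256, 12))      # 12 tangent vectors, as in the paper
x = p + T @ rng.normal(size=12) + 0.01 * rng.normal(size=256)
print(hyperplane_distance(x, p, T))  # small: x lies near the model
```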


A Computational Model of Prefrontal Cortex Function

Neural Information Processing Systems

Accumulating data from neurophysiology and neuropsychology have suggested two information-processing roles for prefrontal cortex (PFC): 1) short-term active memory; and 2) inhibition. We present a new behavioral task and a computational model which were developed in parallel. The task was developed to probe both of these prefrontal functions simultaneously, and produces a rich set of behavioral data that act as constraints on the model. The model is implemented in continuous time, thus providing a natural framework in which to study the temporal dynamics of processing in the task. We show how the model can be used to examine the behavioral consequences of neuromodulation in PFC. Specifically, we use the model to make novel and testable predictions regarding the behavioral performance of schizophrenics, who are hypothesized to suffer from reduced dopaminergic tone in this brain area.


Instance-Based State Identification for Reinforcement Learning

Neural Information Processing Systems

This paper presents instance-based state identification, an approach to reinforcement learning and hidden state that builds disambiguating amounts of short-term memory online, and also learns with an order of magnitude fewer training steps than several previous approaches. Inspired by a key similarity between learning with hidden state and learning in continuous geometrical spaces, this approach uses instance-based (or "memory-based") learning, a method that has worked well in continuous spaces.

1 BACKGROUND AND RELATED WORK

When a robot's next course of action depends on information that is hidden from the sensors because of problems such as occlusion, restricted range, bounded field of view and limited attention, the robot suffers from hidden state. More formally, we say a reinforcement learning agent suffers from the hidden state problem if the agent's state representation is non-Markovian with respect to actions and utility. The hidden state problem arises as a case of perceptual aliasing: the mapping between states of the world and sensations of the agent is not one-to-one [Whitehead, 1992]. If the agent's perceptual system produces the same outputs for two world states in which different actions are required, and if the agent's state representation consists only of its percepts, then the agent will fail to choose correct actions.
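
A greatly simplified sketch of the core idea: two identical percepts can still be told apart by how the agent reached them, so the current hidden state is identified by matching the recent action/percept history against stored instances. The suffix-matching rule and data layout below are illustrative assumptions, not the paper's exact algorithm.

```python
# Disambiguate hidden state by nearest-neighbor matching over histories.

def history_match(h1, h2):
    """Score two histories (lists of (action, percept) pairs, newest last)
    by the length of their common suffix."""
    n = 0
    while n < min(len(h1), len(h2)) and h1[-1 - n] == h2[-1 - n]:
        n += 1
    return n

def identify_state(current_history, stored_instances):
    """Return the stored instance whose history best matches the present one."""
    return max(stored_instances,
               key=lambda inst: history_match(current_history, inst["history"]))

stored = [
    {"history": [("fwd", "wall"), ("left", "door")], "value": 1.0},
    {"history": [("left", "wall"), ("left", "door")], "value": -0.5},
]
now = [("fwd", "wall"), ("left", "door")]
print(identify_state(now, stored)["value"])  # 1.0: same percept, different past
```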