Pulsestream Synapses with Non-Volatile Analogue Amorphous-Silicon Memories
Holmes, A. J., Murray, Alan F., Churcher, Stephen, Hajto, J., Rose, M. J.
This paper presents results from the first use of neural networks for the real-time feedback control of high temperature plasmas in a tokamak fusion experiment. The tokamak is currently the principal experimental device for research into the magnetic confinement approach to controlled fusion. In the tokamak, hydrogen plasmas, at temperatures of up to 100 million K, are confined by strong magnetic fields. Accurate control of the position and shape of the plasma boundary requires real-time feedback control of the magnetic field structure on a timescale of a few tens of microseconds. Software simulations have demonstrated that a neural network approach can give significantly better performance than the linear technique currently used on most tokamak experiments. The practical application of the neural network approach requires high-speed hardware, for which a fully parallel implementation of the multilayer perceptron, using a hybrid of digital and analogue technology, has been developed.
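For readers unfamiliar with the network the hardware evaluates, the sketch below shows a plain two-layer perceptron forward pass in NumPy. It only illustrates the computation that the hybrid digital/analogue implementation performs in parallel; the layer sizes, the tanh nonlinearity, and the input/output dimensions are assumptions for the example, not values from the paper.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a two-layer perceptron: the computation that a
    fully parallel hardware implementation would evaluate per sample."""
    h = np.tanh(W1 @ x + b1)   # hidden layer (tanh assumed for illustration)
    return W2 @ h + b2         # linear output: control signals

# Illustrative sizes only: 16 magnetic diagnostics in, 4 shape parameters out.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 16)), np.zeros(32)
W2, b2 = rng.normal(size=(4, 32)), np.zeros(4)
print(mlp_forward(rng.normal(size=16), W1, b1, W2, b2))
```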
Learning Prototype Models for Tangent Distance
Hastie, Trevor, Simard, Patrice
Local algorithms such as K-nearest neighbor (NN) perform well in pattern recognition, even though they often assume the simplest distance on the pattern space. It has recently been shown (Simard et al. 1993) that the performance can be further improved by incorporating invariance to specific transformations in the underlying distance metric - the so-called tangent distance. The resulting classifier, however, can be prohibitively slow and memory intensive due to the large number of prototypes that need to be stored and used in the distance comparisons. In this paper we address this problem for the tangent distance algorithm by developing rich models for representing large subsets of the prototypes. Our leading example of a prototype model is a low-dimensional (12) hyperplane defined by a point and a set of basis or tangent vectors.
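A prototype model of this kind is an affine subspace, a point plus the span of its tangent vectors. The NumPy sketch below computes the distance from a test pattern to such a model as the residual left after projecting onto the tangent subspace; the pattern dimension, the 12-vector basis, and the least-squares solver are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def prototype_distance(x, p, T):
    """Distance from pattern x to the prototype model {p + T @ c}:
    find the best tangent coefficients c by least squares, then return
    the norm of the unexplained residual."""
    c, *_ = np.linalg.lstsq(T, x - p, rcond=None)
    return np.linalg.norm(x - p - T @ c)

# Example: a 256-pixel pattern, a prototype point, and 12 tangent vectors.
rng = np.random.default_rng(1)
x, p, T = rng.normal(size=256), rng.normal(size=256), rng.normal(size=(256, 12))
print(prototype_distance(x, p, T))
```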
A Computational Model of Prefrontal Cortex Function
Braver, Todd S., Cohen, Jonathan D., Servan-Schreiber, David
Accumulating data from neurophysiology and neuropsychology have suggested two information processing roles for prefrontal cortex (PFC): 1) short-term active memory; and 2) inhibition. We present a new behavioral task and a computational model which were developed in parallel. The task was developed to probe both of these prefrontal functions simultaneously, and produces a rich set of behavioral data that act as constraints on the model. The model is implemented in continuous time, thus providing a natural framework in which to study the temporal dynamics of processing in the task. We show how the model can be used to examine the behavioral consequences of neuromodulation in PFC. Specifically, we use the model to make novel and testable predictions regarding the behavioral performance of schizophrenics, who are hypothesized to suffer from reduced dopaminergic tone in this brain area.
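The abstract notes only that the model runs in continuous time. As a generic illustration of what that means for a connectionist unit (not the paper's specific equations), the fragment below integrates a standard leaky-integrator activation with Euler steps; the time constant, gain, and input are assumed values.

```python
import numpy as np

def leaky_integrator_step(a, net_input, dt=1.0, tau=50.0):
    """One Euler step of a continuous-time unit:
    tau * da/dt = -a + sigmoid(net_input)."""
    target = 1.0 / (1.0 + np.exp(-net_input))
    return a + (dt / tau) * (-a + target)

a = 0.0
for _ in range(500):                 # drive the unit with a constant input
    a = leaky_integrator_step(a, net_input=2.0)
print(a)                             # settles near sigmoid(2.0) ≈ 0.88
```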
Instance-Based State Identification for Reinforcement Learning
McCallum, R. Andrew
This paper presents instance-based state identification, an approach to reinforcement learning and hidden state that builds disambiguating amounts of short-term memory online, and also learns with an order of magnitude fewer training steps than several previous approaches. Inspired by a key similarity between learning with hidden state and learning in continuous geometrical spaces, this approach uses instance-based (or "memory-based") learning, a method that has worked well in continuous spaces.
1 BACKGROUND AND RELATED WORK
When a robot's next course of action depends on information that is hidden from the sensors because of problems such as occlusion, restricted range, bounded field of view and limited attention, the robot suffers from hidden state. More formally, we say a reinforcement learning agent suffers from the hidden state problem if the agent's state representation is non-Markovian with respect to actions and utility. The hidden state problem arises as a case of perceptual aliasing: the mapping between states of the world and sensations of the agent is not one-to-one [Whitehead, 1992]. If the agent's perceptual system produces the same outputs for two world states in which different actions are required, and if the agent's state representation consists only of its percepts, then the agent will fail to choose correct actions.
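As a rough illustration of the instance-based idea (not the paper's exact algorithm), the sketch below stores experience as raw instances and chooses actions by nearest-neighbor voting over a simple suffix-match distance on recent percept histories; the distance measure, the voting rule, and the toy percepts are all assumptions made for the example.

```python
def history_match(h1, h2):
    """Length of the common suffix of two percept histories."""
    n = 0
    while n < min(len(h1), len(h2)) and h1[-1 - n] == h2[-1 - n]:
        n += 1
    return n

def choose_action(history, instances, actions, k=3):
    """Pick the action whose k best-matching stored instances promise the
    highest average reward.  Each instance is (history, action, reward)."""
    best_action, best_value = None, float("-inf")
    for a in actions:
        matches = sorted(
            (inst for inst in instances if inst[1] == a),
            key=lambda inst: history_match(history, inst[0]),
            reverse=True,
        )[:k]
        if matches:
            value = sum(r for _, _, r in matches) / len(matches)
            if value > best_value:
                best_action, best_value = a, value
    return best_action

# Tiny example: percept histories are tuples of symbols, two candidate actions.
instances = [(("door",), "go", 1.0),
             (("wall",), "go", -1.0),
             (("wall",), "turn", 0.5)]
print(choose_action(("wall",), instances, ["go", "turn"]))   # -> "turn"
```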
Efficient Methods for Dealing with Missing Data in Supervised Learning
Tresp, Volker, Neuneier, Ralph, Ahmad, Subutai
We present efficient algorithms for dealing with the problem of missing inputs (incomplete feature vectors) during training and recall. Our approach is based on the approximation of the input data distribution using Parzen windows. For recall, we obtain closed form solutions for arbitrary feedforward networks. For training, we show how the backpropagation step for an incomplete pattern can be approximated by a weighted averaged backpropagation step. The complexity of the solutions for training and recall is independent of the number of missing features.
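The paper derives closed-form recall solutions; the sketch below is only a simplified stand-in that imputes missing inputs with their kernel-weighted expectation under a Gaussian Parzen-window estimate conditioned on the observed inputs, after which the completed vector could be fed to the network. The kernel width and the imputation step are assumptions, not the authors' formulas.

```python
import numpy as np

def parzen_fill(x, observed_mask, train_data, sigma=1.0):
    """Impute missing entries of x as a kernel-weighted average of the
    training points, weighted by Gaussian similarity on the observed entries."""
    obs = observed_mask
    d2 = ((train_data[:, obs] - x[obs]) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum()
    x_filled = x.copy()
    x_filled[~obs] = w @ train_data[:, ~obs]
    return x_filled

rng = np.random.default_rng(2)
train = rng.normal(size=(100, 4))
x = np.array([0.5, np.nan, -0.2, np.nan])
mask = ~np.isnan(x)
print(parzen_fill(x, mask, train))
```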
A Rigorous Analysis of Linsker-type Hebbian Learning
Feng, J., Pan, H., Roychowdhury, V. P.
Linsker's simulations have shown that, for appropriate parameter regimes, several structured connection patterns (e.g., centre-surround and oriented afferent receptive fields (aRFs)) occur progressively as the Hebbian evolution of the weights is carried out layer by layer. The behavior of Linsker's model is determined by the underlying nonlinear dynamics, which are parameterized by a set of parameters originating from the Hebbian rule and the arbor density of the synapses.
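To make the parameterized dynamics concrete, the sketch below integrates the commonly quoted form of Linsker-type weight dynamics, dw/dt = (Q + k2) w + k1 with hard limits on the weights. The covariance matrix Q, the constants k1 and k2, the step size, and the clipping bounds are illustrative assumptions rather than values from this analysis.

```python
import numpy as np

def linsker_step(w, Q, k1, k2, eta=0.01, w_max=1.0):
    """One Euler step of Hebbian dynamics of the form
    dw/dt = (Q + k2) @ w + k1, with weights clipped to [-w_max, w_max]."""
    dw = (Q + k2) @ w + k1          # adding scalar k2 to Q acts like k2 * ones
    return np.clip(w + eta * dw, -w_max, w_max)

rng = np.random.default_rng(3)
n = 25                               # toy afferent receptive field of 25 inputs
A = rng.normal(size=(n, n))
Q = A @ A.T / n                      # a positive semi-definite "covariance"
w = rng.uniform(-0.1, 0.1, size=n)
for _ in range(500):
    w = linsker_step(w, Q, k1=-0.1, k2=-0.5)
print(np.round(w, 2))                # weights saturate into a structured pattern
```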
Finding Structure in Reinforcement Learning
Thrun, Sebastian, Schwartz, Anton
Reinforcement learning addresses the problem of learning to select actions in order to maximize one's performance in unknown environments. To scale reinforcement learning to complex real-world tasks, such as those typically studied in AI, one must ultimately be able to discover the structure in the world, in order to abstract away the myriad of details and to operate in more tractable problem spaces. This paper presents the SKILLS algorithm. SKILLS discovers skills, which are partially defined action policies that arise in the context of multiple, related tasks.
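The notion of a skill as a partially defined action policy shared across tasks can be pictured with a small data structure: a policy specified only on a subset of states, consulted before each task's own policy. The fragment below is just such a schematic, not the SKILLS algorithm itself; the class names and the toy states are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A partially defined policy: actions are specified only on `states`."""
    states: set = field(default_factory=set)
    policy: dict = field(default_factory=dict)   # state -> action

    def applies(self, state):
        return state in self.states

def act(state, skills, task_policy):
    """Use a shared skill where one applies, else fall back to the
    task-specific policy."""
    for skill in skills:
        if skill.applies(state):
            return skill.policy[state]
    return task_policy[state]

corridor = Skill(states={"hall_1", "hall_2"},
                 policy={"hall_1": "forward", "hall_2": "forward"})
print(act("hall_1", [corridor], task_policy={"room": "turn_left"}))
```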
Advantage Updating Applied to a Differential Game
Harmon, Mance E., Baird, Leemon C. III, Klopf, A. Harry
An application of reinforcement learning to a linear-quadratic, differential game is presented. The reinforcement learning system uses a recently developed algorithm, the residual gradient form of advantage updating. The game is a Markov Decision Process (MDP) with continuous time, states, and actions, linear dynamics, and a quadratic cost function. The game consists of two players, a missile and a plane; the missile pursues the plane and the plane evades the missile. The reinforcement learning algorithm for optimal control is modified for differential games in order to find the minimax point, rather than the maximum. Simulation results are compared to the optimal solution, demonstrating that the simulated reinforcement learning system converges to the optimal answer. The performance of both the residual gradient and non-residual gradient forms of advantage updating and Q-learning is compared. The results show that advantage updating converges faster than Q-learning in all simulations.
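The minimax modification can be illustrated with a tabular update toward a max-min target, as in the sketch below. This is a simplified stand-in, not the residual-gradient advantage-updating algorithm of the paper, and the discretisation, learning rate, and discount are assumed values.

```python
import numpy as np

def minimax_q_update(Q, s, a_m, a_p, r, s_next, alpha=0.1, gamma=0.95):
    """Tabular update toward the minimax target: the missile maximises,
    the plane minimises.  Q has shape (states, missile_actions, plane_actions)."""
    target = r + gamma * np.max(np.min(Q[s_next], axis=1))
    Q[s, a_m, a_p] += alpha * (target - Q[s, a_m, a_p])

Q = np.zeros((10, 3, 3))                      # toy discretisation of the game
minimax_q_update(Q, s=0, a_m=1, a_p=2, r=1.0, s_next=3)
print(Q[0, 1, 2])
```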
Morphogenesis of the Lateral Geniculate Nucleus: How Singularities Affect Global Structure
Tzonev, Svilen, Schulten, Klaus, Malpeli, Joseph G.
The macaque lateral geniculate nucleus (LGN) exhibits an intricate lamination pattern, which changes midway through the nucleus at a point coincident with small gaps due to the blind spot in the retina. We present a three-dimensional model of morphogenesis in which local cell interactions cause a wave of development of neuronal receptive fields to propagate through the nucleus and establish two distinct lamination patterns. We examine the interactions between the wave and the localized singularities due to the gaps, and find that the gaps induce the change in lamination pattern. We explore critical factors which determine general LGN organization.