Spike-based Learning Rules and Stabilization of Persistent Neural Activity
Xie, Xiaohui, Seung, H. Sebastian
We analyze the conditions under which synaptic learning rules based on action potential timing can be approximated by learning rules based on firing rates. In particular, we consider a form of plasticity in which synapses depress when a presynaptic spike is followed by a postsynaptic spike, and potentiate with the opposite temporal ordering. Such differential anti-Hebbian plasticity can be approximated under certain conditions by a learning rule that depends on the time derivative of the postsynaptic firing rate. Such a learning rule acts to stabilize persistent neural activity patterns in recurrent neural networks.
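As an illustration of the rate-based approximation described above, here is a minimal NumPy sketch, assuming the rule takes the form dW/dt ∝ -ν_pre · dν_post/dt; the function name, learning rate, and time step are illustrative choices, not taken from the paper.

```python
import numpy as np

# Minimal sketch of a rate-based differential anti-Hebbian update,
# assuming the approximation takes the form dW/dt ∝ -nu_pre * d(nu_post)/dt.
# Parameter names and values (eta, dt) are illustrative, not from the paper.

def differential_anti_hebbian_step(W, nu_pre, nu_post, nu_post_prev, dt=1e-3, eta=0.01):
    """One Euler step of the rate-based learning rule.

    W            : (n_post, n_pre) weight matrix
    nu_pre       : (n_pre,)  presynaptic firing rates at time t
    nu_post      : (n_post,) postsynaptic firing rates at time t
    nu_post_prev : (n_post,) postsynaptic firing rates at time t - dt
    """
    d_nu_post = (nu_post - nu_post_prev) / dt   # time derivative of postsynaptic rate
    dW = -eta * np.outer(d_nu_post, nu_pre)     # depress when the postsynaptic rate is rising
    return W + dW * dt
```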
Evolving Learnable Languages
Tonkes, Bradley, Blair, Alan, Wiles, Janet
Recent theories suggest that language acquisition is assisted by the evolution of languages towards forms that are easily learnable. In this paper, we evolve combinatorial languages which can be learned by a recurrent neural network quickly and from relatively few examples. Additionally, we evolve languages for generalization in different "worlds", and for generalization from specific examples.
Constructing Heterogeneous Committees Using Input Feature Grouping: Application to Economic Forecasting
Liao, Yuansong, Moody, John E.
The committee approach has been proposed for reducing model uncertainty and improving generalization performance. The advantage of committees depends on (1) the performance of individual members and (2) the correlational structure of errors between members. This paper presents an input grouping technique for designing a heterogeneous committee. With this technique, all input variables are first grouped based on their mutual information. Statistically similar variables are assigned to the same group.
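A rough sketch of the input-grouping step is given below, assuming pairwise mutual information is estimated from discretized variables and that groups are formed by agglomerative clustering over an MI-derived distance; the abstract only states that variables are grouped by mutual information, so the binning and clustering choices here are assumptions.

```python
import numpy as np
from sklearn.metrics import mutual_info_score
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Sketch of grouping input variables by pairwise mutual information.
# The discretization and the agglomerative-clustering step are assumptions;
# the paper specifies only that statistically similar variables share a group.

def group_inputs_by_mutual_info(X, n_groups=3, n_bins=10):
    """X: (n_samples, n_features) input matrix. Returns one group label per feature."""
    n_features = X.shape[1]
    # Discretize each variable so mutual information can be estimated from counts.
    X_binned = np.stack(
        [np.digitize(X[:, j], np.histogram_bin_edges(X[:, j], bins=n_bins))
         for j in range(n_features)], axis=1)
    mi = np.zeros((n_features, n_features))
    for i in range(n_features):
        for j in range(i + 1, n_features):
            mi[i, j] = mi[j, i] = mutual_info_score(X_binned[:, i], X_binned[:, j])
    dist = mi.max() - mi              # high mutual information -> small distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=n_groups, criterion="maxclust")
```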
Bayesian Modelling of fMRI Time Series
Højen-Sørensen, Pedro A. d. F. R., Hansen, Lars Kai, Rasmussen, Carl Edward
We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte Carlo (MCMC) sampling techniques. The advantage of this method is that detection of short time learning effects between repeated trials is possible since inference is based only on single trial experiments.
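For intuition only, here is a minimal two-state forward-filtering sketch with Gaussian emissions; the paper's actual inference is Bayesian, combining analytical steps with MCMC sampling, and all parameter values below are illustrative assumptions.

```python
import numpy as np

# Minimal forward filtering for a two-state ("rest"/"activated") HMM with Gaussian
# emissions, as a simplified stand-in for the paper's Bayesian MCMC inference.
# Transition matrix, emission means, and noise level are assumed values.

def hmm_filter(y, trans, means, sigma=1.0, prior=(0.5, 0.5)):
    """y: 1-D fMRI-like signal; returns P(state = activated | y_1..t) for each t."""
    alpha = np.array(prior, dtype=float)
    posteriors = []
    for obs in y:
        lik = np.exp(-0.5 * ((obs - means) / sigma) ** 2)  # Gaussian emission likelihoods
        alpha = lik * (trans.T @ alpha)                    # predict, then update
        alpha /= alpha.sum()
        posteriors.append(alpha[1])
    return np.array(posteriors)

trans = np.array([[0.95, 0.05],
                  [0.05, 0.95]])   # sticky transitions between rest and task blocks
means = np.array([0.0, 1.0])       # assumed baseline vs. activated signal levels
```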
Recognizing Evoked Potentials in a Virtual Environment
Bayliss, Jessica D., Ballard, Dana H.
Virtual reality (VR) provides immersive and controllable experimental environments. It expands the bounds of possible evoked potential (EP) experiments by providing complex, dynamic environments in order to study cognition without sacrificing environmental control. VR also serves as a safe dynamic testbed for brain-computer interface (BCI) research.
An Oscillatory Correlation Framework for Computational Auditory Scene Analysis
Brown, Guy J., Wang, DeLiang L.
A neural model is described which uses oscillatory correlation to segregate speech from interfering sound sources. The core of the model is a two-layer neural oscillator network. A sound stream is represented by a synchronized population of oscillators, and different streams are represented by desynchronized oscillator populations. The model has been evaluated using a corpus of speech mixed with interfering sounds, and produces an improvement in signal-to-noise ratio for every mixture. Speech is seldom heard in isolation: usually, it is mixed with other environmental sounds. Hence, the auditory system must parse the acoustic mixture reaching the ears in order to retrieve a description of each sound source, a process termed auditory scene analysis (ASA) [2]. Conceptually, ASA may be regarded as a two-stage process.
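The following is a highly simplified phase-oscillator sketch of the oscillatory-correlation principle (attractive coupling within a stream, repulsive coupling between streams); the model in the paper uses a two-layer network of relaxation oscillators, so this Kuramoto-style code and its coupling constants are only an illustration of the grouping idea.

```python
import numpy as np

# Simplified illustration of oscillatory correlation: oscillators belonging to the
# same stream are coupled attractively and synchronize, while oscillators in
# different streams repel and drift out of phase. This is NOT the paper's
# relaxation-oscillator network; all constants are assumptions.

def simulate_oscillators(groups, n_steps=2000, dt=0.01,
                         k_local=2.0, k_cross=0.5, omega=2 * np.pi):
    """groups: array of stream labels, one per oscillator (e.g. one per frequency channel)."""
    rng = np.random.default_rng(0)
    n = len(groups)
    phase = rng.uniform(0, 2 * np.pi, n)
    same = (groups[:, None] == groups[None, :]).astype(float)
    for _ in range(n_steps):
        diff = phase[None, :] - phase[:, None]                          # diff[i, j] = theta_j - theta_i
        attract = k_local * (same * np.sin(diff)).sum(axis=1) / n       # pull same-stream oscillators together
        repel = -k_cross * ((1 - same) * np.sin(diff)).sum(axis=1) / n  # push different streams apart
        phase += dt * (omega + attract + repel)
    return phase % (2 * np.pi)

phases = simulate_oscillators(np.array([0, 0, 0, 1, 1, 1]))
```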
Policy Gradient Methods for Reinforcement Learning with Function Approximation
Sutton, Richard S., McAllester, David A., Singh, Satinder P., Mansour, Yishay
Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy.
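To make the gradient form concrete, here is a minimal REINFORCE-style sketch using a softmax policy on a toy two-armed bandit, with the sampled return standing in for an approximate action-value function; the environment, step size, and episode count are placeholders, not from the paper.

```python
import numpy as np

# Minimal REINFORCE-style sketch of the policy-gradient estimate
#   grad J ≈ E[ grad log pi(a | theta) * G ],
# where the sampled return G replaces an approximate action-value function.
# The two-armed bandit rewards and the step size alpha are illustrative.

rng = np.random.default_rng(0)
theta = np.zeros(2)                  # one preference parameter per action
true_means = np.array([0.0, 1.0])    # hypothetical mean rewards of the two arms
alpha = 0.1

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for episode in range(500):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)
    G = rng.normal(true_means[a], 1.0)   # return of this one-step episode
    grad_log_pi = -pi                    # d log pi(a) / d theta for a softmax policy ...
    grad_log_pi[a] += 1.0                # ... equals indicator(k == a) - pi_k
    theta += alpha * G * grad_log_pi     # stochastic gradient ascent on expected reward

print("learned action probabilities:", softmax(theta))
```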