Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity
Recent biological experimental findings have shown that synaptic plasticity depends on the relative timing of the pre- and postsynaptic spikes, which determines whether Long Term Potentiation (LTP) or Long Term Depression (LTD) occurs. This form of synaptic plasticity has been called "Temporally Asymmetric Hebbian plasticity (TAH)". Many authors have numerically shown that spatiotemporal patterns can be stored in neural networks.
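As a rough illustration of the timing dependence described above (not the self-regulation mechanism analyzed in the paper), the Python sketch below implements the standard exponential TAH/STDP window; the amplitudes and time constants are illustrative assumptions.

```python
import numpy as np

def stdp_weight_change(delta_t, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Temporally asymmetric Hebbian (STDP) window.

    delta_t = t_post - t_pre in milliseconds. A presynaptic spike shortly
    before the postsynaptic one (delta_t > 0) yields LTP; the reverse order
    (delta_t < 0) yields LTD. Amplitudes and time constants are illustrative.
    """
    delta_t = np.asarray(delta_t, dtype=float)
    ltp = a_plus * np.exp(-delta_t / tau_plus)
    ltd = -a_minus * np.exp(delta_t / tau_minus)
    return np.where(delta_t >= 0, ltp, ltd)

# A +10 ms pairing potentiates the synapse, a -10 ms pairing depresses it.
print(stdp_weight_change([10.0, -10.0]))
```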
Rao-Blackwellised Particle Filtering via Data Augmentation
Andrieu, Christophe, Freitas, Nando D., Doucet, Arnaud
SMC is often referred to as particle filtering (PF) in the context of computing filtering distributions for statistical inference and learning. It is known that the performance of PF often deteriorates in high-dimensional state spaces. In the past, we have shown that if a model admits partial analytical tractability, it is possible to combine PF with exact algorithms (Kalman filters, HMM filters, junction tree algorithm) to obtain efficient high-dimensional filters (Doucet, de Freitas, Murphy and Russell 2000; Doucet, Godsill and Andrieu 2000). In particular, we exploited a marginalisation technique known as Rao-Blackwellisation (RB). Here, we attack a more complex model that does not admit immediate analytical tractability. This probabilistic model consists of Gaussian latent variables and binary observations. We show that by augmenting the model with artificial variables, it becomes possible to apply Rao-Blackwellisation and optimal sampling strategies. We focus on the problem of sequential binary classification (that is, when the data arrives one at a time) using generic classifiers that consist of linear combinations of basis functions, whose coefficients evolve according to a Gaussian smoothness prior (Kitagawa and Gersch 1996). We have previously addressed this problem in the context of sequential fault detection in marine diesel engines (Højen-Sørensen, de Freitas and Fog 2000). This application is of great importance, as early detection of incipient faults can improve safety and efficiency, as well as help to reduce downtime and plant maintenance in many industrial and transportation environments.
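A minimal sketch of the data-augmentation idea, under an assumed dynamic probit model (the function name, the random-walk prior, and the unit observation noise are illustrative choices, not the paper's exact specification): the binary observation is explained by an artificial Gaussian variable z_t, so that conditional on the sampled z_t the Gaussian coefficients can be filtered exactly with a Kalman filter.

```python
import numpy as np
from scipy.stats import norm, truncnorm

def rbpf_probit_step(particles, x_t, y_t, Q, rng):
    """One Rao-Blackwellised particle-filter step for a dynamic probit model.

    Assumed model (an illustration, not the paper's exact specification):
        beta_t = beta_{t-1} + w_t,   w_t ~ N(0, Q)    # Gaussian smoothness prior
        z_t    = x_t @ beta_t + v_t, v_t ~ N(0, 1)    # artificial augmentation variable
        y_t    = 1 if z_t > 0 else 0                  # binary observation

    Each particle is a Kalman mean/covariance (m, P) over beta; only z_t is
    sampled, so the Gaussian part is handled analytically.
    """
    new_particles, log_w = [], []
    for m, P in particles:
        P_pred = P + Q                        # Kalman prediction under the random walk
        mu_z = x_t @ m
        s2_z = x_t @ P_pred @ x_t + 1.0
        s_z = np.sqrt(s2_z)
        if y_t == 1:                          # optimal proposal: truncate z_t by y_t
            a, b = -mu_z / s_z, np.inf
            log_w.append(norm.logcdf(mu_z / s_z))
        else:
            a, b = -np.inf, -mu_z / s_z
            log_w.append(norm.logcdf(-mu_z / s_z))
        z = truncnorm.rvs(a, b, loc=mu_z, scale=s_z, random_state=rng)
        K = P_pred @ x_t / s2_z               # exact Kalman update with z as observation
        new_particles.append((m + K * (z - mu_z),
                              P_pred - np.outer(K, x_t) @ P_pred))
    log_w = np.asarray(log_w)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                              # normalise and resample
    idx = rng.choice(len(new_particles), size=len(new_particles), p=w)
    return [new_particles[i] for i in idx]
```

Because only z_t is sampled while the Gaussian coefficients are marginalised analytically, the particle filter operates in a much lower-dimensional space, which is the point of Rao-Blackwellisation.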
Multi Dimensional ICA to Separate Correlated Sources
Vollgraf, Roland, Obermayer, Klaus
There are two linear transformations to be considered, one operating inside the channels (0) and one operating between the different channels (W). The two transformations are estimated in two adjacent ICA steps. There are two main advantages that can be gained from the first transformation: (i) by arranging independence among the columns of the transformed patches, the average transinformation between different channels is decreased.
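The following Python sketch illustrates the two adjacent ICA steps on toy data using scikit-learn's FastICA; the patch size, the toy sources, and the way the inner transformation is fitted on pooled patches are assumptions made purely for illustration, not the paper's estimator.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_channels, n_samples, patch_len = 3, 4000, 8

# Toy observed mixtures: non-Gaussian sources mixed between channels.
S = rng.laplace(size=(n_channels, n_samples))
A = rng.standard_normal((n_channels, n_channels))
X = A @ S

# Cut every channel into patches of neighbouring samples.
n_patches = n_samples // patch_len
patches = X[:, :n_patches * patch_len].reshape(n_channels, n_patches, patch_len)

# Step 1: a transformation acting *inside* the channels -- ICA on the pooled
# patches, so the columns of the transformed patches become roughly independent.
inner = FastICA(n_components=patch_len, random_state=0)
features = inner.fit_transform(patches.reshape(-1, patch_len))
features = features.reshape(n_channels, -1)

# Step 2: standard ICA *between* the channels on the transformed features,
# which now carry less transinformation across channels.
outer = FastICA(n_components=n_channels, random_state=0)
sources = outer.fit_transform(features.T).T   # estimated sources, one per row
```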
Information-Geometric Decomposition in Spike Analysis
Nakahara, Hiroyuki, Amari, Shun-ichi
We present an information-geometric measure to systematically investigate neuronal firing patterns, taking account not only of the second-order but also of higher-order interactions. We begin with the case of two neurons for illustration and show how to test whether or not any pairwise correlation in one period is significantly different from that in the other period. In order to test such a hypothesis of different firing rates, the correlation term needs to be singled out 'orthogonally' to the firing rates, where the null hypothesis might not be of independent firing. This method is also shown to directly associate neural firing with behavior via their mutual information, which is decomposed into two types of information, conveyed by mean firing rate and coincident firing, respectively. Then, we show that these results, using the 'orthogonal' decomposition, are naturally extended to the case of three neurons and n neurons in general. Based on the theory of hierarchical structure and related invariant decomposition of interactions by information geometry [3], the present paper briefly summarizes methods useful for systematically analyzing a population of neural firing [9].
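For the two-neuron case, the 'orthogonal' interaction coordinate is the log odds ratio of the 2x2 joint spike distribution; the short sketch below (a standard illustration of the log-linear coordinates, with a hypothetical function name) computes it together with the rate-related parameters.

```python
import numpy as np

def two_neuron_theta(p):
    """Log-linear (theta) coordinates of a 2x2 joint spike distribution.

    p[i, j] = P(x1 = i, x2 = j) for binary spike variables x1, x2, following
    log p(x1, x2) = theta1*x1 + theta2*x2 + theta12*x1*x2 - psi.
    theta12 (the log odds ratio) captures the pairwise interaction
    'orthogonally' to the firing rates.
    """
    p = np.asarray(p, dtype=float)
    theta1 = np.log(p[1, 0] / p[0, 0])
    theta2 = np.log(p[0, 1] / p[0, 0])
    theta12 = np.log(p[1, 1] * p[0, 0] / (p[1, 0] * p[0, 1]))
    psi = -np.log(p[0, 0])
    return theta1, theta2, theta12, psi

# Independent firing gives theta12 = 0 even though the rates differ.
p_indep = np.outer([0.7, 0.3], [0.6, 0.4])   # rows: x1, cols: x2
print(two_neuron_theta(p_indep))
```

For independent firing the interaction term vanishes regardless of the individual firing rates, which is what makes it 'orthogonal' to them.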
A Rational Analysis of Cognitive Control in a Speeded Discrimination Task
Mozer, Michael C., Colagrosso, Michael D., Huber, David E.
We are interested in the mechanisms by which individuals monitor and adjust their performance of simple cognitive tasks. We model a speeded discrimination task in which individuals are asked to classify a sequence of stimuli (Jones & Braver, 2001). Response conflict arises when one stimulus class is infrequent relative to another, resulting in more errors and slower reaction times for the infrequent class. How do control processes modulate behavior based on the relative class frequencies? We explain performance from a rational perspective that casts the goal of individuals as minimizing a cost that depends both on error rate and reaction time. With two additional assumptions of rationality--that class prior probabilities are accurately estimated and that inference is optimal subject to limitations on the rate of information transmission--we obtain a good fit to overall RT and error data, as well as trial-by-trial variations in performance.
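A toy sketch of the rational account, under assumptions of my own (a Gaussian evidence stream whose drift stands in for the limited rate of information transmission, a start at the log prior odds, and a fixed posterior threshold): with a low prior, the infrequent class needs more evidence, so it produces slower and more error-prone responses.

```python
import numpy as np

def simulate_trial(true_class, prior_a=0.5, drift=0.3, threshold=0.95,
                   max_steps=200, rng=None):
    """Illustrative rational observer for a speeded two-class discrimination.

    Evidence arrives as noisy samples with mean +drift for class A and -drift
    for class B; the observer accumulates the log-likelihood ratio starting
    from the log prior odds and responds once either posterior exceeds
    `threshold`. All parameters are illustrative, not the paper's fits.
    """
    rng = rng or np.random.default_rng()
    log_odds = np.log(prior_a / (1.0 - prior_a))
    sign = 1.0 if true_class == "A" else -1.0
    for t in range(1, max_steps + 1):
        e = rng.normal(sign * drift, 1.0)
        log_odds += 2.0 * drift * e          # log-likelihood ratio for N(+d,1) vs N(-d,1)
        p_a = 1.0 / (1.0 + np.exp(-log_odds))
        if p_a >= threshold:
            return "A", t
        if p_a <= 1.0 - threshold:
            return "B", t
    return ("A" if log_odds >= 0 else "B"), max_steps

# When class B is infrequent (low prior), B trials are slower and more error prone.
rng = np.random.default_rng(0)
for cls in ("A", "B"):
    out = [simulate_trial(cls, prior_a=0.8, rng=rng) for _ in range(2000)]
    err = np.mean([resp != cls for resp, _ in out])
    rt = np.mean([t for _, t in out])
    print(cls, f"error={err:.3f}", f"mean steps={rt:.1f}")
```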
Fast, Large-Scale Transformation-Invariant Clustering
Frey, Brendan J., Jojic, Nebojsa
In previous work on "transformed mixtures of Gaussians" and "transformed hidden Markov models", we showed how the EM algorithm in a discrete latent variable model can be used to jointly normalize data (e.g., center images, pitch-normalize spectrograms) and learn a mixture model of the normalized data. The only input to the algorithm is the data, a list of possible transformations, and the number of clusters to find. The main criticism of this work was that the exhaustive computation of the posterior probabilities over transformations would make scaling up to large feature vectors and large sets of transformations intractable. Here, we describe how a tremendous speedup is achieved through the use of a variational technique for decoupling transformations, and a fast Fourier transform method for computing posterior probabilities.
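A minimal sketch of the FFT idea for translations (my own illustration, not the authors' implementation): the squared error between the data and every circular shift of a cluster mean reduces to a circular cross-correlation, so the posterior over all shifts comes from a single pair of FFTs instead of an exhaustive loop.

```python
import numpy as np

def shift_posterior(x, mu, sigma2):
    """Posterior over all circular translations t of a cluster mean mu,
    assuming x ~ N(shift_t(mu), sigma2 * I) with a uniform prior on t.

    Uses ||x - shift_t(mu)||^2 = ||x||^2 + ||mu||^2 - 2 <x, shift_t(mu)>,
    where the inner products for every t form a circular cross-correlation
    computed for all shifts at once with the FFT.
    """
    cross = np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(mu))))
    sq_err = np.sum(x ** 2) + np.sum(mu ** 2) - 2.0 * cross   # one value per shift
    log_post = -sq_err / (2.0 * sigma2)
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

# Toy check: the most probable shift recovers the translation applied to mu.
rng = np.random.default_rng(0)
mu = rng.standard_normal((16, 16))
x = np.roll(mu, shift=(3, 5), axis=(0, 1)) + 0.1 * rng.standard_normal((16, 16))
print(np.unravel_index(np.argmax(shift_posterior(x, mu, 0.01)), x.shape))  # ~(3, 5)
```

In a transformed mixture of Gaussians, per-shift posteriors of this kind would replace the exhaustive evaluation over transformations in the E-step.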
A Rotation and Translation Invariant Discrete Saliency Network
Williams, Lance R., Zweck, John W.
We describe a neural network which enhances and completes salient closed contours. Our work is different from all previous work in three important ways. First, like the input provided to V1 by LGN, the input to our computation is isotropic. That is, the input is composed of spots not edges. Second, our network computes a well defined function of the input based on a distribution of closed contours characterized by a random process. Third, even though our computation is implemented in a discrete network, its output is invariant to continuous rotations and translations of the input pattern.