Finite-Dimensional Approximation of Gaussian Processes
Ferrari-Trecate, Giancarlo, Williams, Christopher K. I., Opper, Manfred
Gaussian process (GP) prediction suffers from O(n³) scaling with the data set size n. By using a finite-dimensional basis to approximate the GP predictor, the computational complexity can be reduced. We derive optimal finite-dimensional predictors under a number of assumptions, and show the superiority of these predictors over the Projected Bayes Regression method (which is asymptotically optimal). We also show how to calculate the minimal model size for a given n. The calculations are backed up by numerical experiments.
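The basis-function idea can be made concrete with a subset-of-regressors style sketch: anchoring the predictor on m ≪ n basis centres reduces the dominant cost from O(n³) to O(nm²). The RBF kernel, the inducing-point placement, and the noise level below are illustrative assumptions, not the paper's optimal construction.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def finite_dim_gp_predict(X, y, Xs, Z, noise=0.1):
    """Approximate GP regression with an m-dimensional basis anchored at
    centres Z (m << n): the linear solve is m x m, so the cost is
    O(n m^2) instead of the exact GP's O(n^3)."""
    Kmn = rbf(Z, X)                         # m x n cross-covariances
    Kmm = rbf(Z, Z)                         # m x m basis covariances
    A = Kmn @ Kmn.T + noise**2 * Kmm        # m x m system matrix
    w = np.linalg.solve(A, Kmn @ y)         # basis weights
    return rbf(Xs, Z) @ w                   # predictive mean at Xs

# usage: n = 500 training points, m = 20 basis centres
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
Z = np.linspace(-3, 3, 20)[:, None]         # assumed centre placement
Xs = np.linspace(-3, 3, 100)[:, None]
pred = finite_dim_gp_predict(X, y, Xs, Z)
```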
Probabilistic Image Sensor Fusion
Sharma, Ravi K., Leen, Todd K., Pavel, Misha
We present a probabilistic method for fusion of images produced by multiple sensors. The approach is based on an image formation model in which the sensor images are noisy, locally linear functions of an underlying, true scene. A Bayesian framework then provides for maximum likelihood or maximum a posteriori estimates of the true scene from the sensor images. Maximum likelihood estimates of the parameters of the image formation model involve (local) second order image statistics, and thus are related to local principal component analysis. We demonstrate the efficacy of the method on images from visible-band and infrared sensors.
1 Introduction
Advances in sensing devices have fueled the deployment of multiple sensors in several computational vision systems [1, for example].
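As a rough illustration of the link to local principal component analysis, the sketch below fuses two co-registered sensor images by projecting each local window's pixel pairs onto their leading principal component. The window size, sign convention, and synthetic images are assumptions; the paper's full Bayesian treatment also estimates noise and bias parameters.

```python
import numpy as np

def fuse_local_pca(img_a, img_b, win=8):
    """Fuse two co-registered sensor images by projecting each local
    window's pixel pairs onto their leading principal component -- a
    simplified local-PCA sketch of a locally linear formation model."""
    H, W = img_a.shape
    fused = np.zeros((H, W))
    for i in range(0, H, win):
        for j in range(0, W, win):
            a = img_a[i:i+win, j:j+win].astype(float)
            b = img_b[i:i+win, j:j+win].astype(float)
            X = np.stack([a.ravel() - a.mean(), b.ravel() - b.mean()])
            C = X @ X.T / X.shape[1]            # 2x2 local covariance
            v = np.linalg.eigh(C)[1][:, -1]     # leading eigenvector
            if v.sum() < 0:                     # fix sign convention
                v = -v
            fused[i:i+win, j:j+win] = (v @ X).reshape(a.shape)
    return fused

# usage on synthetic visible/infrared-like images of a shared scene
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
vis = 0.8 * scene + 0.05 * rng.standard_normal((64, 64))
ir = -0.6 * scene + 0.05 * rng.standard_normal((64, 64))
fused = fuse_local_pca(vis, ir)
```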
Analyzing and Visualizing Single-Trial Event-Related Potentials
Jung, Tzyy-Ping, Makeig, Scott, Westerfield, Marissa, Townsend, Jeanne, Courchesne, Eric, Sejnowski, Terrence J.
Event-related potentials (ERPs) are portions of electroencephalographic (EEG) recordings that are both time- and phase-locked to experimental events. ERPs are usually averaged to increase their signal/noise ratio relative to non-phase-locked EEG activity, regardless of the fact that response activity in single epochs may vary widely in time course and scalp distribution. This study applies a linear decomposition tool, Independent Component Analysis (ICA) [1], to multichannel single-trial EEG records to derive spatial filters that decompose single-trial EEG epochs into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain networks. Our results on normal and autistic subjects show that ICA can separate artifactual, stimulus-locked, response-locked, and non-event-related background EEG activities.
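A minimal sketch of the decomposition step, using scikit-learn's FastICA as a stand-in for the infomax ICA of [1]. The channel count, the placeholder data, and the choice of components to reject are all hypothetical.

```python
import numpy as np
from sklearn.decomposition import FastICA

# X: multichannel single-trial EEG, shape (n_channels, n_times) after
# concatenating single-trial epochs along the time axis.
rng = np.random.default_rng(0)
X = rng.standard_normal((31, 10_000))   # placeholder data

ica = FastICA(n_components=31, random_state=0)
S = ica.fit_transform(X.T).T    # temporally independent activations
A = ica.mixing_                 # columns = spatially fixed scalp maps

# Back-project only the retained components, e.g. after dropping
# components judged artifactual (indices here are hypothetical).
keep = [k for k in range(31) if k not in (0, 3)]
X_clean = A[:, keep] @ S[keep, :]
```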
Information Maximization in Single Neurons
Stemmler, Martin, Koch, Christof
Information from the senses must be compressed into the limited range of firing rates generated by spiking nerve cells. Optimal compression uses all firing rates equally often, implying that the nerve cell's response matches the statistics of naturally occurring stimuli. Since changing the voltage-dependent ionic conductances in the cell membrane alters the flow of information, an unsupervised, non-Hebbian, developmental learning rule is derived to adapt the conductances in Hodgkin-Huxley model neurons. By maximizing the rate of information transmission, each firing rate within the model neuron's limited dynamic range is used equally often. An efficient neuronal representation of incoming sensory information should take advantage of the regularity and scale invariance of stimulus features in the natural world. In the case of vision, this regularity is reflected in the typical probabilities of encountering particular visual contrasts, spatial orientations, or colors [1]. Given these probabilities, an optimized neural code would eliminate any redundancy, while devoting increased representation to commonly encountered features. At the level of a single spiking neuron, information about a potentially large range of stimuli is compressed into a finite range of firing rates, since the maximum firing rate of a neuron is limited. Optimizing the information transmission through a single neuron in the presence of uniform, additive noise has an intuitive interpretation: the most efficient representation of the input uses every firing rate with equal probability.
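The "every firing rate used equally often" claim has a compact numeric illustration: for uniform additive noise, the infomax transfer function is the stimulus distribution's cumulative distribution function, scaled into the cell's dynamic range. The exponential input distribution and the 5-100 Hz range below are assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
stimuli = rng.exponential(scale=1.0, size=100_000)  # skewed "natural" input

# Infomax transfer function under uniform additive noise: the empirical
# stimulus CDF, rescaled into the neuron's firing-rate range.
r_min, r_max = 5.0, 100.0                                 # Hz, assumed range
ranks = stimuli.argsort().argsort() / (len(stimuli) - 1)  # empirical CDF
rates = r_min + (r_max - r_min) * ranks

# Every firing-rate bin is now used (approximately) equally often:
counts, _ = np.histogram(rates, bins=20)
print(counts)   # roughly 5000 per bin
```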
Mechanisms of Generalization in Perceptual Learning
Liu, Zili, Weinshall, Daphna
The learning of many visual perceptual tasks has been shown to be specific to practiced stimuli, while new stimuli require re-learning from scratch. Here we demonstrate generalization using a novel paradigm in motion discrimination where learning has previously been shown to be specific. We trained subjects to discriminate the directions of moving dots, and verified the previous results that learning does not transfer from the trained direction to a new one. However, by tracking the subjects' performance across time in the new direction, we found that their rate of learning doubled. Therefore, learning generalized in a task previously considered too difficult for generalization.
Synergy and Redundancy among Brain Cells of Behaving Monkeys
Gat, Itay, Tishby, Naftali
While it is unlikely that complete information from any macroscopic neural tissue will ever be available, some interesting insight can be obtained from simultaneously recorded cells in the cortex of behaving animals. The question we address in this study is the level of synergy, or the level of cooperation, among brain cells, as determined by the information they provide about the observed behavior of the animal.
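One standard way to quantify this (a sketch under the usual information-theoretic definitions, not necessarily the paper's exact estimator) is to compare the information a cell pair carries jointly about behaviour with the sum of the single-cell informations; positive differences indicate synergy, negative ones redundancy. The XOR-like toy distribution below is maximally synergistic.

```python
import numpy as np

def mi(p):
    """Mutual information (bits) of a 2-D joint distribution p(x, y)."""
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# Toy joint p(y, x1, x2): behaviour y is the XOR of two cells' spikes.
p = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1 ^ x2, x1, x2] = 0.25

joint = mi(p.reshape(2, -1))                    # I(Y; X1, X2) = 1 bit
single = mi(p.sum(axis=2)) + mi(p.sum(axis=1))  # I(Y; X1) + I(Y; X2) = 0
print(f"synergy = {joint - single:.2f} bits")   # 1.00: purely synergistic
```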
The Bias-Variance Tradeoff and the Randomized GACV
Wahba, Grace, Lin, Xiwu, Gao, Fangyu, Xiang, Dong, Klein, Ronald, Klein, Barbara
We propose a new in-sample cross-validation based method (randomized GACV) for choosing smoothing or bandwidth parameters that govern the bias-variance or fit-complexity tradeoff in 'soft' classification. Soft classification refers to a learning procedure which estimates the probability that an example with a given attribute vector is in class 1 vs class 0. The target for optimizing the tradeoff is the Kullback-Leibler distance between the estimated probability distribution and the 'true' probability distribution, representing knowledge of an infinite population.
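The randomized GACV formula itself is not given in this abstract; as a hedged stand-in, the sketch below sweeps the complexity parameter of a penalized soft classifier and scores each value by held-out log-likelihood, which estimates the same Kullback-Leibler target up to a constant. The synthetic data and parameter grid are for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
p_true = 1 / (1 + np.exp(-(X[:, 0] - 2 * X[:, 1])))  # 'true' probabilities
y = rng.binomial(1, p_true)

# Sweep the fit-complexity parameter of a penalized soft classifier;
# held-out log-likelihood estimates the KL target up to a constant.
for C in (0.01, 0.1, 1.0, 10.0):
    model = LogisticRegression(C=C)
    ll = cross_val_score(model, X, y, scoring="neg_log_loss", cv=5).mean()
    print(f"C={C:5.2f}  mean held-out log-likelihood: {ll:.4f}")
```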
A Theory of Mean Field Approximation
Tanaka, Toshiyuki
I present a theory of mean field approximation based on information geometry. This theory includes in a consistent way the naive mean field approximation, as well as the TAP approach and the linear response theorem in statistical physics, giving clear information-theoretic interpretations to them.
1 INTRODUCTION
Many problems of neural networks, such as learning and pattern recognition, can be cast into the framework of statistical estimation. How difficult a particular problem is to solve depends on the statistical model one employs in solving the problem. For Boltzmann machines [1], for example, it is computationally very hard to evaluate expectations of state variables from the model parameters. Mean field approximation [2], which originated in statistical physics, has been frequently used in practical situations to circumvent this difficulty.
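A minimal sketch of the naive mean field approximation the theory subsumes: for a Boltzmann machine with couplings W and biases h over spins in {-1, +1}, the mean activities m solve the self-consistency equations m_i = tanh(h_i + Σ_j W_ij m_j), iterated to a fixed point. The damping factor and the random instance below are assumptions for the demonstration.

```python
import numpy as np

def naive_mean_field(W, h, n_iter=200, damping=0.5):
    """Naive mean-field fixed point for a Boltzmann machine with
    symmetric couplings W (zero diagonal) and biases h:
        m_i <- tanh(h_i + sum_j W_ij m_j)
    Returns approximate expectations m_i = <s_i>, s_i in {-1, +1},
    avoiding the exponential cost of exact expectations."""
    m = np.zeros_like(h)
    for _ in range(n_iter):
        m_new = np.tanh(h + W @ m)
        m = damping * m + (1 - damping) * m_new   # damped update
    return m

# usage on a small random instance
rng = np.random.default_rng(0)
n = 10
W = rng.standard_normal((n, n)) * 0.1
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
h = rng.standard_normal(n) * 0.5
print(naive_mean_field(W, h))
```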