Conditional Visual Tracking in Kernel Space
Sminchisescu, Cristian, Kanaujia, Atul, Li, Zhiguo, Metaxas, Dimitris
We present a conditional temporal probabilistic framework for reconstructing 3D human motion in monocular video based on descriptors encoding image silhouette observations. For computational efficiency we restrict visual inference to low-dimensional kernel induced nonlinear state spaces. Our methodology (kBME) combines kernel PCA-based nonlinear dimensionality reduction (kPCA) and Conditional Bayesian Mixture of Experts (BME) in order to learn complex multivalued predictors between observations and model hidden states. This is necessary for accurate, inverse, visual perception inferences, where several probable, distant 3D solutions exist due to noise or the uncertainty of monocular perspective projection. Low-dimensional models are appropriate because many visual processes exhibit strong nonlinear correlations in both the image observations and the target, hidden state variables. The learned predictors are temporally combined within a conditional graphical model in order to allow a principled propagation of uncertainty. We study several predictors and empirically show that the proposed algorithm positively compares with techniques based on regression, Kernel Dependency Estimation (KDE) or PCA alone, and gives results competitive to those of high-dimensional mixture predictors at a fraction of their computational cost. We show that the method successfully reconstructs the complex 3D motion of humans in real monocular video sequences.
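The pipeline described here (nonlinear dimensionality reduction of the pose space followed by a multivalued predictor from image descriptors to latent pose) can be approximated with off-the-shelf components. The sketch below is not the authors' kBME model: a Gaussian-mixture gate with per-component ridge regressors stands in for the Conditional Bayesian Mixture of Experts, and all function names and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a kBME-like pipeline using standard libraries.
# A GMM gate plus weighted ridge experts stands in for the Conditional
# Bayesian Mixture of Experts described in the abstract.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import Ridge
from sklearn.mixture import GaussianMixture

def fit_kbme_like(X_obs, Y_pose, n_latent=6, n_experts=5):
    # 1) kPCA: embed high-dimensional 3D pose vectors in a low-dimensional
    #    nonlinear latent space (with an approximate pre-image map).
    kpca = KernelPCA(n_components=n_latent, kernel="rbf", fit_inverse_transform=True)
    Z = kpca.fit_transform(Y_pose)
    # 2) Multivalued predictor: gate on the image descriptors, one linear
    #    expert per gate component, each trained with the gate responsibilities.
    gate = GaussianMixture(n_components=n_experts, covariance_type="diag").fit(X_obs)
    resp = gate.predict_proba(X_obs)
    experts = [Ridge(alpha=1.0).fit(X_obs, Z, sample_weight=resp[:, k] + 1e-8)
               for k in range(n_experts)]
    return kpca, gate, experts

def predict_pose_hypotheses(kpca, gate, experts, x_obs):
    # Return several weighted pose hypotheses, reflecting the multimodality
    # of monocular inference (one hypothesis per expert).
    x = np.atleast_2d(x_obs)
    weights = gate.predict_proba(x)[0]
    poses = [kpca.inverse_transform(e.predict(x))[0] for e in experts]
    return list(zip(weights, poses))
```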
Temporally changing synaptic plasticity
Tamosiunaite, Minija, Porr, Bernd, Wörgötter, Florentin
Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways [1]. In this study we investigate how these signals could temporally interact at dendrites, leading to changing plasticity properties at local synapse clusters. Similar to a previous study [2], we employ a differential Hebbian plasticity rule to emulate spike-timing dependent plasticity. We use dendritic (D-) and back-propagating (BP-) spikes as post-synaptic signals in the learning rule and investigate how their interaction will influence plasticity. We will analyze a situation where synapse plasticity characteristics change in the course of time, depending on the type of post-synaptic activity momentarily elicited. Starting with weak synapses, which only elicit local D-spikes, a slow, unspecific growth process is induced. As soon as the soma begins to spike, this process is replaced by fast synaptic changes as the consequence of the much stronger and sharper BP-spike, which now dominates the plasticity rule. This way a winner-take-all mechanism emerges in a two-stage process, enhancing the best-correlated inputs. These results suggest that synaptic plasticity is a temporally changing process by which the computational properties of dendrites or complete neurons can be substantially augmented.
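A toy numerical illustration of why a sharp BP-spike dominates a differential Hebbian rule over a broad D-spike: the rule correlates the presynaptic trace with the derivative of the postsynaptic signal, so a sharper postsynaptic event yields a much larger weight change. The waveforms and constants below are invented for illustration, not taken from the paper.

```python
# Toy sketch: differential Hebbian rule dw ~ pre(t) * d(post)/dt, comparing a
# weak, broad dendritic (D-) spike with a strong, sharp back-propagating (BP-)
# spike. All waveforms and constants are illustrative.
import numpy as np

def differential_hebb(pre, post, eta=1e-3):
    dpost = np.diff(post, prepend=post[0])   # discrete derivative of the post signal
    return eta * np.sum(pre * dpost)

t = np.arange(0.0, 100.0, 1.0)                                  # time in ms
pre = np.where(t >= 40, np.exp(-(t - 40) / 10.0), 0.0)          # presynaptic trace
d_spike  = 0.3 * np.exp(-((t - 45) ** 2) / (2 * 15.0 ** 2))     # weak, broad D-spike
bp_spike = 1.0 * np.exp(-((t - 45) ** 2) / (2 * 2.0 ** 2))      # strong, sharp BP-spike

print("dw with D-spike :", differential_hebb(pre, d_spike))
print("dw with BP-spike:", differential_hebb(pre, bp_spike))
```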
Saliency Based on Information Maximization
A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex. It is further shown that the proposed saliency measure may be extended to address issues that currently elude explanation in the domain of saliency-based models. Results on natural images are compared with experimental eye tracking data revealing the efficacy of the model in predicting the deployment of overt attention as compared with existing efforts. 1 Introduction There has long been interest in the nature of eye movements and fixation behavior following early studies by Buswell [1] and Yarbus [2]. However, a complete description of the mechanisms underlying these peculiar fixation patterns remains elusive.
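For intuition about the self-information measure: the saliency of a location is taken as -log p of its local feature, with the feature distribution estimated from the scene itself. The sketch below uses a single, crude feature (local mean intensity) and a global histogram; the paper's model operates on richer features with a neural-circuit implementation, so this is only a stand-in.

```python
# Crude sketch of saliency as self-information: salient locations are those
# whose local features are improbable under the scene's own feature statistics.
# Uses a single feature (local mean intensity); purely illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

def self_information_saliency(image, patch=7, bins=32):
    feat = uniform_filter(image.astype(float), size=patch)      # local feature map
    hist, edges = np.histogram(feat, bins=bins, density=True)   # scene statistics
    idx = np.clip(np.digitize(feat, edges[:-1]) - 1, 0, bins - 1)
    p = hist[idx] * np.diff(edges)[idx] + 1e-12                 # probability of each bin
    return -np.log(p)                                           # Shannon self-information
```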
Transfer learning for text classification
Linear text classification algorithms work by computing an inner product between a test document vector and a parameter vector. In many such algorithms, including naive Bayes and most TFIDF variants, the parameters are determined by some simple, closed-form function of training set statistics; we call this mapping from statistics to parameters the parameter function. Much research in text classification over the last few decades has consisted of manual efforts to identify better parameter functions. In this paper, we propose an algorithm for automatically learning this function from related classification problems. The parameter function found by our algorithm then defines a new learning algorithm for text classification, which we can apply to novel classification tasks. We find that our learned classifier outperforms existing methods on a variety of multiclass text classification tasks.
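The central object here is the parameter function: a mapping from simple training-set statistics (e.g., term frequency and document frequency) to the weights of a linear classifier, where the mapping itself is learned on related problems and then reused. The sketch below fixes a small, hypothetical parametric family for that mapping; the feature choices and names are illustrative assumptions, not the paper's.

```python
# Sketch of the "parameter function" idea: a linear text classifier whose
# per-(class, word) weights are produced by a learned mapping from training-set
# statistics. The 3-feature family below is a hypothetical illustration.
import numpy as np

def parameter_function(theta, tf, df, n_docs):
    # Map statistics (term frequency in the class, document frequency, corpus
    # size) to a single classifier weight via learned coefficients theta.
    feats = np.array([np.log1p(tf), -np.log((df + 1) / (n_docs + 1)), 1.0])
    return float(theta @ feats)

def score(doc_counts, class_stats, theta, n_docs):
    # Linear classifier: inner product of the document's term counts with the
    # parameter vector defined by the parameter function.
    return sum(c * parameter_function(theta, *class_stats[w], n_docs)
               for w, c in doc_counts.items() if w in class_stats)

# class_stats maps word -> (tf_in_class, df_in_corpus); theta would be learned
# on related classification problems, then applied to a novel task.
```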
Silicon growth cones map silicon retina
We demonstrate the first fully hardware implementation of retinotopic self-organization, from photon transduction to neural map formation. A silicon retina transduces patterned illumination into correlated spike trains that drive a population of silicon growth cones to automatically wire a topographic mapping by migrating toward sources of a diffusible guidance cue that is released by postsynaptic spikes. We varied the pattern of illumination to steer growth cones projected by different retinal ganglion cell types to self-organize segregated or coordinated retinotopic maps.
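A software caricature of the wiring rule described here, for intuition only: postsynaptic spikes release a diffusible cue at target locations, and growth cones repeatedly step up the local cue gradient until they settle near their sources. The geometry and constants below are invented for illustration; the actual system is implemented in silicon, not software.

```python
# Toy software analogue of growth cones migrating toward sources of a diffusible
# guidance cue released by postsynaptic spikes. Purely illustrative; the paper's
# system is implemented in hardware.
import numpy as np

rng = np.random.default_rng(0)

def simulate_growth_cones(targets, n_cones=8, steps=2000, lr=0.05, sigma=0.2):
    cones = rng.uniform(0.0, 1.0, size=(n_cones, 2))     # random initial positions
    for _ in range(steps):
        src = targets[rng.integers(len(targets))]        # a target spikes, releasing cue
        d = src - cones                                  # vectors toward the cue source
        w = np.exp(-np.sum(d ** 2, axis=1, keepdims=True) / (2 * sigma ** 2))
        cones += lr * w * d                              # climb the local cue gradient
    return cones

targets = np.array([[0.2, 0.2], [0.8, 0.8]])             # two cue sources
print(simulate_growth_cones(targets))
```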
Optimizing spatio-temporal filters for improving Brain-Computer Interfacing
Dornhege, Guido, Blankertz, Benjamin, Krauledat, Matthias, Losch, Florian, Curio, Gabriel, Müller, Klaus-Robert
Brain-Computer Interface (BCI) systems create a novel communication channel from the brain to an output device by bypassing conventional motor output pathways of nerves and muscles. Therefore they could provide a new communication and control option for paralyzed patients. Modern BCI technology is essentially based on techniques for the classification of single-trial brain signals. Here we present a novel technique that allows the simultaneous optimization of a spatial and a spectral filter enhancing discriminability of multi-channel EEG single-trials. The evaluation of 60 experiments involving 22 different subjects demonstrates the superiority of the proposed algorithm. Apart from the enhanced classification, the spatial and/or the spectral filter that are determined by the algorithm can also be used for further analysis of the data, e.g., for source localization of the respective brain rhythms.
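As a concrete starting point, the sketch below shows the two ingredients the abstract refers to in their classical, separately optimized form: a fixed band-pass as the spectral filter and CSP spatial filters obtained from a generalized eigenvalue problem. The paper's contribution is optimizing both jointly; this sketch does not do that, and all parameters are illustrative.

```python
# Classical building blocks behind the method: a band-pass (spectral) filter and
# CSP (spatial) filters. The paper optimizes both jointly; this sketch keeps them
# separate, only to fix ideas. Trials are arrays of shape (channels, time).
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt

def bandpass(trial, lo=8.0, hi=30.0, fs=100.0):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, trial, axis=-1)

def csp_filters(trials_a, trials_b, n_pairs=3):
    def avg_cov(trials):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)                    # generalized eigenvalue problem
    order = np.argsort(vals)
    keep = np.r_[order[:n_pairs], order[-n_pairs:]]   # most discriminative directions
    return vecs[:, keep].T                            # rows are spatial filters
```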
A Theoretical Analysis of Robust Coding over Noisy Overcomplete Channels
Doi, Eizaburo, Balcan, Doru C., Lewicki, Michael S.
Biological sensory systems are faced with the problem of encoding a high-fidelity sensory signal with a population of noisy, low-fidelity neurons. This problem can be expressed in information theoretic terms as coding and transmitting a multidimensional, analog signal over a set of noisy channels. Previously, we have shown that robust, overcomplete codes can be learned by minimizing the reconstruction error with a constraint on the channel capacity. Here, we present a theoretical analysis that characterizes the optimal linear coder and decoder for one- and two-dimensional data. The analysis allows for an arbitrary number of coding units, thus including both under- and over-complete representations, and provides a number of important insights into optimal coding strategies. In particular, we show how the form of the code adapts to the number of coding units and to different data and noise conditions to achieve robustness. We also report numerical solutions for robust coding of high-dimensional image data and show that these codes are substantially more robust than other image codes such as ICA and wavelets.
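One piece of this setting is easy to reproduce numerically: for a fixed linear encoder over noisy channels, the optimal linear decoder is the Wiener (MMSE) solution, and the reconstruction error has a closed form that can be compared across under- and over-complete codes. The sketch below uses random unit-norm encoders as a crude stand-in for the paper's channel-capacity constraint; it is not the paper's optimal coder.

```python
# Numerical sketch: MMSE linear decoding of r = W x + n for 2-D data, comparing
# under- and over-complete encoders W. Random unit-norm rows are a crude stand-in
# for the capacity constraint; constants are illustrative.
import numpy as np

def mmse_decoder(W, Cx, noise_var):
    # Wiener solution D = Cx W^T (W Cx W^T + sigma^2 I)^{-1}
    M = W @ Cx @ W.T + noise_var * np.eye(W.shape[0])
    return Cx @ W.T @ np.linalg.inv(M)

def reconstruction_mse(W, Cx, noise_var):
    D = mmse_decoder(W, Cx, noise_var)
    E = np.eye(Cx.shape[0]) - D @ W
    return np.trace(E @ Cx @ E.T) + noise_var * np.trace(D @ D.T)

rng = np.random.default_rng(0)
Cx = np.diag([1.0, 0.3])                              # 2-D source covariance
for n_units in (1, 2, 4, 8):                          # under- to over-complete codes
    W = rng.standard_normal((n_units, 2))
    W /= np.linalg.norm(W, axis=1, keepdims=True)     # unit power per channel
    print(n_units, "units -> MSE", round(reconstruction_mse(W, Cx, 0.5), 3))
```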
How fast to work: Response vigor, motivation and tonic dopamine
Niv, Yael, Daw, Nathaniel D., Dayan, Peter
Reinforcement learning models have long promised to unify computational, psychological and neural accounts of appetitively conditioned behavior. However, the bulk of data on animal conditioning comes from free-operant experiments measuring how fast animals will work for reinforcement. Existing reinforcement learning (RL) models are silent about these tasks, because they lack any notion of vigor. They thus fail to address the simple observation that hungrier animals will work harder for food, as well as stranger facts such as their sometimes greater productivity even when working for irrelevant outcomes such as water. Here, we develop an RL framework for free-operant behavior, suggesting that subjects choose how vigorously to perform selected actions by optimally balancing the costs and benefits of quick responding.
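The cost-benefit trade-off described here has a simple consequence that is easy to illustrate: if responding with latency tau costs roughly C/tau in vigor and tau * Rbar in foregone average reward, the latency minimizing the total cost is sqrt(C/Rbar), so a higher average reward rate yields faster responding. The snippet below is a stylized illustration of that trade-off under these assumed cost terms, not the paper's full free-operant model.

```python
# Stylized illustration of the vigor trade-off: minimizing C/tau + Rbar*tau over
# the response latency tau gives tau* = sqrt(C / Rbar). Constants are illustrative.
import numpy as np

def optimal_latency(vigor_cost, avg_reward_rate):
    return np.sqrt(vigor_cost / avg_reward_rate)

# A "hungrier" animal experiences a higher average reward rate Rbar and so
# chooses shorter latencies (works harder):
for rbar in (0.5, 1.0, 2.0):
    print(f"Rbar={rbar}: tau* = {optimal_latency(1.0, rbar):.2f}")
```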
Sequence and Tree Kernels with Statistical Feature Mining
This paper proposes a new approach to feature selection based on a statistical feature mining technique for sequence and tree kernels. Since natural language data take the form of discrete structures, convolution kernels, such as sequence and tree kernels, are advantageous for both the concept and accuracy of many natural language processing tasks. However, experiments have shown that the best results can only be achieved when limited small substructures are dealt with by these kernels. This paper discusses this issue of convolution kernels and then proposes a statistical feature selection that enables us to use larger substructures effectively. To execute efficiently, the proposed method can be embedded into the original kernel calculation process by using substructure mining algorithms. Experiments on real NLP tasks confirm the problem with the conventional method and compare its performance to that of the proposed method.
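To make the idea concrete, the sketch below shows a sequence kernel restricted to a statistically selected set of substructures: n-grams are collected from labeled data, scored by a simple statistic, and the kernel is an inner product over the surviving features only. The scoring statistic and all names here are simplified stand-ins for the paper's substructure-mining procedure.

```python
# Sketch: a sequence kernel evaluated only on statistically selected substructures
# (n-grams). The frequency-difference score is a simplified stand-in for the
# paper's statistical feature-mining criterion.
from collections import Counter

def ngrams(tokens, n_max=3):
    return Counter(tuple(tokens[i:i + n])
                   for n in range(1, n_max + 1)
                   for i in range(len(tokens) - n + 1))

def select_substructures(pos_docs, neg_docs, n_max=3, top_k=1000):
    pos, neg = Counter(), Counter()
    for d in pos_docs:
        pos.update(ngrams(d, n_max))
    for d in neg_docs:
        neg.update(ngrams(d, n_max))
    score = {g: abs(pos[g] - neg[g]) for g in set(pos) | set(neg)}
    return set(sorted(score, key=score.get, reverse=True)[:top_k])

def selected_sequence_kernel(x, y, selected, n_max=3):
    fx, fy = ngrams(x, n_max), ngrams(y, n_max)
    return sum(fx[g] * fy[g] for g in selected)
```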