Algorithmic Complexity Bounds on Future Prediction Errors
Chernov, A., Hutter, M., Schmidhuber, J.
We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor $M$ from the true distribution $\mu$ by the algorithmic complexity of $\mu$. Here we assume we are at a time $t>1$ and already observed $x = x_1 \ldots x_t$. We bound the future prediction performance on $x_{t+1} x_{t+2} \ldots$ by a new variant of algorithmic complexity of $\mu$ given $x$, plus the complexity of the randomness deficiency of $x$. The new complexity is monotone in its condition in the sense that this complexity can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.
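For context, the classical total bound of Solomonoff that the abstract builds on can be stated in one common form (our notation, up to the usual machine-dependent constants in $K(\mu)$, and not necessarily the exact form used in the paper): the universal predictor $M$ dominates every computable measure $\mu$, which immediately bounds the cumulative expected prediction error,
$$
M(x_{1:n}) \;\ge\; 2^{-K(\mu)}\,\mu(x_{1:n})
\quad\Longrightarrow\quad
\sum_{t=1}^{n}\mathbf{E}_{\mu}\!\Big[D_{\mathrm{KL}}\big(\mu(\cdot\mid x_{<t})\,\big\|\,M(\cdot\mid x_{<t})\big)\Big]
\;=\;\mathbf{E}_{\mu}\!\Big[\ln\tfrac{\mu(x_{1:n})}{M(x_{1:n})}\Big]
\;\le\; K(\mu)\ln 2 ,
$$
uniformly in the horizon $n$. The abstract's contribution is to replace $K(\mu)$, for predictions made after observing $x$, by a complexity of $\mu$ conditioned on $x$ plus the complexity of the randomness deficiency of $x$.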
A Backtracking-Based Algorithm for Computing Hypertree-Decompositions
Hypertree decompositions of hypergraphs are a generalization of tree decompositions of graphs. The corresponding hypertree-width is a measure for the cyclicity and therefore tractability of the encoded computation problem. Many NP-hard decision and computation problems are known to be tractable on instances whose structure corresponds to hypergraphs of bounded hypertree-width. Intuitively, the smaller the hypertree-width, the faster the computation problem can be solved. In this paper, we present the new backtracking-based algorithm det-k-decomp for computing hypertree decompositions of small width. Our benchmark evaluations have shown that det-k-decomp significantly outperforms opt-k-decomp, the only exact hypertree decomposition algorithm so far. Even compared to the best heuristic algorithm, we obtained competitive results as long as the hypergraphs are not too large.
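As a small aside on the width notion: the width of a single decomposition node is the minimum number of hyperedges whose union covers the node's bag of vertices, and det-k-decomp backtracks over candidate covers of size at most k. Below is a brute-force sketch of just that covering primitive, an illustrative helper of ours rather than code from the paper.

```python
from itertools import combinations

def cover_width(bag, hyperedges, k_max):
    """Smallest number of hyperedges whose union covers `bag`,
    or None if more than `k_max` edges would be needed."""
    bag = set(bag)
    for k in range(1, k_max + 1):
        for combo in combinations(hyperedges, k):
            if bag <= set().union(*combo):
                return k
    return None

# Example: the bag {1, 2, 3, 4} needs two of the hyperedges below.
print(cover_width({1, 2, 3, 4}, [{1, 2}, {3, 4}, {4, 5}], k_max=3))  # -> 2
```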
Phase Synchrony Rate for the Recognition of Motor Imagery in Brain-Computer Interface
Song, Le, Gordon, Evian, Gysels, Elly
Amplitude changes in EEG rhythms during motor imagery are most successfully captured by the method of Common Spatial Patterns (CSP), which is widely used in brain-computer interfaces (BCI). BCI methods based on amplitude information, however, have not incorporated the rich phase dynamics in the EEG rhythm. This study reports on a BCI method based on the phase synchrony rate (SR). SR, computed from the binarized phase locking value, describes the number of discrete synchronization events within a window. Statistical nonparametric tests show that SRs differ significantly between 2 types of motor imageries. Classifiers trained on SRs consistently demonstrate satisfactory results for all 5 subjects. It is further observed that, for 3 subjects, phase is more discriminative than amplitude in the first 1.5-2.0 s of the motor imagery.
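A minimal sketch of one plausible reading of the SR feature, assuming Hilbert-transform phases, sub-windowed phase locking values (PLV), and an arbitrary binarization threshold; none of these parameter choices are taken from the paper.

```python
import numpy as np
from scipy.signal import hilbert

def synchrony_rate(x, y, sub_win=32, step=16, threshold=0.8):
    """Count 'synchronization events' between two equal-length EEG channels:
    the PLV is computed in short sub-windows of the instantaneous phase
    difference and binarized with a threshold; the flagged sub-windows are
    counted. Sub-window length, step, and threshold are illustrative."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))   # instantaneous phase difference
    events = 0
    for start in range(0, len(dphi) - sub_win + 1, step):
        seg = dphi[start:start + sub_win]
        plv = np.abs(np.mean(np.exp(1j * seg)))          # phase locking value of this sub-window
        events += int(plv > threshold)
    return events
```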
Conditional Visual Tracking in Kernel Space
Sminchisescu, Cristian, Kanaujia, Atul, Li, Zhiguo, Metaxas, Dimitris
We present a conditional temporal probabilistic framework for reconstructing 3D human motion in monocular video based on descriptors encoding image silhouette observations. For computational efficiency we restrict visual inference to low-dimensional kernel-induced nonlinear state spaces. Our methodology (kBME) combines kernel PCA-based nonlinear dimensionality reduction (kPCA) and Conditional Bayesian Mixture of Experts (BME) in order to learn complex multivalued predictors between observations and model hidden states. This is necessary for accurate, inverse, visual perception inferences, where several probable, distant 3D solutions exist due to noise or the uncertainty of monocular perspective projection. Low-dimensional models are appropriate because many visual processes exhibit strong nonlinear correlations in both the image observations and the target, hidden state variables. The learned predictors are temporally combined within a conditional graphical model in order to allow a principled propagation of uncertainty. We study several predictors and empirically show that the proposed algorithm positively compares with techniques based on regression, Kernel Dependency Estimation (KDE) or PCA alone, and gives results competitive to those of high-dimensional mixture predictors at a fraction of their computational cost. We show that the method successfully reconstructs the complex 3D motion of humans in real monocular video sequences.
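A crude, illustrative stand-in for such a pipeline is sketched below: kernel PCA compresses the pose targets, clustering the latent targets defines "experts", a gate predicts expert responsibilities from the observation descriptor, and each expert is a simple regressor. This is not the paper's conditional Bayesian Mixture of Experts; all model choices and names here are our assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression, Ridge

class GatedKernelPredictor:
    """Illustrative gated-experts predictor on kernel-PCA-compressed states."""

    def __init__(self, n_components=8, n_experts=3):
        self.kpca = KernelPCA(n_components=n_components, kernel="rbf",
                              fit_inverse_transform=True)
        self.n_experts = n_experts

    def fit(self, X_obs, Y_state):
        Z = self.kpca.fit_transform(Y_state)              # low-dimensional latent states
        labels = KMeans(n_clusters=self.n_experts, n_init=10).fit_predict(Z)
        self.gate = LogisticRegression(max_iter=1000).fit(X_obs, labels)
        self.experts = [Ridge().fit(X_obs[labels == k], Z[labels == k])
                        for k in range(self.n_experts)]
        return self

    def predict(self, X_obs):
        """Return gate probabilities and one hypothesis per expert, mapped back
        to the original state space (a multivalued prediction per input)."""
        probs = self.gate.predict_proba(X_obs)
        hypotheses = [self.kpca.inverse_transform(e.predict(X_obs))
                      for e in self.experts]
        return probs, hypotheses
```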
Temporally changing synaptic plasticity
Tamosiunaite, Minija, Porr, Bernd, Wörgötter, Florentin
Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways [1]. In this study we investigate how these signals could temporally interact at dendrites, leading to changing plasticity properties at local synapse clusters. Similar to a previous study [2], we employ a differential Hebbian plasticity rule to emulate spike-timing dependent plasticity. We use dendritic (D-) and back-propagating (BP-) spikes as post-synaptic signals in the learning rule and investigate how their interaction will influence plasticity. We analyze a situation where synapse plasticity characteristics change in the course of time, depending on the type of post-synaptic activity momentarily elicited. Starting with weak synapses, which only elicit local D-spikes, a slow, unspecific growth process is induced. As soon as the soma begins to spike, this process is replaced by fast synaptic changes as the consequence of the much stronger and sharper BP-spike, which now dominates the plasticity rule. This way a winner-take-all mechanism emerges in a two-stage process, enhancing the best-correlated inputs. These results suggest that synaptic plasticity is a temporally changing process by which the computational properties of dendrites or complete neurons can be substantially augmented.
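A minimal sketch of a differential Hebbian update of the kind referred to above, where the weight change is driven by the pre-synaptic trace times the temporal derivative of the post-synaptic signal; the Gaussian spike shapes and all parameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

def differential_hebb(pre, post, lr=1e-3, dt=1.0):
    """Differential Hebbian weight change: dw/dt ~ pre(t) * d(post)/dt,
    integrated over the trial. `pre` and `post` are sampled traces."""
    dpost = np.gradient(post, dt)
    return lr * np.sum(pre * dpost) * dt

# Toy comparison: a weak, broad D-spike-like trace vs. a strong, sharp
# BP-spike-like trace as the post-synaptic signal, same pre-synaptic trace.
t = np.arange(0.0, 100.0, 1.0)
pre      = np.exp(-((t - 40.0) ** 2) / (2 * 5.0 ** 2))          # pre-synaptic activity
d_spike  = 0.3 * np.exp(-((t - 45.0) ** 2) / (2 * 15.0 ** 2))   # weak, broad D-spike
bp_spike = 1.0 * np.exp(-((t - 45.0) ** 2) / (2 * 2.0 ** 2))    # strong, sharp BP-spike
print(differential_hebb(pre, d_spike), differential_hebb(pre, bp_spike))
```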
Saliency Based on Information Maximization
A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex. It is further shown that the proposed saliency measure may be extended to address issues that currently elude explanation in the domain of saliency-based models. Results on natural images are compared with experimental eye tracking data revealing the efficacy of the model in predicting the deployment of overt attention as compared with existing efforts.
1 Introduction
There has long been interest in the nature of eye movements and fixation behavior following early studies by Buswell [1] and Yarbus [2]. However, a complete description of the mechanisms underlying these peculiar fixation patterns remains elusive.
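As a toy illustration of saliency as self-information, the sketch below scores each pixel by $-\log_2 p$ of its intensity under a global histogram; the paper instead estimates densities over learned local features with a neural circuit, so this is only a crude stand-in.

```python
import numpy as np

def self_information_saliency(img, bins=64):
    """Crude self-information map: rarer intensities are more salient.
    A global intensity histogram stands in for the paper's density estimate
    over local features, purely to keep the sketch short."""
    img = np.asarray(img, dtype=float)
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()                                  # P(intensity bin)
    idx = np.clip(np.digitize(img, edges[1:-1]), 0, bins - 1)
    return -np.log2(p[idx] + 1e-12)                        # Shannon self-information per pixel
```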
Transfer learning for text classification
Linear text classification algorithms work by computing an inner product between a test document vector and a parameter vector. In many such algorithms, including naive Bayes and most TFIDF variants, the parameters are determined by some simple, closed-form function of training set statistics; we call this mapping from statistics to parameters the parameter function. Much research in text classification over the last few decades has consisted of manual efforts to identify better parameter functions. In this paper, we propose an algorithm for automatically learning this function from related classification problems. The parameter function found by our algorithm then defines a new learning algorithm for text classification, which we can apply to novel classification tasks. We find that our learned classifier outperforms existing methods on a variety of multiclass text classification tasks.
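To make the notion concrete, the sketch below shows one classical, hand-designed parameter function, the smoothed log-likelihood weights of multinomial naive Bayes, together with the inner-product classification rule; the function and variable names are ours, and this is the kind of fixed mapping the paper proposes to learn instead.

```python
import numpy as np

def naive_bayes_parameter_function(counts, alpha=1.0):
    """Map per-class word-count statistics (shape [n_classes, n_words]) to a
    weight matrix via the closed-form smoothed log-likelihoods of
    multinomial naive Bayes."""
    smoothed = counts + alpha
    return np.log(smoothed / smoothed.sum(axis=1, keepdims=True))

def classify(doc_vector, weights, log_priors):
    """Linear rule: inner product of the document count vector with each
    class's parameter vector, plus the log prior."""
    return int(np.argmax(weights @ doc_vector + log_priors))

# Toy usage with 2 classes and a 4-word vocabulary (all numbers illustrative).
counts = np.array([[3., 0., 1., 2.], [0., 4., 2., 1.]])
W = naive_bayes_parameter_function(counts)
print(classify(np.array([1., 0., 0., 1.]), W, np.log([0.5, 0.5])))  # -> 0
```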