Generalizable Relational Binding from Coarse-coded Distributed Representations
O'Reilly, Randall C., Busby, R. S.
We present a model of binding of relationship information in a spatial domain (e.g., square above triangle) that uses low-order coarse-coded conjunctive representations instead of more popular temporal synchrony mechanisms. Supporters of temporal synchrony argue that conjunctive representations lack both efficiency (i.e., combinatorial numbers of units are required) and systematicity (i.e., the resulting representations are overly specific and thus do not support generalization to novel exemplars). To counter these claims, we show that our model: a) uses far fewer hidden units than the number of conjunctions represented, by using coarse-coded, distributed representations where each unit has a broad tuning curve through high-dimensional conjunction space, and b) is capable of considerable generalization to novel inputs.
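As a toy illustration of the coarse-coded conjunctive idea (not the authors' network), the sketch below gives each hidden unit a broad, sparse random tuning over (object, relation, location) conjunctions, so far fewer units than conjunctions still yield distinct distributed codes; the dimensions and sparsity level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obj, n_rel, n_loc, n_hidden = 10, 4, 25, 60          # 1000 conjunctions, only 60 units
W = rng.random((n_hidden, n_obj, n_rel, n_loc)) < 0.1  # broad, sparse random tuning curves

def encode(obj, rel, loc):
    # distributed pattern over hidden units for one conjunction
    return W[:, obj, rel, loc].astype(float)

# each unit responds to many conjunctions (coarse coding), yet distinct
# conjunctions almost surely receive distinct distributed codes
print(encode(0, 1, 2).sum(), np.array_equal(encode(0, 1, 2), encode(3, 1, 2)))
```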
EM-DD: An Improved Multiple-Instance Learning Technique
We present a new multiple-instance (MI) learning technique (EM-DD) that combines EM with the diverse density (DD) algorithm. EM-DD is a general-purpose MI algorithm that can be applied with boolean or real-valued labels and makes real-valued predictions. On the boolean Musk benchmarks, the EM-DD algorithm without any tuning significantly outperforms all previous algorithms. EM-DD is relatively insensitive to the number of relevant attributes in the data set and scales up well to large bag sizes. Furthermore, EM-DD provides a new framework for MI learning, in which the MI problem is converted to a single-instance setting by using EM to estimate the instance responsible for the label of the bag.
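As a rough sketch of this framework (not the authors' implementation), the toy version below uses an E-step to pick, in each bag, the instance presumed responsible for the bag's label, then refits a single target concept to those instances; the Gaussian-style diverse-density surrogate, fixed scales, and plain gradient step are simplifying assumptions.

```python
import numpy as np

def em_dd(bags, labels, n_iters=50, lr=0.05):
    """bags: list of (n_i, d) arrays; labels: array of bag labels in [0, 1]."""
    h = bags[0][0].astype(float).copy()            # initialize target concept at some instance
    for _ in range(n_iters):
        # E-step: in each bag, select the instance closest to the current concept h
        reps = np.array([bag[np.argmin(((bag - h) ** 2).sum(axis=1))] for bag in bags])
        # M-step: refit h to the selected single instances by gradient descent on a
        # squared-error surrogate of the diverse-density likelihood
        for _ in range(20):
            p = np.exp(-((reps - h) ** 2).sum(axis=1))       # predicted bag labels
            grad = (2 * (p - labels) * p)[:, None] * 2 * (reps - h)
            h -= lr * grad.sum(axis=0)
    return h
```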
TAP Gibbs Free Energy, Belief Propagation and Sparsity
Csató, Lehel, Opper, Manfred, Winther, Ole
The adaptive TAP Gibbs free energy for a general densely connected probabilistic model with quadratic interactions and arbitrary single-site constraints is derived. We show how a specific sequential minimization of the free energy leads to a generalization of Minka's expectation propagation. Lastly, we derive a sparse representation version of the sequential algorithm. The usefulness of the approach is demonstrated on classification and density estimation with Gaussian processes and on an independent component analysis problem.
Fragment Completion in Humans and Machines
Jacobs, David, Rokers, Bas, Rudra, Archisman, Liu, Zili
Partial information can trigger a complete memory. At the same time, human memory is not perfect. A cue can contain enough information to specify an item in memory, but fail to trigger that item. In the context of word memory, we present experiments that demonstrate some basic patterns in human memory errors. We use cues that consist of word fragments. We show that short and long cues are completed more accurately than medium-length ones and study some of the factors that lead to this behavior. We then present a novel computational model that shows some of the flexibility and patterns of errors that occur in human memory.
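For concreteness, a minimal sketch of the cue-matching step is shown below, assuming fragments mark missing letters with '_'; the experiments' exact cue format and the model's error mechanism are not captured by this exact-matching toy.

```python
import re

def complete(fragment, lexicon):
    # treat each '_' as an unknown letter and return all words in memory that fit the cue
    pattern = re.compile('^' + fragment.replace('_', '.') + '$')
    return [w for w in lexicon if pattern.match(w)]

print(complete('c_t', ['cat', 'cot', 'cart', 'cute']))   # ['cat', 'cot']
```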
Speech Recognition with Missing Data using Recurrent Neural Nets
In the 'missing data' approach to improving the robustness of automatic speech recognition to added noise, an initial process identifies spectral-temporal regions which are dominated by the speech source. The remaining regions are considered to be 'missing'. In this paper we develop a connectionist approach to the problem of adapting speech recognition to the missing data case, using Recurrent Neural Networks. In contrast to methods based on Hidden Markov Models, RNNs allow us to make use of long-term time constraints and to make the problems of classification with incomplete data and imputing missing values interact. We report encouraging results on an isolated digit recognition task.
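The sketch below illustrates, under assumed weight matrices and dimensions (not the paper's architecture), how imputation and classification can interact in a recurrent net: reliable spectral-temporal values are taken from the input, and regions marked missing are filled with the network's own prediction from the previous step.

```python
import numpy as np

def run_rnn(frames, mask, W_in, W_rec, W_out, W_imp):
    """frames, mask: (T, d) arrays; mask is 1 where the speech source dominates."""
    h = np.zeros(W_rec.shape[0])
    x_hat = np.zeros(frames.shape[1])                  # imputed previous frame
    outputs = []
    for x, m in zip(frames, mask):
        x_filled = m * x + (1 - m) * x_hat             # observed where reliable, imputed elsewhere
        h = np.tanh(W_in @ x_filled + W_rec @ h)
        x_hat = W_imp @ h                              # prediction used to impute the next frame
        outputs.append(W_out @ h)                      # class scores (e.g., digit identities)
    return np.array(outputs)
```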
A Maximum-Likelihood Approach to Modeling Multisensory Enhancement
Multisensory response enhancement (MRE) is the augmentation of the response of a neuron to sensory input of one modality by simultaneous input from another modality. The maximum likelihood (ML) model presented here modifies the Bayesian model for MRE (Anastasio et al.) by incorporating a decision strategy to maximize the number of correct decisions. Thus the ML model can also deal with the important tasks of stimulus discrimination and identification in the presence of incongruent visual and auditory cues. It accounts for the inverse effectiveness observed in neurophysiological recording data, and it predicts a functional relation between uni- and bimodal levels of discriminability that is testable both in neurophysiological and behavioral experiments.
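As a minimal illustration of the decision strategy (with Poisson-distributed inputs in the spirit of the Bayesian MRE model, and purely illustrative rate parameters), maximizing the number of correct decisions under equal priors reduces to choosing the hypothesis with the larger likelihood of the joint visual and auditory input:

```python
from scipy.stats import poisson

def target_present(v_count, a_count,
                   rates_present=(8.0, 6.0), rates_absent=(2.0, 2.0)):
    """Maximum-likelihood decision between 'target present' and 'target absent'."""
    lik_present = poisson.pmf(v_count, rates_present[0]) * poisson.pmf(a_count, rates_present[1])
    lik_absent = poisson.pmf(v_count, rates_absent[0]) * poisson.pmf(a_count, rates_absent[1])
    return lik_present > lik_absent

print(target_present(5, 4), target_present(1, 0))   # True, False
```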
Estimating the Reliability of ICA Projections
Meinecke, Frank C., Ziehe, Andreas, Kawanabe, Motoaki, Müller, Klaus-Robert
When applying unsupervised learning techniques like ICA or temporal decorrelation, a key question is whether the discovered projections are reliable. In other words: can we give error bars, or can we assess the quality of our separation? We use resampling methods to tackle these questions and show experimentally that our proposed variance estimations are strongly correlated with the separation error. We demonstrate that this reliability estimation can be used to choose the appropriate ICA model, to significantly enhance the separation performance, and, most importantly, to mark the components that have an actual physical meaning.
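A minimal sketch of the resampling idea, using scikit-learn's FastICA and a simple correlation-based matching of components across resamples (the paper's surrogate-data construction and variance measure may differ), could look like this:

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_reliability(X, n_components, n_resamples=20, seed=0):
    """X: (n_samples, n_features). Returns a per-component instability score."""
    rng = np.random.default_rng(seed)
    ref = FastICA(n_components=n_components, random_state=seed).fit(X).components_
    ref /= np.linalg.norm(ref, axis=1, keepdims=True)
    deviations = []
    for _ in range(n_resamples):
        idx = rng.integers(0, len(X), len(X))          # bootstrap sample of observations
        W = FastICA(n_components=n_components, random_state=seed).fit(X[idx]).components_
        W /= np.linalg.norm(W, axis=1, keepdims=True)
        # match each resampled projection to the closest reference projection
        C = np.abs(ref @ W.T)
        deviations.append(1.0 - C.max(axis=1))         # 0 = perfectly reproduced projection
    return np.mean(deviations, axis=0)
```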
KLD-Sampling: Adaptive Particle Filters
In recent years, particle filters have been applied with great success to a variety of state estimation problems. We present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets on the fly. The key idea of the KLD-sampling method is to bound the approximation error introduced by the sample-based representation of the particle filter. The name KLD-sampling reflects the fact that we measure the approximation error by the Kullback-Leibler distance. Our adaptation approach chooses a small number of samples if the density is focused on a small part of the state space, and a large number of samples if the state uncertainty is high. Both the implementation and computational overhead of this approach are small. Extensive experiments using mobile robot localization as a test application show that our approach yields drastic improvements over particle filters with fixed sample set sizes and over a previously introduced adaptation technique.
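The sample-size bound at the heart of KLD-sampling can be sketched as follows: choose enough particles so that, with probability 1 - delta, the KL distance between the sample-based maximum-likelihood estimate and the true posterior stays below epsilon, where k is the number of histogram bins (over a grid on the state space) that currently contain at least one particle. This reconstruction uses the standard Wilson-Hilferty approximation of the chi-square quantile; the parameter names are ours.

```python
from scipy.stats import norm

def kld_sample_size(k, epsilon=0.05, delta=0.01):
    """Required number of particles given k occupied bins, error bound epsilon, confidence 1 - delta."""
    if k <= 1:
        return 1
    z = norm.ppf(1.0 - delta)                      # upper (1 - delta) quantile of N(0, 1)
    a = 2.0 / (9.0 * (k - 1))
    return int((k - 1) / (2.0 * epsilon) * (1.0 - a + (a ** 0.5) * z) ** 3)

# k grows as particles spread over more bins, so the required sample size grows
# when state uncertainty is high and shrinks when the density is focused.
print(kld_sample_size(5), kld_sample_size(500))
```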
Stabilizing Value Function Approximation with the BFBP Algorithm
Wang, Xin, Dietterich, Thomas G.
Our BFBP (Batch Fit to Best Paths) algorithm alternates between an exploration phase (during which trajectories are generated to try to find fragments of the optimal policy) and a function fitting phase (during which a function approximator is fit to the best known paths from start states to terminal states). An advantage of this approach is that batch value-function fitting is a global process, which allows it to address the tradeoffs in function approximation that cannot be handled by local, online algorithms.
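Schematically, and with hypothetical rollout and regressor interfaces (this is a sketch of the alternation described above, not the authors' code), BFBP's two phases might be organized as:

```python
def bfbp(start_states, rollout, regressor, n_rounds=10, n_rollouts=100):
    """rollout(s0, regressor) -> (path, ret): path is a list of (state, return-to-go) pairs."""
    best_paths = {}                                         # start state -> (return, path)
    for _ in range(n_rounds):
        # exploration phase: generate trajectories and keep the best known path per start state
        for s0 in start_states:
            for _ in range(n_rollouts):
                path, ret = rollout(s0, regressor)          # e.g., exploration guided by the current fit
                if s0 not in best_paths or ret > best_paths[s0][0]:
                    best_paths[s0] = (ret, path)
        # function-fitting phase: batch-fit the value-function approximator to the best paths
        states, values = [], []
        for _, path in best_paths.values():
            for state, value_to_go in path:
                states.append(state)
                values.append(value_to_go)
        regressor.fit(states, values)                       # global batch fit over all best paths
    return regressor
```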