Discriminative Binaural Sound Localization
Ben-Reuven, Ehud, Singer, Yoram
Time difference of arrival (TDOA) is commonly used to estimate the azimuth of a source in a microphone array. The most common methods to estimate TDOA are based on finding extrema in generalized cross-correlation waveforms. In this paper we apply microphone array techniques to a manikin head. By considering the entire cross-correlation waveform we achieve azimuth prediction accuracy that exceeds extrema-locating methods. We do so by quantizing the azimuthal angle and treating the prediction problem as a multiclass categorization task. We demonstrate the merits of our approach by evaluating the various approaches on Sony's AIBO robot.
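As an illustration of the idea of classifying the whole cross-correlation waveform rather than locating its peak, here is a minimal Python sketch; the GCC-PHAT weighting, the 10-degree quantization, and the use of scikit-learn's LogisticRegression are assumptions made for the example, not the paper's implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def gcc_phat(x_left, x_right, n_fft=512):
    # Generalized cross-correlation with phase transform (GCC-PHAT);
    # the full waveform (not just its peak) becomes the feature vector.
    X = np.fft.rfft(x_left, n_fft)
    Y = np.fft.rfft(x_right, n_fft)
    cross = X * np.conj(Y)
    cross /= np.abs(cross) + 1e-12          # PHAT weighting
    return np.fft.fftshift(np.fft.irfft(cross, n_fft))

def azimuth_to_class(azimuth_deg, n_classes=36):
    # Quantize the azimuth into n_classes bins (10 degrees each here).
    return int((azimuth_deg % 360.0) // (360.0 / n_classes))

def train_localizer(signal_pairs, azimuths_deg, n_classes=36):
    # Multiclass categorization over the quantized azimuths.
    features = np.stack([gcc_phat(l, r) for l, r in signal_pairs])
    labels = np.array([azimuth_to_class(a, n_classes) for a in azimuths_deg])
    return LogisticRegression(max_iter=1000).fit(features, labels)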
Learning to Classify Galaxy Shapes Using the EM Algorithm
Kirshner, Sergey, Cadez, Igor V., Smyth, Padhraic, Kamath, Chandrika
We describe the application of probabilistic model-based learning to the problem of automatically identifying classes of galaxies, based on both morphological and pixel intensity characteristics. The EM algorithm can be used to learn how to spatially orient a set of galaxies so that they are geometrically aligned. We augment this "ordering-model" with a mixture model on objects, and demonstrate how classes of galaxies can be learned in an unsupervised manner using a two-level EM algorithm. The resulting models provide highly accurate classification of galaxies in cross-validation experiments.
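A toy sketch of the orientation-alignment idea, treating each galaxy's rotation as a discrete latent variable in EM; the 15-degree angle grid, the single mean template, and the isotropic noise model are simplifying assumptions, not the paper's two-level model.

import numpy as np
from scipy.ndimage import rotate

def em_align(images, angles=tuple(range(0, 360, 15)), n_iters=20, sigma=1.0):
    # Toy EM: each galaxy image has a discrete latent orientation; the
    # M-step re-estimates a single geometrically aligned mean template.
    template = images.mean(axis=0)
    for _ in range(n_iters):
        rotated = [rotate(template, a, reshape=False, order=1) for a in angles]
        # E-step: responsibility of every candidate rotation for every image
        resp = np.zeros((len(images), len(angles)))
        for i, img in enumerate(images):
            logp = np.array([-np.sum((img - r) ** 2) / (2 * sigma ** 2)
                             for r in rotated])
            resp[i] = np.exp(logp - logp.max())
            resp[i] /= resp[i].sum()
        # M-step: rotate each image back by each candidate angle and average
        acc = np.zeros_like(template)
        for i, img in enumerate(images):
            for j, a in enumerate(angles):
                acc += resp[i, j] * rotate(img, -a, reshape=False, order=1)
        template = acc / len(images)
    return template, resp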
The Stability of Kernel Principal Components Analysis and its Relation to the Process Eigenspectrum
Williams, Christopher, Shawe-Taylor, John S.
In this paper we analyze the relationship between the eigenvalues of the m × m Gram matrix K for a kernel k(·, ·) and the corresponding process eigenspectrum. We bound the differences between the two spectra and provide a performance bound on kernel PCA. Over recent years there has been a considerable amount of interest in kernel methods for supervised learning (e.g. Support Vector Machines and Gaussian Process prediction) and for unsupervised learning (e.g. kernel PCA). In this paper we study the stability of the subspace of feature space extracted by kernel PCA with respect to the sample of size m, and relate this to the feature space that would be extracted in the infinite sample-size limit. This analysis essentially "lifts" into (a potentially infinite-dimensional) feature space an analysis which can also be carried out for PCA, comparing the k-dimensional eigenspace extracted from a sample covariance matrix and the k-dimensional eigenspace extracted from the population covariance matrix, and comparing the residuals from the k-dimensional compression for the m-sample and the population.
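A small numerical illustration of the relationship being studied, assuming an RBF kernel and standard-normal inputs: the eigenvalues of the Gram matrix, divided by m, stabilize as the sample grows, approaching the process eigenspectrum.

import numpy as np

def rbf_gram(X, lengthscale=1.0):
    # m x m Gram matrix K for an RBF kernel k(x, x').
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def scaled_spectrum(m, dim=1, k=5, seed=0):
    # Top-k eigenvalues of K/m; as m grows these approach the eigenvalues
    # of the kernel operator with respect to the sampling density.
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((m, dim))
    eigvals = np.linalg.eigvalsh(rbf_gram(X))[::-1]
    return eigvals[:k] / m

for m in (100, 400, 1600):
    print(m, scaled_spectrum(m))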
Expected and Unexpected Uncertainty: ACh and NE in the Neocortex
Experimental and theoretical studies suggest that these different forms of variability play different behavioral, neural and computational roles, and may be reported by different (notably neuromodulatory) systems. Here, we refine our previous theory of acetylcholine's role in cortical inference in the (oxymoronic) terms of expected uncertainty, and advocate a theory for norepinephrine in terms of unexpected uncertainty. We suggest that norepinephrine reports the radical divergence of bottom-up inputs from prevailing top-down interpretations, to influence inference and plasticity. We illustrate this proposal using an adaptive factor analysis model.
Dopamine Induced Bistability Enhances Signal Processing in Spiny Neurons
Gruber, Aaron J., Solla, Sara A., Houk, James C.
Single unit activity in the striatum of awake monkeys shows a marked dependence on the expected reward that a behavior will elicit. We present a computational model of spiny neurons, the principal neurons of the striatum, to assess the hypothesis that direct neuromodulatory effects of dopamine through the activation of D1 receptors mediate the reward dependency of spiny neuron activity. Dopamine release results in the amplification of key ion currents, leading to the emergence of bistability, which not only modulates the peak firing rate but also introduces a temporal and state dependence of the model's response, thus improving the detectability of temporally correlated inputs. The classic notion of the basal ganglia as being involved in purely motor processing has expanded over the years to include sensory and cognitive functions. A surprising new finding is that much of this activity shows a motivational component. For instance, striatal activity related to visual stimuli is dependent on the type of reinforcement (primary vs. secondary) that a behavior will elicit [1].
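A deliberately abstract sketch of how amplifying an inward current can create a second stable membrane state; the cubic current-voltage term and the gain factor standing in for D1-receptor amplification are illustrative assumptions, not the paper's biophysical model.

import numpy as np

def dvdt(v, gain=1.0, i_input=0.0):
    # Toy steady-state current balance for a spiny-neuron-like unit;
    # 'gain' stands in for D1-receptor amplification of an inward current.
    leak = -(v + 70.0) / 10.0                              # pull toward a down-state
    inward = -gain * 0.0002 * (v + 70.0) * (v + 40.0) * (v + 10.0)
    return leak + inward + i_input

def stable_states(gain, i_input=0.0):
    # Stable fixed points are where dV/dt crosses zero from + to -.
    v = np.linspace(-90.0, 10.0, 2000)
    f = dvdt(v, gain, i_input)
    crossings = np.where(np.diff(np.sign(f)) < 0)[0]
    return v[crossings]

print("baseline:    ", stable_states(gain=1.0))   # a single stable down-state
print("D1 activated:", stable_states(gain=3.0))   # down-state plus an up-state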
Fast Transformation-Invariant Factor Analysis
Kannan, Anitha, Jojic, Nebojsa, Frey, Brendan
Dimensionality reduction techniques such as principal component analysis and factor analysis are used to discover a linear mapping between high dimensional data samples and points in a lower dimensional subspace. In [6], Jojic and Frey introduced mixture of transformation-invariant component analyzers (MTCA) that can account for global transformations such as translations and rotations, perform clustering and learn local appearance deformations by dimensionality reduction.
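For reference, a plain factor-analysis EM sketch, without the transformation-invariant machinery or the clustering of MTCA; the latent dimensionality, initialization, and variance floor are arbitrary choices for the example.

import numpy as np

def factor_analysis_em(X, k, n_iters=100, seed=0):
    # Plain EM for factor analysis on X (n x d) with k latent factors.
    # (The transformation-invariant extension would add a discrete latent
    # transformation variable that is summed over in the E-step.)
    rng = np.random.default_rng(seed)
    n, d = X.shape
    X = X - X.mean(axis=0)                       # model zero-mean data
    L = rng.standard_normal((d, k)) * 0.1        # factor loadings
    psi = np.var(X, axis=0) + 1e-6               # diagonal noise variances
    for _ in range(n_iters):
        # E-step: posterior over the latent factors z for every sample
        Lp = L / psi[:, None]                    # Psi^{-1} Lambda
        G = np.linalg.inv(np.eye(k) + L.T @ Lp)  # posterior covariance
        Ez = X @ Lp @ G                          # (n x k) posterior means
        Ezz = n * G + Ez.T @ Ez                  # sum_n E[z z^T | x_n]
        # M-step: update the loadings and the noise variances
        L = (X.T @ Ez) @ np.linalg.inv(Ezz)
        psi = np.mean(X ** 2, axis=0) - np.mean(X * (Ez @ L.T), axis=0)
        psi = np.maximum(psi, 1e-6)
    return L, psi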
An Asynchronous Hidden Markov Model for Audio-Visual Speech Recognition
An EM algorithm to train the model is presented, as well as a Viterbi decoder that can be used to obtain the optimal state sequence as well as the alignment between the two sequences. One such task, which will be presented in this paper, is multimodal speech recognition using both a microphone and a camera recording a speaker simultaneously while he (she) speaks. It is indeed well known that seeing the speaker's face in addition to hearing his (her) voice can often improve speech intelligibility, particularly in noisy environments [7], mainly thanks to the complementarity of the visual and acoustic signals. While in the former solution the alignment between the two sequences is decided a priori, in the latter there is no explicit learning of the joint probability of the two sequences. In fact, the model makes it possible to desynchronize the streams by temporarily stretching one of them in order to obtain a better match between the corresponding frames. The model can thus be directly applied to the problem of audio-visual speech recognition where, for instance, the lips sometimes start to move before any sound is heard.
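A much-simplified stand-in for the joint decoding idea, using ordinary dynamic-time-warping alignment between two feature streams assumed to share an embedding dimension; the asynchronous HMM additionally decodes a hidden state sequence and is trained with EM, which this sketch omits.

import numpy as np

def align_streams(audio_feats, video_feats):
    # Dynamic-programming (DTW) alignment between an acoustic feature
    # sequence (T_a x d) and a visual one (T_v x d).
    Ta, Tv = len(audio_feats), len(video_feats)
    cost = np.full((Ta + 1, Tv + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tv + 1):
            d = np.linalg.norm(audio_feats[i - 1] - video_feats[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],       # stretch the video stream
                                 cost[i, j - 1],       # stretch the audio stream
                                 cost[i - 1, j - 1])   # advance both streams
    # Backtrack the optimal alignment path between the two streams.
    path, i, j = [], Ta, Tv
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[Ta, Tv], path[::-1]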
Robust Novelty Detection with Single-Class MPM
El Ghaoui, Laurent, Jordan, Michael I., Lanckriet, Gert R.
This algorithm, the "single-class minimax probability machine" (MPM), is built on a distribution-free methodology that minimizes the worst-case probability of a data point falling outside of a convex set, given only the mean and covariance matrix of the distribution and making no further distributional assumptions. We present a robust approach to estimating the mean and covariance matrix within the general two-class MPM setting, and show how this approach specializes to the single-class problem. We provide empirical results comparing the single-class MPM to the single-class SVM and a two-class SVM method. Novelty detection is an important unsupervised learning problem in which test data are to be judged as having been generated from the same or a different process as that which generated the training data.
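A distribution-free sketch in the same spirit, assuming a Mahalanobis (ellipsoidal) region and a Chebyshev/Markov bound to set the threshold from a desired worst-case false-alarm rate; the actual single-class MPM solves a different, half-space-based optimization, so this is only an analogy.

import numpy as np

def fit_detector(X, alpha=0.1):
    # Use only the training mean and covariance, plus a Markov bound on the
    # squared Mahalanobis distance (whose expectation is d), to guarantee a
    # worst-case false-alarm rate of at most alpha for any distribution
    # with that mean and covariance.
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    prec = np.linalg.inv(cov)
    threshold = X.shape[1] / alpha       # P(maha2 >= d/alpha) <= alpha
    return mu, prec, threshold

def is_novel(x, mu, prec, threshold):
    diff = x - mu
    maha2 = diff @ prec @ diff           # squared Mahalanobis distance
    return maha2 > threshold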
Half-Lives of EigenFlows for Spectral Clustering
Chennubhotla, Chakra, Jepson, Allan D.
Using a Markov chain perspective of spectral clustering, we present an algorithm to automatically find the number of stable clusters in a dataset. The Markov chain's behaviour is characterized by the spectral properties of the matrix of transition probabilities, from which we derive eigenflows along with their half-lives. An eigenflow describes the flow of probability mass due to the Markov chain, and it is characterized by its eigenvalue, or equivalently, by the half-life of its decay as the Markov chain is iterated. An ideal stable cluster is one with zero eigenflow and infinite half-life. The key insight in this paper is that bottlenecks between weakly coupled clusters can be identified by computing the sensitivity of the eigenflow's half-life to variations in the edge weights. We propose a novel EIGENCUTS algorithm to perform clustering that removes these identified bottlenecks in an iterative fashion.
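A minimal sketch of the half-life computation, assuming a symmetric affinity matrix and row normalization to obtain the Markov transition matrix; the sensitivity analysis and the iterative EIGENCUTS procedure are not shown.

import numpy as np

def eigenflow_half_lives(A):
    # Row-normalize the affinities to get the Markov transition matrix.
    M = A / A.sum(axis=1, keepdims=True)
    lam = np.linalg.eigvals(M).real
    # |lambda| is the per-iteration decay factor of each eigenflow; the unit
    # eigenvalue (the stationary flow) is clipped to avoid division by zero.
    decay = np.clip(np.abs(lam), 1e-12, 1.0 - 1e-12)
    return np.log(0.5) / np.log(decay)   # iterations until the flow halves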
Source Separation with a Sensor Array using Graphical Models and Subband Filtering
Source separation is an important problem at the intersection of several fields, including machine learning, signal processing, and speech technology. Here we describe new separation algorithms which are based on probabilistic graphical models with latent variables. In contrast with existing methods, these algorithms exploit detailed models to describe source properties. They also use subband filtering ideas to model the reverberant environment, and employ an explicit model for background and sensor noise. We leverage variational techniques to keep the computational complexity per EM iteration linear in the number of frames.
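As a small illustration of the subband-filtering ingredient only (not the graphical model itself), the sketch below maps each sensor signal into an STFT subband representation; the sampling rate and FFT size are arbitrary choices for the example.

import numpy as np
from scipy.signal import stft

def sensor_subbands(sensor_signals, fs=16000, nperseg=512):
    # STFT of each sensor signal; a long reverberant impulse response,
    # which is a long convolution in the time domain, acts approximately
    # as a short filter within each subband, which is what keeps the
    # per-frame inference in such models tractable.
    return np.stack([stft(s, fs=fs, nperseg=nperseg)[2] for s in sensor_signals])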