A Neuromorphic Monaural Sound Localizer

Neural Information Processing Systems

We describe the first single-microphone sound localization system and its inspiration from theories of human monaural sound localization. Reflections and diffractions caused by the external ear (pinna) allow humans to estimate sound source elevations using only one ear. Our single-microphone localization model relies on a specially shaped reflecting structure that serves the role of the pinna. Specially designed analog VLSI circuitry uses echo-time processing to localize the sound. A CMOS integrated circuit has been designed, fabricated, and successfully demonstrated on actual sounds.

1 Introduction

The principal cues for human sound localization arise from time and intensity differences between the signals received at the two ears. For low-frequency components of sounds (below 1500 Hz for humans), the phase-derived interaural time difference (ITD) can be used to localize the sound source. For these frequencies, the sound wavelength is at least several times larger than the head, and the amount of shadowing (which depends on the wavelength of the sound compared with the dimensions of the head) is negligible.
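The binaural ITD cue described above can be estimated by cross-correlating the signals received at the two ears and reading off the lag of the correlation peak. A minimal sketch of that idea (the function name and toy signal are illustrative; this is the binaural background cue, not the paper's monaural system):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference by cross-correlation.
    Returns the lag (in seconds) of `right` relative to `left`."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)   # lag 0 sits at index len-1
    return lag / fs

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
sig = np.sin(2 * np.pi * 500 * t)             # 500 Hz burst (below 1500 Hz)
delay = 8                                     # 8 samples = 0.5 ms
left = np.pad(sig, (0, delay))
right = np.pad(sig, (delay, 0))               # sound reaches the right ear later
print(estimate_itd(left, right, fs))          # 0.0005
```

Because the burst is finite, the envelope resolves the phase ambiguity a pure periodic tone would otherwise introduce.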


Replicator Equations, Maximal Cliques, and Graph Isomorphism

Neural Information Processing Systems

We present a new energy-minimization framework for the graph isomorphism problem which is based on an equivalent maximum clique formulation. The approach is centered around a fundamental result proved by Motzkin and Straus in the mid-1960s, and recently expanded in various ways, which allows us to formulate the maximum clique problem in terms of a standard quadratic program. To solve the program we use "replicator" equations, a class of simple continuous- and discrete-time dynamical systems developed in various branches of theoretical biology. We show that, despite their inability to escape from local solutions, they nevertheless provide experimental results which are competitive with those obtained using more elaborate mean-field annealing heuristics.

1 INTRODUCTION

The graph isomorphism problem is one of the few combinatorial optimization problems which still resist any computational complexity characterization [6]. Despite decades of active research, no polynomial-time algorithm for it has yet been found.
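The Motzkin-Straus connection mentioned above can be made concrete: maximizing x^T A x over the standard simplex, where A is the graph's adjacency matrix, attains 1 - 1/k at the characteristic vector of a maximum clique of size k, and the discrete-time replicator equation is a simple local solver for this quadratic program. A sketch (the toy graph, starting point, and iteration count are illustrative choices, not the paper's experimental setup):

```python
import numpy as np

def replicator_max_clique(A, iters=2000):
    """Discrete-time replicator dynamics for the Motzkin-Straus program:
    maximize x^T A x over the simplex. At a local maximizer, the support
    of x is a maximal clique and x^T A x = 1 - 1/k for clique size k."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)              # start at the simplex barycenter
    for _ in range(iters):
        Ax = A @ x
        x = x * Ax / (x @ Ax)            # replicator update
    return x

# Toy graph: triangle {0,1,2} plus a pendant path 2-3-4.
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

x = replicator_max_clique(A)
clique = [i for i in range(5) if x[i] > 1e-4]
print(clique)                            # [0, 1, 2]
```

As the abstract notes, these dynamics cannot escape local solutions, so on harder graphs the recovered clique may only be maximal, not maximum.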




Approximate Learning of Dynamic Models

Neural Information Processing Systems

Inference is a key component in learning probabilistic models from partially observable data. When learning temporal models, each of the many inference phases requires a traversal over an entire long data sequence; furthermore, the data structures manipulated are exponentially large, making this process computationally expensive. In [2], we describe an approximate inference algorithm for monitoring stochastic processes and prove bounds on its approximation error. In this paper, we apply this algorithm as an approximate forward propagation step in an EM algorithm for learning temporal Bayesian networks. We provide a related approximation for the backward step, and prove error bounds for the combined algorithm.
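One way such monitoring algorithms keep the exponentially large belief state tractable is to compact it after each propagation step by projecting the joint distribution onto a product of its marginals. A minimal sketch of that projection step (the two-binary-variable setup is an illustrative assumption; it does not reproduce the cited algorithm):

```python
import numpy as np

def project_to_marginals(joint):
    """Project a joint belief over two binary variables onto the product
    of its marginals -- a compaction step that trades correlation
    information for a compact, factored representation.
    `joint` is a 2x2 array with joint[i, j] = P(X1=i, X2=j)."""
    p1 = joint.sum(axis=1)               # marginal over X1
    p2 = joint.sum(axis=0)               # marginal over X2
    return np.outer(p1, p2)              # factored approximation

joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])           # correlated belief
approx = project_to_marginals(joint)
print(approx)                            # all entries 0.25
```

The example deliberately shows the cost of the approximation: a strongly correlated belief collapses to a uniform product, which is exactly the error the cited bounds control over time.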



Experimental Results on Learning Stochastic Memoryless Policies for Partially Observable Markov Decision Processes

Neural Information Processing Systems

Partially Observable Markov Decision Processes (POMDPs) constitute an important class of reinforcement learning problems which present unique theoretical and computational difficulties. In the absence of the Markov property, popular reinforcement learning algorithms such as Q-learning may no longer be effective, and memory-based methods which remove partial observability via state estimation are notoriously expensive. An alternative approach is to seek a stochastic memoryless policy, which for each observation of the environment prescribes a probability distribution over available actions that maximizes the average reward per time step. A reinforcement learning algorithm which learns a locally optimal stochastic memoryless policy has been proposed by Jaakkola, Singh and Jordan, but not empirically verified. We present a variation of this algorithm, discuss its implementation, and demonstrate its viability using four test problems.
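A stochastic memoryless policy is simply a table mapping each observation directly to a distribution over actions, with no belief state or history. A minimal sketch with a softmax parameterization (the parameterization, dimensions, and seed are illustrative assumptions; this is not Jaakkola, Singh and Jordan's learning rule):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# One action distribution per observation, parameterized by logits.
n_obs, n_actions = 3, 2
theta = np.zeros((n_obs, n_actions))     # uniform policy initially

def act(obs):
    """Sample an action from the policy's distribution for this observation."""
    return rng.choice(n_actions, p=softmax(theta[obs]))

probs = softmax(theta[1])
print(probs)                             # [0.5 0.5]
```

Randomization is essential here: under partial observability a deterministic memoryless policy can be arbitrarily worse than the best stochastic one, since distinct states that alias to the same observation may require different actions.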


Exploratory Data Analysis Using Radial Basis Function Latent Variable Models

Neural Information Processing Systems

Two developments of nonlinear latent variable models based on radial basis functions are discussed. In the first, the use of priors or constraints on allowable models is considered as a means of preserving data structure in low-dimensional representations for visualisation purposes. In the second, a resampling approach is introduced which makes more effective use of the latent samples in evaluating the likelihood.
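Sample-based likelihood evaluation in such latent variable models averages the conditional density of the data over draws from the latent prior; the resampling development above aims to use those draws more effectively. A generic Monte Carlo sketch of the baseline estimator (the mapping, noise level, and sample count are illustrative assumptions, not the paper's resampling scheme):

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_log_likelihood(x, f, sigma, z_samples):
    """Monte Carlo estimate of log p(x) for a latent variable model
    x = f(z) + Gaussian noise: average the conditional likelihood
    over samples from the latent prior."""
    mu = f(z_samples)                          # map latents to data space
    ll = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return np.log(np.mean(np.exp(ll)))

f = lambda z: np.tanh(z)                       # toy 1-D nonlinear mapping
z = rng.standard_normal(5000)                  # draws from the latent prior
print(mc_log_likelihood(0.3, f, 0.1, z))
```

The weakness this exposes is that most latent samples land where the conditional likelihood is negligible, which is precisely what a resampling strategy can mitigate.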


Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs

Neural Information Processing Systems

We describe Maximum-Likelihood Continuity Mapping (MALCOM), an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automaton architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a more realistic model of the speech production process. To evaluate the extent to which MALCOM captures speech production information, we generated continuity maps from continuous speech for three speakers and used the paths through them to predict measured speech articulator data. The median correlation between the articulator measurements and the MALCOM paths obtained from the speech acoustics alone was 0.77 on an independent test set not used to train MALCOM or the predictor.
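MALCOM's only constraint on hidden paths is smoothness. One simple way to encode such a requirement is a penalty on the squared step lengths along a path; a sketch (this particular penalty is an assumption for illustration, not MALCOM's actual training objective):

```python
import numpy as np

def path_smoothness(path):
    """Smoothness penalty on a path through a continuous hidden space:
    the sum of squared step lengths between consecutive points.
    `path` is an array of shape (time, dims)."""
    steps = np.diff(path, axis=0)
    return float(np.sum(steps ** 2))

smooth = np.linspace([0.0, 0.0], [1.0, 1.0], 11)   # gradual path in a 2-D map
jumpy = np.array([[0.0, 0.0], [1.0, 1.0],
                  [0.0, 0.0], [1.0, 1.0]])          # same endpoints, abrupt jumps
print(path_smoothness(smooth), path_smoothness(jumpy))   # 0.2 vs 6.0
```

A penalty of this shape favors the gradual trajectories one would expect from physical articulators, which is the intuition behind preferring a continuity map over a discrete hidden state.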


The Role of Lateral Cortical Competition in Ocular Dominance Development

Neural Information Processing Systems

Lateral competition within a layer of neurons sharpens and localizes the response to an input stimulus. Here, we investigate a model for the activity-dependent development of ocular dominance maps which allows us to vary the degree of lateral competition. For weak competition, it resembles a correlation-based learning model, and for strong competition, it becomes a self-organizing map. Thus, in the regime of weak competition the receptive fields are shaped by the second-order statistics of the input patterns, whereas in the regime of strong competition, the higher moments and "features" of the individual patterns become important. When correlated localized stimuli from the two eyes drive the cortical development, we find (i) that a topographic map and binocular, localized receptive fields emerge when the degree of competition exceeds a critical value, and (ii) that receptive fields exhibit eye dominance beyond a second critical value. For anti-correlated activity between the eyes, the second-order statistics drive the system to develop ocular dominance even for weak competition, but no topography emerges. Topography is established only beyond a critical degree of competition.
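The variable degree of lateral competition can be illustrated with a softmax over afferent inputs, where a gain parameter interpolates between a graded, correlation-based response and a winner-take-all, self-organizing-map-like response. A sketch (the softmax form and gain values are illustrative assumptions, not the paper's exact cortical model):

```python
import numpy as np

def cortical_response(h, beta):
    """Lateral competition modeled as a softmax of afferent inputs h.
    beta = 0 gives a flat, graded response (weak competition); large
    beta approaches winner-take-all, as in a self-organizing map."""
    e = np.exp(beta * (h - h.max()))     # shift for numerical stability
    return e / e.sum()

h = np.array([0.2, 0.5, 0.3])            # afferent input to three units
print(cortical_response(h, 0.0))         # uniform: weak competition
print(cortical_response(h, 50.0))        # nearly one-hot: strong competition
```

With learning driven by such a response, low gain lets all co-active units update together (so second-order input statistics dominate), while high gain lets only the best-matching unit and its neighbors learn, making individual pattern features decisive.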