Search for Information Bearing Components in Speech

Neural Information Processing Systems

In this paper, we use mutual information to characterize the distributions of phonetic and speaker/channel information in a time-frequency space. The mutual information (MI) between the phonetic label and one feature, and the joint mutual information (JMI) between the phonetic label and two or three features, are estimated. Miller's bias formulas for entropy and mutual information estimates are extended to include higher-order terms. The MI and the JMI for speaker/channel recognition are also estimated. The results are complementary to those for phonetic classification. Our results show how the phonetic information is locally spread and how the speaker/channel information is globally spread in time and frequency.
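
As a rough illustration of the kind of estimate involved (the names and the equal-mass binning scheme below are our own), this sketch computes a plug-in MI between a discrete label and a quantized scalar feature and subtracts the first-order Miller bias term (K-1)(J-1)/(2N ln 2); the paper extends such bias formulas to higher-order terms.

    import numpy as np

    def mi_miller_corrected(labels, feature, n_bins=16):
        """Plug-in mutual information (bits) between a discrete label and
        a scalar feature quantized into equal-mass bins, corrected with
        the first-order Miller bias term."""
        labels = np.asarray(labels)
        feature = np.asarray(feature, dtype=float)
        edges = np.quantile(feature, np.linspace(0, 1, n_bins + 1)[1:-1])
        f = np.digitize(feature, edges)               # bin index per sample
        n = len(labels)
        joint = np.zeros((labels.max() + 1, n_bins))
        np.add.at(joint, (labels, f), 1)              # joint histogram
        p = joint / n
        px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
        nz = p > 0
        mi = np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))
        # First-order Miller bias: (K-1)(J-1) / (2 N ln 2) bits.
        K, J = np.count_nonzero(px), np.count_nonzero(py)
        return mi - (K - 1) * (J - 1) / (2 * n * np.log(2))

    rng = np.random.default_rng(0)
    y = rng.integers(0, 4, 5000)                      # toy "phonetic" labels
    x = y + rng.normal(scale=1.5, size=5000)          # informative feature
    print(mi_miller_corrected(y, x))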


A Neurodynamical Approach to Visual Attention

Neural Information Processing Systems

In this work, we formulate a hierarchical system of interconnected modules consisting of populations of neurons for modeling the underlying mechanisms involved in selective visual attention. We demonstrate that our neural system for visual search works across the visual field in parallel but, owing to different intrinsic dynamics, can show the two experimentally observed modes of visual attention, namely the serial and the parallel search mode. In other words, neither an explicit model of a focus of attention nor saliency maps are used. The focus of attention appears as an emergent property of the dynamic behavior of the system. The neural population dynamics are handled in the framework of the mean-field approximation. Consequently, the whole process can be expressed as a system of coupled differential equations.
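
A minimal sketch of the mean-field setting the abstract alludes to, assuming sigmoidal population dynamics and mutual inhibition between two illustrative populations; the coupling matrix, inputs, and time constants are invented for the toy, not taken from the paper.

    import numpy as np

    def simulate(W, I, tau=10.0, dt=0.1, steps=2000):
        """Euler-integrate tau * dx/dt = -x + g(W x + I) for coupled
        mean-field population activities x (one entry per population)."""
        g = lambda u: 1.0 / (1.0 + np.exp(-u))       # sigmoidal gain
        x = np.zeros(len(I))
        for _ in range(steps):
            x += (dt / tau) * (-x + g(W @ x + I))
        return x

    # Two populations with mutual inhibition: the one receiving the
    # stronger input wins the competition, an emergent "focus".
    W = np.array([[0.0, -4.0], [-4.0, 0.0]])
    print(simulate(W, I=np.array([2.0, 1.5])))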


Better Generative Models for Sequential Data Problems: Bidirectional Recurrent Mixture Density Networks

Neural Information Processing Systems

This paper describes bidirectional recurrent mixture density networks, which can model multi-modal distributions of the type P(x_t | y_1^T) and P(x_t | x_1, x_2, ..., x_{t-1}, y_1^T) without any explicit assumptions about the use of context. These expressions occur frequently in pattern recognition problems with sequential data, for example in speech recognition. Experiments show that the proposed generative models give a higher likelihood on test data compared to a traditional modeling approach, indicating that they can summarize the statistical properties of the data better. 1 Introduction Many problems of engineering interest can be formulated, in an abstract sense, as supervised learning from sequential data, where an input vector (dimensionality D) sequence X = x_1^T = {x_1, x_2, ...
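
The mixture density output layer is the part that makes the model multi-modal; the sketch below shows that piece in isolation (the bidirectional recurrent network that would produce the raw outputs is omitted, and the parameter layout is one common convention, not necessarily the paper's).

    import numpy as np

    def mdn_nll(raw, target, n_comp):
        """Negative log-likelihood of a scalar target under a Gaussian
        mixture whose parameters are read off a raw network output of
        length 3*n_comp: [mixing logits, means, log std devs]."""
        raw = np.asarray(raw, dtype=float)
        assert raw.size == 3 * n_comp
        logits, mu, log_sig = np.split(raw, 3)
        w = np.exp(logits - logits.max()); w /= w.sum()   # softmax weights
        sig = np.exp(log_sig)
        dens = w * np.exp(-0.5 * ((target - mu) / sig) ** 2) \
               / (sig * np.sqrt(2 * np.pi))
        return -np.log(dens.sum())

    # A bimodal conditional density (modes near -1 and +1), of the kind
    # a unimodal regression model could not represent.
    print(mdn_nll([0.0, 0.0, -1.0, 1.0, -0.5, -0.5], target=0.9, n_comp=2))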


Maximum Entropy Discrimination

Neural Information Processing Systems

We present a general framework for discriminative estimation based on the maximum entropy principle and its extensions. All calculations involve distributions over structures and/or parameters rather than specific settings and reduce to relative entropy projections. This holds even when the data is not separable within the chosen parametric class, in the context of anomaly detection rather than classification, or when the labels in the training set are uncertain or incomplete. Support vector machines are naturally subsumed under this class and we provide several extensions. We are also able to estimate exactly and efficiently discriminative distributions over tree structures of class-conditional models within this framework.
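
The core computation, a relative entropy projection, can be sketched for a discrete distribution and a single moment (margin-style) constraint; the exponential-family form of the solution and the bisection on the multiplier are standard, while the specific prior and statistic below are invented for illustration and are not the paper's formulation.

    import numpy as np

    def i_projection(p0, f, c, lam_max=100.0, iters=60):
        """Relative-entropy (I-)projection of prior p0 onto the set
        {p : E_p[f] >= c}.  The minimizer has exponential-family form
        p(i) proportional to p0(i) * exp(lam * f(i)), with lam >= 0
        found by bisection so the constraint holds with equality."""
        def tilt(lam):
            p = p0 * np.exp(lam * f)
            return p / p.sum()
        if (p0 * f).sum() / p0.sum() >= c:     # constraint already met
            return p0 / p0.sum()
        lo, hi = 0.0, lam_max
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if (tilt(mid) * f).sum() < c else (lo, mid)
        return tilt(hi)

    p0 = np.ones(4) / 4                        # uniform prior
    f = np.array([-1.0, 0.0, 1.0, 2.0])        # margin-style statistic
    print(i_projection(p0, f, c=1.0))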


Regular and Irregular Gallager-type Error-Correcting Codes

Neural Information Processing Systems

The performance of regular and irregular Gallager-type error-correcting codes is investigated via methods of statistical physics. The transmitted codeword comprises products of the original message bits selected by two randomly constructed sparse matrices; the number of nonzero row/column elements in these matrices defines a family of codes. We show that Shannon's channel capacity may be saturated in equilibrium for many of the regular codes, while slightly lower performance is obtained for others which may be of higher practical relevance. Decoding aspects are considered by employing the TAP approach, which is identical to the commonly used belief-propagation-based decoding. We show that irregular codes may saturate Shannon's capacity but with improved dynamical properties. 1 Introduction The ever-increasing information transmission in the modern world is based on reliably communicating messages through noisy transmission channels; these can be telephone lines, deep space, magnetic storage media, etc. Error-correcting codes play a significant role in correcting errors incurred during transmission; this is carried out by encoding the message prior to transmission and decoding the corrupted received codeword to retrieve the original message.
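
For a feel of sparse-matrix decoding, here is a toy hard-decision bit-flipping decoder over a small parity-check matrix; it is a crude stand-in for the TAP/belief-propagation decoding analyzed in the paper, and the example code is an ordinary Hamming-style matrix rather than a Gallager construction.

    import numpy as np

    def bit_flip_decode(H, y, iters=20):
        """Hard-decision decoding for a binary code with parity-check
        matrix H: repeatedly flip the bit involved in the most
        unsatisfied parity checks until the syndrome clears."""
        x = y.copy()
        for _ in range(iters):
            syndrome = H @ x % 2               # 1 marks a violated check
            if not syndrome.any():
                break
            votes = H.T @ syndrome             # unsatisfied checks per bit
            x[np.argmax(votes)] ^= 1
        return x

    # Tiny (7,4) example: the all-zero codeword with one flipped bit.
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    y = np.zeros(7, dtype=int); y[2] = 1       # single channel error
    print(bit_flip_decode(H, y))               # recovers the all-zero word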


Approximate Inference Algorithms for Two-Layer Bayesian Networks

Neural Information Processing Systems

We present a class of approximate inference algorithms for graphical models of the QMR-DT type. We give convergence rates for these algorithms and for the Jaakkola and Jordan (1999) algorithm, and verify these theoretical predictions empirically.
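
To make the model class concrete: a QMR-DT-type network is a two-layer noisy-OR Bayesian network (diseases above, findings below). The sketch below estimates a disease posterior by likelihood weighting, a generic approximate-inference stand-in rather than the variational algorithms the paper analyzes; all parameter values are invented.

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_or_posterior(prior, W, leak, findings, n_samples=20000):
        """Likelihood-weighted estimate of P(disease_j = 1 | findings)
        in a two-layer noisy-OR network.  W[i, j] is the probability
        that disease j alone fails to trigger finding i; `findings`
        holds observed finding values (0/1)."""
        D = len(prior)
        post = np.zeros(D); total = 0.0
        for _ in range(n_samples):
            d = rng.random(D) < prior                  # sample diseases
            p_off = (1 - leak) * np.prod(W[:, d], axis=1)
            p_on = 1 - p_off                           # noisy-OR per finding
            w = np.prod(np.where(findings == 1, p_on, p_off))
            post += w * d; total += w
        return post / total

    prior = np.array([0.05, 0.10])                     # two toy "diseases"
    W = np.array([[0.1, 0.9],                          # failure probs q_ij
                  [0.8, 0.2]])
    print(noisy_or_posterior(prior, W, leak=0.01, findings=np.array([1, 0])))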


Approximate Planning in Large POMDPs via Reusable Trajectories

Neural Information Processing Systems

We consider the problem of reliably choosing a near-best strategy from a restricted class of strategies Π in a partially observable Markov decision process (POMDP). We assume we are given the ability to simulate the POMDP, and study what might be called the sample complexity, that is, the amount of data one must generate in the POMDP in order to choose a good strategy. We prove upper bounds on the sample complexity showing that, even for infinitely large and arbitrarily complex POMDPs, the amount of data needed can be finite, and depends only linearly on the complexity of the restricted strategy class Π, and exponentially on the horizon time. This latter dependence can be eased in a variety of ways, including the application of gradient and local search algorithms.
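
In the spirit of reusable trajectories, the toy below generates one batch of episodes with uniformly random actions and then scores every policy in a small finite class from that same batch via importance weighting; the environment, policy class, and horizon are all invented for the sketch.

    import numpy as np

    rng = np.random.default_rng(1)
    H, N_ACT, N_TRAJ = 5, 2, 5000

    def step(obs, a):
        """Toy partially observed environment: reward is 1 when the
        action matches whether the observation exceeds 0.5."""
        r = float(a == (obs > 0.5))
        return rng.random(), r

    # One batch of trajectories under uniformly random actions.
    batch = []
    for _ in range(N_TRAJ):
        obs, traj = rng.random(), []
        for _ in range(H):
            a = int(rng.integers(N_ACT))
            obs2, r = step(obs, a)
            traj.append((obs, a, r))
            obs = obs2
        batch.append(traj)

    def value(policy):
        """Importance-weighted return: weight N_ACT**H on trajectories
        whose actions all agree with the deterministic policy."""
        est = 0.0
        for traj in batch:
            if all(a == policy(o) for o, a, _ in traj):
                est += N_ACT ** H * sum(r for _, _, r in traj)
        return est / len(batch)

    # Reuse the same batch to score a tiny class of threshold policies;
    # the estimate should peak at the threshold 0.5.
    for th in (0.25, 0.5, 0.75):
        print(th, value(lambda o, th=th: int(o > th)))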


A Geometric Interpretation of v-SVM Classifiers

Neural Information Processing Systems

We show that the recently proposed variant of the Support Vector Machine (SVM) algorithm, known as v-SVM, can be interpreted as a maximal separation between subsets of the convex hulls of the data, which we call soft convex hulls. The soft convex hulls are controlled by the choice of the parameter v. The proposed geometric interpretation of v-SVM also leads to necessary and sufficient conditions for the existence of a choice of v for which the v-SVM solution is nontrivial. 1 Introduction Recently, Schölkopf et al. [1] introduced a new class of SVM algorithms, called v-SVM, for both regression estimation and pattern recognition. The basic idea is to remove the user-chosen error penalty factor C that appears in SVM algorithms by introducing a new variable ρ which, in the pattern recognition case, adds another degree of freedom to the margin. For a given normal to the separating hyperplane, the size of the margin increases linearly with ρ.
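
The geometric picture can be sketched directly: find the nearest points between the two classes' reduced ("soft") convex hulls, where a cap mu on the convex coefficients plays the role that v controls (mu = 1 gives the ordinary hulls; feasibility needs mu * n >= 1). The Frank-Wolfe iteration below is one simple way to compute this and is illustrative, not the paper's derivation.

    import numpy as np

    def lmo(grad, mu):
        """Linear minimization over the capped simplex
        {a >= 0, sum(a) = 1, a <= mu}: greedily load mass mu onto
        the coordinates with the smallest gradient."""
        a = np.zeros_like(grad); left = 1.0
        for i in np.argsort(grad):
            a[i] = min(mu, left); left -= a[i]
            if left <= 0:
                break
        return a

    def soft_hull_gap(A, B, mu, iters=500):
        """Frank-Wolfe search for the nearest points u in the reduced
        hull of rows(A) and v in the reduced hull of rows(B)."""
        alpha = np.ones(len(A)) / len(A)
        beta = np.ones(len(B)) / len(B)
        for k in range(iters):
            w = A.T @ alpha - B.T @ beta          # current difference
            sa, sb = lmo(2 * A @ w, mu), lmo(-2 * B @ w, mu)
            step = 2.0 / (k + 2)
            alpha += step * (sa - alpha)
            beta += step * (sb - beta)
        return A.T @ alpha, B.T @ beta

    A = np.array([[0., 0.], [1., 0.], [0., 1.]])  # class +1
    B = np.array([[2., 2.], [3., 2.], [2., 3.]])  # class -1
    u, v = soft_hull_gap(A, B, mu=1.0)            # mu=1: ordinary hulls
    print(u, v, np.linalg.norm(u - v))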


Acquisition in Autoshaping

Neural Information Processing Systems

However, most models have simply ignored these data; the few that have attempted to address them have failed by at least an order of magnitude. We discuss key data on the speed of acquisition, and show how to account for them using a statistically sound model of learning, in which differential reliabilities of stimuli play a crucial role. 1 Introduction Conditioning experiments probe the ways that animals make predictions about rewards and punishments and how those predictions are used to their advantage. Substantial quantitative data are available as to how pigeons and rats acquire conditioned responses during autoshaping, which is one of the simplest paradigms of classical conditioning.


Hierarchical Image Probability (HIP) Models

Neural Information Processing Systems

We formulate a model for probability distributions on image spaces. We show that any distribution of images can be factored exactly into conditional distributions of feature vectors at one resolution (pyramid level) conditioned on the image information at lower resolutions. We would like to factor this over positions in the pyramid levels to make it tractable, but such factoring may miss long-range dependencies. To fix this, we introduce hidden class labels at each pixel in the pyramid. The result is a hierarchical mixture of conditional probabilities, similar to a hidden Markov model on a tree. The model parameters can be found with maximum likelihood estimation using the EM algorithm. We have obtained encouraging preliminary results on the problems of detecting various objects in SAR images and target recognition in optical aerial images.
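
The coarse-to-fine factorization itself, log P(x) = log P(coarse(x)) + log P(x | coarse(x)), can be illustrated on 1-D toy signals, with the conditional modeled here as a single linear-Gaussian prediction from the upsampled coarser level fit by least squares; the paper's hidden class labels and tree-structured mixtures are omitted, and everything below is an invented toy.

    import numpy as np

    rng = np.random.default_rng(2)

    def down(x): return x.reshape(-1, 2).mean(axis=1)  # coarser level
    def up(x):   return np.repeat(x, 2)                # crude upsampling

    def fit_level(fine, coarse):
        """Fit a linear-Gaussian model of a fine level given the
        upsampled coarser level; returns (slope, intercept, noise var)."""
        X = np.array([up(c) for c in coarse]).ravel()
        Y = np.array(fine).ravel()
        a, b = np.polyfit(X, Y, 1)
        return a, b, np.var(Y - (a * X + b))

    # Toy "images": 1-D signals with smooth structure plus noise.
    data = [np.cumsum(rng.normal(size=8)) for _ in range(200)]
    a, b, var = fit_level(data, [down(x) for x in data])

    def cond_loglik(x):
        """The detail term log P(x | coarse(x)) of the factorization;
        the coarsest-level term would be modeled separately."""
        resid = x - (a * up(down(x)) + b)
        return -0.5 * np.sum(resid ** 2 / var + np.log(2 * np.pi * var))

    print(cond_loglik(data[0]))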