Emergence of Topography and Complex Cell Properties from Natural Images using Extensions of ICA

Neural Information Processing Systems

Independent component analysis of natural images leads to the emergence of simple cell properties, i.e., linear filters that resemble wavelets or Gabor functions. In this paper, we extend ICA to explain further properties of V1 cells.
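
As a rough illustration of the underlying technique, a single FastICA-style unit (one common ICA algorithm) can recover an independent component from linearly mixed signals. The sources, mixing matrix, and tanh nonlinearity below are illustrative assumptions, not the paper's natural-image setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent non-Gaussian sources (stand-ins for image data).
n = 5000
s = np.vstack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])
A = np.array([[1.0, 0.5], [0.5, 1.0]])   # mixing matrix
x = A @ s                                 # observed mixtures

# Whiten the mixtures so their covariance is the identity.
x -= x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

# One-unit FastICA fixed-point iteration with the tanh nonlinearity.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    g = np.tanh(w @ z)
    g_prime = 1.0 - g ** 2
    w_new = (z * g).mean(axis=1) - g_prime.mean() * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1.0) < 1e-8
    w = w_new
    if converged:
        break

recovered = w @ z   # approximates one source, up to sign and scale
```

The recovered signal matches one of the original sources up to sign and scale, which is the usual ICA indeterminacy.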


Independent Factor Analysis with Temporally Structured Sources

Neural Information Processing Systems

We present a new technique for time series analysis based on dynamic probabilistic networks. In this approach, the observed data are modeled in terms of unobserved, mutually independent factors, as in the recently introduced technique of Independent Factor Analysis (IFA). However, unlike in IFA, the factors are not i.i.d.; each factor has its own temporal statistical characteristics. We derive a family of EM algorithms that learn the structure of the underlying factors and their relation to the data. These algorithms perform source separation and noise reduction in an integrated manner, and demonstrate superior performance compared to IFA.

1 Introduction

The technique of independent factor analysis (IFA) introduced in [1] provides a tool for modeling L'-dimensional data in terms of L unobserved factors. These factors are mutually independent and combine linearly with added noise to produce the observed data.
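
The generative model with temporally structured sources can be sketched as follows. The AR(1) dynamics, dimensions, and coefficient values are illustrative assumptions chosen to show the structure, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

T, L, L_obs = 2000, 2, 3   # time steps, factors, observed dimension

# Each factor follows its own AR(1) dynamics: temporal structure,
# unlike the i.i.d. factors of ordinary IFA.
phi = np.array([0.9, -0.5])            # per-factor AR coefficients (illustrative)
y = np.zeros((L, T))
for t in range(1, T):
    y[:, t] = phi * y[:, t - 1] + rng.normal(size=L)

# The factors combine linearly, with added sensor noise, to produce the data.
H = rng.normal(size=(L_obs, L))        # mixing matrix
x = H @ y + 0.1 * rng.normal(size=(L_obs, T))
```

An EM algorithm of the kind the paper derives would estimate H, the noise level, and the per-factor temporal parameters from x alone; the sketch above only shows the forward (generative) direction.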


Algorithms for Independent Components Analysis and Higher Order Statistics

Neural Information Processing Systems

A latent variable generative model with finite noise is used to describe several different algorithms for Independent Components Analysis (ICA). In particular, the Fixed Point ICA algorithm is shown to be equivalent to the Expectation-Maximization algorithm for maximum likelihood under certain constraints, allowing the conditions for global convergence to be elucidated. The algorithms can also be explained by their generic behavior near a singular point where the size of the optimal generative bases vanishes. An expansion of the likelihood about this singular point indicates the role of higher order correlations in determining the features discovered by ICA. The application and convergence of these algorithms are demonstrated on a simple illustrative example.


Search for Information Bearing Components in Speech

Neural Information Processing Systems

In this paper, we use mutual information to characterize the distributions of phonetic and speaker/channel information in a time-frequency space. The mutual information (MI) between the phonetic label and one feature, and the joint mutual information (JMI) between the phonetic label and two or three features, are estimated. Miller's bias formulas for entropy and mutual information estimates are extended to include higher order terms. The MI and the JMI for speaker/channel recognition are also estimated. The results are complementary to those for phonetic classification. Our results show how the phonetic information is locally spread and how the speaker/channel information is globally spread in time and frequency.
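
A minimal sketch of a bias-corrected MI estimate between two discrete variables, using only the first-order Miller correction (K-1)/(2N) per entropy term rather than the higher-order extension the paper develops; the discrete binning and the helper names are assumptions for illustration:

```python
import numpy as np

def entropy_mm(counts):
    """Plug-in entropy (nats) with the first-order Miller bias correction."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p = counts[counts > 0] / n
    h_plugin = -(p * np.log(p)).sum()
    k = (counts > 0).sum()                   # number of occupied bins
    return h_plugin + (k - 1) / (2.0 * n)

def mutual_information_mm(x, y):
    """MI between two discrete sequences, each entropy term bias-corrected."""
    x, y = np.asarray(x), np.asarray(y)
    joint = np.zeros((int(x.max()) + 1, int(y.max()) + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    return (entropy_mm(joint.sum(axis=1))
            + entropy_mm(joint.sum(axis=0))
            - entropy_mm(joint.ravel()))
```

Passing the same sequence twice recovers its entropy, while two independent sequences give a value near zero, since the plug-in MI's positive bias is largely cancelled by the correction.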


A Neurodynamical Approach to Visual Attention

Neural Information Processing Systems

In this work, we formulate a hierarchical system of interconnected modules, consisting of populations of neurons, for modeling the underlying mechanisms involved in selective visual attention. We demonstrate that our neural system for visual search works across the visual field in parallel but, due to the different intrinsic dynamics, can show the two experimentally observed modes of visual attention, namely the serial and the parallel search mode. In other words, neither an explicit model of a focus of attention nor saliency maps are used. The focus of attention appears as an emergent property of the dynamic behavior of the system. The neural population dynamics are handled in the framework of the mean-field approximation. Consequently, the whole process can be expressed as a system of coupled differential equations.
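
A toy version of such coupled mean-field rate equations, with just two mutually inhibiting populations and invented parameter values (not the paper's model), shows how a "focus" can emerge from the dynamics alone, via winner-take-all competition:

```python
import numpy as np

def simulate(tau=10.0, dt=0.1, steps=2000):
    """Euler-integrate two mutually inhibiting mean-field populations."""
    w_inh = 2.0                        # cross-inhibition strength (assumed)
    I = np.array([1.2, 1.0])           # external drive; population 0 is favoured
    A = np.zeros(2)                    # population activities
    f = lambda u: np.maximum(u, 0.0)   # threshold-linear transfer function
    for _ in range(steps):
        u = I - w_inh * A[::-1]        # net input: drive minus rival's inhibition
        A += dt / tau * (-A + f(u))    # tau dA/dt = -A + f(u)
        A = np.maximum(A, 0.0)
    return A

A = simulate()   # converges to winner-take-all: population 0 active, 1 silent
```

Despite both populations receiving drive in parallel, the symmetric state is unstable, so the system settles with only the more strongly driven population active; no explicit saliency map or attention spotlight is coded in.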


Better Generative Models for Sequential Data Problems: Bidirectional Recurrent Mixture Density Networks

Neural Information Processing Systems

This paper describes bidirectional recurrent mixture density networks, which can model multi-modal distributions of the type P(x_t | y_1^T) and P(x_t | x_1, x_2, ..., x_{t-1}, y_1^T) without any explicit assumptions about the use of context. These expressions occur frequently in pattern recognition problems with sequential data, for example in speech recognition. Experiments show that the proposed generative models give a higher likelihood on test data compared to a traditional modeling approach, indicating that they can summarize the statistical properties of the data better.

1 Introduction

Many problems of engineering interest can be formulated, in an abstract sense, as supervised learning from sequential data, where an input vector (dimensionality D) sequence X = x_1^T = {x_1, x_2, ...
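
The "mixture density" part of such a network means its outputs parameterize a conditional mixture rather than a single prediction. A minimal sketch of evaluating that density, assuming hypothetical 1-D Gaussian-mixture parameters in place of actual network outputs:

```python
import numpy as np

def mdn_density(x, logits, mu, log_sigma):
    """Density P(x | context) for a 1-D Gaussian mixture whose parameters
    (mixing logits, means, log std-devs) a network head would produce."""
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()                               # softmax mixing coefficients
    var = np.exp(2.0 * log_sigma)
    comp = np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return float((pi * comp).sum())              # weighted sum of components
```

With two equally weighted components at -2 and +2, the resulting density is bimodal, which is exactly the multi-modality a single-Gaussian regression model cannot express.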


Maximum Entropy Discrimination

Neural Information Processing Systems

We present a general framework for discriminative estimation based on the maximum entropy principle and its extensions. All calculations involve distributions over structures and/or parameters rather than specific settings, and reduce to relative entropy projections. This holds even when the data is not separable within the chosen parametric class, in the context of anomaly detection rather than classification, or when the labels in the training set are uncertain or incomplete. Support vector machines are naturally subsumed under this class and we provide several extensions. We are also able to estimate exactly and efficiently discriminative distributions over tree structures of class-conditional models within this framework.


Regular and Irregular Gallager-type Error-Correcting Codes

Neural Information Processing Systems

The performance of regular and irregular Gallager-type error-correcting codes is investigated via methods of statistical physics. The transmitted codeword comprises products of the original message bits selected by two randomly constructed sparse matrices; the number of nonzero row/column elements in these matrices defines a family of codes. We show that Shannon's channel capacity may be saturated in equilibrium for many of the regular codes, while slightly lower performance is obtained for others which may be of higher practical relevance. Decoding aspects are considered by employing the TAP approach, which is identical to the commonly used belief-propagation-based decoding. We show that irregular codes may saturate Shannon's capacity but with improved dynamical properties.

1 Introduction

The ever increasing information transmission in the modern world is based on reliably communicating messages through noisy transmission channels; these can be telephone lines, deep space, magnetic storage media, etc. Error-correcting codes play a significant role in correcting errors incurred during transmission; this is carried out by encoding the message prior to transmission and decoding the corrupted received codeword to retrieve the original message.
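
As a much simpler stand-in for the TAP/belief-propagation decoding analyzed in the paper (not its method), syndrome decoding of the small (7,4) Hamming code illustrates the core idea of parity-check codes: the checks locate and correct a transmission error.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# expansion of j+1, so a nonzero syndrome directly names the flipped bit.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def decode(received):
    """Correct at most one bit error by syndrome decoding."""
    r = np.array(received) % 2
    s = H @ r % 2                             # syndrome: which checks fail
    pos = int(s[0] + 2 * s[1] + 4 * s[2])     # syndrome read as a binary number
    if pos:                                   # nonzero syndrome: flip that bit
        r[pos - 1] ^= 1
    return r
```

Gallager codes replace this dense, tiny parity-check matrix with large sparse random matrices and an iterative message-passing decoder, but the principle, i.e. unsatisfied checks pointing at corrupted bits, is the same.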


Approximate Inference Algorithms for Two-Layer Bayesian Networks

Neural Information Processing Systems

We present a class of approximate inference algorithms for graphical models of the QMR-DT type. We give convergence rates for these algorithms and for the Jaakkola and Jordan (1999) algorithm, and verify these theoretical predictions empirically.


Approximate Planning in Large POMDPs via Reusable Trajectories

Neural Information Processing Systems

We consider the problem of reliably choosing a near-best strategy from a restricted class of strategies Π in a partially observable Markov decision process (POMDP). We assume we are given the ability to simulate the POMDP, and study what might be called the sample complexity, that is, the amount of data one must generate in the POMDP in order to choose a good strategy. We prove upper bounds on the sample complexity showing that, even for infinitely large and arbitrarily complex POMDPs, the amount of data needed can be finite, and depends only linearly on the complexity of the restricted strategy class Π, and exponentially on the horizon time. This latter dependence can be eased in a variety of ways, including the application of gradient and local search algorithms.
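
A naive sketch of choosing a strategy by simulation: each member of a small restricted class Π is scored with fresh simulated trajectories (unlike the paper's reusable trajectories, which share simulation data across all strategies). The toy environment, rewards, and horizon are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_return(p_action, horizon=20):
    """Return of one simulated trajectory under a (hypothetical) strategy
    that picks action 1 with probability p_action at every step."""
    total = 0.0
    for _ in range(horizon):
        a = rng.random() < p_action
        mean = 1.0 if a else 0.3       # action 1 pays more on average
        total += mean + 0.1 * rng.normal()
    return total

# Restricted strategy class Pi: a small grid of strategy parameters.
Pi = [0.0, 0.25, 0.5, 0.75, 1.0]
n_traj = 200                            # trajectories per strategy

estimates = [np.mean([simulate_return(p) for _ in range(n_traj)]) for p in Pi]
best = Pi[int(np.argmax(estimates))]    # near-best strategy within Pi
```

This brute-force scheme needs n_traj simulations per strategy; the point of the paper's reusable-trajectory construction is that a single set of trajectories can yield uniformly good estimates for every strategy in Π at once.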