Goto

Collaborating Authors

 Poczos, Barnabas


Copula-based Kernel Dependency Measures

arXiv.org Machine Learning

The paper presents a new copula-based method for measuring dependence between random variables. Our approach extends the Maximum Mean Discrepancy (MMD) to the copula of the joint distribution. We prove that this approach has several advantageous properties. Like Shannon mutual information, the proposed dependence measure is invariant to any strictly increasing transformation of the marginal variables, which is important in many applications, for example in feature selection. The estimator is consistent, robust to outliers, and uses rank statistics only. We derive upper bounds on the convergence rate and also propose independence tests. We illustrate the theoretical contributions through a series of experiments in feature selection and low-dimensional embedding of distributions.
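The rank-transform-then-MMD idea behind the abstract can be sketched in a few lines. This is an illustrative simplification, not the paper's exact estimator: it rank-transforms each marginal to obtain the empirical copula, then computes a (biased) MMD between the copula sample and a sample from the independence copula (uniform on the unit square). The kernel bandwidth and the Monte Carlo comparison sample are assumptions for the sketch.

```python
import numpy as np

def empirical_copula(x):
    # Rank-transform each column to (0, 1); uses rank statistics only,
    # so the result is invariant to strictly increasing marginal transforms.
    n = x.shape[0]
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
    return ranks / (n + 1)

def gaussian_kernel(a, b, sigma=0.5):
    # Gaussian RBF kernel matrix between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def copula_mmd(x, rng=np.random.default_rng(0)):
    # Biased (V-statistic) MMD^2 between the empirical copula of x and
    # the uniform distribution on the unit cube (= independence copula).
    u = empirical_copula(x)
    v = rng.uniform(size=u.shape)
    return (gaussian_kernel(u, u).mean()
            + gaussian_kernel(v, v).mean()
            - 2 * gaussian_kernel(u, v).mean())
```

Dependent data should yield a clearly larger value than independent data, and the biased estimate is always nonnegative.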


Nonparametric Divergence Estimation with Applications to Machine Learning on Distributions

arXiv.org Machine Learning

Low-dimensional embedding, manifold learning, clustering, classification, and anomaly detection are among the most important problems in machine learning. The existing methods usually consider the case when each instance has a fixed, finite-dimensional feature representation. Here we consider a different setting. We assume that each instance corresponds to a continuous probability distribution. These distributions are unknown, but we are given some i.i.d. samples from each distribution. Our goal is to estimate the distances between these distributions and use these distances to perform low-dimensional embedding, clustering/classification, or anomaly detection for the distributions. We present estimation algorithms, describe how to apply them for machine learning tasks on distributions, and show empirical results on synthetic data, real-world images, and astronomical data sets.
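A standard instance of the sample-based divergence estimation the abstract describes is the k-nearest-neighbour KL divergence estimator. The sketch below is illustrative (a Wang–Kulkarni–Verdú-style plug-in, not necessarily the paper's estimator), implemented with brute-force distances for self-containment; `k=3` is an arbitrary choice.

```python
import numpy as np

def _kth_nn_dist(a, b, k):
    # Distance from each row of a to its k-th nearest neighbour among the
    # rows of b (brute force; fine for small samples).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.sqrt(np.sort(d2, axis=1)[:, k - 1])

def knn_kl_divergence(x, y, k=3):
    # k-NN estimate of KL(p || q) from samples x ~ p and y ~ q.
    n, d = x.shape
    m = y.shape[0]
    rho = _kth_nn_dist(x, x, k + 1)   # k+1 skips the zero self-distance
    nu = _kth_nn_dist(x, y, k)
    return (d / n) * np.sum(np.log(nu / rho)) + np.log(m / (n - 1))
```

Pairwise estimates of this kind give a distance-like matrix over a collection of distributions, which can then feed embedding, clustering, or anomaly detection.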


Collaborative Filtering via Group-Structured Dictionary Learning

arXiv.org Machine Learning

Structured sparse coding and the related structured dictionary learning problems are novel research areas in machine learning. In this paper we present a new application of structured dictionary learning for collaborative filtering based recommender systems. Our extensive numerical experiments demonstrate that the presented technique outperforms its state-of-the-art competitors and has several advantages over approaches that do not put structured constraints on the dictionary elements.


A Cross-Entropy Method that Optimizes Partially Decomposable Problems: A New Way to Interpret NMR Spectra

AAAI Conferences

Some real-world problems are partially decomposable, in that they can be decomposed into a set of coupled sub-problems that are each relatively easy to solve. However, when these sub-problems share some common variables, it is not sufficient to simply solve each sub-problem in isolation. We develop a technology for such problems, and use it to address the challenge of finding the concentrations of the chemicals that appear in a complex mixture, based on its one-dimensional 1H Nuclear Magnetic Resonance (NMR) spectrum. As each chemical involves clusters of spatially localized peaks, this requires finding the shifts for the clusters and the concentrations of the chemicals that collectively produce the best match to the observed NMR spectrum. Here, each sub-problem requires finding the chemical concentrations and cluster shifts that can appear within a limited spectrum range; these are coupled as these limited regions can share many chemicals, and so must agree on the concentrations and cluster shifts of the common chemicals. This task motivates CEED: a novel extension to the Cross-Entropy stochastic optimization method constructed to address such partially decomposable problems. Our experimental results in the NMR task show that our CEED system is superior to other well-known optimization methods, and indeed produces the best-known results in this important, real-world application.
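The plain Cross-Entropy method that CEED extends is simple to state: sample candidates from a parametric distribution, keep an elite fraction, refit the distribution to the elite, and repeat. The sketch below shows only this baseline (CEED's handling of shared variables across coupled sub-problems is not reproduced here); population size, elite size, and the Gaussian family are assumptions.

```python
import numpy as np

def cross_entropy_minimize(f, dim, iters=50, pop=100, elite=10, seed=0):
    # Plain Cross-Entropy method: sample from a Gaussian, keep the elite
    # candidates, refit the Gaussian to them, repeat.  CEED extends this
    # idea so that coupled sub-problems agree on their shared variables.
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.full(dim, 5.0)
    for _ in range(iters):
        xs = rng.normal(mu, sigma, size=(pop, dim))
        best = xs[np.argsort([f(x) for x in xs])[:elite]]
        mu, sigma = best.mean(0), best.std(0) + 1e-8
    return mu
```

On a smooth low-dimensional objective this converges quickly; the partially decomposable setting is what makes the NMR task hard for the vanilla method.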


Undercomplete Blind Subspace Deconvolution

arXiv.org Machine Learning

We introduce the blind subspace deconvolution (BSSD) problem, which is the extension of both the blind source deconvolution (BSD) and the independent subspace analysis (ISA) tasks. We examine the case of the undercomplete BSSD (uBSSD). Applying temporal concatenation, we reduce this problem to ISA. The associated 'high-dimensional' ISA problem can be handled by a recent technique called joint f-decorrelation (JFD). Similar decorrelation methods have been used previously for kernel independent component analysis (kernel-ICA). More precisely, the kernel canonical correlation (KCCA) technique is a member of this family, and, as is shown in this paper, the kernel generalized variance (KGV) method can also be seen as a decorrelation method in the feature space. These kernel-based algorithms will be adapted to the ISA task. In the numerical examples, we (i) examine how efficiently the emerging higher dimensional ISA tasks can be tackled, and (ii) explore the working and advantages of the derived kernel-ISA methods.
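The temporal concatenation step mentioned in the abstract amounts to stacking a sliding window of consecutive observation vectors into one tall vector, so that the convolutive mixture becomes an instantaneous (but higher-dimensional) ISA problem. A minimal sketch of just this data-reshaping step, with window length `L` as an assumption:

```python
import numpy as np

def temporal_concatenation(X, L):
    # Stack L consecutive observation vectors into one tall vector,
    # turning the convolutive uBSSD mixture into an instantaneous
    # higher-dimensional problem that an ISA solver can take over.
    # X: (T, d) observation sequence -> (T - L + 1, d * L) data matrix.
    T, d = X.shape
    return np.hstack([X[i:T - L + 1 + i] for i in range(L)])
```

The ISA solver itself (e.g. via JFD or the kernel-based decorrelation methods above) then runs on the concatenated data.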