 Quang, Minh Ha


Learning a Single Index Model from Anisotropic Data with vanilla Stochastic Gradient Descent

arXiv.org Machine Learning

We investigate the problem of learning a Single Index Model (SIM), a popular model for studying the ability of neural networks to learn features, from anisotropic Gaussian inputs by training a neuron using vanilla Stochastic Gradient Descent (SGD). While the isotropic case has been extensively studied, the anisotropic case has received less attention, and the impact of the covariance matrix on the learning dynamics remains unclear. For instance, Mousavi-Hosseini et al. (2023b) proposed a spherical SGD that requires a separate estimation of the data covariance matrix, thereby oversimplifying the influence of the covariance. In this study, we analyze the learning dynamics of vanilla SGD under the SIM with anisotropic input data, demonstrating that vanilla SGD automatically adapts to the data's covariance structure. Leveraging these results, we derive upper and lower bounds on the sample complexity using a notion of effective dimension that is determined by the structure of the covariance matrix rather than by the input data dimension.
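
A minimal numerical sketch of the setting, not the paper's construction: one ReLU neuron trained by vanilla online SGD on the squared loss, with inputs drawn from an anisotropic Gaussian whose power-law spectrum makes the effective dimension tr(Sigma)/||Sigma|| much smaller than the ambient dimension. All concrete choices below (d, the spectrum, the learning rate, the step count) are illustrative, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
d = 50
# Anisotropic covariance with a power-law spectrum (diagonal for simplicity),
# so the effective dimension tr(Sigma)/||Sigma|| is much smaller than d.
eigvals = 1.0 / np.arange(1, d + 1) ** 1.5
Sigma_sqrt = np.diag(np.sqrt(eigvals))

w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)          # ground-truth index direction

relu = lambda t: np.maximum(t, 0.0)

w = 0.01 * rng.standard_normal(d)         # small random initialization
lr, n_steps = 0.05, 100_000

for _ in range(n_steps):
    x = Sigma_sqrt @ rng.standard_normal(d)        # x ~ N(0, Sigma)
    y = relu(w_star @ x)                           # noiseless single-index label
    pre = w @ x
    grad = (relu(pre) - y) * float(pre > 0) * x    # gradient of 0.5*(relu(w.x) - y)^2
    w -= lr * grad                                 # one vanilla SGD step

cosine = (w @ w_star) / (np.linalg.norm(w) * np.linalg.norm(w_star))
print(f"effective dimension tr(Sigma)/||Sigma||: {eigvals.sum() / eigvals.max():.1f}")
print(f"cosine alignment with the index direction: {cosine:.3f}")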


Finite sample approximations of exact and entropic Wasserstein distances between covariance operators and Gaussian processes

arXiv.org Machine Learning

This work studies finite sample approximations of the exact and entropic regularized Wasserstein distances between centered Gaussian processes and, more generally, between covariance operators of functional random processes. We first show that these distances/divergences are fully represented by reproducing kernel Hilbert space (RKHS) covariance and cross-covariance operators associated with the corresponding covariance functions. Using this representation, we show that the Sinkhorn divergence between two centered Gaussian processes can be consistently and efficiently estimated from the divergence between their corresponding normalized finite-dimensional covariance matrices, or alternatively, their sample covariance operators. Consequently, this leads to a consistent and efficient algorithm for estimating the Sinkhorn divergence from finite samples generated by the two processes. For a fixed regularization parameter, the convergence rates are dimension-independent and of the same order as those for the Hilbert-Schmidt distance. If at least one of the RKHSs is finite-dimensional, we obtain a dimension-dependent sample complexity for the exact Wasserstein distance between the Gaussian processes.
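
For orientation, a hedged sketch of the naive plug-in alternative: discretize two centered Gaussian processes on a grid, draw sample paths, and estimate the Sinkhorn divergence by running the standard Sinkhorn iteration directly on the samples. This is not the paper's covariance-operator-based estimator; the covariance kernels, grid, sample sizes, and regularization value below are arbitrary choices for illustration.

import numpy as np

def sinkhorn_cost(X, Y, eps=1.0, n_iter=500):
    """Primal entropic OT cost <P,C> + eps*KL(P || a x b) between empirical measures."""
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # squared-distance cost matrix
    K = np.exp(-C / eps)
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(Y), 1.0 / len(Y))
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):                               # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                        # entropic transport plan
    kl = (P * np.log(P / (a[:, None] * b[None, :]))).sum()
    return (P * C).sum() + eps * kl

def sinkhorn_divergence(X, Y, eps=1.0):
    # Debiased Sinkhorn divergence: OT(X,Y) - 0.5*OT(X,X) - 0.5*OT(Y,Y).
    return sinkhorn_cost(X, Y, eps) - 0.5 * sinkhorn_cost(X, X, eps) - 0.5 * sinkhorn_cost(Y, Y, eps)

# Two centered Gaussian processes observed on a common grid.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)
cov1 = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)            # exponential covariance
cov2 = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.2 ** 2))  # Gaussian covariance
X = rng.multivariate_normal(np.zeros(len(t)), cov1, size=300)     # sample paths of process 1
Y = rng.multivariate_normal(np.zeros(len(t)), cov2, size=300)     # sample paths of process 2
print(f"plug-in Sinkhorn divergence estimate: {sinkhorn_divergence(X, Y, eps=5.0):.4f}")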


Convergence and finite sample approximations of entropic regularized Wasserstein distances in Gaussian and RKHS settings

arXiv.org Machine Learning

This work studies the convergence and finite sample approximations of entropic regularized Wasserstein distances in the Hilbert space setting. Our first main result is that, for Gaussian measures on an infinite-dimensional Hilbert space, convergence in the 2-Sinkhorn divergence is strictly weaker than convergence in the exact 2-Wasserstein distance. Specifically, a sequence of centered Gaussian measures converges in the 2-Sinkhorn divergence if the corresponding covariance operators converge in the Hilbert-Schmidt norm. This is in contrast to the previously known result that a sequence of centered Gaussian measures converges in the exact 2-Wasserstein distance if and only if the covariance operators converge in the trace class norm. In the reproducing kernel Hilbert space (RKHS) setting, the kernel Gaussian-Sinkhorn divergence, which is the Sinkhorn divergence between Gaussian measures defined on an RKHS, defines a semi-metric on the set of Borel probability measures on a Polish space, given a characteristic kernel on that space. Exploiting the Hilbert-Schmidt norm convergence, we obtain dimension-independent convergence rates for finite sample approximations of the kernel Gaussian-Sinkhorn divergence, of the same order as those for the Maximum Mean Discrepancy. These convergence rates apply in particular to the Sinkhorn divergence between Gaussian measures on Euclidean and infinite-dimensional Hilbert spaces. The sample complexity for the exact 2-Wasserstein distance between Gaussian measures on Euclidean space, while dimension-dependent and larger than that of the Sinkhorn divergence, is still exponentially smaller than the worst-case bound in the literature.
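
A small numerical illustration (a toy example, not taken from the paper) of the gap between the two norms that drives the main result: perturbing the first n eigenvalues of a covariance operator by 1/n sends the Hilbert-Schmidt norm of the perturbation to zero while its trace norm stays equal to 1, so by the results above the corresponding centered Gaussians converge in the 2-Sinkhorn divergence but not in the exact 2-Wasserstein distance.

import numpy as np

# Base covariance spectrum, truncated to N modes for the computation.
N = 2000
base = 1.0 / np.arange(1, N + 1) ** 2   # summable, so the operator is trace class

for n in [10, 100, 1000]:
    delta = np.zeros(N)
    delta[:n] = 1.0 / n                  # perturb the first n eigenvalues by 1/n
    hs_norm = np.sqrt((delta ** 2).sum())  # Hilbert-Schmidt norm of C_n - C
    tr_norm = np.abs(delta).sum()          # trace-class norm of C_n - C
    print(f"n={n:5d}   ||C_n - C||_HS = {hs_norm:.4f}   ||C_n - C||_tr = {tr_norm:.4f}")

# The HS norm tends to 0 while the trace norm stays 1: the Gaussians N(0, C_n)
# converge to N(0, C) in 2-Sinkhorn divergence but not in exact 2-Wasserstein distance.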


Entropic regularization of Wasserstein distance between infinite-dimensional Gaussian measures and Gaussian processes

arXiv.org Machine Learning

This work studies the entropic regularization formulation of the 2-Wasserstein distance on an infinite-dimensional Hilbert space, in particular for the Gaussian setting. We first present the Minimum Mutual Information property, namely that the joint measures of two Gaussian measures on a Hilbert space with the smallest mutual information are joint Gaussian measures. This is the infinite-dimensional generalization of the Maximum Entropy property of Gaussian densities on Euclidean space. We then give closed-form formulas for the optimal entropic transport plan, the entropic 2-Wasserstein distance, and the Sinkhorn divergence between two Gaussian measures on a Hilbert space, along with the fixed point equations for the barycenter of a set of Gaussian measures. Our formulations fully exploit the regularization aspect of the entropic formulation and are valid in both singular and nonsingular settings. In the infinite-dimensional setting, both the entropic 2-Wasserstein distance and the Sinkhorn divergence are Fréchet differentiable, in contrast to the exact 2-Wasserstein distance, which is not differentiable. Our Sinkhorn barycenter equation is new and always has a unique solution. In contrast, the finite-dimensional barycenter equation for the entropic 2-Wasserstein distance fails to generalize to the Hilbert space setting. In the setting of reproducing kernel Hilbert spaces (RKHS), our distance formulas are given explicitly in terms of the corresponding kernel Gram matrices, providing an interpolation between the kernel Maximum Mean Discrepancy (MMD) and the kernel 2-Wasserstein distance.
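
The entropic closed forms are the paper's contribution; for reference only, here is a finite-dimensional sketch of the classical exact formula that they regularize, W2^2(N(0,A), N(0,B)) = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2}), evaluated on two arbitrary covariance matrices.

import numpy as np

def sqrtm_psd(M):
    """Matrix square root of a symmetric positive semidefinite matrix."""
    w, U = np.linalg.eigh(M)
    return (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T

def wasserstein2_centered_gaussians(A, B):
    """Exact 2-Wasserstein distance between N(0, A) and N(0, B) (Bures formula)."""
    A_half = sqrtm_psd(A)
    cross = sqrtm_psd(A_half @ B @ A_half)
    return np.sqrt(max(np.trace(A) + np.trace(B) - 2.0 * np.trace(cross), 0.0))

rng = np.random.default_rng(0)
d = 5
X = rng.standard_normal((d, d)); A = X @ X.T / d   # two random covariance matrices
Y = rng.standard_normal((d, d)); B = Y @ Y.T / d
print(f"W2(N(0,A), N(0,B)) = {wasserstein2_centered_gaussians(A, B):.4f}")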


Log-Hilbert-Schmidt metric between positive definite operators on Hilbert spaces

Neural Information Processing Systems

This paper introduces a novel mathematical and computational framework, namely the Log-Hilbert-Schmidt metric between positive definite operators on a Hilbert space. This is a generalization of the Log-Euclidean metric on the Riemannian manifold of positive definite matrices to the infinite-dimensional setting. The general framework is applied in particular to compute distances between covariance operators on a Reproducing Kernel Hilbert Space (RKHS), for which we obtain explicit formulas via the corresponding Gram matrices. Empirically, we apply our formulation to the task of multi-category image classification, where each image is represented by an infinite-dimensional RKHS covariance operator. On several challenging datasets, our method significantly outperforms approaches based on covariance matrices computed directly on the original input features, including those using the Log-Euclidean metric and the Stein and Jeffreys divergences, achieving new state-of-the-art results.
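
The paper's Log-Hilbert-Schmidt formulas operate on RKHS covariance operators through Gram matrices; as a finite-dimensional analogue only, the sketch below computes the Log-Euclidean distance ||log(A) - log(B)||_F between regularized covariance matrices of two feature sets. The feature dimension, sample counts, and regularization constant are illustrative, and the random features stand in for real image descriptors.

import numpy as np

def logm_spd(M):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return (U * np.log(w)) @ U.T

def log_euclidean_distance(A, B):
    """Log-Euclidean distance ||log(A) - log(B)||_F between SPD matrices."""
    return np.linalg.norm(logm_spd(A) - logm_spd(B), ord="fro")

def regularized_covariance(features, gamma=1e-3):
    """Covariance of an (n_samples, dim) feature matrix, regularized to be SPD."""
    C = np.cov(features, rowvar=False)
    return C + gamma * np.eye(C.shape[0])

rng = np.random.default_rng(0)
feats1 = rng.standard_normal((200, 10))        # stand-in for descriptors of image 1
feats2 = 1.5 * rng.standard_normal((200, 10))  # stand-in for descriptors of image 2
dist = log_euclidean_distance(regularized_covariance(feats1), regularized_covariance(feats2))
print(f"Log-Euclidean distance between covariance descriptors: {dist:.4f}")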


Scalable Matrix-valued Kernel Learning for High-dimensional Nonlinear Multivariate Regression and Granger Causality

arXiv.org Machine Learning

We propose a general matrix-valued multiple kernel learning framework for high-dimensional nonlinear multivariate regression problems. This framework allows a broad class of mixed norm regularizers, including those that induce sparsity, to be imposed on a dictionary of vector-valued Reproducing Kernel Hilbert Spaces. We develop a highly scalable and eigendecomposition-free algorithm that orchestrates two inexact solvers for simultaneously learning both the input and output components of separable matrix-valued kernels. As a key application enabled by our framework, we show how high-dimensional causal inference tasks can be naturally cast as sparse function estimation problems, leading to novel nonlinear extensions of a class of Graphical Granger Causality techniques. Our algorithmic developments and extensive empirical studies are complemented by theoretical analyses in terms of Rademacher generalization bounds.
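
As a point of reference for the framework (not the paper's algorithm, which learns both kernel components over a dictionary with mixed-norm regularizers and avoids eigendecompositions), the sketch below fits the basic building block: vector-valued kernel ridge regression with a single separable matrix-valued kernel K(x, x') = k(x, x')·B, solved naively via the Kronecker-structured block Gram matrix. The scalar kernel, output matrix B, and regularization value are illustrative choices.

import numpy as np

def gaussian_kernel(X1, X2, bandwidth=1.0):
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

rng = np.random.default_rng(0)
n, d, p = 100, 3, 2                        # samples, input dimension, output dimension
X = rng.standard_normal((n, d))
Y = np.column_stack([np.sin(X[:, 0]), np.cos(X[:, 1])]) + 0.1 * rng.standard_normal((n, p))

G = gaussian_kernel(X, X)                  # scalar (input) kernel Gram matrix, n x n
B = np.array([[1.0, 0.3],
              [0.3, 1.0]])                 # output matrix encoding task couplings, p x p
lam = 1e-2

# Separable matrix-valued kernel: the np x np block Gram matrix is kron(G, B).
# Representer theorem: the coefficients solve (kron(G, B) + n*lam*I) c = vec(Y).
K_block = np.kron(G, B)
c = np.linalg.solve(K_block + n * lam * np.eye(n * p), Y.reshape(-1))
C = c.reshape(n, p)                        # per-sample coefficient vectors c_j

def predict(X_new):
    # f(x) = sum_j k(x, x_j) * B @ c_j  =  k(x, X) @ C @ B   (B is symmetric)
    return gaussian_kernel(X_new, X) @ C @ B

Y_hat = predict(X)
print(f"training RMSE: {np.sqrt(((Y_hat - Y) ** 2).mean()):.4f}")

The dense Kronecker solve costs on the order of (np)^3 operations, which is exactly the scalability bottleneck that the paper's eigendecomposition-free inexact solvers are designed to avoid.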