

Logarithmic Sobolev inequality
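For reference, the inequality the papers below revolve around: a probability measure $\mu$ on $\mathbb{R}^d$ satisfies a logarithmic Sobolev inequality with constant $C > 0$ if, for all sufficiently smooth $f$ (normalizations of $C$ vary across references),

```latex
\operatorname{Ent}_{\mu}(f^{2})
  := \int f^{2} \log \frac{f^{2}}{\int f^{2}\, d\mu}\, d\mu
  \;\le\; 2C \int |\nabla f|^{2}\, d\mu .
```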



Diffusion annealed Langevin dynamics: a theoretical study

Cattiaux, Patrick, Cordero-Encinar, Paula, Guillin, Arnaud

arXiv.org Machine Learning

The aim of this paper is to give a rigorous presentation of the recently introduced diffusion annealed Langevin dynamics [39]. This stochastic process is a score-based generative model and provides an alternative to the well-known overdamped Langevin process and its time-reversed version, commonly used for sampling purposes. In particular, we fill some gaps in the main arguments used for building the annealed Langevin dynamics discussed in [39, 30, 24]. We do not discuss its practical efficiency or its numerical counterparts; that is, we do not introduce or discuss the corresponding discrete algorithms, presented in [24] by the second author and the references therein. However, some quantitative aspects, useful for discretization schemes or important from the statistical point of view, are discussed in detail. Also, for distributions like the Gaussian, an important idea introduced in the papers on diffusion annealed Langevin dynamics consists in using a functional inequality (namely the Poincaré inequality) to control some covariance. This inequality is crucial in [24] for proving that the score of the intermediate distributions is Lipschitz continuous, which, as we explain in Section 2, ensures the existence and uniqueness of strong solutions for the annealed Langevin diffusion. As a matter of fact, heavy-tailed base distributions are also particularly well suited to the model, as we will see in an example.
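For context, the overdamped Langevin process the annealed dynamics builds on can be sketched in a few lines. This is a toy illustration of the unadjusted Euler discretization (not the annealed scheme of [39]), targeting a standard Gaussian, whose score $-x$ is globally Lipschitz, which is the kind of regularity used for existence of strong solutions:

```python
import numpy as np

def ula(score, x, step=1e-2, n_steps=500, rng=None):
    """Unadjusted Langevin algorithm (Euler discretization of the Langevin SDE):
    x_{k+1} = x_k + h * score(x_k) + sqrt(2h) * N(0, I)."""
    rng = np.random.default_rng(rng)
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Target: standard Gaussian, score(x) = -x. 5000 chains started far from the
# mode equilibrate to mean ~0 and standard deviation ~1.
x = ula(lambda x: -x, np.full(5000, 3.0), rng=0)
print(round(x.mean(), 2), round(x.std(), 2))  # near 0.0 and 1.0
```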



Understanding the Generalization Error of Markov algorithms through Poissonization

Dupuis, Benjamin, Haddouche, Maxime, Deligiannidis, George, Simsekli, Umut

arXiv.org Machine Learning

Using continuous-time stochastic differential equation (SDE) proxies for stochastic optimization algorithms has proven fruitful for understanding their generalization abilities. A significant part of these approaches is based on so-called ``entropy flows'', which greatly simplify the generalization analysis. Unfortunately, such well-structured entropy flows cannot be obtained for most discrete-time algorithms, and the existing SDE approaches remain limited to specific noise and algorithmic structures. We aim to alleviate this issue by introducing a generic framework for analyzing the generalization error of Markov algorithms through `Poissonization', a continuous-time approximation of discrete-time processes with formal approximation guarantees. Through this approach, we first develop a novel entropy flow, which directly leads to PAC-Bayesian generalization bounds. We then draw novel links to modified versions of the celebrated logarithmic Sobolev inequalities (LSI), identify cases where such LSIs are satisfied, and obtain improved bounds. Beyond its generality, our framework allows exploiting specific properties of learning algorithms. In particular, we incorporate the noise structure of different algorithm types - namely, those with additional noise injections (noisy) and those without (non-noisy) - through various technical tools. This illustrates the capacity of our methods to recover known (yet Poissonized) and new generalization bounds.


Sharp detection of low-dimensional structure in probability measures via dimensional logarithmic Sobolev inequalities

Li, Matthew T. C., Cui, Tiangang, Li, Fengyi, Marzouk, Youssef, Zahm, Olivier

arXiv.org Machine Learning

Identifying low-dimensional structure in high-dimensional probability measures is an essential pre-processing step for efficient sampling. We introduce a method for identifying and approximating a target measure $\pi$ as a perturbation of a given reference measure $\mu$ along a few significant directions of $\mathbb{R}^{d}$. The reference measure can be a Gaussian or a nonlinear transformation of a Gaussian, as commonly arising in generative modeling. Our method extends prior work on minimizing majorizations of the Kullback--Leibler divergence to identify optimal approximations within this class of measures. Our main contribution unveils a connection between the \emph{dimensional} logarithmic Sobolev inequality (LSI) and approximations with this ansatz. Specifically, when the target and reference are both Gaussian, we show that minimizing the dimensional LSI is equivalent to minimizing the KL divergence restricted to this ansatz. For general non-Gaussian measures, the dimensional LSI produces majorants that uniformly improve on previous majorants for gradient-based dimension reduction. We further demonstrate the applicability of this analysis to the squared Hellinger distance, where analogous reasoning shows that the dimensional Poincar\'e inequality offers improved bounds.


Concentration inequalities for leave-one-out cross validation

Avelin, Benny, Viitasaari, Lauri

arXiv.org Machine Learning

In this article we prove that estimator stability is enough to show that leave-one-out cross validation is a sound procedure, by providing concentration bounds in a general framework. In particular, we provide concentration bounds beyond Lipschitz continuity assumptions on the loss or on the estimator. We obtain our results by relying on random variables whose distribution satisfies the logarithmic Sobolev inequality, providing us with a relatively rich class of distributions. We illustrate our method by considering several interesting examples, including linear regression, kernel density estimation, and stabilized/truncated estimators such as stabilized kernel regression.

Mathematics Subject Classifications (2020): 62R07, 62G05, 60F15
Keywords: Leave-one-out cross validation, concentration inequalities, logarithmic Sobolev inequality, sub-Gaussian random variables

1. Introduction. It is customary in many statistical and machine learning methods to use a train-validation split, where the data is divided into a training set and a validation set. The training set is then used for estimating (training) the model, while the validation set is used to measure the performance of the fitted model.
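As a concrete illustration of the procedure being analyzed, here is a minimal leave-one-out cross validation loop (a generic sketch; the helper names are ours, not the paper's), with ordinary least squares as the stable estimator:

```python
import numpy as np

def loo_cv_error(X, y, fit, predict, loss):
    """Leave-one-out CV: refit with each point held out, average the loss
    on the held-out point."""
    n = len(y)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        model = fit(X[mask], y[mask])
        errs[i] = loss(predict(model, X[i:i + 1]), y[i:i + 1])
    return errs.mean()

# Ordinary least squares on a well-specified linear model with noise std 0.1.
rng = np.random.default_rng(1)
X = np.c_[np.ones(40), rng.standard_normal(40)]
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(40)

fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda beta, X: X @ beta
sq_loss = lambda yhat, y: float(np.mean((yhat - y) ** 2))

err = loo_cv_error(X, y, fit, predict, sq_loss)
print(round(err, 3))  # close to the noise variance 0.01
```

Since OLS is a stable estimator (removing one of 40 points barely moves the fit), the LOO estimate tracks the true risk, which is the kind of stability the paper's concentration bounds quantify.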


On counterfactual inference with unobserved confounding

Shah, Abhin, Dwivedi, Raaz, Shah, Devavrat, Wornell, Gregory W.

arXiv.org Artificial Intelligence

Given an observational study with $n$ independent but heterogeneous units, our goal is to learn the counterfactual distribution for each unit using only one $p$-dimensional sample per unit containing covariates, interventions, and outcomes. Specifically, we allow for unobserved confounding that introduces statistical biases between interventions and outcomes as well as exacerbates the heterogeneity across units. Modeling the conditional distribution of the outcomes as an exponential family, we reduce learning the unit-level counterfactual distributions to learning $n$ exponential family distributions with heterogeneous parameters and only one sample per distribution. We introduce a convex objective that pools all $n$ samples to jointly learn all $n$ parameter vectors, and provide a unit-wise mean squared error bound that scales linearly with the metric entropy of the parameter space. For example, when the parameters are an $s$-sparse linear combination of $k$ known vectors, the error is $O(s\log k/p)$. En route, we derive sufficient conditions for compactly supported distributions to satisfy the logarithmic Sobolev inequality. As an application of the framework, our results enable consistent imputation of sparsely missing covariates.


Fast Conditional Mixing of MCMC Algorithms for Non-log-concave Distributions

Cheng, Xiang, Wang, Bohan, Zhang, Jingzhao, Zhu, Yusong

arXiv.org Artificial Intelligence

MCMC algorithms offer empirically efficient tools for sampling from a target distribution $\pi(x) \propto \exp(-V(x))$. However, on the theory side, MCMC algorithms suffer from slow mixing rates when $\pi(x)$ is non-log-concave. Our work examines this gap and shows that when a Poincar\'e-style inequality holds on a subset $\mathcal{X}$ of the state space, the conditional distribution of MCMC iterates over $\mathcal{X}$ mixes fast to the true conditional distribution. This fast mixing guarantee can hold in cases when global mixing is provably slow. We formalize the statement and quantify the conditional mixing rate. We further show that conditional mixing can have interesting implications for sampling from mixtures of Gaussians, parameter estimation for Gaussian mixture models, and Gibbs sampling with well-connected local minima.
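To make the global-versus-conditional distinction concrete, here is a toy sketch of our own (not the paper's construction): unadjusted Langevin dynamics on an equal-weight two-Gaussian mixture, with every chain started in one well. Over a short horizon the chains essentially never cross to the other mode, so the global law is far from $\pi$, yet restricted to the visited well the samples already match the conditional distribution $N(+4, 1)$:

```python
import numpy as np

def mixture_score(x, m=4.0):
    """Score of the equal-weight mixture 0.5*N(-m,1) + 0.5*N(+m,1)."""
    a = np.exp(-0.5 * (x - m) ** 2)   # unnormalized responsibility of +m mode
    b = np.exp(-0.5 * (x + m) ** 2)   # unnormalized responsibility of -m mode
    w = a / (a + b)
    return w * (m - x) + (1 - w) * (-m - x)

rng = np.random.default_rng(0)
h, n_steps = 1e-2, 2000
x = np.full(1000, 4.0)  # all chains start in the right-hand well
for _ in range(n_steps):
    x = x + h * mixture_score(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)

# Globally badly mixed (left mode unreached), but conditionally on the
# right-hand well the samples look like N(+4, 1).
print((x > 0).mean())                          # ~1.0: essentially no crossings
print(round(x.mean(), 1), round(x.std(), 1))   # near 4.0 and 1.0
```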


Mean-Field Analysis of Two-Layer Neural Networks: Global Optimality with Linear Convergence Rates

Zhang, Jingwei, Huang, Xunpeng, Yu, Jincheng

arXiv.org Artificial Intelligence

Gradient-based optimization is a fundamental tool in machine learning and has witnessed great empirical success for training neural networks, despite the highly non-convex landscape of the objective. However, theoretical understanding of non-convex optimization in neural networks is quite limited. Recently, there has been much work explaining the success of gradient-based optimization in overparametrized neural networks, that is, neural networks with a massive number of hidden units. Under the overparametrization condition, the learning problem can be translated into minimizing a convex functional, hence circumventing the difficulties of analyzing non-convex objectives. It is worth mentioning that there has been much broader interest in analyzing the convergence of machine learning algorithms by formulating them as the minimization of some (usually convex) functional of a measure, such as variational inference (Liu and Wang (2016); Liu (2017); Chewi et al. (2020)), generative adversarial networks (Johnson and Zhang (2019); Nitanda and Suzuki (2020)), and learning infinite-width neural networks (Chizat and Bach (2018); Mei et al. (2018); Nguyen and Pham (2020); Fang et al. (2021)). The key idea is to approximate the learning dynamics of the model parameters by an optimization over the space of probability measures on the model parameters, valid under the overparametrization condition.


Riemannian Langevin Algorithm for Solving Semidefinite Programs

Li, Mufan Bill, Erdogdu, Murat A.

arXiv.org Machine Learning

We propose a Langevin diffusion-based algorithm for non-convex optimization and sampling on a product manifold of spheres. Under a logarithmic Sobolev inequality, we establish a finite-iteration convergence guarantee to the Gibbs distribution in terms of Kullback-Leibler divergence. We show that with an appropriate temperature choice, the suboptimality gap to the global minimum is guaranteed to be arbitrarily small with high probability. As an application, we analyze the proposed Langevin algorithm for solving the Burer-Monteiro relaxation of a semidefinite program (SDP). In particular, we establish a logarithmic Sobolev inequality for the Burer-Monteiro problem when there are no spurious local minima, hence implying a fast escape from saddle points. Combining the results, we then provide a global optimality guarantee for the SDP and the Max-Cut problem. More precisely, we show the Langevin algorithm achieves $\epsilon$-multiplicative accuracy with high probability in $\widetilde{\Omega}( n^2 \epsilon^{-3} )$ iterations, where $n$ is the size of the cost matrix.
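To illustrate the kind of problem geometry involved, here is a toy projected-Langevin sketch (our own simplification with row-normalization as the retraction, not the paper's Riemannian algorithm) for the Burer-Monteiro objective $f(\sigma) = \langle A, \sigma\sigma^{\top}\rangle$ on a product of unit spheres, run on a 4-cycle graph:

```python
import numpy as np

def projected_langevin(A, k=3, beta=100.0, step=5e-3, n_steps=4000, seed=0):
    """Toy projected Langevin for the Burer-Monteiro problem: minimize
    f(sigma) = <A, sigma sigma^T> over matrices sigma with unit-norm rows.
    A Euclidean gradient-plus-noise step is followed by row normalization,
    a simple retraction back onto the product of spheres."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    s = rng.standard_normal((n, k))
    s /= np.linalg.norm(s, axis=1, keepdims=True)
    for _ in range(n_steps):
        grad = 2.0 * A @ s                      # Euclidean gradient of f
        noise = np.sqrt(2.0 * step / beta) * rng.standard_normal(s.shape)
        s = s - step * grad + noise
        s /= np.linalg.norm(s, axis=1, keepdims=True)  # retract to spheres
    return s

# 4-cycle adjacency matrix: the optimum alternates +v/-v around the cycle,
# so each of the 8 nonzero entries of A contributes -1, giving f = -8.
A = np.zeros((4, 4))
for i in range(4):
    A[i, (i + 1) % 4] = A[(i + 1) % 4, i] = 1.0

s = projected_langevin(A)
f = float(np.sum(A * (s @ s.T)))
print(round(f, 1))  # close to the optimum -8
```

With $k = 3 \ge \sqrt{2n}$ here, the relaxation is in the regime where spurious local minima are absent, so the noisy dynamics reliably reach a near-global minimizer.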