
 Marzouk, Youssef


Sharp detection of low-dimensional structure in probability measures via dimensional logarithmic Sobolev inequalities

arXiv.org Machine Learning

Identifying low-dimensional structure in high-dimensional probability measures is an essential pre-processing step for efficient sampling. We introduce a method for identifying and approximating a target measure $\pi$ as a perturbation of a given reference measure $\mu$ along a few significant directions of $\mathbb{R}^{d}$. The reference measure can be a Gaussian or a nonlinear transformation of a Gaussian, as commonly arising in generative modeling. Our method extends prior work on minimizing majorizations of the Kullback–Leibler (KL) divergence to identify optimal approximations within this class of measures. Our main contribution unveils a connection between the dimensional logarithmic Sobolev inequality (LSI) and approximations with this ansatz. Specifically, when the target and reference are both Gaussian, we show that minimizing the dimensional LSI is equivalent to minimizing the KL divergence restricted to this ansatz. For general non-Gaussian measures, the dimensional LSI produces majorants that uniformly improve on previous majorants for gradient-based dimension reduction. We further demonstrate the applicability of this analysis to the squared Hellinger distance, where analogous reasoning shows that the dimensional Poincaré inequality offers improved bounds.
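
A minimal sketch of the gradient-based diagnostic that underlies this line of work may help fix ideas: form the matrix $H = \frac{1}{N}\sum_i \nabla\log\frac{d\pi}{d\mu}(x_i)\,\nabla\log\frac{d\pi}{d\mu}(x_i)^\top$ from samples and take its leading eigenvectors as the significant directions. The choice of sampling measure and of majorant differs across works; the sketch below only illustrates this diagnostic, not the dimensional-LSI bounds themselves, and all variable names are illustrative.

```python
# Sketch: gradient-based identification of significant directions, assuming access
# to the gradient of log(d pi / d mu) at a set of samples. Illustrates the generic
# diagnostic matrix used in gradient-based dimension reduction, not the paper's
# dimensional-LSI majorants.
import numpy as np

def significant_directions(grad_log_ratio, rank):
    """grad_log_ratio: (N, d) array of grad log(d pi / d mu) evaluated at N samples."""
    H = grad_log_ratio.T @ grad_log_ratio / grad_log_ratio.shape[0]  # (d, d) diagnostic matrix
    eigvals, eigvecs = np.linalg.eigh(H)                              # ascending order
    idx = np.argsort(eigvals)[::-1][:rank]
    return eigvals[idx], eigvecs[:, idx]

# Toy check: synthetic log-ratio whose gradient is confined to the first two coordinates.
rng = np.random.default_rng(0)
d, N = 10, 5000
A = np.zeros((d, d)); A[0, 0], A[1, 1] = 2.0, -0.5
x = rng.standard_normal((N, d))   # samples (drawn from the reference here, for simplicity)
grads = x @ A                     # gradient of a quadratic log-ratio, confined to span{e_1, e_2}
vals, U = significant_directions(grads, rank=2)
print(vals)                       # two dominant eigenvalues; the remaining ones are exactly zero
```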


Preserving linear invariants in ensemble filtering methods

arXiv.org Machine Learning

Formulating dynamical models for physical phenomena is essential for understanding the interplay between the different mechanisms and predicting the evolution of physical states. However, a dynamical model alone is often insufficient to address these fundamental tasks, as it suffers from model errors and uncertainties. One common remedy is to rely on data assimilation, where the state estimate is updated with observations of the true system. Ensemble filters sequentially assimilate observations by updating a set of samples over time. They operate in two steps: a forecast step that propagates each sample through the dynamical model and an analysis step that updates the samples with incoming observations. For accurate and robust predictions of dynamical systems, discrete solutions must preserve their critical invariants. While modern numerical solvers satisfy these invariants, existing invariant-preserving analysis steps are limited to Gaussian settings and are often not compatible with classical regularization techniques of ensemble filters, e.g., inflation and covariance tapering. The present work focuses on preserving linear invariants, such as mass, stoichiometric balance of chemical species, and electrical charges. Using tools from measure transport theory (Spantini et al., 2022, SIAM Review), we introduce a generic class of nonlinear ensemble filters that automatically preserve desired linear invariants in non-Gaussian filtering problems. By specializing this framework to the Gaussian setting, we recover a constrained formulation of the Kalman filter. Then, we show how to combine existing regularization techniques for the ensemble Kalman filter (Evensen, 1994, J. Geophys. Res.) with the preservation of the linear invariants. Finally, we assess the benefits of preserving linear invariants for the ensemble Kalman filter and nonlinear ensemble filters.
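
As background for the constrained formulation mentioned above, the sketch below implements a generic stochastic EnKF analysis step (in the spirit of Evensen, 1994) and checks that a linear invariant $a^\top x$ shared by all forecast members is preserved, since $a$ then lies in the null space of the sample covariance. This is only a plain Kalman-type illustration with made-up dimensions, not the paper's nonlinear invariant-preserving filters, and it omits the regularization (inflation, tapering) that breaks the property.

```python
# Sketch: stochastic EnKF analysis step plus a linear-invariant check.
# Plain EnKF background only; the paper's constrained/nonlinear filters are more general.
import numpy as np

rng = np.random.default_rng(1)
d, m, N = 6, 2, 200                        # state dim, obs dim, ensemble size

# Forecast ensemble in which every member carries the same total "mass" a^T x.
a = np.ones(d)
Xf = rng.standard_normal((N, d))
Xf -= Xf.mean(axis=1, keepdims=True)       # each member now sums to zero
Xf += 5.0 / d                              # ... and then to the same total mass, 5

H = rng.standard_normal((m, d))            # linear observation operator
R = 0.5 * np.eye(m)                        # observation-noise covariance
y_obs = rng.standard_normal(m)             # a realized observation vector

A = Xf - Xf.mean(axis=0)                   # ensemble anomalies
C = A.T @ A / (N - 1)                      # sample covariance (a^T C = 0 by construction)
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)

eps = rng.multivariate_normal(np.zeros(m), R, size=N)   # perturbed observations
Xa = Xf + (y_obs + eps - Xf @ H.T) @ K.T

print(np.ptp(Xf @ a), np.ptp(Xa @ a))      # both ~0: the invariant is identical across members
```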


Nonlinear Bayesian optimal experimental design using logarithmic Sobolev inequalities

arXiv.org Machine Learning

The optimal experimental design (OED) problem arises in numerous settings, with applications ranging from combustion kinetics (Huan & Marzouk, 2013) and sensor placement for weather prediction (Krause et al., 2008) to contaminant source identification (Attia et al., 2018) and pharmaceutical trials (Djuris et al., 2024). A commonly addressed version of the OED problem centers on the fundamental question of selecting an optimal subset of $k$ observations from a total pool of $n$ possible candidates, with the goal of learning the parameters of a statistical model for the observations. In the Bayesian framework, these parameters are endowed with a prior distribution to represent our state of knowledge before seeing the data. A posterior distribution on the parameters is obtained by conditioning on the observations. A commonly used experimental design criterion is then the mutual information (MI) between the parameters and the selected observations or, equivalently, the expected information gain from prior to posterior, which should be maximized.
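
For intuition, the sketch below works out the $k$-of-$n$ selection problem in the linear-Gaussian special case, where the mutual information has the closed form $\tfrac{1}{2}\log\det(I + \sigma^{-2} G_A \Sigma_{\mathrm{pr}} G_A^\top)$ and a simple greedy heuristic can be applied. The forward model and noise level are synthetic; this illustrates the design criterion only, not the logarithmic-Sobolev bounds developed for nonlinear, non-Gaussian models.

```python
# Sketch: greedy observation selection in a linear-Gaussian model, where the mutual
# information (expected information gain) is available in closed form. Synthetic setup.
import numpy as np

def mutual_information(G_sel, prior_cov, noise_var):
    """MI(theta; y_A) = 0.5 * logdet(I + G_A Sigma_pr G_A^T / sigma^2)."""
    S = np.eye(G_sel.shape[0]) + G_sel @ prior_cov @ G_sel.T / noise_var
    return 0.5 * np.linalg.slogdet(S)[1]

def greedy_design(G, prior_cov, noise_var, k):
    selected = []
    for _ in range(k):
        remaining = [i for i in range(G.shape[0]) if i not in selected]
        scores = [mutual_information(G[selected + [i]], prior_cov, noise_var)
                  for i in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected

rng = np.random.default_rng(2)
n, p = 20, 4                                    # candidate observations, parameters
G = rng.standard_normal((n, p))                 # rows of a linear forward model
design = greedy_design(G, prior_cov=np.eye(p), noise_var=0.1, k=5)
print(design, mutual_information(G[design], np.eye(p), 0.1))
```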


Multitask methods for predicting molecular properties from heterogeneous data

arXiv.org Machine Learning

Data generation remains a bottleneck in training surrogate models to predict molecular properties. We demonstrate that multitask Gaussian process regression overcomes this limitation by leveraging both expensive and cheap data sources. In particular, we consider training sets constructed from coupled-cluster (CC) and density functional theory (DFT) data. We report that multitask surrogates can predict at CC-level accuracy while reducing data generation cost by over an order of magnitude. Of note, our approach allows the training set to include DFT data generated by a heterogeneous mix of exchange-correlation functionals without imposing any artificial hierarchy on functional accuracy. More generally, the multitask framework can accommodate a wider range of training set structures -- including full disparity between the different levels of fidelity -- than existing kernel approaches based on $\Delta$-learning, though we show that the accuracy of the two approaches can be similar. Consequently, multitask regression can be a tool for reducing data generation costs even further by opportunistically exploiting existing data sources.
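
The sketch below illustrates one common way to set up such a multitask surrogate: an intrinsic-coregionalization kernel $K((x,t),(x',t')) = B_{t,t'}\,k(x,x')$ over input-task pairs, with a low-fidelity task standing in for DFT and a high-fidelity task for CC. The 1-D functions, kernel choices, and inter-task covariance $B$ are all assumptions for illustration, not the paper's fitted model.

```python
# Sketch: multitask GP regression with an intrinsic-coregionalization kernel,
# K((x, t), (x', t')) = B[t, t'] * k_rbf(x, x'). Synthetic 1-D stand-ins for the
# expensive (CC) and cheap (DFT) levels; hyperparameters are fixed, not learned.
import numpy as np

def rbf(X1, X2, ell=0.5):
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ell**2)

def multitask_K(X1, t1, X2, t2, B, ell=0.5):
    return B[np.ix_(t1, t2)] * rbf(X1, X2, ell)

rng = np.random.default_rng(3)
f_cc  = lambda x: np.sin(3 * x)             # "expensive" high-fidelity truth
f_dft = lambda x: np.sin(3 * x) + 0.3 * x   # correlated "cheap" low-fidelity data

X_cc, X_dft = rng.uniform(0, 1, (5, 1)), rng.uniform(0, 1, (40, 1))
X = np.vstack([X_cc, X_dft])
t = np.array([0] * 5 + [1] * 40)            # task indices: 0 = CC, 1 = DFT
y = np.concatenate([f_cc(X_cc[:, 0]), f_dft(X_dft[:, 0])])

B = np.array([[1.0, 0.9], [0.9, 1.0]])      # assumed inter-task covariance
K = multitask_K(X, t, X, t, B) + 1e-4 * np.eye(len(y))

Xs = np.linspace(0, 1, 100)[:, None]
Ks = multitask_K(Xs, np.zeros(100, dtype=int), X, t, B)   # predict at the CC task
mean_cc = Ks @ np.linalg.solve(K, y)
print(np.max(np.abs(mean_cc - f_cc(Xs[:, 0]))))           # rough accuracy check at CC level
```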


Stable generative modeling using diffusion maps

arXiv.org Machine Learning

We consider the problem of sampling from an unknown distribution for which only a sufficiently large number of training samples are available. Such settings have recently drawn considerable interest in the context of generative modeling. In this paper, we propose a generative model combining diffusion maps and Langevin dynamics. Diffusion maps are used to approximate the drift term from the available training samples, which is then implemented in a discrete-time Langevin sampler to generate new samples. By setting the kernel bandwidth to match the time step size used in the unadjusted Langevin algorithm, our method effectively circumvents any stability issues typically associated with time-stepping stiff stochastic differential equations. More precisely, we introduce a novel split-step scheme, ensuring that the generated samples remain within the convex hull of the training samples. Our framework can be naturally extended to generate conditional samples. We demonstrate the performance of our proposed scheme through experiments on synthetic datasets with increasing dimensions and on a stochastic subgrid-scale parametrization conditional sampling problem.
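
One plausible minimal sketch of such a scheme replaces the drift sub-step with a kernel-weighted barycentre of the training samples, so every generated point ends in their convex hull, while a Gaussian sub-step injects noise at a scale tied to the kernel bandwidth. The ordering of the two sub-steps, the bandwidth/step-size coupling, and the toy data below are assumptions for illustration, not the authors' exact split-step scheme.

```python
# Illustrative split-step sampler: noise sub-step followed by a kernel-weighted
# barycentre of the training samples (which by construction lies in their convex hull).
# The scalings below are illustrative choices, not the paper's exact scheme.
import numpy as np

def kernel_barycentre(x, data, eps):
    """Weighted average of training samples with Gaussian weights of bandwidth eps."""
    logw = -np.sum((data - x) ** 2, axis=1) / (2 * eps)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return w @ data                    # convex combination of training samples

rng = np.random.default_rng(4)
data = rng.standard_normal((2000, 2)) @ np.array([[1.0, 0.8], [0.0, 0.6]])   # training set

eps = 0.05                             # kernel bandwidth, matched to the step size
x = data[0].copy()
samples = []
for _ in range(3000):
    x_half = x + np.sqrt(eps) * rng.standard_normal(2)    # noise sub-step
    x = kernel_barycentre(x_half, data, eps)              # barycentre sub-step
    samples.append(x)

samples = np.asarray(samples)
print(np.cov(data.T))                  # training-data covariance ...
print(np.cov(samples[500:].T))         # ... versus the spread of the generated chain
```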


Sampling in Unit Time with Kernel Fisher-Rao Flow

arXiv.org Machine Learning

We introduce a new mean-field ODE and corresponding interacting particle systems for sampling from an unnormalized target density or Bayesian posterior. The interacting particle systems are gradient-free, available in closed form, and only require the ability to sample from the reference density and compute the (unnormalized) target-to-reference density ratio. The mean-field ODE is obtained by solving a Poisson equation for a velocity field that transports samples along the geometric mixture of the two densities, which is the path of a particular Fisher-Rao gradient flow. We employ a reproducing kernel Hilbert space ansatz for the velocity field, which makes the Poisson equation tractable and enables us to discretize the resulting mean-field ODE over finite samples as a simple interacting particle system. The mean-field ODE can additionally be derived from a discrete-time perspective as the limit of successive linearizations of the Monge–Ampère equations within a framework known as sample-driven optimal transport. We demonstrate empirically that our interacting particle systems can produce high-quality samples from distributions with varying characteristics.
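
As a small aid to intuition, the sketch below evaluates the geometric mixture $\rho_t \propto \mu^{1-t}\pi^{t}$ along which the flow transports samples, in a 1-D Gaussian case where the path is available in closed form. It only illustrates the path reaching the target at unit time; it is not the kernelized particle system itself.

```python
# Closed-form geometric mixture rho_t ∝ mu^(1-t) * pi^t between two 1-D Gaussians,
# i.e., the path the Fisher-Rao flow follows from reference (t=0) to target (t=1).
import numpy as np

m0, v0 = 0.0, 1.0       # reference mu = N(m0, v0)
m1, v1 = 4.0, 0.25      # target   pi = N(m1, v1)

def geometric_mixture(t):
    """Mean and variance of the Gaussian rho_t ∝ mu^(1-t) * pi^t."""
    prec = (1 - t) / v0 + t / v1
    mean = ((1 - t) * m0 / v0 + t * m1 / v1) / prec
    return mean, 1.0 / prec

for t in np.linspace(0.0, 1.0, 5):
    mean, var = geometric_mixture(t)
    print(f"t={t:.2f}: mean={mean:.3f}, var={var:.3f}")
# At t=0 the path starts at the reference; at t=1 (unit time) it reaches the target.
```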


Score Operator Newton transport

arXiv.org Artificial Intelligence

Generating samples from a complex (e.g., non-Gaussian, high-dimensional) probability distribution is a core computational challenge in diverse applications, ranging from computational statistics and machine learning to molecular simulation. A recurring setting is where the density ρ of the target distribution is specified up to a normalizing constant -- for example, in Bayesian modeling, where ρ represents the posterior density. Here, evaluations of the score ∇ log ρ are often available as well, even for complex statistical models [Villa et al., 2021]. Alternatively, many new methods enable effective score estimation from data, without explicit density estimation; examples include score estimation from time series observations in chaotic dynamical systems [Chandramoorthy and Wang, 2022, Ni, 2020] and score-based modeling of image distributions [Song et al., 2020b,a]. In these settings, transport or "flow"-driven algorithms for generating samples have seen extensive success. The central idea is to construct a transport map from a simple, prescribed source distribution to the target distribution of interest. One class of transport approaches, e.g., as represented by variational inference with normalizing flows, involves constructing a parametric class of invertible maps and minimizing some statistical divergence between the pushforward (see Section 2) of the source by a member of this class and the target. A different, essentially nonparametric, class of transport approaches is based on particle systems, e.g., Stein variational gradient descent (SVGD).
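
Since SVGD is cited here as the canonical particle-based transport method, a minimal sketch of its standard update (with an RBF kernel and an analytic score for a toy Gaussian target) is given below. This is background for the particle-system class of methods, not the Score Operator Newton transport itself.

```python
# Minimal sketch of the standard SVGD update mentioned above (background only).
import numpy as np

def svgd_step(X, grad_log_rho, h=0.5, step=0.1):
    """One SVGD update with kernel k(x, y) = exp(-||x - y||^2 / (2 h)).

    X: (N, d) particles; grad_log_rho: callable returning the (N, d) scores."""
    diffs = X[:, None, :] - X[None, :, :]                 # diffs[i, j] = x_i - x_j
    K = np.exp(-np.sum(diffs ** 2, axis=-1) / (2 * h))    # kernel Gram matrix
    grad_K = -diffs * K[:, :, None] / h                   # grad_K[j, i] = grad_{x_j} k(x_j, x_i)
    phi = (K @ grad_log_rho(X) + grad_K.sum(axis=0)) / X.shape[0]
    return X + step * phi

# Toy target N(2, 0.5^2); its score is analytic, so no normalizing constant is needed.
score = lambda X: -(X - 2.0) / 0.25
rng = np.random.default_rng(5)
X = rng.standard_normal((200, 1))
for _ in range(500):
    X = svgd_step(X, score)
print(X.mean(), X.std())    # roughly 2.0 and 0.5
```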


Ensemble transport smoothing. Part II: Nonlinear updates

arXiv.org Machine Learning

Sequential Monte Carlo methods can characterize arbitrary distributions using sequential importance sampling and resampling, but typically require very large sample sizes to mitigate weight collapse [Snyder et al., 2008, 2015]. By contrast, ensemble Kalman-type methods avoid the use of weights, but are based on affine prior-to-posterior updates that are consistent only if all distributions involved are Gaussian. In the context of smoothing, such methods include the ensemble Kalman smoother (EnKS) [Evensen and Van Leeuwen, 2000], which has inspired numerous algorithmic variations such as the ensemble smoother with multiple data assimilation [Emerick and Reynolds, 2013] and the iterative ensemble Kalman smoother (iEnKS) [Bocquet and Sakov, 2014, Evensen et al., 2019], as well as backwards smoothers such as the ensemble Rauch-Tung-Striebel smoother (EnRTSS) [Raanes, 2016]. These two classes of methods occupy opposite ends of a spectrum that ranges from an emphasis on statistical generality at one end to an emphasis on computational efficiency at the other. This trade-off complicates design decisions for smoothing problems that are at once non-Gaussian and computationally expensive.


Ensemble transport smoothing. Part I: Unified framework

arXiv.org Machine Learning

Smoothers are algorithms for Bayesian time series re-analysis. Most operational smoothers rely either on affine Kalman-type transformations or on sequential importance sampling. These strategies occupy opposite ends of a spectrum that trades computational efficiency and scalability for statistical generality and consistency: non-Gaussianity renders affine Kalman updates inconsistent with the true Bayesian solution, while the ensemble size required for successful importance sampling can be prohibitive. This paper revisits the smoothing problem from the perspective of measure transport, which offers the prospect of consistent prior-to-posterior transformations for Bayesian inference. We leverage this capacity by proposing a general ensemble framework for transport-based smoothing. Within this framework, we derive a comprehensive set of smoothing recursions based on nonlinear transport maps and detail how they exploit the structure of state-space models in fully non-Gaussian settings. We also describe how many standard Kalman-type smoothing algorithms emerge as special cases of our framework. A companion paper (Ramgraber et al., 2023) explores the implementation of nonlinear ensemble transport smoothers in greater depth.
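
For concreteness, the sketch below shows the linear special case of a single transport-type analysis step, in which the composed prior-to-posterior map reduces to the familiar Kalman-type ensemble update $x_i^a = x_i - K(y_i - y^*)$ with $K$ estimated by linear regression from the joint ensemble. The synthetic setup and the restriction to one filtering-style update are simplifications; the paper's smoothing recursions use nonlinear maps and exploit the temporal structure of state-space models.

```python
# Sketch: linear special case of a transport analysis step, recovering a
# Kalman-type ensemble update from regression on the joint ensemble {(y_i, x_i)}.
import numpy as np

rng = np.random.default_rng(6)
d, m, N = 3, 2, 1000
X = rng.standard_normal((N, d)) @ np.diag([1.0, 2.0, 0.5])    # prior ensemble
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
Y = X @ H.T + 0.3 * rng.standard_normal((N, m))               # simulated observations
y_star = np.array([0.5, -1.0])                                # realized observation

Cxy = (X - X.mean(0)).T @ (Y - Y.mean(0)) / (N - 1)
Cyy = np.cov(Y.T)
K = Cxy @ np.linalg.inv(Cyy)                                  # regression coefficient of x on y
Xa = X - (Y - y_star) @ K.T                                   # composed-map (Kalman-type) update

print(Xa.mean(0))    # ensemble estimate of the posterior mean given y_star
```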


Diffusion map particle systems for generative modeling

arXiv.org Machine Learning

We propose a novel diffusion map particle system (DMPS) for generative modeling, based on diffusion maps and Laplacian-adjusted Wasserstein gradient descent (LAWGD). Diffusion maps are used to approximate the generator of the corresponding Langevin diffusion process from samples, and hence to learn the underlying data-generating manifold. On the other hand, LAWGD enables efficient sampling from the target distribution given a suitable choice of kernel, which we construct here via a spectral approximation of the generator, computed with diffusion maps. Our method requires no offline training and minimal tuning, and can outperform other approaches on data sets of moderate dimension.
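
A brief sketch of the diffusion-map building block may be useful: a Gaussian kernel with density normalization and Markov normalization yields a matrix $L \approx (P - I)/\epsilon$ that approximates the generator on the samples, and DMPS uses a spectral decomposition of this kind to construct the LAWGD kernel. Bandwidth conventions vary across references, and the particle update itself is omitted here.

```python
# Sketch: diffusion-map approximation of the generator on a sample set
# (Gaussian kernel, alpha = 1 density normalization, Markov normalization, (P - I)/eps).
# The LAWGD kernel and particle update built from its spectrum are not shown.
import numpy as np

def diffusion_map_generator(X, eps):
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (4 * eps))
    q = K.sum(axis=1)
    K_tilde = K / np.outer(q, q)                        # remove sampling-density bias
    P = K_tilde / K_tilde.sum(axis=1, keepdims=True)    # row-stochastic (Markov) matrix
    return (P - np.eye(len(X))) / eps                   # generator approximation on the samples

rng = np.random.default_rng(7)
X = rng.standard_normal((500, 2))
L = diffusion_map_generator(X, eps=0.2)
eigvals = np.sort(np.linalg.eigvals(L).real)[::-1]
print(eigvals[:5])    # leading eigenvalue ~0 (constant function); the rest are negative
```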