Diffusion Models with Heavy-Tailed Targets: Score Estimation and Sampling Guarantees

Yu, Yifeng, Yu, Lu

arXiv.org Machine Learning

Score-based diffusion models have become a powerful framework for generative modeling, with score estimation as a central statistical bottleneck. Existing guarantees for score estimation largely focus on light-tailed targets or rely on restrictive assumptions such as compact support, which are often violated by heavy-tailed data in practice. In this work, we study conventional (Gaussian) score-based diffusion models when the target distribution is heavy-tailed and belongs to a Sobolev class with smoothness parameter $β>0$. We consider both exponential and polynomial tail decay, indexed by a tail parameter $γ$. Using kernel density estimation, we derive sharp minimax rates for score estimation, revealing a qualitative dichotomy: under exponential tails, the rate matches the light-tailed case up to polylogarithmic factors, whereas under polynomial tails the rate depends explicitly on $γ$. We further provide sampling guarantees for the associated continuous reverse dynamics. In total variation, the generated distribution converges at the minimax optimal rate $n^{-β/(2β+d)}$ under exponential tails (up to logarithmic factors), and at a $γ$-dependent rate under polynomial tails. Whether the latter sampling rate is minimax optimal remains an open question. These results characterize the statistical limits of score estimation and the resulting sampling accuracy for heavy-tailed targets, extending diffusion theory beyond the light-tailed setting.
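For background, the conventional Gaussian forward dynamics, score-matching objective, and reverse dynamics that the abstract above refers to can be written as follows (this is the standard formulation of Gaussian score-based diffusion, not an excerpt from the paper):

```latex
\begin{align*}
% Ornstein--Uhlenbeck forward noising process
dX_t &= -X_t \, dt + \sqrt{2} \, dW_t, \qquad X_0 \sim p_{\mathrm{data}}, \\
% conditional law after time t (Gaussian, hence "conventional" diffusion)
X_t \mid X_0 = x_0 &\sim \mathcal{N}\!\big(e^{-t} x_0,\ (1 - e^{-2t}) I_d\big), \\
% score-matching risk minimized by the score estimator s
\mathcal{R}(s) &= \mathbb{E}\, \big\| s(X_t, t) - \nabla_x \log p_t(X_t) \big\|^2, \\
% continuous reverse-time dynamics used for sampling
dY_t &= \big( Y_t + 2\, \nabla \log p_{T-t}(Y_t) \big)\, dt + \sqrt{2} \, d\bar{W}_t .
\end{align*}
```

The sampling guarantees in the abstract concern the reverse dynamics run with the estimated score in place of the true $\nabla \log p_t$.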


Normalization and effective learning rates in reinforcement learning

Neural Information Processing Systems

Normalization layers have recently experienced a renaissance in the deep reinforcement learning and continual learning literature, with several works highlighting diverse benefits such as improving loss landscape conditioning and combating overestimation bias. However, normalization brings with it a subtle but important side effect: an equivalence between growth in the norm of the network parameters and decay in the effective learning rate. This becomes problematic in continual learning settings, where the resulting learning rate schedule may decay to near zero too quickly relative to the timescale of the learning problem. We propose to make the learning rate schedule explicit with a simple re-parameterization which we call Normalize-and-Project (NaP), which couples the insertion of normalization layers with weight projection, ensuring that the effective learning rate remains constant throughout training. This technique reveals itself as a powerful analytical tool to better understand learning rate schedules in deep reinforcement learning, and as a means of improving robustness to nonstationarity in synthetic plasticity loss benchmarks along with both the single-task and sequential variants of the Arcade Learning Environment. We also show that our approach can be easily applied to popular architectures such as ResNets and transformers while recovering and in some cases even slightly improving the performance of the base model in common stationary benchmarks.
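The norm-growth/effective-learning-rate coupling can be illustrated with a toy sketch (an illustrative reconstruction under assumptions made here, not the paper's implementation; the names `nap_step` and `target_norm` are invented): take a gradient step, then project the weights back to a fixed norm.

```python
import numpy as np

def nap_step(w, grad, lr, target_norm):
    # Plain gradient step ...
    w = w - lr * grad
    # ... followed by a projection back to a fixed norm, so the
    # effective learning rate does not decay as ||w|| grows.
    return w * (target_norm / np.linalg.norm(w))

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
target = np.linalg.norm(w)
for _ in range(100):
    w = nap_step(w, rng.normal(size=w.shape), lr=0.1, target_norm=target)
```

For a layer followed by normalization, rescaling its weights leaves the network function unchanged, which is why such a projection can alter only the optimization dynamics, not the model being expressed.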


Rethinking Weight Decay for Robust Fine-Tuning of Foundation Models

Neural Information Processing Systems

Modern optimizers such as AdamW, equipped with momentum and adaptive learning rate, are designed to escape local minima and explore the vast parameter space. This exploration is beneficial for finding good loss basins when training from scratch. It is not necessarily ideal when resuming from a powerful foundation model because it can lead to large deviations from the pre-trained initialization and, consequently, worse robustness and generalization. At the same time, strong regularization on all parameters can lead to under-fitting. We hypothesize that selectively regularizing the parameter space is the key to fitting and retaining the pre-trained knowledge. This paper proposes a new weight decay technique, Selective Projection Decay (SPD), that selectively imposes a strong penalty on certain layers while allowing others to change freely. Intuitively, SPD expands and contracts the parameter search space for layers with consistent and inconsistent loss reduction, respectively. Experimentally, when equipped with SPD, Adam consistently provides better in-distribution generalization and out-of-distribution robustness performance on multiple popular vision and language benchmarks.
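The idea of decaying only selected layers, toward the pre-trained weights rather than toward zero, can be sketched as follows. This is a minimal sketch under assumptions made here (the penalty is taken to be on the deviation from the pre-trained initialization `w0`, and `spd_update` is an invented name), not SPD's actual update rule.

```python
import numpy as np

def spd_update(w, w0, grad, lr, decay, selected):
    # Plain gradient step for every layer ...
    w = w - lr * grad
    # ... plus, for selected layers only, a pull back toward the
    # pre-trained weights w0 (not toward zero, as in plain weight decay).
    if selected:
        w = w - decay * (w - w0)
    return w

rng = np.random.default_rng(1)
w0 = rng.normal(size=(3, 3))
g = rng.normal(size=(3, 3))
w_sel, w_free = w0.copy(), w0.copy()
for _ in range(50):
    w_sel = spd_update(w_sel, w0, g, lr=0.05, decay=0.1, selected=True)
    w_free = spd_update(w_free, w0, g, lr=0.05, decay=0.1, selected=False)
```

After training, the selected layer stays much closer to the initialization than the freely moving one, which is the "contracted search space" intuition from the abstract.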


Balance, Imbalance, and Rebalance: Understanding Robust Overfitting from a Minimax Game Perspective

Neural Information Processing Systems

Adversarial Training (AT) has become arguably the state-of-the-art algorithm for extracting robust features. However, researchers have recently noticed that AT suffers from severe robust overfitting, particularly after learning rate (LR) decay. In this paper, we explain this phenomenon by viewing adversarial training as a dynamic minimax game between the model trainer and the attacker. Specifically, we analyze how LR decay breaks the balance of this minimax game by empowering the trainer with a stronger memorization ability, and show that this imbalance induces robust overfitting as a result of memorizing non-robust features. We validate this understanding with extensive experiments, and provide a holistic view of robust overfitting from the dynamics of the two game players. This understanding further inspires us to alleviate robust overfitting by rebalancing the two players, either by regularizing the trainer's capacity or by improving the attack strength. Experiments show that the proposed ReBalanced Adversarial Training (ReBAT) can attain good robustness and does not suffer from robust overfitting even after very long training. Code is available at https://github.com/PKU-ML/ReBAT.
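The trainer-vs-attacker game can be made concrete on a toy linear model. The sketch below is a generic adversarial-training loop under assumptions made here (logistic loss, a single FGSM-style inner step; `attack` and `train_step` are invented names), not ReBAT itself; the attack strength `eps` and the trainer's step size `lr` are exactly the kind of knobs the rebalancing idea adjusts.

```python
import numpy as np

def attack(x, y, w, eps):
    # Inner player: one FGSM-style step on the logistic loss
    # log(1 + exp(-y * w.x)), moving x in the loss-increasing direction.
    grad_x = -y * w / (1.0 + np.exp(y * (x @ w)))
    return x + eps * np.sign(grad_x)

def train_step(x, y, w, lr, eps):
    # Outer player: gradient step on the adversarially perturbed example.
    x_adv = attack(x, y, w, eps)
    grad_w = -y * x_adv / (1.0 + np.exp(y * (x_adv @ w)))
    return w - lr * grad_w

x, y = np.array([1.0, 1.0]), 1.0
w = np.zeros(2)
loss0 = np.log1p(np.exp(-y * (attack(x, y, w, 0.1) @ w)))
for _ in range(200):
    w = train_step(x, y, w, lr=0.1, eps=0.1)
loss1 = np.log1p(np.exp(-y * (attack(x, y, w, 0.1) @ w)))
```

Here the adversarial loss drops over training; shrinking `lr` (as in LR decay) strengthens the trainer's side of this game, which is the imbalance the paper analyzes.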


Generalization Error Rates in Kernel Regression: The Crossover from the Noiseless to Noisy Regime

Neural Information Processing Systems

In this manuscript we consider Kernel Ridge Regression (KRR) under the Gaussian design. Exponents for the decay of the excess generalization error of KRR have been reported in various works under the assumption of power-law decay of the eigenvalues of the feature covariance. These decays were, however, provided for sizeably different setups, namely the noiseless case with constant regularization and the noisy, optimally regularized case. Intermediate settings have been left substantially uncharted. In this work, we unify and extend this line of work, providing a characterization of all regimes and excess error decay rates that can be observed in terms of the interplay of noise and regularization. In particular, we show the existence of a transition in the noisy setting from the noiseless exponents to their noisy values as the sample complexity is increased. Finally, we illustrate how this crossover can also be observed on real data sets.
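A minimal numpy sketch of the KRR estimator the abstract analyzes (the RBF kernel, bandwidth, and the tiny constant ridge are illustrative choices made here, matching the "noiseless case with constant regularization" regime, not the paper's experimental setup):

```python
import numpy as np

def krr(x_tr, y_tr, x_te, lam, bw):
    # Kernel ridge regression with a Gaussian (RBF) kernel:
    # alpha = (K + n * lam * I)^{-1} y,  f(x) = k(x, X_tr) @ alpha.
    def gram(a, b):
        return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * bw ** 2))
    n = len(x_tr)
    alpha = np.linalg.solve(gram(x_tr, x_tr) + n * lam * np.eye(n), y_tr)
    return gram(x_te, x_tr) @ alpha

x_tr = np.linspace(0.0, 1.0, 100)
x_te = np.linspace(0.0, 1.0, 200)
f = lambda x: np.sin(2 * np.pi * x)
# Noiseless labels with a small constant ridge: excess error decays fast.
pred = krr(x_tr, f(x_tr), x_te, lam=1e-8, bw=0.2)
mse = np.mean((pred - f(x_te)) ** 2)
```

Adding label noise and re-tuning `lam` with `n` is the noisy, optimally regularized regime; the paper characterizes how the error exponent crosses over between these two settings.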


Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nystrom method

Neural Information Processing Systems

The Column Subset Selection Problem (CSSP) and the Nystrom method are among the leading tools for constructing small low-rank approximations of large datasets in machine learning and scientific computing. A fundamental question in this area is: how well can a data subset of size k compete with the best rank k approximation? We develop techniques which exploit spectral properties of the data matrix to obtain improved approximation guarantees which go beyond the standard worst-case analysis. Our approach leads to significantly better bounds for datasets with known rates of singular value decay, e.g., polynomial or exponential decay. Our analysis also reveals an intriguing phenomenon: the approximation factor as a function of k may exhibit multiple peaks and valleys, which we call a multiple-descent curve. A lower bound we establish shows that this behavior is not an artifact of our analysis, but rather it is an inherent property of the CSSP and Nystrom tasks. Finally, using the example of a radial basis function (RBF) kernel, we show that both our improved bounds and the multiple-descent curve can be observed on real datasets simply by varying the RBF parameter.
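The central quantity in the abstract, the approximation factor of a size-k column subset against the best rank-k approximation, can be computed directly. The sketch below (invented helper names; a simple deterministic column choice rather than any particular CSSP algorithm) builds a matrix with exponential singular value decay and measures that factor.

```python
import numpy as np

def subset_error(A, cols):
    # Error of approximating A by projecting onto the span of the
    # selected columns (Frobenius norm).
    C = A[:, cols]
    P = C @ np.linalg.pinv(C)   # orthogonal projector onto span(C)
    return float(np.linalg.norm(A - P @ A))

def best_rank_k_error(A, k):
    # Optimal rank-k error, attained by the truncated SVD.
    s = np.linalg.svd(A, compute_uv=False)
    return float(np.sqrt(np.sum(s[k:] ** 2)))

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(60, 60)))
V, _ = np.linalg.qr(rng.normal(size=(40, 40)))
s = 2.0 ** -np.arange(40)              # exponential singular value decay
A = (U[:, :40] * s) @ V.T
k = 5
factor = subset_error(A, list(range(k))) / best_rank_k_error(A, k)
```

Since a column subset spans a rank-≤k subspace, the factor is always at least 1; plotting it as a function of k for such spectra is how the multiple-descent curve can be observed.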


Towards Sharp Minimax Risk Bounds for Operator Learning

Adcock, Ben, Maier, Gregor, Parhi, Rahul

arXiv.org Machine Learning

A new paradigm in machine learning for scientific computing is focused on designing learning algorithms and methods for continuum problems. This paradigm is referred to as operator learning and has received considerable interest in the last few years [5,7,18,20,23-25,27,30,34,36]. The basic task may be posed as learning a map between infinite-dimensional function spaces, i.e., learning an operator F: X → Y, where, for example, X and Y are real, separable Hilbert spaces. Operator learning naturally arises in many scientific problems where one wants to learn how a continuum model, often described by partial differential equations (PDEs), maps inputs, such as parameters or boundary conditions, to outputs, such as states or observables. A prototypical example to keep in mind is learning parameter-to-solution maps of parametric PDEs [1,2,11]. In contrast to more classical surrogate modeling, which typically focuses on learning finite-dimensional parameter-to-solution maps for some fixed discretization, operator learning directly aims to learn/approximate the continuum map F: X → Y itself. Thus, the inputs and outputs are functions (not vectors) and the goal is to directly design discretization-invariant methods [7,23]. From a statistical perspective, this naturally leads to a nonparametric regression problem in which both the object of interest (the operator) and the observations (a finite number of noisy samples) are infinite-dimensional.


On The Hidden Biases of Flow Matching Samplers

Lim, Soon Hoe

arXiv.org Machine Learning

The main goal of generative modeling is to use finitely many samples from a distribution to construct a sampling scheme capable of generating new samples from the same distribution. Among the families of existing generative models, flow matching (FM) [23, 24] is notable for its flexibility and simplicity. Given a target probability distribution, FM utilizes a parametric model (e.g., neural network) to learn the velocity vector field that defines a deterministic, continuous transformation (a normalizing flow) and transports a source probability distribution (e.g., standard Gaussian) to the target distribution. While the population formulation of FM often exhibits appealing structure--sometimes even admitting gradient-field velocities--practical models are trained on finite datasets and therefore optimize empirical objectives. This empirical setting substantially alters the geometry of the learned velocity field and the energetic properties of the resulting sampler. These notes aim to clarify how empirical FM behaves, how it differs from its population counterpart, and what implicit biases arise in the learned sampling dynamics. From now on, we assume that all the probability distributions/measures (except the empirical distribution) of the random variables considered are absolutely continuous (i.e., they have densities with respect to the Lebesgue measure), in which case we shall abuse the notation and use the same symbol to denote both the distribution and the density. To maintain the flow of the main text, we defer discussion of related work and all proofs of the theoretical results to the appendix.
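The velocity-field regression described above has a particularly simple form for the linear interpolation path used in standard conditional flow matching. The sketch below constructs the training inputs and targets (`fm_pair` is an invented name; this is the textbook linear-path construction, not necessarily the exact parameterization of [23, 24]).

```python
import numpy as np

def fm_pair(x0, x1, t):
    # Linear interpolation path x_t = (1 - t) x0 + t x1.  The conditional
    # flow-matching regression target for the velocity at (x_t, t) is the
    # constant displacement x1 - x0; the model v_theta is then trained to
    # minimize E || v_theta(x_t, t) - (x1 - x0) ||^2.
    xt = (1.0 - t)[:, None] * x0 + t[:, None] * x1
    return xt, x1 - x0

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 2))           # source samples (standard Gaussian)
x1 = rng.normal(size=(8, 2)) + 3.0     # samples standing in for the target
t = rng.uniform(size=8)
xt, v_target = fm_pair(x0, x1, t)
```

With finitely many target samples `x1`, the regression averages these conditional targets over the empirical distribution, which is exactly where the empirical-vs-population gap discussed in the notes enters.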


Radial Attention: $O(n\log n)$ Sparse Attention with Energy Decay for Long Video Generation

Li, Xingyang, Li, Muyang, Cai, Tianle, Xi, Haocheng, Yang, Shuo, Lin, Yujun, Zhang, Lvmin, Yang, Songlin, Hu, Jinbo, Peng, Kelly, Agrawala, Maneesh, Stoica, Ion, Keutzer, Kurt, Han, Song

arXiv.org Artificial Intelligence

Recent advances in diffusion models have enabled high-quality video generation, but the additional temporal dimension significantly increases computational costs, making training and inference on long videos prohibitively expensive. In this paper, we identify a phenomenon we term Spatiotemporal Energy Decay in video diffusion models: post-softmax attention scores diminish as the spatial and temporal distance between tokens increases, akin to the physical decay of signals or waves over space and time in nature. Motivated by this, we propose Radial Attention, a scalable sparse attention mechanism with $\mathcal{O}(n \log n)$ complexity that translates energy decay into exponentially decaying compute density, which is significantly more efficient than standard $\mathcal{O}(n^2)$ dense attention and more expressive than linear attention. Specifically, Radial Attention employs a simple, static attention mask where each token attends to spatially nearby tokens, with the attention window size shrinking with temporal distance. Moreover, it allows pre-trained video diffusion models to extend their generation length with efficient LoRA-based fine-tuning. Extensive experiments show that Radial Attention maintains video quality across Wan2.1-14B, HunyuanVideo, and Mochi 1, achieving up to a 1.9$\times$ speedup over the original dense attention. With minimal tuning, it enables video generation up to 4$\times$ longer while reducing training costs by up to 4.4$\times$ compared to direct fine-tuning and accelerating inference by up to 3.7$\times$ compared to dense attention inference. Code is released at \href{https://github.com/mit-han-lab/radial-attention}{https://github.com/mit-han-lab/radial-attention}.
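The "static mask with a window shrinking in temporal distance" idea can be sketched in a few lines. This is a minimal invented illustration of the principle (names `radial_mask` and `base_window` are mine, and the paper's actual mask layout differs in details), not the released implementation.

```python
import numpy as np

def radial_mask(n_frames, tokens_per_frame, base_window):
    # Static sparse attention mask: token j is visible to token i when
    # their spatial distance fits in a window that halves with each unit
    # of temporal (frame) distance, so compute density decays
    # exponentially with distance instead of staying dense.
    n = n_frames * tokens_per_frame
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        fi, si = divmod(i, tokens_per_frame)
        for j in range(n):
            fj, sj = divmod(j, tokens_per_frame)
            window = base_window >> abs(fi - fj)   # halve per frame gap
            mask[i, j] = abs(si - sj) <= window
    return mask

m = radial_mask(n_frames=8, tokens_per_frame=16, base_window=8)
density = m.mean()   # fraction of token pairs actually computed
```

The exponentially shrinking window is what drives the sub-quadratic cost: summing a geometrically decaying number of attended tokens per frame gap yields far fewer entries than the dense $n^2$ mask.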