Temporal Score Rescaling for Temperature Sampling in Diffusion and Flow Models
Yanbo Xu, Yu Wu, Sungjae Park, Zhizhuo Zhou, Shubham Tulsiani
arXiv.org Artificial Intelligence
Stanford University

Figure 1: Temporal Score Rescaling (TSR) provides a mechanism to steer the sampling diversity of diffusion and flow models at inference. Top-left: Probability density evolution when sampling a 1D Gaussian mixture with DDPM, and the effects of TSR, which can control the sampling process to yield sharper or flatter distributions. Top-right, bottom: TSR can be applied to any pre-trained diffusion or flow model, improving performance across diverse domains such as pose prediction, depth estimation, and image generation.

We present a mechanism to steer the sampling diversity of denoising diffusion and flow matching models, allowing users to sample from a sharper or broader distribution than the training distribution. We build on the observation that these models leverage (learned) score functions of noisy data distributions for sampling, and show that rescaling these scores allows one to effectively control a 'local' sampling temperature. Notably, this approach requires no finetuning or alterations to the training strategy: it can be applied to any off-the-shelf model and is compatible with both deterministic and stochastic samplers. We first validate our framework on toy 2D data, and then demonstrate its application to diffusion models trained across five disparate tasks: image generation, pose estimation, depth prediction, robot manipulation, and protein design. We find that across these tasks, our approach allows sampling from sharper (or flatter) distributions, yielding performance gains; e.g., depth prediction models benefit from sampling more likely depth estimates, whereas image generation models perform better when sampling a slightly flatter distribution. Score-based generative models, such as denoising diffusion (Ho et al., 2020) and flow matching (Lipman et al., 2023; Liu et al., 2023b), have become ubiquitous across AI applications.
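The core idea (rescaling the score of the noisy data distribution to sharpen or flatten what gets sampled) can be illustrated on the abstract's own 1D Gaussian-mixture example. The sketch below is an assumption-laden stand-in, not the paper's TSR algorithm: it uses the analytic score of a two-mode mixture and plain annealed Langevin sampling, multiplying the score by a hypothetical factor `k` (k > 1 sharpens toward the modes, k < 1 flattens).

```python
import numpy as np

# Toy setup (our choice, not from the paper): equal-weight mixture of
# N(-2, 0.5^2) and N(+2, 0.5^2).
MEANS = np.array([-2.0, 2.0])
STD = 0.5
WEIGHTS = np.array([0.5, 0.5])

def noisy_score(x, sigma):
    """Analytic score grad_x log p_sigma(x) of the mixture convolved
    with N(0, sigma^2). Both components share a variance, so the
    normalizing constants cancel in the responsibilities."""
    var = STD**2 + sigma**2
    diffs = x[:, None] - MEANS[None, :]            # shape (n, 2)
    logc = -0.5 * diffs**2 / var
    logc -= logc.max(axis=1, keepdims=True)        # underflow-safe
    comp = WEIGHTS * np.exp(logc)
    resp = comp / comp.sum(axis=1, keepdims=True)  # responsibilities
    return -(resp * diffs / var).sum(axis=1)

def sample(k=1.0, n=2000, seed=0):
    """Annealed Langevin dynamics with the score rescaled by k.
    Scaling the score by k makes each noise level's stationary
    distribution proportional to p_sigma(x)^k, i.e. temperature 1/k."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 4.0, size=n)
    for sigma in np.geomspace(3.0, 0.05, 30):      # decreasing noise
        step = 0.1 * sigma**2
        for _ in range(20):
            x = (x + step * k * noisy_score(x, sigma)
                 + np.sqrt(2.0 * step) * rng.normal(size=n))
    return x

# With k=2 the samples cluster more tightly around the modes at +/-2
# than with k=1, while both modes remain populated.
sharp = sample(k=2.0)
base = sample(k=1.0)
```

Note this sketch rescales the score by a single constant at every noise level; the paper's "temporal" rescaling instead varies the factor over the sampling trajectory, which is what makes the temperature control local.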
Oct-2-2025