Simultaneous Approximation of the Score Function and Its Derivatives by Deep Neural Networks
Konstantin Yakovlev, Nikita Puchkin
Score estimation, the task of learning the gradient of the log density, has become a crucial part of generative diffusion models [Song and Ermon, 2019, Song et al., 2021]. These models achieve state-of-the-art performance in a wide range of domains, including image, audio, and video synthesis [Dhariwal and Nichol, 2021, Kong et al., 2021, Ho et al., 2022]. To sample from the desired distribution, one needs an accurate score function estimator along the Ornstein-Uhlenbeck process. In the context of diffusion models, score estimation is done by minimizing the denoising score matching loss over a class of neural networks [Song et al., 2021, Vincent, 2011, Oko et al., 2023]. Another recipe for score estimation is implicit score matching, proposed by Hyvärinen [2005]. The corresponding objective involves not only the score function but also the trace of its Jacobian. A crucial research question is to determine the iteration complexity of distribution estimation given an inaccurate score function. The convergence theory of diffusion models has received much attention in recent years. Some works [De Bortoli, 2022, Chen et al., 2023b, Benton et al., 2024, Li and Yan, 2024] study SDE-based samplers under the assumption that the score estimator is L
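The denoising score matching objective mentioned above admits a simple Monte Carlo form: perturb each data point with Gaussian noise and regress the network's output onto the known conditional score of the perturbation kernel. Below is a minimal sketch of this loss, assuming a hypothetical linear score model `s(x) = x @ W` chosen purely for illustration (it is exact for a standard Gaussian); the function names are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoising_score_matching_loss(score_fn, x, sigma):
    """Monte Carlo estimate of the denoising score matching objective."""
    noise = rng.standard_normal(x.shape)
    x_tilde = x + sigma * noise
    # Conditional score of the Gaussian perturbation kernel:
    # grad log N(x_tilde; x, sigma^2 I) = -(x_tilde - x) / sigma^2
    #                                   = -noise / sigma
    target = -noise / sigma
    diff = score_fn(x_tilde) - target
    return np.mean(np.sum(diff**2, axis=1))

# Hypothetical linear score model: the score of N(0, I) is s(x) = -x.
W = -np.eye(2)
x = rng.standard_normal((256, 2))
loss = denoising_score_matching_loss(lambda z: z @ W, x, sigma=0.5)
```

Minimizing this loss over a neural network class recovers (up to the perturbation scale `sigma`) the score of the noised data distribution, which is what the sampler needs along the Ornstein-Uhlenbeck process.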
Dec-30-2025