Reflected diffusion models adapt to low-dimensional data

Holk, Asbjørn, Strauch, Claudia, Trottner, Lukas

arXiv.org Machine Learning

While the mathematical foundations of score-based generative models are increasingly well understood for unconstrained Euclidean spaces, many practical applications involve data restricted to bounded domains. This paper provides a statistical analysis of reflected diffusion models on the hypercube $[0,1]^D$ for target distributions supported on $d$-dimensional linear subspaces. A primary challenge in this setting is the absence of Gaussian transition kernels, which play a central role in standard theory in $\mathbb{R}^D$. By employing an easily implementable infinite series expansion of the transition densities, we develop analytic tools to bound the score function and its approximation by sparse ReLU networks. For target densities with Sobolev smoothness $\alpha$, we establish a convergence rate in the $1$-Wasserstein distance of order $n^{-\frac{\alpha+1-\delta}{2\alpha+d}}$ for arbitrarily small $\delta > 0$, demonstrating that the generative algorithm fully adapts to the intrinsic dimension $d$. These results confirm that the presence of reflecting boundaries does not degrade the fundamental statistical efficiency of the diffusion paradigm, matching the almost optimal rates known for unconstrained settings.
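The reflected dynamics the abstract refers to can be illustrated with a short simulation. The sketch below is not the paper's algorithm; it is a minimal Euler-Maruyama scheme for a diffusion on the hypercube $[0,1]^D$, where the standard "folding" map (reflection at 0 and 1, 2-periodic) is applied after each step as a simple stand-in for normal boundary reflection. All function names here are illustrative.

```python
import numpy as np

def reflect_unit_interval(x):
    # Fold a point back into [0, 1] by repeated reflection at the boundary.
    # The folding map is 2-periodic: y in [0, 2) maps to y if y <= 1, else 2 - y.
    y = np.mod(x, 2.0)
    return np.where(y > 1.0, 2.0 - y, y)

def simulate_reflected_diffusion(x0, drift, sigma, n_steps=1000, dt=1e-3, rng=None):
    # Euler-Maruyama scheme for dX_t = drift(X_t) dt + sigma dW_t on [0,1]^D,
    # with reflection applied coordinate-wise after every step.
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float).copy()
    path = [x.copy()]
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)
        x = reflect_unit_interval(x + drift(x) * dt + sigma * dw)
        path.append(x.copy())
    return np.array(path)

# Example: reflected Brownian motion (zero drift) in D = 2; the path never
# leaves the unit square.
path = simulate_reflected_diffusion(x0=[0.5, 0.5], drift=lambda x: 0.0 * x, sigma=1.0)
```

Per-step folding is only a first-order approximation to true reflected dynamics, but it suffices to see why Gaussian transition kernels are unavailable here: the transition density on $[0,1]$ is an infinite image-sum of Gaussians, matching the series expansion the paper exploits.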







Quantitative Propagation of Chaos for SGD in Wide Neural Networks

Neural Information Processing Systems

Excerpt from the supplementary material: S2 Mean field approximation and propagation of chaos for mSGLD (S2.1 Presentation of the modified SGLD and its continuous counterpart); S3 Technical results; S4 Quantitative propagation of chaos (S4.1 Existence of strong solutions to the particle SDE, with the mean-field SDE started from a random variable $W$).
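The supplementary outline concerns an interacting particle system driven by SGLD-type noise and its mean-field limit. The toy sketch below (not the paper's mSGLD scheme; the potentials and step sizes are illustrative assumptions) simulates particles that feel a confining gradient, a mean-field interaction through the empirical measure, and injected Gaussian noise, which is the structure to which propagation-of-chaos arguments apply.

```python
import numpy as np

def sgld_particles(grad_V, grad_W, n_particles=200, n_steps=500,
                   step=1e-2, temp=1e-2, rng=None):
    # Illustrative interacting SGLD-type particle system in one dimension:
    #   X_{k+1}^i = X_k^i - step * (grad_V(X_k^i)
    #               + (1/N) sum_j grad_W(X_k^i - X_k^j))
    #               + sqrt(2 * temp * step) * xi_k^i
    # The interaction term is an average against the empirical measure,
    # which is exactly what the mean-field SDE replaces with a law.
    rng = np.random.default_rng(rng)
    x = rng.normal(size=n_particles)
    for _ in range(n_steps):
        interaction = grad_W(x[:, None] - x[None, :]).mean(axis=1)
        noise = rng.normal(size=n_particles)
        x = x - step * (grad_V(x) + interaction) + np.sqrt(2 * temp * step) * noise
    return x

# Toy run: quadratic confinement and a weak attractive quadratic interaction.
final = sgld_particles(grad_V=lambda x: x, grad_W=lambda d: 0.1 * d, rng=0)
```

Quantitative propagation of chaos then bounds, in terms of $N$, how far the law of a single particle is from the solution of the limiting mean-field SDE.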




Learning to steer with Brownian noise

Ankirchner, Stefan, Christensen, Sören, Kallsen, Jan, Borne, Philip Le, Perko, Stefan

arXiv.org Machine Learning

The modern theory of stochastic control typically assumes complete knowledge of the underlying system dynamics. While significant theoretical advances have been made in this area, see Øksendal and Sulem 2019; Fleming and Soner 2006, the practical application of stochastic control often faces challenges when the system model is uncertain or unknown. In recent years, reinforcement learning (RL) has emerged as a promising approach to address this issue, enabling agents to learn optimal control policies through trial-and-error interactions with the environment. However, RL's success often hinges on the availability of vast amounts of data, and the learned control policies can be difficult to interpret, especially when deep learning techniques are employed, see Sutton 2018. To bridge the gap between fully model-based and model-free approaches, research has increasingly focused on model-based reinforcement learning.
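The controlled-diffusion setting behind this abstract can be sketched in a few lines. The code below is not the paper's model or learning scheme; it is a minimal, assumed Euler-Maruyama simulation of dynamics of the form $dX_t = u(t, X_t)\,dt + \sigma\,dW_t$, where a feedback policy `u` steers the state against Brownian noise. Names and parameters are illustrative.

```python
import numpy as np

def simulate_controlled_sde(policy, x0=0.0, T=1.0, n_steps=1000, sigma=1.0, rng=None):
    # Euler-Maruyama scheme for the controlled dynamics
    #   dX_t = policy(t, X_t) dt + sigma dW_t,  X_0 = x0.
    # Returns the discretized trajectory (n_steps + 1 points).
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    x = float(x0)
    xs = [x]
    for k in range(n_steps):
        t = k * dt
        x += policy(t, x) * dt + sigma * np.sqrt(dt) * rng.normal()
        xs.append(x)
    return np.array(xs)

# Example: linear feedback steering the state toward zero, fighting the noise.
traj = simulate_controlled_sde(policy=lambda t, x: -2.0 * x, rng=1)
```

In a model-based RL setting of the kind the abstract describes, the drift (and possibly the policy itself) would be learned from observed trajectories rather than specified in advance.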