Lipschitz regularity in Flow Matching and Diffusion Models: sharp sampling rates and functional inequalities

Stéphanovitch, Arthur

arXiv.org Machine Learning

Under general assumptions on the target distribution $p^\star$, we establish a sharp Lipschitz regularity theory for flow-matching vector fields and diffusion-model scores, with optimal dependence on time and dimension. As applications, we obtain Wasserstein discretization bounds for Euler-type samplers in dimension $d$: with $N$ discretization steps, the error achieves the optimal rate $\sqrt{d}/N$ up to logarithmic factors. Moreover, the constants do not deteriorate exponentially with the spatial extent of $p^\star$. We also show that the one-sided Lipschitz control yields a globally Lipschitz transport map from the standard Gaussian to $p^\star$, which implies Poincaré and log-Sobolev inequalities for a broad class of probability measures.
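
The discretization result concerns Euler-type samplers for the flow-matching ODE $\dot{x}_t = v(t, x_t)$. Below is a minimal sketch of such a sampler, not the paper's construction: for illustration it uses the closed-form Gaussian-path velocity field for a Gaussian target $N(\mu, s^2 I)$ with independent coupling, where $v(t,x) = \dot m_t + (\dot\sigma_t/\sigma_t)(x - m_t)$ with $m_t = t\mu$ and $\sigma_t^2 = (1-t)^2 + t^2 s^2$. The target parameters and step count are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's method): explicit Euler sampler for a
# flow-matching ODE dx/dt = v(t, x), integrated from t = 0 (standard
# Gaussian) to t = 1 (target). The target N(mu, s^2 I) and the
# Gaussian-path velocity field below are illustrative assumptions.

d, N, n_samples = 8, 200, 5000   # dimension, Euler steps, sample count
mu, s = np.full(d, 2.0), 0.5     # hypothetical target mean and scale

def velocity(t, x):
    # Affine velocity field of the Gaussian path N(m_t, sigma_t^2 I):
    #   v(t, x) = m'_t + (sigma'_t / sigma_t) * (x - m_t)
    m, m_dot = t * mu, mu
    sigma = np.sqrt((1.0 - t) ** 2 + (t * s) ** 2)
    sigma_dot = (-(1.0 - t) + t * s ** 2) / sigma
    return m_dot + (sigma_dot / sigma) * (x - m)

rng = np.random.default_rng(0)
x = rng.standard_normal((n_samples, d))   # x_0 ~ N(0, I)
h = 1.0 / N
for k in range(N):                        # N Euler steps of size 1/N
    x = x + h * velocity(k * h, x)

print("empirical mean ~", x.mean(axis=0)[:3], "(target 2.0)")
print("empirical std  ~", x.std(axis=0)[:3], "(target 0.5)")
```

In this toy setting the Euler error can be measured directly against the known target; the paper's contribution is that, under its Lipschitz regularity theory, the Wasserstein error of such schemes scales as $\sqrt{d}/N$ up to logarithmic factors for general targets.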


Homogenized Transformers

Koubbi, Hugo, Geshkovski, Borjan, Rigollet, Philippe

arXiv.org Machine Learning

We study a random model of deep multi-head self-attention in which the weights are resampled independently across layers and heads, as at initialization of training. Viewing depth as a time variable, the residual stream defines a discrete-time interacting particle system on the unit sphere. We prove that, under suitable joint scalings of the depth, the residual step size, and the number of heads, this dynamics admits a nontrivial homogenized limit. Depending on the scaling, the limit is either deterministic or stochastic with common noise; in the mean-field regime, the latter leads to a stochastic nonlinear Fokker--Planck equation for the conditional law of a representative token. In the Gaussian setting, the limiting drift vanishes, making the homogenized dynamics explicit enough to study representation collapse. This yields quantitative trade-offs between dimension, context length, and temperature, and identifies regimes in which clustering can be mitigated.
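
A minimal numpy simulation of the particle system described above can make the setup concrete. The scalings, temperature, and clustering statistic below are illustrative assumptions, not the paper's precise regime: tokens live on the unit sphere, each layer resamples the query/key/value matrices independently per head, applies softmax self-attention, takes a residual step of size $\tau/L$, and projects back to the sphere.

```python
import numpy as np

# Illustrative sketch of the random residual-stream dynamics (assumed
# scalings, not the paper's exact regime). Tokens are particles on the
# unit sphere; Q, K, V are resampled iid per layer and head. We track
# mean pairwise cosine similarity as a collapse/clustering statistic.

rng = np.random.default_rng(1)
n, d, L, H = 32, 16, 400, 8      # tokens, dimension, depth, heads
beta, tau = 1.0, 4.0             # temperature, total time horizon

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

x = rng.standard_normal((n, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)   # tokens on S^{d-1}

for layer in range(L):
    drift = np.zeros_like(x)
    for _ in range(H):                           # weights resampled per layer and head
        Q = rng.standard_normal((d, d)) / np.sqrt(d)
        K = rng.standard_normal((d, d)) / np.sqrt(d)
        V = rng.standard_normal((d, d)) / np.sqrt(d)
        A = softmax(beta * (x @ Q.T) @ (x @ K.T).T, axis=1)  # attention weights
        drift += A @ (x @ V.T) / H               # average over heads
    x = x + (tau / L) * drift                    # residual step of size tau/L
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # project back to the sphere

cos = x @ x.T
print("mean pairwise cosine similarity:", cos[np.triu_indices(n, k=1)].mean())
```

Sweeping $n$, $d$, $\beta$, and the depth/head scalings in such a simulation is one way to probe the dimension-context-temperature trade-offs for clustering that the paper quantifies in its homogenized limit.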


Supplement to "Rates of Estimation of Optimal Transport Maps using Plug-in Estimators via Barycentric Projections"

Neural Information Processing Systems

Excerpt from the supplement (inline mathematics lost in extraction). The recoverable content: suitable function classes (e.g., Haar wavelets, Daubechies wavelets) are noted to be readily available, and the main theorem of the subsection (Theorem A.1) is presented, with discussion of its rates deferred to Remark 2.7. By Proposition 1.1 there exists an optimal transport map, and based on (B.2) a natural plug-in estimator of ρ is constructed under the assumptions of Theorem 2.2. Section B.2 develops an optimal-transport-based Hilbert-Schmidt independence criterion for nonparametric independence testing, with guarantees for the resulting test given in Proposition B.2, both for fixed sampling distributions and beyond. The remaining appendices, organized starting from Appendix C.1, are devoted to the proofs of the main results, including Theorems 2.1 and 2.2.
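
The supplement's central object is a plug-in estimator of an optimal transport map via barycentric projection of an estimated coupling. Below is a minimal self-contained sketch of that idea (an assumption on my part: it uses an entropically regularized Sinkhorn coupling as the plug-in plan, which is one common variant; the supplement's estimators may use a different plan). Given samples $X_1,\dots,X_n \sim P$ and $Y_1,\dots,Y_n \sim Q$, the map is estimated at $X_i$ by $\hat T(X_i) = \sum_j \pi_{ij} Y_j / \sum_j \pi_{ij}$.

```python
import numpy as np

# Sketch of a plug-in OT map via barycentric projection (assumption: an
# entropic Sinkhorn coupling stands in for the plug-in plan; the paper's
# estimators may differ). Source and target samples are hypothetical.

rng = np.random.default_rng(0)
n, d = 300, 2
X = rng.standard_normal((n, d))            # source sample ~ P
Y = rng.standard_normal((n, d)) * 0.5 + 2  # target sample ~ Q (hypothetical)

C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
eps = 0.1 * C.mean()                       # regularization scaled to cost size
K = np.exp(-C / eps)
a = b = np.ones(n) / n                     # uniform empirical marginals
u = np.ones(n) / n
for _ in range(500):                       # Sinkhorn fixed-point iterations
    v = b / (K.T @ u)
    u = a / (K @ v)
pi = u[:, None] * K * v[None, :]           # entropic coupling of the samples

# Barycentric projection: conditional mean of Y under pi given X_i.
T_hat = (pi @ Y) / pi.sum(axis=1, keepdims=True)
print("estimated map at first 3 source points:\n", T_hat[:3])
```

In this toy example the estimated map should push source points toward the target region around $(2, 2)$; the supplement's results quantify how fast such plug-in estimates converge to the true transport map.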