
Lipschitzness


A Appendix

Neural Information Processing Systems

In the following subsections, we provide theoretical derivations, beginning with a formal description of the consistency property of score matching. Assumption A.4 (Compactness): the parameter space is compact. Assumption A.5 (Identifiability): there exists a set of parameters [...]. Assumptions A.3 through A.7 are the conditions that ensure the uniform convergence property. In Lemma A.9 and Proposition A.10, we examine sufficient conditions and show that those stated in Lemma A.9 can be satisfied. Figure A1 illustrates the relationship between the variables discussed in Proposition 4.1, Lemma A.12, and Lemma A.13; the properties of KL divergence and Fisher divergence presented in its last two rows are derived in Lemmas A.12 and A.13, for which we provide formal derivations. Finally, we elaborate on the experimental setups and provide the detailed configurations for the experiments presented in Section 5 of the main manuscript.
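The excerpt above refers to the consistency of score matching and to the Fisher divergence. As a hedged illustration (a toy setup of my own, not the paper's derivation), the sketch below estimates the Fisher divergence between a data distribution N(0, 1) and a model N(theta, 1); since the score of N(mu, 1) at x is -(x - mu), the divergence reduces to theta^2 and vanishes exactly at the identifiable optimum.

```python
import numpy as np

def fisher_divergence(theta, samples):
    """Monte Carlo estimate of E_p[(score_p(x) - score_q_theta(x))^2]."""
    score_p = -samples            # score of the data distribution N(0, 1)
    score_q = -(samples - theta)  # score of the model N(theta, 1)
    return np.mean((score_p - score_q) ** 2)

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)

# The difference of scores is the constant -theta, so the estimate is exactly
# theta^2: zero at the true parameter, positive everywhere else.
print(fisher_divergence(0.0, x))  # 0.0
print(fisher_divergence(1.0, x))  # 1.0
```

Consistency of score matching is the statement that, under assumptions like the compactness and identifiability cited above, the minimizer of this divergence converges to the true parameter.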



Diffusion Models are Certifiably Robust Classifiers

Neural Information Processing Systems

Generative learning, recognized for its effective modeling of data distributions, offers inherent advantages in handling out-of-distribution instances, especially for enhancing robustness to adversarial attacks. Among these, diffusion classifiers, utilizing powerful diffusion models, have demonstrated superior empirical robustness. However, a comprehensive theoretical understanding of their robustness is still lacking, raising concerns about their vulnerability to stronger future attacks. In this study, we prove that diffusion classifiers possess $O(1)$ Lipschitzness, and establish their certified robustness, demonstrating their inherent resilience. To achieve non-constant Lipschitzness, thereby obtaining much tighter certified robustness, we generalize diffusion classifiers to classify Gaussian-corrupted data. This involves deriving the evidence lower bounds (ELBOs) for these distributions, approximating the likelihood using the ELBO, and calculating classification probabilities via Bayes' theorem. Experimental results show the superior certified robustness of these Noised Diffusion Classifiers (NDCs). Notably, we achieve over 80\% and 70\% certified robustness on CIFAR-10 under adversarial perturbations with \(\ell_2\) norms less than 0.25 and 0.5, respectively, using a single off-the-shelf diffusion model without any additional data.
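The abstract's decision rule, approximating each class-conditional log-likelihood by an ELBO and applying Bayes' theorem, can be sketched as follows. This is a hedged toy illustration, not the paper's diffusion implementation: `classify_from_elbos` is a hypothetical helper, and the per-class "ELBOs" here come from unit-variance Gaussians (whose ELBO equals the exact log-likelihood), standing in for the expensive diffusion ELBO estimates.

```python
import numpy as np

def classify_from_elbos(elbos):
    """p(y | x) = softmax over per-class ELBO values, assuming a uniform prior.

    Bayes' theorem gives p(y|x) ∝ p(x|y) p(y); with a uniform prior and
    log p(x|y) ≈ ELBO_y(x), this is a softmax over the ELBOs.
    """
    elbos = np.asarray(elbos, dtype=float)
    z = elbos - elbos.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Toy stand-in: three classes, class-conditional densities N(mean, 1).
means = np.array([-2.0, 0.0, 2.0])  # hypothetical class means
x = 1.7
elbos = -0.5 * (x - means) ** 2     # exact log-likelihood up to a constant
probs = classify_from_elbos(elbos)
print(probs.argmax())  # picks the class whose mean is nearest to x -> 2
```

The certified-robustness analysis then bounds how much `probs` can change under an \(\ell_2\)-bounded perturbation of the input, which is where the Lipschitzness of the classifier enters.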


Pathway to $O(\sqrt{d})$ Complexity bound under Wasserstein metric of flow-based models

Meng, Xiangjun, Wang, Zhongjian

arXiv.org Artificial Intelligence

We provide attainable analytical tools to estimate the error of flow-based generative models under the Wasserstein metric and to establish the optimal sampling iteration complexity bound with respect to dimension as $O(\sqrt{d})$. We show that this error can be explicitly controlled by two parts: the Lipschitzness of the push-forward maps of the backward flow, which scales independently of the dimension, and a local discretization error, which scales as $O(\sqrt{d})$ in the dimension. The former is related to the existence of Lipschitz changes of variables induced by the (heat) flow; the latter depends on the regularity of the score function in both the spatial and temporal directions. These assumptions are valid for the flow-based generative models associated with the Föllmer process and the $1$-rectified flow under a Gaussian tail assumption. As a consequence, we show that the sampling iteration complexity grows linearly with the square root of the trace of the covariance operator, which is related to the invariant distribution of the forward process.
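The "sampling iteration complexity" in question is the number of discretization steps needed to integrate the backward flow ODE accurately. As a hedged toy example (my own degenerate setup, not the paper's construction), the sketch below Euler-discretizes the 1-rectified-flow ODE transporting N(0, I_d) to a point mass at `target`; for this target the velocity field has the closed form v(x, t) = (target - x) / (1 - t).

```python
import numpy as np

def euler_sample(x0, target, n_steps):
    """Integrate dx/dt = (target - x) / (1 - t) from t=0 to t=1 with Euler steps."""
    x = x0.copy()
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt                      # current time, strictly below 1
        v = (target - x) / (1.0 - t)    # closed-form rectified-flow velocity
        x = x + dt * v                  # one explicit Euler step
    return x

d = 16
rng = np.random.default_rng(0)
x0 = rng.standard_normal(d)   # sample from the source distribution N(0, I_d)
target = np.ones(d)

# The update contracts the error by (n-k-1)/(n-k) at step k, so the product
# telescopes to zero: in this linear toy case Euler lands on the target.
print(np.linalg.norm(euler_sample(x0, target, 100) - target))
```

In the general setting the velocity must be learned and the target is a full distribution, and the paper's point is that controlling the per-step local error in Wasserstein distance requires the step count to grow only like $\sqrt{d}$.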


How Does Batch Normalization Help Optimization?

Neural Information Processing Systems

Batch Normalization (BatchNorm) is a widely adopted technique that enables faster and more stable training of deep neural networks (DNNs). Despite its pervasiveness, the exact reasons for BatchNorm's effectiveness are still poorly understood.
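For reference, the BatchNorm operation the abstract discusses is simple to state: normalize each feature over the batch, then apply a learned affine transform. A minimal training-mode sketch (no running statistics, which a full implementation would also track):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """BatchNorm forward pass in training mode for a (batch, features) array.

    gamma and beta are the learnable per-feature scale and shift; eps guards
    against division by zero for near-constant features.
    """
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalized activations
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 8))
y = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))

# With identity affine parameters, each feature column now has ~zero mean
# and ~unit standard deviation.
print(y.mean(axis=0).round(6))
print(y.std(axis=0).round(3))
```

The paper's question is why this normalization helps optimization; the abstract notes that the commonly cited explanations remain poorly supported.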





In-Context Learning with Transformers: Softmax Attention Adapts to Function Lipschitzness

Neural Information Processing Systems

We explore the role of softmax attention in an ICL setting where each context encodes a regression task. We show that an attention unit learns a window that it uses to implement a nearest-neighbors predictor adapted to the landscape of the pretraining tasks.
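The mechanism described above can be illustrated concretely: a softmax attention unit whose keys are context inputs and whose values are context labels is a Nadaraya-Watson kernel regressor, and as the learned inverse temperature grows, its window narrows toward a nearest-neighbors predictor. This is a hedged one-dimensional sketch of that idea, not the paper's trained model; `scale` stands in for the learned window width.

```python
import numpy as np

def attention_predict(query, xs, ys, scale):
    """One softmax attention unit over context pairs (xs, ys), evaluated at query."""
    logits = -scale * (xs - query) ** 2   # similarity via negative squared distance
    w = np.exp(logits - logits.max())     # stabilized softmax weights
    w /= w.sum()
    return np.dot(w, ys)                  # attention-weighted average of labels

xs = np.array([0.0, 1.0, 2.0, 3.0])   # context inputs
ys = np.array([0.0, 1.0, 4.0, 9.0])   # toy regression targets

# Small scale: a wide window averages several neighbors (kernel smoothing).
print(attention_predict(1.1, xs, ys, scale=1.0))
# Large scale: the window collapses onto the nearest context point, x = 1.0.
print(attention_predict(1.1, xs, ys, scale=100.0))  # ~1.0
```

The paper's claim is that pretraining selects this window width to match the Lipschitzness of the task distribution: rough function classes push the learned `scale` up (local, nearest-neighbor-like predictions), smooth ones push it down.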

