A Appendix
In the following subsections, we provide theoretical derivations. We first give a formal description of the consistency property of score matching, under Assumption A.4 (Compactness: the parameter space is compact) and Assumption A.5 (Identifiability: there exists a set of parameters ...). Assumptions A.3 through A.7 are the conditions that lead to the uniform convergence property. In Lemma A.9 and Proposition A.10, we examine sufficient conditions, and we show that the sufficient conditions stated in Lemma A.9 can be satisfied.

Figure A1: An illustration of the relationship between the variables discussed in Proposition 4.1, Lemma A.12, and Lemma A.13. The properties of KL divergence and Fisher divergence presented in the last two rows are derived in Lemmas A.12 and A.13.

We then provide formal derivations for Proposition 4.1, Lemma A.12, and Lemma A.13, based on Remark A.14.

D In this section, we elaborate on the experimental setups and provide the detailed configurations for the experiments presented in Section 5 of the main manuscript.
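The consistency property of score matching referred to above can be illustrated in one dimension. The sketch below is not from the appendix: the data, the Gaussian model, and the function names are illustrative. It evaluates the empirical Hyvärinen score-matching objective for a Gaussian model N(m, v) and checks that it is minimized at the sample mean and variance, i.e., the score-matching estimate agrees with the maximum-likelihood estimate as the sample grows.

```python
import random

def sm_objective(xs, m, v):
    """Empirical Hyvarinen score-matching objective for a Gaussian model
    N(m, v): J = mean( 0.5 * psi(x)**2 + psi'(x) ),
    with model score psi(x) = -(x - m) / v, so psi'(x) = -1 / v."""
    return sum(0.5 * ((x - m) / v) ** 2 - 1.0 / v for x in xs) / len(xs)

random.seed(0)
xs = [random.gauss(2.0, 1.5) for _ in range(20000)]

# Setting the gradients of J to zero gives the closed-form minimizers:
# m* = sample mean, v* = sample variance -- the same as the ML estimate.
m_hat = sum(xs) / len(xs)
v_hat = sum((x - m_hat) ** 2 for x in xs) / len(xs)
```

Perturbing either parameter away from (m_hat, v_hat) strictly increases `sm_objective`, which is the finite-sample face of the consistency argument.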
- Asia > Taiwan (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- North America > United States > California > Santa Clara County > Santa Clara (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.34)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.34)
Diffusion Models are Certifiably Robust Classifiers
Generative learning, recognized for its effective modeling of data distributions, offers inherent advantages in handling out-of-distribution instances, especially for enhancing robustness to adversarial attacks. Among these approaches, diffusion classifiers, which build on powerful diffusion models, have demonstrated superior empirical robustness. However, a comprehensive theoretical understanding of their robustness is still lacking, raising concerns about their vulnerability to stronger future attacks. In this study, we prove that diffusion classifiers possess $O(1)$ Lipschitzness and establish their certified robustness, demonstrating their inherent resilience. To achieve non-constant Lipschitzness, and thereby obtain much tighter certified robustness, we generalize diffusion classifiers to classify Gaussian-corrupted data. This involves deriving the evidence lower bounds (ELBOs) for these distributions, approximating the likelihood using the ELBO, and computing classification probabilities via Bayes' theorem. Experimental results show the superior certified robustness of these Noised Diffusion Classifiers (NDCs). Notably, we achieve over 80\% and 70\% certified robustness on CIFAR-10 under adversarial perturbations with \(\ell_2\) norms less than 0.25 and 0.5, respectively, using a single off-the-shelf diffusion model without any additional data.
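The classification step the abstract describes, approximating the per-class likelihood by the ELBO and applying Bayes' theorem, can be sketched as follows. This is not the paper's implementation: the function names and the toy ELBO values are illustrative, and the margin / (2L) certificate shown is the generic consequence of L-Lipschitz class probabilities rather than the paper's exact certification procedure.

```python
import math

def posterior_from_elbo(elbos, prior=None):
    """Bayes' theorem with log p(x | y) approximated by the per-class ELBO:
    p(y | x) proportional to exp(ELBO_y(x)) * p(y)."""
    if prior is None:
        prior = [1.0 / len(elbos)] * len(elbos)
    logits = [e + math.log(p) for e, p in zip(elbos, prior)]
    m = max(logits)  # log-sum-exp shift for numerical stability
    z = sum(math.exp(l - m) for l in logits)
    return [math.exp(l - m) / z for l in logits]

def certified_radius(probs, lipschitz):
    """If each class probability is L-Lipschitz in the input (l2 norm),
    a top-two margin of m keeps the prediction fixed within radius m / (2L)."""
    srt = sorted(probs, reverse=True)
    return (srt[0] - srt[1]) / (2.0 * lipschitz)

# Toy stand-in: ELBO values a diffusion model might assign x under each class.
elbos = [-3.2, -5.1, -7.8]
probs = posterior_from_elbo(elbos)
radius = certified_radius(probs, lipschitz=1.0)
```

The certificate is tighter when the Lipschitz constant is smaller, which is why the paper's move from $O(1)$ to non-constant Lipschitzness matters for certified radii.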
Pathway to $O(\sqrt{d})$ Complexity bound under Wasserstein metric of flow-based models
Meng, Xiangjun, Wang, Zhongjian
We provide attainable analytical tools to estimate the error of flow-based generative models under the Wasserstein metric and to establish the optimal sampling iteration complexity bound with respect to dimension as $O(\sqrt{d})$. We show this error can be explicitly controlled by two parts: the Lipschitzness of the push-forward maps of the backward flow, which scales independently of the dimension; and a local discretization error, which scales as $O(\sqrt{d})$ in the dimension. The former is related to the existence of Lipschitz changes of variables induced by the (heat) flow. The latter depends on the regularity of the score function in both spatial and temporal directions. These assumptions are valid in the flow-based generative model associated with the Föllmer process and $1$-rectified flow under the Gaussian tail assumption. As a consequence, we show that the sampling iteration complexity grows linearly with the square root of the trace of the covariance operator, which is related to the invariant distribution of the forward process.
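The shape of the stated complexity bound can be sketched numerically. The constants `c` and `lip` below are illustrative placeholders (the paper's dimension-free Lipschitz constant and absolute constant are not given here); the point is only the claimed scaling: the iteration count for a target Wasserstein error grows linearly with the square root of the trace of the covariance, hence as $\sqrt{d}$ for an isotropic target.

```python
import math

def iteration_complexity(trace_cov, eps, lip=1.0, c=1.0):
    """Sketch of the stated bound: with a dimension-free Lipschitz constant
    (lip) for the backward-flow push-forward maps and a per-step local
    discretization error of order sqrt(trace_cov), the number of sampling
    iterations needed for Wasserstein error eps scales as sqrt(tr Sigma)/eps."""
    return math.ceil(c * lip * math.sqrt(trace_cov) / eps)

# Isotropic Gaussian-like target in dimension d: tr Sigma = d,
# so quadrupling d doubles the iteration count.
n16 = iteration_complexity(trace_cov=16, eps=0.1)
n64 = iteration_complexity(trace_cov=64, eps=0.1)
```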
- Europe > United Kingdom > North Sea > Southern North Sea (0.05)
- Asia > Singapore (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Asia > Middle East > Israel (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- North America > Canada > Quebec > Montreal (0.04)
- Europe > France > Normandy > Seine-Maritime > Rouen (0.04)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- Europe > France > Provence-Alpes-Côte d'Azur (0.04)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- Research Report > Experimental Study (1.00)
- Research Report > Promising Solution (0.67)
- Overview (0.67)
- Information Technology (0.67)
- Education (0.45)
- North America > United States > Texas > Travis County > Austin (0.04)
- Asia > Middle East > Jordan (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Research Report > New Finding (0.92)