Goto




Self-Supervised Generative Adversarial Compression

Neural Information Processing Systems

Some model compression methods have been successfully applied to image classification and detection or language models, but there has been very little work compressing generative adversarial networks (GANs) performing complex tasks.
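As a deliberately simplified illustration of generator compression, the sketch below distills a fixed "teacher" linear generator into a low-rank "student" by matching their outputs on shared latent inputs. This is a generic distillation setup for intuition only, not the paper's method; all names (`W_teacher`, `A`, `B`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, out_dim, rank = 8, 16, 2

# "Teacher": a fixed random linear map, standing in for a trained GAN generator.
W_teacher = rng.normal(size=(out_dim, latent_dim))

# "Student": a low-rank factorization A @ B with far fewer parameters.
A = rng.normal(scale=0.1, size=(out_dim, rank))
B = rng.normal(scale=0.1, size=(rank, latent_dim))

def rel_error(A, B, n=256):
    """Relative output mismatch between student and teacher on fresh latents."""
    z = rng.normal(size=(latent_dim, n))
    return np.linalg.norm(A @ B @ z - W_teacher @ z) / np.linalg.norm(W_teacher @ z)

rel_init = rel_error(A, B)

lr = 0.02
for _ in range(3000):
    z = rng.normal(size=(latent_dim, 32))   # shared latent batch
    err = A @ (B @ z) - W_teacher @ z       # student-vs-teacher residual
    gA = err @ (B @ z).T / z.shape[1]       # grad of 0.5 * mean squared loss w.r.t. A
    gB = A.T @ err @ z.T / z.shape[1]       # grad w.r.t. B
    A -= lr * gA
    B -= lr * gB

rel_final = rel_error(A, B)
print(rel_init, rel_final)
```

The student converges toward the best rank-2 approximation of the teacher, so the residual error that remains is the price paid for the smaller parameter count.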



Return of Unconditional Generation: A Self-supervised Representation Generation Method

Neural Information Processing Systems

Unconditional generation--the problem of modeling a data distribution without relying on human-annotated labels--is a long-standing and fundamental challenge in generative models, opening up the potential of learning from large-scale unlabeled data. In the literature, the generation quality of an unconditional method has been much worse than that of its conditional counterpart. This gap can be attributed to the lack of semantic information provided by labels. In this work, we show that one can close this gap by generating semantic representations in the representation space produced by a self-supervised encoder. These representations can then be used to condition the image generator.
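A toy linear-algebra sketch of the two-stage idea, under heavy simplifying assumptions: a fixed random projection stands in for the self-supervised encoder, a Gaussian fit stands in for the representation generator, and a least-squares decoder stands in for the conditioned image generator. None of these are the paper's actual components; the point is only the pipeline shape (generate a representation first, then condition generation on it).

```python
import numpy as np

rng = np.random.default_rng(0)
data_dim, rep_dim = 32, 4

# Toy "dataset": samples lying on an unknown low-dimensional subspace.
basis = rng.normal(size=(data_dim, rep_dim))
X = basis @ rng.normal(size=(rep_dim, 500))

# Frozen "self-supervised encoder" (stand-in): a fixed linear projection.
E = rng.normal(size=(rep_dim, data_dim)) / np.sqrt(data_dim)
R = E @ X                                   # representations of the training set

# Stage 1 -- unconditional *representation* generation:
# fit a simple Gaussian model to the representation distribution.
mu = R.mean(axis=1, keepdims=True)
cov = np.cov(R)
L = np.linalg.cholesky(cov + 1e-6 * np.eye(rep_dim))
r_new = mu + L @ rng.normal(size=(rep_dim, 100))   # sampled representations

# Stage 2 -- a "generator" conditioned on the sampled representation:
# here, a least-squares decoder from representation back to data space.
D, *_ = np.linalg.lstsq(R.T, X.T, rcond=None)
x_new = D.T @ r_new

# Generated samples should lie close to the data subspace.
proj = basis @ np.linalg.pinv(basis)        # projector onto the data subspace
residual = np.linalg.norm(x_new - proj @ x_new) / np.linalg.norm(x_new)
print(residual)
```

Because the representation model only has to capture a low-dimensional distribution, the "hard" unconditional modeling happens in a much smaller space, mirroring the paper's motivation.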



A Derivations of Variance-Controlled Diffusion

Neural Information Processing Systems

A.1 Proof of Proposition 4.1

Proposition 4.1. For any bounded measurable function τ(t): [0, T] → R, the following reverse SDEs share the same marginal distributions:

    dx_t = [ f(t) x_t − ((1 + τ(t)²)/2) g(t)² ∇_x log p_t(x_t) ] dt + τ(t) g(t) dw̄_t.

Eq. (20) is a reverse-time SDE running from T to 0; hence the two additional minus signs in Eq. (21).

A.2 Two Reparameterizations and Exact Solution under the Exponential Integrator

In this subsection we give the exact solution of the SDE under both the data-prediction and the noise-prediction reparameterizations. The noise term under data prediction has smaller variance than under noise prediction, which motivates adopting the data-prediction reparameterization for the SDE sampler. The variance computation uses the Itô isometry, a crucial property of the Itô integral. As in Proposition 4.2, Eq. (37) can be solved analytically; the result is stated in the following proposition. Following the derivation in Proposition 4.2, the mean of the Itô-integral term is then computed in closed form.

A.2.4 Comparison between Data and Noise Reparameterizations

Table 1 reports an ablation over the two reparameterizations: at the same magnitude of stochasticity, the proposed SA-Solver under the data-prediction reparameterization converges faster, which yields better FID at the same NFEs. In this subsection we provide a theoretical account of this phenomenon.
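For reference, the Itô isometry invoked in the variance computation is the standard identity, stated here for a deterministic integrand σ(s), which is the relevant case for exponential-integrator solutions:

```latex
\mathbb{E}\!\left[\left(\int_0^t \sigma(s)\,\mathrm{d}W_s\right)^{2}\right]
  = \int_0^t \sigma(s)^{2}\,\mathrm{d}s .
```

Applying it to the stochastic-integral term of each reparameterization reduces the variance comparison to comparing two ordinary integrals of squared coefficients.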



Improving the Training of Rectified Flows

Neural Information Processing Systems

One approach for tackling this problem is rectified flows, which iteratively learn smooth ODE paths that are less susceptible to truncation error. However, rectified flows still require a relatively large number of function evaluations (NFEs). In this work, we propose improved techniques for training rectified flows, allowing them to compete with knowledge distillation methods even in the low NFE setting.
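The rectified-flow training objective itself is simple to state: draw independent noise/data pairs (x0, x1), form the straight-line interpolant x_t = (1 − t)·x0 + t·x1, and regress a velocity model toward the constant displacement x1 − x0; sampling then integrates the learned ODE. A minimal 1-D sketch follows, using toy Gaussian "data" and a hypothetical linear-in-features model (not the paper's setup or architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Rectified-flow training pairs: noise x0 and data x1, coupled independently.
x0 = rng.normal(0.0, 1.0, n)        # source: standard Gaussian
x1 = rng.normal(2.0, 1.0, n)        # toy "data": Gaussian centered at 2
t = rng.uniform(0.0, 1.0, n)

xt = (1 - t) * x0 + t * x1          # straight-line interpolant
target = x1 - x0                    # constant velocity along each line

# Linear-in-features velocity model v(x, t) = w . [x, t, x*t, 1],
# fit by least squares on the rectified-flow regression objective.
F = np.stack([xt, t, xt * t, np.ones(n)], axis=1)
w, *_ = np.linalg.lstsq(F, target, rcond=None)

def v(x, s):
    return w[0] * x + w[1] * s + w[2] * x * s + w[3]

# Sampling: Euler-integrate the learned ODE from fresh noise.
x = rng.normal(0.0, 1.0, 5000)
steps = 100
for k in range(steps):
    s = k / steps
    x = x + v(x, s) / steps

print(x.mean())   # should land near the data mean of 2
```

Because the learned paths are (approximately) straight, far fewer Euler steps suffice than for a curved diffusion trajectory, which is exactly the low-NFE regime the abstract targets.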