ContraGAN: Contrastive Learning for Conditional Image Generation
Conditional image generation is the task of generating diverse images using class label information. Although many conditional Generative Adversarial Networks (GANs) have shown realistic results, such methods consider only pairwise relations between the embedding of an image and the embedding of the corresponding label (data-to-class relations) as the conditioning losses. In this paper, we propose ContraGAN, which considers relations between multiple image embeddings in the same batch (data-to-data relations) as well as the data-to-class relations by using a conditional contrastive loss. The discriminator of ContraGAN discriminates the authenticity of given samples and minimizes a contrastive objective to learn the relations between training images. Simultaneously, the generator tries to generate realistic images that deceive the discriminator's authenticity judgment and attain a low contrastive loss.
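To make the conditional contrastive objective concrete, the following is a minimal NumPy sketch of a 2C-style loss: for each image embedding, the positives are the embedding of its class label plus the other same-class image embeddings in the batch, and all other batch embeddings serve as negatives. The function names, the toy embeddings, and the temperature value are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def two_c_loss(embeds, labels, class_embeds, t=0.07):
    """Sketch of a conditional contrastive (2C-style) loss.

    embeds: (N, D) image embeddings in the batch (hypothetical toy inputs).
    labels: length-N class labels.
    class_embeds: (C, D) learned class-label embeddings.
    t: temperature (value here is an assumption).
    """
    n = len(embeds)
    losses = []
    for i in range(n):
        # Data-to-class positive: similarity to the own-class embedding.
        pos = np.exp(cosine(class_embeds[labels[i]], embeds[i]) / t)
        num, den = pos, pos
        for k in range(n):
            if k == i:
                continue
            s = np.exp(cosine(embeds[k], embeds[i]) / t)
            den += s  # every other sample is in the denominator
            if labels[k] == labels[i]:
                num += s  # data-to-data positive: same-class sample
        losses.append(-np.log(num / den))
    return float(np.mean(losses))

# Toy batch: two classes clustered along different axes.
e = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
c = np.array([[1.0, 0.0], [0.0, 1.0]])
aligned = two_c_loss(e, [0, 0, 1, 1], c)
mismatched = two_c_loss(e, [1, 1, 0, 0], c)
```

Because same-class embeddings and the class embedding both sit in the numerator, the loss is low when embeddings of a class cluster together and match their label embedding, which is the data-to-data plus data-to-class behavior the abstract describes.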
ContraGAN (R1, R2, R3, R4), the novelty of the proposed 2C loss (R1, R2, R4), composability with modern
We thank the reviewers for the constructive comments. Every experiment and explanation in this rebuttal will be included in the paper. We will introduce the concept of data-to-data relations carefully. Our 2C loss can take advantage of the strengths of both losses. Compared with the loss in Eq. 7, the 2C loss additionally considers cosine similarities between image embeddings. We conduct experiments to compare the 2C loss with other losses.
Review for NeurIPS paper: ContraGAN: Contrastive Learning for Conditional Image Generation
Reviewers were split on this paper, with three recommending accept and one recommending reject. The main concerns were missing experiments on ImageNet and a lack of clarity on why the method should work, particularly with regard to how it stabilizes training. After the rebuttal, the reviewers and AC were more confident in the experimental results and recommend acceptance, but the authors are urged to 1) complete the full experiments on ImageNet, and 2) analyze stability over multiple runs and provide some discussion of why the proposed method should help stability. Also please see the other detailed recommendations in the reviews.