
Collaborating Authors

self-supervised GAN



Self-supervised GAN: Analysis and Improvement with Multi-class Minimax Game

Neural Information Processing Systems

Self-supervised (SS) learning is a powerful approach for representation learning using unlabeled data. Recently, it has been applied to Generative Adversarial Network (GAN) training. Specifically, SS tasks were proposed to address the catastrophic forgetting issue in the GAN discriminator. In this work, we perform an in-depth analysis to understand how SS tasks interact with the learning of the generator. From the analysis, we identify issues with SS tasks that allow a severely mode-collapsed generator to excel at the SS tasks.
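The SS task referred to in this line of work is typically rotation prediction. A minimal sketch of how such a task is constructed (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def make_rotation_task(images, k_rotations=4):
    """Build a rotation-prediction SS task: each image is rotated by
    0/90/180/270 degrees and labeled with the index of the rotation,
    which the discriminator's SS head is trained to predict."""
    rotated, labels = [], []
    for img in images:
        for k in range(k_rotations):
            rotated.append(np.rot90(img, k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

# Two toy 4x4 "images" yield 8 rotated copies with labels 0..3 each.
imgs = np.arange(32, dtype=np.float32).reshape(2, 4, 4)
x, y = make_rotation_task(imgs)
```

Because these labels depend only on the applied transformation, a mode-collapsed generator that emits a few easily-rotatable samples can still score well on the SS task, which is the issue the analysis identifies.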


Self-Supervised GANs with Label Augmentation

Neural Information Processing Systems

Recently, transformation-based self-supervised learning has been applied to generative adversarial networks (GANs) to mitigate catastrophic forgetting in the discriminator by introducing a stationary learning environment. However, the separate self-supervised tasks in existing self-supervised GANs create a goal inconsistent with generative modeling, because their self-supervised classifiers are agnostic to the generator distribution. To address this problem, we propose a novel self-supervised GAN that unifies the GAN task with the self-supervised task by augmenting the GAN labels (real or fake) via self-supervision of data transformation. Specifically, the original discriminator and self-supervised classifier are unified into a label-augmented discriminator that predicts the augmented labels, so as to be aware of both the generator distribution and the data distribution under every transformation, and then provides the discrepancy between them to optimize the generator. Theoretically, we prove that the optimal generator converges to replicate the real data distribution. Empirically, we show that the proposed method significantly outperforms previous self-supervised and data augmentation GANs on both generative modeling and representation learning across benchmark datasets.
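The label-augmentation idea can be read as mapping each (transformation, real/fake) pair to one of 2K classes for a single unified discriminator. A hedged sketch, with names of my own choosing rather than the paper's:

```python
def augmented_label(rotation_idx, is_real, k_rotations=4):
    """Map (transformation index, real/fake) to one of 2K augmented
    classes: 0..K-1 are real under each rotation, K..2K-1 are fake.
    A single classifier over these 2K labels sees both the data and
    the generator distribution under every transformation."""
    if not 0 <= rotation_idx < k_rotations:
        raise ValueError("rotation_idx out of range")
    return rotation_idx if is_real else k_rotations + rotation_idx
```

With K = 4 rotations this yields an 8-way classifier; the generator is then trained against the discrepancy between the real and fake halves of the label space.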


Reviews: Self-supervised GAN: Analysis and Improvement with Multi-class Minimax Game

Neural Information Processing Systems

Originality: The method is relatively new, although it is similar to some conditional GAN works in the literature. The main contribution is the analysis showing the limitations of prior GAN+SSL work and the proposal of a scheme with better chances of succeeding (at least theoretically). Experiments then show an improvement. It would be good to draw more analogies to prior conditional GAN work; this would not hurt the contribution, but rather clarify its context and provide more links for practitioners (who could better understand it). Basically, the minimax game should use the same cost function for the optimization of the discriminator, the generator, and the classifier.


Reviews: Self-supervised GAN: Analysis and Improvement with Multi-class Minimax Game

Neural Information Processing Systems

NeurIPS 2019, December 8-14, 2019, Vancouver Convention Center. "Self-supervised GAN: Analysis and Improvement with Multi-class Minimax Game". The paper addresses a problem in self-supervised GANs, where the classes are assumed to strictly have disjoint support. This is mitigated by introducing a new class for generated samples.
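A minimal sketch of the fix described above (the function and label conventions are illustrative): real samples keep their K self-supervision classes, while all generated samples share one extra (K+1)-th class, so the classifier no longer forces real-only classes onto generated data:

```python
import numpy as np

def minimax_ss_labels(real_rotation_labels, n_fake, k_rotations=4):
    """(K+1)-class labels for the SS classifier: real samples keep
    their rotation class 0..K-1; every generated sample is assigned
    the extra class K reserved for fakes."""
    real = np.asarray(real_rotation_labels)
    fake = np.full(n_fake, k_rotations)
    return np.concatenate([real, fake])

# Four real samples with rotation labels 0..3 plus two fakes -> class 4.
labels = minimax_ss_labels([0, 1, 2, 3], n_fake=2)
```

Giving fakes their own class lets the classifier's decision participate in the minimax game instead of being blind to the generator distribution.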



ECGAN: Self-supervised generative adversarial network for electrocardiography

Simone, Lorenzo, Bacciu, Davide

arXiv.org Artificial Intelligence

High-quality synthetic data can support the development of effective predictive models for biomedical tasks, especially for rare diseases or under compelling privacy constraints. These limitations, for instance, negatively impact open access to electrocardiography datasets on arrhythmias. This work introduces a self-supervised approach to the generation of synthetic electrocardiography time series which is shown to promote morphological plausibility. Our model (ECGAN) allows conditioning the generative process on specific rhythm abnormalities, enhancing synchronization and diversity across samples relative to models in the literature. A dedicated sample quality assessment framework is also defined, leveraging arrhythmia classifiers. The empirical results highlight a substantial improvement against state-of-the-art generative models for sequences and audio synthesis.


Self-supervised GAN: Analysis and Improvement with Multi-class Minimax Game

Tran, Ngoc-Trung, Tran, Viet-Hung, Nguyen, Bao-Ngoc, Yang, Linxiao, Cheung, Ngai-Man (Man)

Neural Information Processing Systems
