MarginGAN: Adversarial Training in Semi-Supervised Learning

Neural Information Processing Systems

A Margin Generative Adversarial Network (MarginGAN) is proposed for semi-supervised learning problems. Like Triple-GAN, the proposed MarginGAN consists of three components---a generator, a discriminator and a classifier---among which two forms of adversarial training arise. The discriminator is trained as usual to distinguish real examples from fake examples produced by the generator. The new feature is that the classifier attempts to increase the margin of real examples and to decrease the margin of fake examples. Conversely, the purpose of the generator is to yield realistic, large-margin examples that fool the discriminator and the classifier simultaneously. Pseudo labels are used for generated and unlabeled examples in training. Our method is motivated by the success of large-margin classifiers and the recent viewpoint that good semi-supervised learning requires a ``bad'' GAN. Experiments on benchmark datasets testify that MarginGAN is orthogonal to several state-of-the-art methods, offering improved error rates and shorter training time as well.
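The margin tug-of-war described above can be sketched numerically. The sketch below is illustrative only: it uses a common multiclass margin (softmax probability of the assigned label minus the largest competing probability), whereas the paper's actual losses are built from cross-entropy with pseudo labels; the function names and toy logits are this sketch's own assumptions.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multiclass_margin(logits, labels):
    """p_y - max_{k != y} p_k  (an illustrative margin; not the paper's exact definition)."""
    p = softmax(logits)
    n = len(labels)
    p_true = p[np.arange(n), labels]
    p_rest = p.copy()
    p_rest[np.arange(n), labels] = -np.inf  # mask out the assigned class
    return p_true - p_rest.max(axis=-1)

# Toy logits: one confident "real" example, one uncertain "fake" one.
real_logits = np.array([[4.0, 0.5, 0.2]])
fake_logits = np.array([[1.0, 0.9, 0.8]])
real_y = np.array([0])
fake_y = np.array([0])  # pseudo label assigned to the generated example

# Classifier objective (to minimize): grow the margin of real examples,
# shrink the margin of fake examples.
clf_loss = (-multiclass_margin(real_logits, real_y).mean()
            + multiclass_margin(fake_logits, fake_y).mean())

# Generator objective (to minimize): make fakes large-margin, fooling the classifier.
gen_loss = -multiclass_margin(fake_logits, fake_y).mean()
```

In this toy setup the confident real example has a much larger margin than the near-uniform fake one, so the classifier's loss is already low; training the generator against `gen_loss` would push fake logits toward confident, large-margin predictions, recreating the adversarial pressure the abstract describes.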



Reviews: MarginGAN: Adversarial Training in Semi-Supervised Learning

Neural Information Processing Systems

The main contribution of this paper is in setting up a three-player game for semi-supervised learning, where the generator tries to maximize the margin of the examples it generates in competition with a classifier, in addition to the traditional GAN approach of fooling a discriminator. This idea is novel to my knowledge. One small reservation I have with this method is that, as the quality of the GAN and its generated images increases, the margin objective on generated examples becomes counterproductive for the classifier (as acknowledged by the authors), which requires careful early stopping. But this is standard practice with GANs and should not be held against this paper. The paper is generally of high quality and significance, but both could be improved by a broader treatment of related work.


Reviews: MarginGAN: Adversarial Training in Semi-Supervised Learning

Neural Information Processing Systems

The paper formulates semi-supervised learning as a three-player game among a generator, a classifier, and a discriminator. The generator and discriminator compete to produce realistic examples, as in usual GANs, and the key new idea is that the classifier tries to maximize the margin of real examples and minimize the margin of fake examples. The method both improves predictive performance and greatly reduces training time. The reviewers agree that it is a significant contribution.


MarginGAN: Adversarial Training in Semi-Supervised Learning

Dong, Jinhao; Lin, Tong

Neural Information Processing Systems