MetaGAN: An Adversarial Approach to Few-Shot Learning
In this paper, we propose a conceptually simple and general framework called MetaGAN for few-shot learning problems. Most state-of-the-art few-shot classification models can be integrated with MetaGAN in a principled and straightforward way. By introducing an adversarial generator conditioned on tasks, we augment vanilla few-shot classification models with the ability to discriminate between real and fake data. We argue that this GAN-based approach can help few-shot classifiers to learn sharper decision boundary, which could generalize better. We show that with our MetaGAN framework, we can extend supervised few-shot learning models to naturally cope with unsupervised data. Different from previous work in semi-supervised few-shot learning, our algorithms can deal with semi-supervision at both sample-level and task-level. We give theoretical justifications of the strength of MetaGAN, and validate the effectiveness of MetaGAN on challenging few-shot image classification benchmarks.
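The abstract's key mechanism is a generator conditioned on the current task, so that it produces fake samples tailored to each episode rather than generic ones. The toy sketch below illustrates that shape of computation only: a task embedding (here just the mean of support features, a stand-in for a learned dataset encoder) is combined with noise through a single tanh layer. All names, shapes, and the encoder choice are illustrative, not the paper's architecture.

```python
import numpy as np

def encode_task(support_features):
    """Hypothetical task encoder: mean of the support-set features.
    A real MetaGAN-style model would use a learned dataset encoder."""
    return np.mean(support_features, axis=0)

def generate(task_embedding, noise, w_task, w_noise):
    """Toy task-conditioned generator: one linear layer with tanh,
    mapping (noise, task embedding) to a fake feature vector."""
    return np.tanh(noise @ w_noise + task_embedding @ w_task)

rng = np.random.default_rng(1)
support = rng.normal(size=(5, 16))        # 5-shot support features, 16-d
task_emb = encode_task(support)           # (16,) task representation
noise = rng.normal(size=(8, 32))          # 8 fake samples from 32-d noise
w_noise = rng.normal(size=(32, 16)) * 0.1
w_task = rng.normal(size=(16, 16)) * 0.1
fakes = generate(task_emb, noise, w_task, w_noise)  # (8, 16) fake features
```

Because the task embedding enters the generator, a different support set yields a different fake distribution, which is what lets the adversarial samples sit near each episode's decision boundary.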
Omniglot (Lake et al., 2015) is a dataset of handwritten characters containing 20 examples of each of 1623 characters. Omniglot is the most commonly used dataset in few-shot learning, and its images are resized to 28 × 28 (Finn et al., 2017; Santoro et al., 2016; Snell et al., 2017; Sung et al., 2018; Koch et al., 2015). As in other studies, we randomly select 1200 characters for meta-training and use the remaining characters for meta-testing. The mini-ImageNet dataset contains a total of 60K images of 100 different classes, each of which comprises 600 RGB images. Ravi and Larochelle (2016) presented the standard protocol for mini-ImageNet, under which all images are downsampled to 84 × 84 and the classes are divided into 64 for meta-training, 16 for meta-validation, and 20 for meta-testing. We followed this protocol but did not use the meta-validation set.
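The episodic protocol described above can be made concrete: each episode samples N classes, then splits each class's examples into a small support set and a query set, re-indexing labels within the episode. The sketch below shows one such N-way K-shot sampler; the function and variable names are hypothetical, and the toy pool mimics mini-ImageNet's 600-examples-per-class structure.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, n_query=15, rng=None):
    """Sample one N-way K-shot episode (support + query sets).

    dataset: dict mapping class label -> list of examples.
    Returns (support, query) lists of (example, episode_label) pairs,
    with episode labels re-indexed to 0..n_way-1.
    """
    rng = rng or random.Random()
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        examples = rng.sample(dataset[cls], k_shot + n_query)
        support += [(x, episode_label) for x in examples[:k_shot]]
        query += [(x, episode_label) for x in examples[k_shot:]]
    return support, query

# Toy pool: 20 meta-test classes with 600 examples each.
pool = {c: [f"img_{c}_{i}" for i in range(600)] for c in range(20)}
support, query = sample_episode(pool, n_way=5, k_shot=1, n_query=15)
```

Under this convention a 5-way 1-shot episode has 5 support examples and, with 15 queries per class, 75 query examples.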
Ruixiang Zhang, Tong Che, Zoubin Ghahramani, Yoshua Bengio, Yangqiu Song

Reviews: MetaGAN: An Adversarial Approach to Few-Shot Learning

This paper proposes a method of improving upon existing meta-learning approaches by augmenting the training with a GAN setup. The basic idea has been explored in the context of semi-supervised learning: add an additional class to the classifier's outputs and train the classifier/discriminator to classify generated data as this additional fake class. This paper extends the reasoning for why this might work for semi-supervised learning to why it might work for few-shot meta-learning. The clarity of this paper could be greatly improved. The authors present many different variants of few-shot learning in supervised and semi-supervised settings, and the notation is a bit tricky to follow initially.
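The fake-class construction the review describes can be sketched numerically: an N-way classifier gets an (N+1)-th output, and the discriminator loss combines the usual cross-entropy on real labeled examples with a cross-entropy pushing generated samples into the fake class. This is a simplified NumPy sketch of that loss structure, not the paper's exact objective; all names are illustrative.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # stabilize
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def discriminator_loss(real_logits, real_labels, fake_logits, n_way):
    """real_logits, fake_logits: (batch, n_way + 1) scores, where index
    n_way is the extra 'fake' class. Real data should land in classes
    0..n_way-1, generated data in class n_way."""
    loss_real = cross_entropy(real_logits, real_labels)
    fake_labels = np.full(len(fake_logits), n_way)
    loss_fake = cross_entropy(fake_logits, fake_labels)
    return loss_real + loss_fake

# Tiny example: a 5-way episode with 4 real and 4 generated samples.
rng = np.random.default_rng(0)
real_logits = rng.normal(size=(4, 6))
fake_logits = rng.normal(size=(4, 6))
real_labels = np.array([0, 1, 2, 3])
loss = discriminator_loss(real_logits, real_labels, fake_logits, n_way=5)
```

Because the classifier must separate real classes from generated data as well as from each other, its decision boundaries are pushed tighter around the real class manifolds, which is the intuition behind the "sharper decision boundary" claim.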