Adversarial Mixup Resynthesizers
Christopher Beckham, Sina Honari, Alex Lamb, Vikas Verma, Farnoosh Ghadiri, R Devon Hjelm, Christopher Pal
In this paper, we explore new approaches to combining information encoded within the learned representations of autoencoders. We explore models that combine the attributes of multiple inputs such that the resynthesised output fools an adversarial discriminator trained to distinguish real from synthesised data. Furthermore, we explore the use of such an architecture in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent representations, that are consistent with a conditioned class label. We show quantitative and qualitative evidence that such a formulation is an interesting avenue of research.

The autoencoder is a fundamental building block in unsupervised learning. An autoencoder is trained to reconstruct its input after the input has been processed by two neural networks: an encoder, which maps the input to a high-level representation or bottleneck, and a decoder, which performs the reconstruction using that representation as input.
Apr-4-2019
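As a rough illustration of the latent mixing the abstract describes, the following is a minimal PyTorch sketch, not the authors' architecture: all module sizes, names, and the uniform mixing coefficient are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical minimal autoencoder; layer sizes are illustrative,
# not taken from the paper.
class Autoencoder(nn.Module):
    def __init__(self, dim=784, latent=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                     nn.Linear(256, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def mix_latents(h1, h2):
    # Convex interpolation of two latent codes -- one of the mixing
    # functions described in the abstract (masked combinations are
    # another option).
    alpha = torch.rand(h1.size(0), 1, device=h1.device)
    return alpha * h1 + (1 - alpha) * h2

ae = Autoencoder()
x1, x2 = torch.rand(8, 784), torch.rand(8, 784)

# Decode the mixed latent code into a resynthesised output.
mix = ae.decoder(mix_latents(ae.encoder(x1), ae.encoder(x2)))

# A discriminator scores real inputs versus resynthesised mixes;
# training would alternate the usual GAN-style objectives so the
# autoencoder learns to fool it (losses omitted for brevity).
disc = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))
real_score, fake_score = disc(x1), disc(mix)
```

Under this reading, the mixing function sits entirely in latent space: only the decoded mixture is shown to the discriminator, which pushes interpolated codes to decode into realistic samples.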