Dive head first into advanced GANs: exploring self-attention and spectral norm

#artificialintelligence

Lately, generative models have been drawing a lot of attention, much of it directed at Generative Adversarial Networks (GANs). Introduced by Goodfellow et al., GANs are a framework in which two players compete with one another. The two actors, the generator G and the discriminator D, are both represented by function approximators, and they play different roles in the game.
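The two-player game can be made concrete through its loss terms: the discriminator tries to score real data high and generated data low, while the generator tries to fool the discriminator. Below is a minimal numpy sketch of the standard GAN losses, using made-up discriminator outputs purely for illustration (no model is actually trained here):

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # D wants D(x) -> 1 on real data and D(G(z)) -> 0 on fakes,
    # so it minimizes the negative log-likelihood of both.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # Non-saturating form: G maximizes log D(G(z)).
    return -np.mean(np.log(d_fake))

# Toy discriminator outputs (probabilities), purely illustrative.
d_real = np.array([0.9, 0.8, 0.95])   # confident "real" scores
d_fake = np.array([0.1, 0.2, 0.05])   # confident "fake" scores
print(discriminator_loss(d_real, d_fake))  # low: D is winning
print(generator_loss(d_fake))              # high: G is losing
```

At the (theoretical) equilibrium the discriminator outputs 0.5 everywhere, and neither player can improve further.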


Controllable Generative Adversarial Network

arXiv.org Machine Learning

Although it was introduced only recently, the generative adversarial network (GAN) has shown many promising results in generating realistic samples. However, it is hard to control the generated samples, since the input variables for the generator are drawn from a random distribution. Some attempts have been made to control the samples generated by a GAN, but they have not performed well on difficult problems. Furthermore, it is hardly possible to steer the generator to concentrate on either realism or distinctness. For example, with existing models, a generator for face image generation cannot be set to concentrate on one of the two objectives, i.e., generating realistic faces versus generating faces that differ according to the input labels. In this paper we propose the controllable GAN (CGAN). CGAN shows strong performance in controlling generated samples; in addition, it can steer the generator to concentrate on either realism or distinctness. CGAN is evaluated on the CelebA dataset. We believe that CGAN can contribute to research on generative neural network models.
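The trade-off the abstract describes, steering the generator toward realism or toward distinctness, can be pictured as a weighted combination of two loss terms. The weighting below is an illustrative assumption for intuition only, not the paper's actual objective:

```python
def controllable_generator_loss(realism_loss, distinctness_loss, alpha):
    # alpha in [0, 1] is a hypothetical knob: 1.0 concentrates the
    # generator entirely on realism, 0.0 entirely on producing faces
    # that differ according to the input label.
    # (Illustrative weighting only; see the paper for the real objective.)
    return alpha * realism_loss + (1.0 - alpha) * distinctness_loss

print(controllable_generator_loss(0.4, 1.2, 1.0))  # realism only -> 0.4
print(controllable_generator_loss(0.4, 1.2, 0.0))  # distinctness only -> 1.2
```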


A Study into the similarity in generator and discriminator in GAN architecture

arXiv.org Machine Learning

One popular generative model that produces high-quality results is the Generative Adversarial Network (GAN). This type of architecture consists of two separate networks that play against each other. The generator creates an output from the input noise that is given to it, while the discriminator has the task of determining whether its input is real or fake. This contest repeats continually and eventually leads to the generator modeling the target distribution. This paper includes a study of the actual weights learned by the network and of the similarity between the discriminator and generator networks. The paper also tries to leverage this similarity and shows, with experimental evidence from a novel shared architecture, that the two networks may indeed have a similar structure.


FCC-GAN: A Fully Connected and Convolutional Net Architecture for GANs

arXiv.org Machine Learning

Generative Adversarial Networks (GANs) are a powerful class of generative models. Despite their successes, the most appropriate choice of GAN network architecture is still not well understood. GAN models for image synthesis have adopted a deep convolutional network architecture, which eliminates or minimizes the use of fully connected and pooling layers in favor of convolution layers in the generator and discriminator. In this paper, we demonstrate that an architecture combining deep fully connected layers and pooling layers with convolution layers can be more effective than the traditional convolution-only architecture, and we propose FCC-GAN, a fully connected and convolutional GAN architecture. Models based on our FCC-GAN architecture learn faster than the conventional architecture and also generate higher-quality samples. We demonstrate the effectiveness and stability of our approach across four popular image datasets.
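The FCC-GAN idea of placing deep fully connected layers ahead of the convolutional stack can be sketched by tracing tensor shapes through a generator front end. The layer sizes below are illustrative guesses, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative FCC-GAN-style generator front end: a latent vector passes
# through several fully connected layers before being reshaped into a
# low-resolution feature map for a convolutional upsampling stack.
z = rng.normal(size=(1, 100))            # latent input
w1 = rng.normal(size=(100, 256))         # FC layer weights (toy sizes)
w2 = rng.normal(size=(256, 1024))
w3 = rng.normal(size=(1024, 7 * 7 * 64))

h = np.maximum(z @ w1, 0)                # FC + ReLU
h = np.maximum(h @ w2, 0)                # FC + ReLU
h = h @ w3                               # FC projecting to feature-map size
fmap = h.reshape(1, 7, 7, 64)            # reshape for the conv stack
print(fmap.shape)                        # (1, 7, 7, 64)
```

A conventional DCGAN generator would instead use a single projection before the reshape; the claim here is that the extra fully connected depth helps.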


How to use the UpSampling2D and Conv2DTranspose Layers in Keras

#artificialintelligence

Generative Adversarial Networks, or GANs, are an architecture for training generative models, such as deep convolutional neural networks for generating images. The GAN architecture is comprised of both a generator and a discriminator model. The generator is responsible for creating new outputs, such as images, that plausibly could have come from the original dataset. The generator model is typically implemented using a deep convolutional neural network with specialized layers that learn to fill in features in an image rather than extract features from an input image. Two common types of layers that can be used in the generator model are an upsampling layer (UpSampling2D) that simply doubles the dimensions of the input, and a transpose convolutional layer (Conv2DTranspose) that performs an inverse convolution operation. In this tutorial, you will discover how to use UpSampling2D and Conv2DTranspose layers in Generative Adversarial Networks when generating images. Discover how to develop DCGANs, conditional GANs, Pix2Pix, CycleGANs, and more with Keras in my new GANs book, with 29 step-by-step tutorials and full source code.
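What the two layers compute can be sketched without Keras: UpSampling2D with default settings repeats each pixel to double the height and width, while Conv2DTranspose with stride 2 scatters a copy of the kernel onto the output for each input value. A simplified numpy sketch of both operations (real layers also handle channels, padding, and learned kernels):

```python
import numpy as np

def upsample2d(x):
    # Nearest-neighbor upsampling, as UpSampling2D does by default:
    # repeat every row and column once, doubling H and W.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def conv2d_transpose_stride2(x, k):
    # Transpose convolution with stride 2 and 'valid' padding: each
    # input value scatters a scaled copy of the kernel onto the output,
    # offset by 2 per input step.
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((2 * (h - 1) + kh, 2 * (w - 1) + kw))
    for i in range(h):
        for j in range(w):
            out[2*i:2*i+kh, 2*j:2*j+kw] += x[i, j] * k
    return out

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(upsample2d(x))
# [[1. 1. 2. 2.]
#  [1. 1. 2. 2.]
#  [3. 3. 4. 4.]
#  [3. 3. 4. 4.]]
print(conv2d_transpose_stride2(x, np.ones((2, 2))).shape)  # (4, 4)
```

The key practical difference: upsampling has no parameters, while the transpose convolution's kernel is learned, which is why DCGAN-style generators favor Conv2DTranspose (or UpSampling2D followed by a regular Conv2D).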