Generative adversarial nets (GANs) are an emerging object of study in machine learning, computer vision, natural language processing, and many other domains. In machine learning, study of this framework has led to significant advances in adversarial defenses [28, 24] and machine security [4, 24]. In computer vision and natural language processing, GANs have improved performance over standard generative models for images and text, such as the variational autoencoder and the deep Boltzmann machine. The main technique behind this is a minimax two-player game between a generator and a discriminator: the generator tries to confuse the discriminator with its generated content, while the discriminator tries to distinguish real images or text from what the generator creates. Despite the large number of GAN variants, many fundamental questions remain unresolved. One longstanding challenge is designing universal, easy-to-implement architectures that alleviate the instability of GAN training. Ideally, GANs are supposed to solve the minimax optimization problem, but in practice alternating gradient descent methods do not clearly privilege minimax over maximin or vice versa (page 35, ), which may lead to instability in training if there is a large discrepancy between the minimax and maximin objective values. The focus of this work is on improving the stability of this minimax game in the training process of GANs.
Under review as a conference paper at ICLR 2019
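The minimax game described above can be made concrete with a toy numerical sketch. Assuming 1-D "real" and "generated" samples and a simple logistic discriminator (all hypothetical choices for illustration, not the paper's architecture), the value function V(D, G) = E_x[log D(x)] + E_z[log(1 − D(G(z)))] evaluates to ordinary averages:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D "real" and "generated" samples (hypothetical example).
real = rng.normal(loc=2.0, scale=1.0, size=256)
fake = rng.normal(loc=0.0, scale=1.0, size=256)

# A linear-logistic discriminator D(x) = sigmoid(w*x + b).
def D(x, w=1.0, b=-1.0):
    return sigmoid(w * x + b)

# Value function of the minimax game:
#   V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
V = np.mean(np.log(D(real))) + np.mean(np.log(1.0 - D(fake)))

# The discriminator ascends V, so it minimizes -V; the generator
# minimizes the second term (its samples being flagged as fake).
d_loss = -V
g_loss = np.mean(np.log(1.0 - D(fake)))
print(d_loss, g_loss)
```

In alternating gradient descent, the two players take turns stepping on these two losses, which is exactly where the minimax-versus-maximin ambiguity mentioned above arises.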
I experimented with generating faces of cats using generative adversarial networks (GANs). I wanted to try DCGAN, WGAN, and WGAN-GP at low and higher resolutions. I used the CAT dataset (yes, this is a real thing!) as my training sample. This dataset has 10k pictures of cats. I centered the images on the kitty faces and removed outliers (I did this by visual inspection; it took a couple of hours…).
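The "centering on the kitty faces" step amounts to a square crop around a face center. A minimal sketch, assuming you already have a face-center coordinate for each image (the crop helper and the example coordinates here are hypothetical, not the exact preprocessing used):

```python
import numpy as np

def crop_centered(img, center, size):
    """Crop a square window of side `size` centered on `center` (row, col),
    clamping the window so it stays inside the image borders."""
    h, w = img.shape[:2]
    half = size // 2
    r, c = center
    top = min(max(r - half, 0), max(h - size, 0))
    left = min(max(c - half, 0), max(w - size, 0))
    return img[top:top + size, left:left + size]

# Hypothetical example: a 200x200 RGB image with the face center at (120, 80).
img = np.zeros((200, 200, 3), dtype=np.uint8)
patch = crop_centered(img, center=(120, 80), size=64)
print(patch.shape)  # (64, 64, 3)
```

Clamping to the borders matters here: faces near an image edge would otherwise yield undersized crops that break a fixed-resolution training pipeline.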
The painting, if that is the right term, is one of a group of portraits of the fictional Belamy family created by Obvious, a Paris-based collective consisting of Hugo Caselles-Dupré, Pierre Fautrel and Gauthier Vernier. They are engaged in exploring the interface between art and artificial intelligence, and their method goes by the acronym GAN, which stands for 'generative adversarial network'. 'The algorithm is composed of two parts,' says Caselles-Dupré. 'On one side is the Generator, on the other the Discriminator. We fed the system a data set of 15,000 portraits painted between the 14th and 20th centuries.
In a world filled with technology and artificial intelligence, it is becoming harder and harder to distinguish between what is real and what is fake. Look at the two pictures below. Can you tell which one is a real-life photograph and which one was created by artificial intelligence? The crazy thing is that both of these images are fake, created by NVIDIA's new hyperrealistic face generator, which uses an algorithmic architecture called a generative adversarial network (GAN). Researching GANs and their applications in today's society, I found that they can be used everywhere, from text-to-image generation to predicting the next frame in a video!
Astrophysicists are using artificial intelligence (AI) to create something like the technology in movies that magically sharpens fuzzy surveillance images: a network that could make a blurry galaxy image look like it was taken by a better telescope than it actually was. That could let astronomers squeeze finer details out of reams of observations. The system works by pitting two dueling neural networks against each other. One is a generator that concocts images; the other is a discriminator that tries to spot any flaws that would give away the manipulation, forcing the generator to improve. The team took thousands of real images of galaxies and then artificially degraded them.
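The "artificially degraded" training pairs can be sketched with a simple blur-plus-noise model. This is an illustrative stand-in, assuming a separable Gaussian blur and additive pixel noise rather than the team's exact degradation pipeline:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D normalized Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def degrade(img, sigma=2.0, noise_std=0.05, seed=0):
    """Blur a 2-D image with a separable Gaussian and add pixel noise,
    mimicking a worse telescope (illustrative, not the exact method)."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    # Convolve each row, then each column (separable 2-D blur).
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    rng = np.random.default_rng(seed)
    return blurred + rng.normal(0.0, noise_std, img.shape)

sharp = np.zeros((32, 32))
sharp[16, 16] = 1.0          # a single bright "point source"
blurry = degrade(sharp)
print(blurry.shape)
```

Feeding (degraded, original) pairs into the GAN gives the generator a recoverable target, which is what lets it learn to undo the blur on real observations.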