Generative Adversarial Networks: The Fight To See Which Neural Network Comes Out On Top

#artificialintelligence

In a world filled with technology and artificial intelligence, it is becoming increasingly hard to distinguish between what is real and what is fake. Look at the two pictures below. Can you tell which one is a real-life photograph and which one was created by artificial intelligence? The crazy thing is that both of these images are actually fake, created by NVIDIA's new hyperrealistic face generator, which uses an algorithmic architecture called a generative adversarial network (GAN). Researching GANs and their applications in today's society, I found that they can be used everywhere, from text-to-image generation to predicting the next frame in a video!


Meow Generator

@machinelearnbot

I experimented with generating faces of cats using generative adversarial networks (GANs). So I doubt that training a cat generator with 5 layers and 128 hidden nodes would be much of a problem. LSGAN is a slightly different approach in which we try to minimize the squared distance between the Discriminator output and its assigned label; the authors recommend using 1 for real images and 0 for fake images in the Discriminator update, and then 1 for fake images in the Generator update.
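To make the label scheme concrete, here is a minimal sketch of the least-squares losses described above, assuming PyTorch. The names `discriminator`, `generator`, `real_images` and `fake_images` are placeholders for whatever networks and batches are being trained; only the squared-distance-to-label idea is from the excerpt.

```python
import torch
import torch.nn.functional as F

def lsgan_discriminator_loss(discriminator, real_images, fake_images):
    # Discriminator update: push real images toward the label 1, fakes toward 0.
    d_real = discriminator(real_images)
    d_fake = discriminator(fake_images.detach())  # don't backprop into the generator here
    loss_real = F.mse_loss(d_real, torch.ones_like(d_real))
    loss_fake = F.mse_loss(d_fake, torch.zeros_like(d_fake))
    return loss_real + loss_fake

def lsgan_generator_loss(discriminator, fake_images):
    # Generator update: push the fakes toward the "real" label 1.
    d_fake = discriminator(fake_images)
    return F.mse_loss(d_fake, torch.ones_like(d_fake))
```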


Is Artificial Intelligence Set To Become Art's Next Medium? - South Florida Reporter

#artificialintelligence

The painting, if that is the right term, is one of a group of portraits of the fictional Belamy family created by Obvious, a Paris-based collective consisting of Hugo Caselles-Dupré, Pierre Fautrel and Gauthier Vernier. They are engaged in exploring the interface between art and artificial intelligence, and their method goes by the acronym GAN, which stands for 'generative adversarial network'. 'The algorithm is composed of two parts,' says Caselles-Dupré. 'On one side is the Generator, on the other the Discriminator. We fed the system with a data set of 15,000 portraits painted between the 14th century and the 20th.'
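For readers unfamiliar with the two parts Caselles-Dupré mentions, here is an illustrative sketch in PyTorch: a Generator that maps random noise to an image and a Discriminator that scores images as real or fake. The layer sizes, the flattened 64x64 RGB output, and the latent dimension are arbitrary choices for illustration, not Obvious's actual model.

```python
import torch
import torch.nn as nn

latent_dim = 100  # assumed size of the random noise vector

# Generator: noise vector -> flattened 64x64 RGB image in [-1, 1].
generator = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64),
    nn.Tanh(),
)

# Discriminator: flattened image -> probability that the image is real.
discriminator = nn.Sequential(
    nn.Linear(3 * 64 * 64, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),
)

# One forward pass: generate a batch of fake "portraits" and score them.
noise = torch.randn(8, latent_dim)
fake_images = generator(noise)
scores = discriminator(fake_images)
```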


Improving the Realism of Synthetic Images - Apple

#artificialintelligence

Most successful examples of neural nets today are trained with supervision. However, to achieve high accuracy, the training sets need to be large, diverse, and accurately annotated, which is costly. An alternative to labeling huge amounts of data is to use synthetic images from a simulator. This is cheap as there is no labeling cost, but the synthetic images may not be realistic enough, resulting in poor generalization on real test images. To help close this performance gap, we've developed a method for refining synthetic images to make them look more realistic.
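A common way to set up this kind of refinement, sketched below in PyTorch, is to train a refiner network adversarially against a discriminator that tries to tell refined images from real ones, while an L1 term keeps each refined image close to its synthetic input so the simulator's free annotations stay valid. This is a hedged sketch of that general idea, not Apple's exact architecture or loss; the tiny conv nets and the `lambda_reg` weight are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Refiner: synthetic image -> refined (more realistic) image of the same size.
refiner = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
)

# Discriminator: image -> per-patch logits for "real" vs "refined".
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, stride=2, padding=1),
)

lambda_reg = 10.0  # weight of the self-regularization term (assumed value)

def refiner_loss(synthetic):
    refined = refiner(synthetic)
    scores = discriminator(refined)
    # Adversarial term: try to make the discriminator call refined images real.
    adv = F.binary_cross_entropy_with_logits(scores, torch.ones_like(scores))
    # Self-regularization: keep the refined image close to the synthetic input.
    reg = F.l1_loss(refined, synthetic)
    return adv + lambda_reg * reg
```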


Hands-on Implementation of CycleGAN, Image-to-Image Translation using PyTorch

#artificialintelligence

A CycleGAN is designed for image-to-image translation, and it learns from unpaired training data. It gives us a way to learn the mapping between one image domain and another using an unsupervised approach. The original CycleGAN paper by Jun-Yan Zhu, who is an Assistant Professor in the School of Computer Science at Carnegie Mellon University, can be found here. These images do not come with labels; instead, a generator learns to produce images in domain X from the domain Y dataset (and vice versa). We do not have to extract corresponding features from the individual images.
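The key trick that makes unpaired training work is cycle consistency: translating an image to the other domain and back should recover the original. Below is a minimal PyTorch sketch of that loss, assuming two generators `G` (X to Y) and `F_net` (Y to X); the networks themselves and the weight `lambda_cyc` are placeholders, not the configuration from the article or the original paper.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G, F_net, real_x, real_y, lambda_cyc=10.0):
    # X -> Y -> X should reconstruct the original image from domain X.
    loss_x = F.l1_loss(F_net(G(real_x)), real_x)
    # Y -> X -> Y should reconstruct the original image from domain Y.
    loss_y = F.l1_loss(G(F_net(real_y)), real_y)
    return lambda_cyc * (loss_x + loss_y)
```

In the full CycleGAN objective this term is added to the usual adversarial losses for each generator; on its own it simply penalizes round-trip reconstruction error.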