Discriminator


Deep Learning is Not Probabilistic Induction – Intuition Machine – Medium

#artificialintelligence

There is a prevalent but questionable assumption that Deep Learning is a form of probabilistic or statistical induction. We see this in DARPA's presentation of the three waves of AI, which describes Statistical Learning as the wave where programmers create statistical models for specific problem domains and train them on big data. This is a broad category that includes Bayesian methods, template-based methods (e.g., SVMs), tree-based predictors, mathematical programming, and Deep Learning.


GAN by Example using Keras on Tensorflow Backend – Towards Data Science

@machinelearnbot

Generative Adversarial Networks (GANs) are one of the most promising recent developments in Deep Learning. A GAN, introduced by Ian Goodfellow in 2014, attacks the problem of unsupervised learning by training two deep networks, called the Generator and the Discriminator, that compete with and cooperate with each other. In the course of training, both networks eventually learn how to perform their tasks. A GAN is almost always explained with the analogy of a counterfeiter (Generator) and the police (Discriminator). Initially, the counterfeiter shows the police fake money.
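
The article builds this setup in Keras; as a rough, minimal sketch of the two-network arrangement (the layer sizes, the 100-dimensional noise vector, and the MNIST-like image shape below are illustrative assumptions rather than the article's exact architecture):

```python
# Minimal GAN skeleton in Keras: a Generator that maps noise to images
# and a Discriminator that labels images as real or fake.
import numpy as np
from tensorflow.keras import layers, models, optimizers

latent_dim = 100          # size of the random noise vector (assumed)
img_shape = (28, 28, 1)   # MNIST-like grayscale images (assumed)

def build_generator():
    # Maps a noise vector to a flattened image, then reshapes it.
    return models.Sequential([
        layers.Input(shape=(latent_dim,)),
        layers.Dense(256, activation="relu"),
        layers.Dense(512, activation="relu"),
        layers.Dense(int(np.prod(img_shape)), activation="tanh"),
        layers.Reshape(img_shape),
    ])

def build_discriminator():
    # Maps an image to a single real/fake probability.
    model = models.Sequential([
        layers.Input(shape=img_shape),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=optimizers.Adam(2e-4), loss="binary_crossentropy")
    return model

generator = build_generator()
discriminator = build_discriminator()

# The "counterfeiter vs. police" game: the combined model trains the
# generator to fool the discriminator, whose weights are frozen here.
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer=optimizers.Adam(2e-4), loss="binary_crossentropy")
```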


Brighter AI Uses Deep Learning to Shed Light on Nighttime Video Footage NVIDIA Blog

#artificialintelligence

From selfies to satellites, cameras are an integral part of life. They increasingly watch over our homes and streets, and keep businesses secure inside and out. But many factors -- rain, smog, poor lighting, etc. -- can reduce the quality of images. Whether it's identifying a thief or checking on your baby via a monitor, these factors can impair the decisions people make based on camera footage. NVIDIA Inception partner Brighter AI is focused on a fix.


yunjey/StarGAN

@machinelearnbot

StarGAN can flexibly translate an input image to any desired target domain using only a single generator and a single discriminator. Sample images are generated by StarGAN trained on the CelebA dataset, on the RaFD dataset, and on both datasets combined. Overview of StarGAN, consisting of two modules, a discriminator D and a generator G: (a) D learns to distinguish between real and fake images and to classify real images into their corresponding domains.
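
The released StarGAN code is written in PyTorch; purely as a schematic illustration of the dual-head discriminator described above (a real/fake "source" head plus a domain-classification head), here is a Keras-style sketch in which the image size, domain count, and layer widths are all assumptions:

```python
# Schematic sketch (not the repository's PyTorch code) of a StarGAN-style
# discriminator with two output heads: real/fake source and domain label.
from tensorflow.keras import layers, models

img_shape = (128, 128, 3)  # CelebA-sized crops (assumed)
num_domains = 5            # e.g. hair color / gender / age labels (assumed)

def build_stargan_discriminator():
    img = layers.Input(shape=img_shape)
    x = layers.Conv2D(64, 4, strides=2, padding="same")(img)
    x = layers.LeakyReLU(0.01)(x)
    x = layers.Conv2D(128, 4, strides=2, padding="same")(x)
    x = layers.LeakyReLU(0.01)(x)
    x = layers.Flatten()(x)
    real_or_fake = layers.Dense(1, name="src")(x)              # is the image real?
    domain = layers.Dense(num_domains, activation="softmax",   # which domain is it?
                          name="cls")(x)
    return models.Model(img, [real_or_fake, domain])

D = build_stargan_discriminator()
```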


Create Data from Random Noise with Generative Adversarial Networks

#artificialintelligence

Since I found out about generative adversarial networks (GANs), I've been fascinated by them. A GAN is a type of neural network that is able to generate new data from scratch. You can feed it a little bit of random noise as input, and it can produce realistic images of bedrooms, or birds, or whatever it is trained to generate. One thing all scientists can agree on is that we need more data. GANs, which can be used to produce new data in data-limited situations, can prove to be really useful.
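
As a hedged illustration of the "noise in, images out" idea (the latent size, image shape, and the stand-in generator below are assumptions, not the article's code):

```python
# Once a generator has been trained, new samples come from nothing but noise.
import numpy as np
from tensorflow.keras import layers, models

latent_dim = 100  # illustrative latent size (assumed)

# Stand-in generator (untrained here); in practice this would be the
# generator produced by adversarial training, as in the first sketch above.
generator = models.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(28 * 28, activation="tanh"),
    layers.Reshape((28, 28, 1)),
])

# "A little bit of random noise" in, a batch of synthetic images out.
noise = np.random.normal(0.0, 1.0, size=(16, latent_dim)).astype("float32")
fake_images = generator.predict(noise)   # shape: (16, 28, 28, 1)
```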


Overview of GANs (Generative Adversarial Networks) – Part I

@machinelearnbot

The purpose of this article series is to provide an overview of GAN research and explain the nature of the contributions. I'm new to this area myself, so this will surely be incomplete, but hopefully it can provide some quick context to other newbies. For Part I we'll introduce GANs at a high level and summarize the original paper. Feel free to skip to Part II if you're already familiar with the basics. It's assumed you're familiar with the basics of neural networks.
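
For quick reference, the objective the original 2014 paper optimizes is the two-player minimax game between the generator G and the discriminator D:

```latex
\min_G \max_D \, V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```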


Satoshi Iizuka - Globally and Locally Consistent Image Completion

@machinelearnbot

We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess whether it is coherent as a whole, while the local discriminator looks only at a small area centered on the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool both context discriminator networks, which requires it to generate images that are indistinguishable from real ones both in overall consistency and in detail.
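
As a hedged sketch of the two-discriminator idea (the input resolutions, layer widths, and fusion layer below are illustrative assumptions, not the paper's exact architecture): the global branch sees the whole image, the local branch sees a crop around the completed region, and their features are fused into a single real-vs-completed decision.

```python
# Sketch of a global + local context discriminator pair (sizes assumed).
from tensorflow.keras import layers, models

def conv_branch(input_shape, name):
    # Small convolutional feature extractor for one branch.
    inp = layers.Input(shape=input_shape, name=name)
    x = inp
    for filters in (64, 128, 256):
        x = layers.Conv2D(filters, 5, strides=2, padding="same",
                          activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(512, activation="relu")(x)
    return inp, x

global_in, global_feat = conv_branch((256, 256, 3), "full_image")
local_in, local_feat = conv_branch((128, 128, 3), "patch_around_hole")

# Fuse global and local evidence into one real-vs-completed prediction.
fused = layers.Concatenate()([global_feat, local_feat])
real_or_completed = layers.Dense(1, activation="sigmoid")(fused)

context_discriminator = models.Model([global_in, local_in], real_or_completed)
```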


GAN playground: Experiment with Generative Adversarial Networks in your browser

@machinelearnbot

From Wikipedia, "Generative Adversarial Networks, or GANs, are a class of artifical intelligence algorithms used in unsupervised machine learning, implemented by a system of two neural networks contesting with each other in a zero-sum game framework. They were introduced by Ian Goodfellow et al. in 2014." In my own words, GANs are composed of two neural networks, each one trying to outcompete the other. The discriminator tries to figure out whether a given image is real or synthetically generated by the other neural network. The generator attempts to output images that are indistinguishable from real ones in an attempt to fool the discriminator.


Understanding Generative Adversarial Networks – Naoki Shibuya – Medium

#artificialintelligence

The above image is from one of Siraj Raval's YouTube videos on GANs. The video is good, but when I saw the above image for the first time, I was a bit confused about what a GAN really is. However, similar images are often used to explain GANs, as they show the overall structure of such networks. In this article, I explain what a GAN actually does using a simple project that generates hand-written digit images similar to the ones from the MNIST database. After reading this article, you should be able to understand the above picture very clearly.
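
A minimal training loop for such an MNIST digit project might look like the sketch below; it assumes the generator, discriminator, and combined gan models from the first Keras sketch in this section, and the batch size and step count are arbitrary choices:

```python
# Alternate between training the discriminator on real + generated digits
# and training the generator through the frozen discriminator.
import numpy as np
from tensorflow.keras.datasets import mnist

(x_train, _), _ = mnist.load_data()
x_train = (x_train.astype("float32") / 127.5) - 1.0   # scale to [-1, 1] for tanh
x_train = x_train.reshape(-1, 28, 28, 1)

latent_dim, batch_size = 100, 64
real_labels = np.ones((batch_size, 1))
fake_labels = np.zeros((batch_size, 1))

for step in range(10000):                              # step count is arbitrary
    # 1) Train the discriminator on a real batch and a generated batch.
    idx = np.random.randint(0, x_train.shape[0], batch_size)
    real_imgs = x_train[idx]
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake_imgs = generator.predict(noise, verbose=0)
    d_loss_real = discriminator.train_on_batch(real_imgs, real_labels)
    d_loss_fake = discriminator.train_on_batch(fake_imgs, fake_labels)

    # 2) Train the generator (through the frozen discriminator) so that
    #    its samples get labeled "real".
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    g_loss = gan.train_on_batch(noise, real_labels)

    if step % 1000 == 0:
        print(step, "D loss:", d_loss_real, d_loss_fake, "G loss:", g_loss)
```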


[R] Improving WGAN by Allowing Generator to see Discriminator's Hidden States • r/MachineLearning

@machinelearnbot

WGAN has really paved the way for a lot of GAN applications, both in images and text. However, one problem I primarily see with training a WGAN for text is that the generator fails to fully converge. That is, the Wasserstein distance remains large and, despite numerous steps, the generator will not converge any further. To aid the generator, one idea is to let the generator see the discriminator's pre-activations from hidden layers and revise its outputs accordingly. The idea here is that the generator gets a chance to propose a sequence, see how the discriminator will evaluate it, and revise its sequence, all in one differentiable calculation.
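
Purely as a speculative sketch of the "propose, peek at the critic, then revise" idea from the thread (this is not an established WGAN variant, and every layer size and name below is hypothetical):

```python
# Generator proposes a sequence, reads the critic's hidden pre-activations
# on that proposal, and revises the sequence -- all in one differentiable graph.
from tensorflow.keras import layers, models

latent_dim, seq_len, vocab_dim = 64, 20, 128   # toy sizes (assumed)

# Critic whose hidden pre-activations are exposed as a second output.
tokens_in = layers.Input(shape=(seq_len, vocab_dim))
hidden_pre = layers.Dense(256)(layers.Flatten()(tokens_in))   # pre-activation
hidden = layers.Activation("relu")(hidden_pre)
score = layers.Dense(1)(hidden)                               # Wasserstein critic score
critic = models.Model(tokens_in, [score, hidden_pre])

# Pass 1: propose a sequence from noise.
z = layers.Input(shape=(latent_dim,))
proposal = layers.Dense(seq_len * vocab_dim, activation="tanh")(z)
proposal = layers.Reshape((seq_len, vocab_dim))(proposal)

# Peek: run the proposal through the critic and grab its hidden pre-activations.
_, peek = critic(proposal)

# Pass 2: revise the proposal conditioned on what the critic "saw".
revise_in = layers.Concatenate()([layers.Flatten()(proposal), peek])
revised = layers.Dense(seq_len * vocab_dim, activation="tanh")(revise_in)
revised = layers.Reshape((seq_len, vocab_dim))(revised)

generator_with_revision = models.Model(z, revised)
```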