Generative Adversarial Network


You can do alternating training manually by literally following the algorithm: a Do loop whose body contains two calls to NetTrain. That works, but it suffers from overhead at each alternation (this could be overcome with clever caching, which we haven't done yet).

An approximation is to build a single network and optimize the discriminator and generator losses simultaneously, using a negative learning rate for the generator so that it ascends the same loss the discriminator descends. I have prototyped this, but only on a toy example. I encourage you to try it yourself; it didn't take us more than a few hours of playing around to make a simple GAN in which the data distribution is a Gaussian, the discriminator is an MLP, and the generator is a single EmbeddingLayer (just a fixed set of samples that can be moved around by gradient updates). A sketch of that setup is below.
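To make the negative-learning-rate trick concrete, here is a minimal sketch in the spirit of that toy example. It is a reconstruction under stated assumptions, not the original prototype: the layer sizes, the sample count n, the particular Gaussian, and the names gan and discriminator are all illustrative choices. The key line is the LearningRateMultipliers option, which flips the sign of the generator's updates.

(* Assumed toy setup: 1D Gaussian data, MLP discriminator, and an
   EmbeddingLayer generator whose rows are the movable fake samples. *)

n = 256;  (* number of trainable fake samples held by the generator *)

discriminator = NetChain[
  {LinearLayer[32], Ramp, LinearLayer[32], Ramp, LinearLayer[1], LogisticSigmoid},
  "Input" -> 1];

gan = NetGraph[
  <|
    "gen" -> EmbeddingLayer[1, n],           (* index i -> i-th fake sample *)
    "cat" -> CatenateLayer[],                (* stack fake and real samples *)
    "pair" -> ReshapeLayer[{2, 1}],
    "disc" -> NetMapOperator[discriminator], (* score both with shared weights *)
    "flat" -> ReshapeLayer[{2}],
    "loss" -> CrossEntropyLossLayer["Binary"]
  |>,
  {NetPort["Index"] -> "gen",
   {"gen", NetPort["Sample"]} -> "cat" -> "pair" -> "disc" -> "flat" ->
     NetPort["loss", "Input"],
   NetPort["Target"] -> NetPort["loss", "Target"]},
  "Index" -> "Integer", "Sample" -> 1, "Target" -> {2}];

(* Real data: a 1D Gaussian (mean and width are illustrative).
   Targets are fixed: 0 for the fake sample, 1 for the real one. *)
m = 10000;
data = <|
  "Index" -> RandomInteger[{1, n}, m],
  "Sample" -> RandomVariate[NormalDistribution[2, 0.5], {m, 1}],
  "Target" -> ConstantArray[{0., 1.}, m]
|>;

(* The negative multiplier makes the generator ascend the loss
   the discriminator descends: the minimax GAN objective. *)
trained = NetTrain[gan, data,
  LearningRateMultipliers -> {"gen" -> -1, _ -> 1},
  MaxTrainingRounds -> 20];

(* The learned fake samples are just the generator's weight rows. *)
fakes = Flatten@Normal@NetExtract[trained, {"gen", "Weights"}];
Histogram[fakes]

Mapping the shared discriminator over the stacked (fake, real) pair with NetMapOperator keeps a single copy of the discriminator weights, so one NetTrain call updates both players in opposite directions.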
