Automatic feature engineering using Generative Adversarial Networks


The purpose of deep learning is to learn a representation of high-dimensional, noisy data through a sequence of differentiable functions, i.e., geometric transformations, which can then be used for supervised learning among other tasks. Deep learning has had great success with discriminative models, while generative models have fared less well, owing to the limitations of explicit maximum likelihood estimation (MLE). Adversarial learning, as presented in the Generative Adversarial Network (GAN), aims to overcome these problems through implicit MLE.

Who Killed Albert Einstein? From Open Data to Murder Mystery Games

This paper presents a framework for generating adventure games from open data. Focusing on the murder mystery type of adventure games, the generator transforms open data from Wikipedia articles, OpenStreetMap and images from Wikimedia Commons into WikiMysteries. Every WikiMystery game revolves around the murder of a person with a Wikipedia article and populates the game with suspects, whom the player must arrest if guilty of the murder or absolve if innocent. Starting from only one person as the victim, an extensive generative pipeline finds suspects, their alibis, and the paths connecting them in the open data, and transforms that data into cities, buildings, non-player characters, locks and keys, and dialog options. The paper describes each generative step in detail, provides a specific playthrough of one WikiMystery in which Albert Einstein is murdered, and evaluates the outcomes of games generated for the 100 most influential people of the 20th century.
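The pipeline described above can be caricatured in a few lines. This is a toy sketch with entirely hypothetical names and logic (the paper's actual generator draws suspects, alibis and culprit choice from link structure in the open data, not from list order as done here):

```python
from dataclasses import dataclass, field

@dataclass
class Suspect:
    name: str
    alibi: str
    guilty: bool = False

@dataclass
class Mystery:
    victim: str
    suspects: list = field(default_factory=list)

def generate_mystery(victim, related_people):
    """Turn a victim and people linked to them (e.g. via Wikipedia links)
    into a playable mystery: exactly one guilty suspect, each with an alibi."""
    mystery = Mystery(victim=victim)
    for i, person in enumerate(related_people):
        mystery.suspects.append(
            Suspect(name=person,
                    alibi=f"claims to have been at a place associated with {person}",
                    guilty=(i == 0)))  # toy rule: first linked person is the culprit
    return mystery

mystery = generate_mystery("Albert Einstein",
                           ["Niels Bohr", "Marie Curie", "Kurt Goedel"])
print(len(mystery.suspects))  # 3
```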

Story Generation and Aviation Incident Representation

This working note discusses story generation, with a view to identifying the knowledge required to understand aviation incident narratives (which are structurally similar to stories), on the premise that to understand aviation incidents, one should at least be able to generate examples of them. We give a brief overview of aviation incidents and their relation to stories, and then describe two of our earlier attempts (using 'scripts' and 'story grammars') at incident generation, which did not prove promising. Following this, we describe a simple incident generator that did work (at a 'toy' level), using a 'world simulation' approach. This generator is based on Meehan's TALE-SPIN story generator (1977). We conclude with a critique of the approach.
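The 'world simulation' idea can be illustrated at the same 'toy' level the note describes. This sketch is not the authors' generator; it only shows the approach: simulate a flight step by step with simple state, and let the incident narrative fall out of the event log when something goes wrong:

```python
import random

def simulate_flight(seed=0):
    """Toy world-simulation incident generator: advance a crude flight
    state each step and record events; a diversion is the 'incident'."""
    rng = random.Random(seed)
    state = {"altitude": 0, "fuel": 100}
    events = ["Aircraft departs."]
    for step in range(5):
        state["altitude"] += 5000
        state["fuel"] -= rng.randint(10, 30)  # random per-leg fuel burn
        if state["fuel"] < 20:
            events.append(f"Low fuel at {state['altitude']} ft; crew diverts.")
            return events  # the incident narrative ends with the diversion
    events.append("Aircraft lands normally.")
    return events

print(simulate_flight(seed=0))
```

The point, as in TALE-SPIN, is that the narrative is a trace of a simulated world rather than a filled-in template, so generated incidents are causally coherent by construction.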

From GAN to WGAN


This post explains the mathematics behind the generative adversarial network (GAN) model and why GANs are hard to train. Wasserstein GAN aims to improve GAN training by adopting a smooth metric for measuring the distance between two probability distributions.
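The smooth metric in question is the Wasserstein-1 distance, which by the Kantorovich-Rubinstein duality the WGAN critic f approximates as E_real[f(x)] - E_fake[f(x)], with f constrained to be 1-Lipschitz (via weight clipping in the original paper). A minimal sketch of the resulting losses, given critic scores on a batch (illustrative only, not a full training loop):

```python
import numpy as np

def critic_loss(critic_real, critic_fake):
    """The critic maximizes the score gap between real and fake samples,
    so its minimized loss is the negative of that gap."""
    return -(np.mean(critic_real) - np.mean(critic_fake))

def generator_loss(critic_fake):
    """The generator tries to raise the critic's score on its samples."""
    return -np.mean(critic_fake)

# Example: critic scores on a batch of real vs. generated samples.
real_scores = np.array([0.9, 1.1, 1.0])
fake_scores = np.array([-0.5, 0.5, 0.0])
print(critic_loss(real_scores, fake_scores))  # -(1.0 - 0.0) = -1.0
```

Unlike the original GAN losses, these are unbounded and do not saturate, which is what makes the gap a usable training signal even when the two distributions barely overlap.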

Zero-sum games are turning AIs into powerful creative tools


In 2018, a new kind of AI will show off its ability to produce artworks that not only imitate old masters but can also take off in startling new creative directions. Generative adversarial networks (GANs) bring a new level of sophistication to graphics. Not only can they produce totally convincing artificial images on demand ("Donald Trump on a skateboard being chased by a polar bear"), they can also tweak existing images in subtle ways ("make it look like the Sun is shining"). A GAN involves two separate neural networks, a generator and a discriminator. The generator produces images and the discriminator rates them. For example, the generator might be fed a large database of images of dogs and attempt to produce its own imitation dog picture. The discriminator then tries to tell the difference between the fake dog and real ones, and feeds the result back to the generator. The generator rapidly gets better at producing dogs, and the discriminator gets better at spotting fakes.
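The two-player game described in that paragraph boils down to a pair of opposing objectives. A toy sketch using the standard binary cross-entropy formulation (assuming the discriminator outputs a probability that a sample is real; this is the generic GAN objective, not any particular art system):

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-12):
    """Binary cross-entropy: real samples are labeled 1, fakes labeled 0."""
    return -np.mean(np.log(d_real + eps)) - np.mean(np.log(1 - d_fake + eps))

def generator_loss(d_fake, eps=1e-12):
    """Non-saturating generator loss: push D's output on fakes toward 1."""
    return -np.mean(np.log(d_fake + eps))

# When the discriminator confidently spots the fakes (d_fake near 0),
# its own loss is small while the generator's loss is large.
print(discriminator_loss(np.array([0.9]), np.array([0.1])))  # ~0.21
print(generator_loss(np.array([0.1])))                       # ~2.30
```

Training alternates gradient steps on these two losses, which is exactly the feedback loop in which the generator "gets better at producing dogs" while the discriminator gets better at spotting fakes.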

Activation Maximization Generative Adversarial Nets

Class labels have been empirically shown to be useful in improving the sample quality of generative adversarial nets (GANs). In this paper, we mathematically study the properties of the current variants of GANs that make use of class label information. With class-aware gradients and cross-entropy decomposition, we reveal how class labels and associated losses influence GAN training. Based on that, we propose Activation Maximization Generative Adversarial Networks (AM-GAN) as an advanced solution. Comprehensive experiments have been conducted to validate our analysis and evaluate the effectiveness of our solution, where AM-GAN outperforms other strong baselines and achieves a state-of-the-art Inception Score (8.91) on CIFAR-10. In addition, we demonstrate that, with the Inception ImageNet classifier, the Inception Score mainly tracks the diversity of the generator, and there is, however, no reliable evidence that it reflects true sample quality. We thus propose a new metric, called AM Score, to provide a more accurate estimate of sample quality. Our proposed model also outperforms the baseline methods on the new metric.
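For context on the metric being critiqued, the Inception Score is computed as IS = exp(E_x[KL(p(y|x) || p(y))]), where p(y|x) are the classifier's softmax outputs on a generated sample and p(y) is their marginal over all samples. A minimal implementation of this standard definition (not AM-GAN-specific code):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """IS = exp(mean_x KL(p(y|x) || p(y))) over classifier softmax outputs."""
    probs = np.asarray(probs)          # shape (num_samples, num_classes)
    marginal = probs.mean(axis=0)      # p(y), the marginal class distribution
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Sharp and diverse predictions score high; a collapsed generator whose
# samples all get the same prediction scores ~1, regardless of quality.
diverse = np.eye(3)                        # each sample confidently a different class
collapsed = np.tile([1.0, 0.0, 0.0], (3, 1))
print(inception_score(diverse))            # ~3.0 (equals num_classes here)
print(inception_score(collapsed))          # ~1.0
```

The collapsed case makes the paper's point concrete: the score is driven by diversity of the predicted classes, not by whether individual samples actually look good.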

[P] Time series generation with recurrent conditional GANs. Looking for insights. • r/MachineLearning


I am trying to implement a GAN model that generates time series (sine waves in this case), conditioned on previous timesteps. I intend to evaluate whether a generator trained with an adversarial loss has any advantages over one trained with MSE, similar to Lotter et al., 2015.
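One common way to set up the conditioning described in the post (my reading of it, not the poster's code) is to slice the series into (condition, target) windows and feed the generator random noise concatenated with the conditioning window:

```python
import numpy as np

def make_windows(series, cond_len, target_len):
    """Slice a 1-D series into (condition, target) training pairs."""
    pairs = []
    for t in range(len(series) - cond_len - target_len + 1):
        pairs.append((series[t:t + cond_len],
                      series[t + cond_len:t + cond_len + target_len]))
    return pairs

def generator_input(condition, noise_dim, rng):
    """Concatenate noise with the conditioning window, as in a conditional GAN."""
    noise = rng.standard_normal(noise_dim)
    return np.concatenate([noise, condition])

wave = np.sin(np.linspace(0, 4 * np.pi, 100))
pairs = make_windows(wave, cond_len=20, target_len=5)
cond, target = pairs[0]
x = generator_input(cond, noise_dim=8, rng=np.random.default_rng(0))
print(x.shape)  # (28,): 8 noise dims + 20 conditioning timesteps
```

The MSE baseline would regress `target` from `cond` directly; the adversarial variant instead asks a discriminator to distinguish real (cond, target) pairs from (cond, generated) pairs, which is where any advantage over MSE would show up.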