Unsupervised or Indirectly Supervised Learning


A Dozen Times Artificial Intelligence Startled The World

@machinelearnbot

Generative Adversarial Networks (GANs) are some of the most fascinating ways to "teach" computers to do human tasks.


Today's Deep Dive: Innovative Unsupervised Learning in AI

#artificialintelligence

Categorically, artificial intelligence (AI) can appear to be an odd juxtaposition of order and disorder -- we direct the AI with algorithms, yet the system produces new insights as if by magic.


Decoder from seq2seq for Generative Adversarial Networks

#artificialintelligence

I am currently researching free-text generation using Generative Adversarial Networks. Before somebody tells me that GANs don't work well with discrete data (at least when they are trained with gradient descent): it's true, and I know :D -- still, I am getting some results and I would like to continue this line of study :D.


An intuitive introduction to Generative Adversarial Networks (GANs)

#artificialintelligence

Let's say there's a very cool party going on in your neighborhood that you really want to go to. But, there is a problem. To get into the party you need a special ticket -- that was long sold out.
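The party analogy above maps onto a very small amount of code: a generator tries to produce samples that pass inspection, while a discriminator learns to tell real from fake. As a hedged illustration (not any particular paper's method), here is a toy GAN in NumPy where a linear generator learns to match a 1-D Gaussian; all hyperparameters and names (`lr_d`, `lr_g`, `REAL_MEAN`, etc.) are made-up defaults for the sketch.

```python
import numpy as np

# Toy 1-D GAN sketch: linear generator G(z) = a*z + c, logistic
# discriminator D(x) = sigmoid(w*x + b). Gradients are written out by hand.
rng = np.random.default_rng(0)
REAL_MEAN, REAL_STD = 4.0, 1.0          # the "real" data distribution

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w, b = 0.1, 0.0                          # discriminator parameters
a, c = 1.0, 0.0                          # generator parameters
lr_d, lr_g, batch = 0.1, 0.05, 64

for step in range(2000):
    # Discriminator update: ascend log D(real) + log(1 - D(fake)).
    x_real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + c
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr_d * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    b += lr_d * np.mean((1 - d_real) - d_fake)

    # Generator update: ascend log D(fake) (the non-saturating loss).
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + c
    d_fake = sigmoid(w * x_fake + b)
    a += lr_g * np.mean((1 - d_fake) * w * z)
    c += lr_g * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 1000) + c
print(f"generated mean ~ {samples.mean():.2f} (target {REAL_MEAN})")
```

After training, the generated mean drifts toward the real mean of 4: the "fake tickets" become hard to distinguish from real ones. Real GANs replace these two linear maps with deep networks, but the alternating-update structure is the same.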


[R] [1712.02950] CycleGAN, a Master of Steganography • r/MachineLearning

#artificialintelligence

This signal is used to reconstruct the original input perfectly even when the generated output doesn't appear to contain sufficient detail (as in map-to-image / image-to-map translation). They also showed that you can make a CycleGAN produce a chosen output for any arbitrary input with an imperceptible modification of the input.


Which Machine Learning Algorithm Should I Use?

@machinelearnbot

This resource is designed primarily for beginner to intermediate data scientists or analysts who are interested in identifying and applying machine learning algorithms to the problems of interest to them. A typical question asked by a beginner, when facing a wide variety of machine learning algorithms, is "which algorithm should I use?" Even an experienced data scientist cannot tell which algorithm will perform best before trying several. We are not advocating a one-and-done approach, but we do hope to provide some guidance on which algorithms to try first depending on some clear factors. The machine learning algorithm cheat sheet helps you choose from a variety of machine learning algorithms to find the appropriate one for your specific problem.
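Guidance of this kind can be sketched as a small decision function. The branching rules below are illustrative defaults in the spirit of a cheat sheet, not the actual SAS flowchart; every algorithm list here is an assumption of this sketch.

```python
# Toy encoding of cheat-sheet-style guidance: map a few coarse facts about a
# problem to algorithm families worth trying first. Rules are illustrative only.

def algorithms_to_try(labeled, target=None, interpretable=False):
    if not labeled:
        # No labels -> unsupervised methods.
        return ["k-means clustering", "PCA", "hierarchical clustering"]
    if target == "numeric":
        return ["linear regression"] if interpretable else \
               ["gradient-boosted trees", "random forest", "linear regression"]
    if target == "categorical":
        return ["logistic regression", "decision tree"] if interpretable else \
               ["gradient-boosted trees", "random forest", "neural network"]
    raise ValueError("target must be 'numeric' or 'categorical' for labeled data")

print(algorithms_to_try(labeled=False))
print(algorithms_to_try(labeled=True, target="numeric", interpretable=True))
```

The point is not the specific recommendations but the shape of the decision: label availability and label type narrow the field before any model is trained.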


Machine Learning Algorithms: Which One to Choose for Your Problem

#artificialintelligence

Supervised learning is the task of inferring a function from labeled training data. By fitting to the labeled training set, we want to find the optimal model parameters to predict unknown labels on other objects (the test set). If the label is a real number, we call the task regression. If the label comes from a limited set of unordered values, it's classification. In unsupervised learning we have less information about the objects; in particular, the training set is unlabeled.
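The regression/classification split above can be made concrete with a toy NumPy example. The data, labels, and closed-form fits here are all invented for illustration: a real-valued label is fit by least squares, and a two-class label by a nearest-centroid rule.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))

# Regression: the label is a real number -> fit a line by least squares.
y_reg = 3.0 * X[:, 0] + 0.5 + rng.normal(0, 0.1, 100)
A = np.column_stack([X[:, 0], np.ones(100)])
slope, intercept = np.linalg.lstsq(A, y_reg, rcond=None)[0]

# Classification: the label is one of a few unordered values
# -> predict the class whose centroid is nearest.
y_cls = (X[:, 0] > 0).astype(int)        # two classes: 0 and 1
centroids = np.array([X[y_cls == k, 0].mean() for k in (0, 1)])

def predict(x):
    return int(abs(x - centroids[1]) < abs(x - centroids[0]))

print(f"regression fit: y ~ {slope:.2f}x + {intercept:.2f}")
print(f"classify x=0.8 -> class {predict(0.8)}")
```

Both models are "inferring a function from labeled training data"; only the label type (real-valued versus categorical) changes which task, and which loss, applies.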


Good Semi-supervised Learning That Requires a Bad GAN

Neural Information Processing Systems

Semi-supervised learning methods based on generative adversarial networks (GANs) have obtained strong empirical results, but it is not clear 1) how the discriminator benefits from joint training with a generator, and 2) why good semi-supervised classification performance and a good generator cannot be obtained at the same time. Theoretically, we show that given the discriminator objective, good semi-supervised learning indeed requires a bad generator, and propose the definition of a preferred generator. Empirically, we derive a novel formulation based on our analysis that substantially improves over feature matching GANs, obtaining state-of-the-art results on multiple benchmark datasets.


Bayesian GAN

Neural Information Processing Systems

Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and data which are hard to model with an explicit likelihood. We present a practical Bayesian formulation for unsupervised and semi-supervised learning with GANs. Within this framework, we use stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of the generator and discriminator networks. The resulting approach is straightforward and obtains good performance without any standard interventions such as feature matching or mini-batch discrimination. By exploring an expressive posterior over the parameters of the generator, the Bayesian GAN avoids mode-collapse, produces interpretable and diverse candidate samples, and provides state-of-the-art quantitative results for semi-supervised learning on benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN, Wasserstein GANs, and DCGAN ensembles.


Semi-Supervised Learning for Optical Flow with Generative Adversarial Networks

Neural Information Processing Systems

Convolutional neural networks (CNNs) have recently been applied to the optical flow estimation problem. As training the CNNs requires sufficiently large ground truth training data, existing approaches resort to synthetic, unrealistic datasets. On the other hand, unsupervised methods are capable of leveraging real-world videos for training where the ground truth flow fields are not available. These methods, however, rely on the fundamental assumptions of brightness constancy and spatial smoothness priors which do not hold near motion boundaries. In this paper, we propose to exploit unlabeled videos for semi-supervised learning of optical flow with a Generative Adversarial Network. Our key insight is that the adversarial loss can capture the structural patterns of flow warp errors without making explicit assumptions. Extensive experiments on benchmark datasets demonstrate that the proposed semi-supervised algorithm performs favorably against purely supervised and semi-supervised learning schemes.