Unsupervised or Indirectly Supervised Learning


Using unsupervised learning to improve prediction performance

#artificialintelligence

The TDA (topological data analysis) models have by far the richest functionality and are, unsurprisingly, what we use in our work. They include all the capabilities described above. TDA begins with a similarity measure on a data set X, and then constructs a graph for X which acts as a similarity map, or similarity model, for it. Each node in the graph corresponds to a sub-collection of X. Pairs of points which lie in the same node or in adjacent nodes are more similar to each other than pairs which lie in nodes far removed from each other in the graph structure. The graphical model can of course be visualized, but it also offers a great deal of other functionality.
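
To make the construction concrete, here is a toy sketch of a similarity graph in the same spirit: the data are grouped into small sub-collections, each sub-collection becomes a node, and nodes are linked when their members are close under the chosen similarity measure. This is only an illustrative analogue; the clustering step, the threshold, and all sizes are assumptions, not the Mapper-style construction the TDA software actually uses.

    # Toy similarity graph: cluster the data into small sub-collections,
    # make each cluster a node, and connect nodes whose members are close
    # under the chosen similarity measure. Illustrative only, not the TDA
    # (Mapper) construction itself.
    import numpy as np
    import networkx as nx
    from sklearn.cluster import KMeans
    from scipy.spatial.distance import cdist

    def similarity_graph(X, n_nodes=20, link_threshold=1.0):
        # Each node is a sub-collection of X (here: a k-means cluster).
        labels = KMeans(n_clusters=n_nodes, n_init=10).fit_predict(X)
        groups = [X[labels == k] for k in range(n_nodes)]

        g = nx.Graph()
        g.add_nodes_from(range(n_nodes))
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                # Link two nodes when their closest members are similar enough.
                if cdist(groups[i], groups[j]).min() < link_threshold:
                    g.add_edge(i, j)
        return g, groups

    # Points in the same node or in adjacent nodes are more similar to each
    # other than points whose nodes are far apart in the graph.
    X = np.random.rand(500, 4)
    graph, groups = similarity_graph(X)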


What is a Generative Adversarial Network?

#artificialintelligence

This article was written by Hunter Heidenreich. It looks into what a generative adversarial network is in order to understand how these models work. Before we even start talking about Generative Adversarial Networks (GANs), it is worth asking what goes into a generative model. Why do we even want such a thing? These questions can help seed our thought process so that we can better engage with GANs.


Style-based GANs – Generating and Tuning Realistic Artificial Faces

#artificialintelligence

Generative Adversarial Networks (GANs) are a relatively new concept in machine learning, introduced for the first time in 2014. Their goal is to synthesize artificial samples, such as images, that are indistinguishable from authentic ones. A common example of a GAN application is to generate artificial face images by learning from a dataset of celebrity faces. While GAN images have become more realistic over time, one of the main challenges is controlling their output, i.e. changing specific features such as pose, face shape and hair style in an image of a face. A new paper by NVIDIA, A Style-Based Generator Architecture for GANs (StyleGAN), presents a novel model which addresses this challenge.
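
As background for the rest of this roundup, here is a minimal PyTorch sketch of the adversarial game itself: a generator maps random noise to fake samples while a discriminator learns to tell them from real data, and the two are trained against each other. The network sizes, optimizers, and learning rates are placeholder assumptions; this is a toy setup, not StyleGAN.

    # Minimal GAN training step: the generator G maps noise to fake samples,
    # the discriminator D scores real vs. fake, and the two are trained
    # adversarially. Sizes and hyperparameters are placeholder assumptions.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 64, 784
    G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                      nn.Linear(256, data_dim), nn.Tanh())
    D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                      nn.Linear(256, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def train_step(real):                  # real: (batch, data_dim) in [-1, 1]
        batch = real.size(0)
        fake = G(torch.randn(batch, latent_dim))

        # Discriminator: push real scores toward 1 and fake scores toward 0.
        d_loss = bce(D(real), torch.ones(batch, 1)) + \
                 bce(D(fake.detach()), torch.zeros(batch, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: fool the discriminator into scoring fakes as real.
        g_loss = bce(D(fake), torch.ones(batch, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()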


These faces show how far AI image generation has advanced in just four years

#artificialintelligence

Developments in artificial intelligence move at a startling pace -- so much so that it's often difficult to keep track. But one area where progress is as plain as the nose on your AI-generated face is the use of neural networks to create fake images. In the image above you can see what four years of progress in AI image generation looks like. The crude black-and-white faces on the left are from 2014, published as part of a landmark paper that introduced the AI tool known as the generative adversarial network (GAN). The color faces on the right come from a paper published earlier this month, which uses the same basic method but is clearly a world apart in terms of image quality.


New machine learning algorithm breaks text CAPTCHAs easier than ever

ZDNet

Academics from the UK and China have developed a new machine learning algorithm that can break text-based CAPTCHA systems with less effort, faster, and with higher accuracy than all previous methods. This new algorithm, developed by scientists from Lancaster University (UK), Northwest University (China), and Peking University (China), is based on the concept of a GAN, which stands for "Generative Adversarial Network." GANs are a special class of artificial intelligence algorithms that are useful in scenarios where the algorithm doesn't have access to large quantities of training data. Classic machine learning algorithms usually require millions of data points to train a model to perform a task with the desired degree of accuracy. A GAN has the advantage that it can work with a much smaller batch of initial data points.


Bayesian CycleGAN via Marginalizing Latent Sampling

arXiv.org Machine Learning

Recent techniques built on Generative Adversarial Networks (GANs), like CycleGAN, are able to learn mappings between domains from unpaired datasets through min-max optimization games between generators and discriminators. However, it remains challenging to stabilize the training process and diversify the generated results. To address these problems, we present a Bayesian extension of the cyclic model and an integrated cyclic framework for inter-domain mappings. The proposed method, inspired by Bayesian GAN, explores the full posteriors of the Bayesian cyclic model (with latent sampling) and optimizes the model with maximum a posteriori (MAP) estimation. Hence, we name it {\tt Bayesian CycleGAN}. We evaluate the proposed Bayesian CycleGAN on multiple benchmark datasets, including Cityscapes, Maps, and Monet2photo. The quantitative and qualitative evaluations demonstrate that the proposed method achieves more stable training, superior performance, and more diversified image generation.
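
For context, the cycle-consistency term that CycleGAN-style models add to the usual adversarial losses can be sketched in a few lines; the generators G_xy and G_yx and the weight lambda_cyc below are illustrative assumptions, and this shows only the baseline objective, not the paper's Bayesian marginalization.

    # Cycle-consistency term used by CycleGAN-style models: translating
    # X -> Y -> X (and Y -> X -> Y) should recover the input. G_xy and G_yx
    # are assumed generator networks mapping between the two domains;
    # lambda_cyc is a weighting hyperparameter.
    import torch
    import torch.nn.functional as F

    def cycle_consistency_loss(G_xy, G_yx, x, y, lambda_cyc=10.0):
        x_rec = G_yx(G_xy(x))   # X -> Y -> X
        y_rec = G_xy(G_yx(y))   # Y -> X -> Y
        return lambda_cyc * (F.l1_loss(x_rec, x) + F.l1_loss(y_rec, y))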


A Style-Based Generator Architecture for Generative Adversarial Networks

#artificialintelligence

Authors: Tero Karras (NVIDIA), Samuli Laine (NVIDIA), Timo Aila (NVIDIA). Abstract: We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
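
The "style" injection borrowed from the style-transfer literature is typically implemented with adaptive instance normalization (AdaIN), in which feature maps are normalized per channel and then re-scaled and shifted by style-dependent parameters. The sketch below uses illustrative shapes and is not the full style-based generator.

    # Adaptive instance normalization (AdaIN): feature maps are normalized
    # per channel and then re-scaled/shifted by a learned, style-dependent
    # scale and bias. Shapes and the style dimensionality are illustrative
    # assumptions, not the paper's exact architecture.
    import torch
    import torch.nn as nn

    class AdaIN(nn.Module):
        def __init__(self, style_dim, num_channels):
            super().__init__()
            # Map the style vector to a per-channel scale and bias.
            self.affine = nn.Linear(style_dim, num_channels * 2)

        def forward(self, x, style):       # x: (B, C, H, W), style: (B, style_dim)
            scale, bias = self.affine(style).chunk(2, dim=1)
            scale = scale[:, :, None, None]
            bias = bias[:, :, None, None]
            # Normalize each channel of each sample, then apply the style.
            mean = x.mean(dim=(2, 3), keepdim=True)
            std = x.std(dim=(2, 3), keepdim=True) + 1e-8
            return scale * (x - mean) / std + bias

    # Example: modulate a 64-channel feature map with a 512-dim style vector.
    ada = AdaIN(style_dim=512, num_channels=64)
    out = ada(torch.randn(2, 64, 16, 16), torch.randn(2, 512))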


3D human pose estimation in video with temporal convolutions and semi-supervised training

#artificialintelligence

In this work, we demonstrate that 3D poses in video can be effectively estimated with a fully convolutional model based on dilated temporal convolutions over 2D keypoints. We also introduce back-projection, a simple and effective semi-supervised training method that leverages unlabeled video data. In the supervised setting, our fully convolutional model outperforms the previous best result from the literature by 6 mm mean per-joint position error on Human3.6M. Moreover, experiments with back-projection show that it comfortably outperforms previous state-of-the-art results in semi-supervised settings where labeled data is scarce. We build on the approach of state-of-the-art methods which formulate the problem as 2D keypoint detection followed by 3D pose estimation.
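
The core building block can be sketched as 1D convolutions with growing dilation applied over a sequence of per-frame 2D keypoints; the joint count, channel widths, and depth below are placeholder assumptions rather than the paper's exact architecture.

    # Dilated temporal convolutions over 2D keypoints: per-frame keypoints are
    # flattened into channels, and 1D convolutions with increasing dilation
    # aggregate an exponentially growing temporal context before regressing
    # 3D joint coordinates. All sizes are placeholder assumptions.
    import torch
    import torch.nn as nn

    num_joints = 17                   # e.g., COCO-style 2D keypoints
    in_channels = num_joints * 2      # (x, y) per joint per frame

    model = nn.Sequential(
        nn.Conv1d(in_channels, 256, kernel_size=3, dilation=1, padding=1),
        nn.ReLU(),
        nn.Conv1d(256, 256, kernel_size=3, dilation=3, padding=3),
        nn.ReLU(),
        nn.Conv1d(256, 256, kernel_size=3, dilation=9, padding=9),
        nn.ReLU(),
        nn.Conv1d(256, num_joints * 3, kernel_size=1),  # 3D coords per joint
    )

    frames = torch.randn(1, in_channels, 243)   # (batch, channels, time)
    pose_3d = model(frames)                     # (1, num_joints * 3, 243)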


Smoothed Analysis in Unsupervised Learning via Decoupling

arXiv.org Machine Learning

Smoothed analysis is a powerful paradigm in overcoming worst-case intractability in unsupervised learning and high-dimensional data analysis. While polynomial time smoothed analysis guarantees have been obtained for worst-case intractable problems like tensor decompositions and learning mixtures of Gaussians, such guarantees have been hard to obtain for several other important problems in unsupervised learning. A core technical challenge is obtaining lower bounds on the least singular value for random matrix ensembles with dependent entries that are given by low-degree polynomials of a few underlying base random variables. In this work, we address this challenge by obtaining high-confidence lower bounds on the least singular value of new classes of structured random matrix ensembles of the above kind. We then use these bounds to obtain polynomial time smoothed analysis guarantees for the following three important problems in unsupervised learning:
1. Robust subspace recovery, when the fraction $\alpha$ of inliers in the d-dimensional subspace $T \subset \mathbb{R}^n$ is at least $\alpha > (d/n)^\ell$ for any constant integer $\ell > 0$. This contrasts with the known worst-case intractability when $\alpha < d/n$, and the previous smoothed analysis result which needed $\alpha > d/n$ (Hardt and Moitra, 2013).
2. Higher order tensor decompositions, where we generalize the so-called FOOBI algorithm of Cardoso to find order-$\ell$ rank-one tensors in a subspace. This allows us to obtain polynomially robust decomposition algorithms for $2\ell$-th order tensors with rank $O(n^{\ell})$.
3. Learning overcomplete hidden Markov models, where the size of the state space is any polynomial in the dimension of the observations. This gives the first polynomial time guarantees for learning overcomplete HMMs in a smoothed analysis model.


Robust Semi-Supervised Learning when Labels are Missing at Random

arXiv.org Machine Learning

Semi-supervised learning methods are motivated by the relative paucity of labeled data and aim to utilize large sources of unlabeled data to improve predictive tasks. It has been noted, however, that such improvements are not guaranteed in general; in some cases the unlabeled data can impair the performance. A fundamental source of error comes from restrictive assumptions about the unlabeled features. In this paper, we develop a semi-supervised learning approach that relaxes such assumptions and is robust with respect to labels missing at random. The approach ensures that uncertainty about the classes is propagated to the unlabeled features in a robust manner. It is applicable using any generative model with an associated learning algorithm. We illustrate the approach using both standard synthetic data examples and the MNIST data with unlabeled adversarial examples.
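
As a point of reference, and not the authors' robust method, a standard way to use a generative model for semi-supervised learning is to fit a Gaussian mixture on labeled and unlabeled features together and then label each component by the majority class of the labeled points it claims; everything in the sketch below, including the component count, is an illustrative assumption.

    # Standard illustration of semi-supervised learning with a generative
    # model: fit a Gaussian mixture on labeled + unlabeled features, then
    # label each mixture component by the majority class among the labeled
    # points assigned to it. Labels are assumed to be non-negative ints.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def semi_supervised_gmm(X_lab, y_lab, X_unlab, n_components=10):
        X_all = np.vstack([X_lab, X_unlab])
        gmm = GaussianMixture(n_components=n_components, random_state=0).fit(X_all)

        # Assign each component the majority label among its labeled points.
        comp_of_labeled = gmm.predict(X_lab)
        comp_to_class = {}
        for c in range(n_components):
            labels_in_c = y_lab[comp_of_labeled == c]
            if len(labels_in_c) > 0:
                comp_to_class[c] = np.bincount(labels_in_c).argmax()

        def predict(X_new):
            comps = gmm.predict(X_new)
            # Fall back to the overall most common label for components that
            # received no labeled points.
            default = np.bincount(y_lab).argmax()
            return np.array([comp_to_class.get(c, default) for c in comps])

        return predict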