autoencoder


Build the right Autoencoder -- Tune and Optimize using PCA principles. Part I

#artificialintelligence

This article assumes the reader has a basic understanding of PCA; if unfamiliar, please refer to Understanding PCA [2]. For simplicity, we compare a linear single-layer Autoencoder with PCA. The article continues in Part II with detailed steps for optimizing an Autoencoder; there, we find that the optimizations improve the Autoencoder's reconstruction error by more than 50%.
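The comparison the article sets up can be sketched numerically: a single-layer linear autoencoder is a rank-constrained linear map, and PCA gives the optimal linear reconstruction at the same rank, so the autoencoder's error can at best match PCA's. A minimal sketch (not the article's code, synthetic data):

```python
# Compare a linear single-layer autoencoder, trained by gradient descent,
# against PCA reconstruction on the same data.
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D data with most variance along one axis.
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
X -= X.mean(axis=0)

# PCA reconstruction with 1 component (the optimal rank-1 linear map).
U, S, Vt = np.linalg.svd(X, full_matrices=False)
X_pca = (X @ Vt[:1].T) @ Vt[:1]
pca_err = np.mean((X - X_pca) ** 2)

# Linear autoencoder: encode to 1-D with W, decode with V, no nonlinearity.
W = rng.normal(scale=0.1, size=(2, 1))   # encoder weights
V = rng.normal(scale=0.1, size=(1, 2))   # decoder weights
lr = 1e-3
for _ in range(2000):
    Z = X @ W            # latent codes
    R = Z @ V - X        # reconstruction residual
    grad_V = Z.T @ R / len(X)
    grad_W = X.T @ (R @ V.T) / len(X)
    V -= lr * grad_V
    W -= lr * grad_W
ae_err = np.mean((X @ W @ V - X) ** 2)

# PCA is the optimum over all rank-1 linear reconstructions, so
# pca_err <= ae_err always holds; a well-trained AE approaches it.
print(pca_err, ae_err)
```

This is the sense in which PCA principles can "tune" the autoencoder: the PCA error is a floor that tells you how close to optimal the trained linear autoencoder is.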


AI's Latest Job? Designing Cool T-Shirts

#artificialintelligence

The T-shirts sold by Cross & Freckle, a New York–based fashion upstart, don't look revolutionary at first glance. They come in black or white, they're cut for a unisex fit, and they sell for $25. Each has a little design embroidered into the cotton that references staples of New York City life: pigeons, dollar pizza slices, subway rats. But the designs weren't drawn by a human artist; they came from a neural network, which crunched doodle data from millions of people and spit out the original art that makes up the embroidery. Cross & Freckle isn't the first company to use AI to generate art--people have been doing that for years.


Autoencoders: Deep Learning with TensorFlow's Eager Execution

#artificialintelligence

Deep Learning has revolutionized the Machine Learning scene in recent years. Can we apply it to image compression? How well can a Deep Learning algorithm reconstruct pictures of kittens? Today we'll find the answers to all of those questions. I've talked about Unsupervised Learning before: applying Machine Learning to discover patterns in unlabelled data.


Data Cleansing for Models Trained with SGD

arXiv.org Machine Learning

Data cleansing is a typical approach for improving the accuracy of machine learning models, but it requires extensive domain knowledge to identify the influential instances that affect a model. In this paper, we propose an algorithm that can suggest influential instances without using any domain knowledge. With the proposed method, users only need to inspect the instances suggested by the algorithm; no special expertise is required, so even non-experts can conduct data cleansing and improve their models. Existing methods require the loss function to be convex and an optimal model to be obtained, which is not always the case in modern machine learning. To overcome these limitations, we propose a novel approach specifically designed for models trained with stochastic gradient descent (SGD). The proposed method infers the influential instances by retracing the steps of SGD while incorporating the intermediate models computed at each step. Through experiments, we demonstrate that the proposed method can accurately infer the influential instances. Moreover, using MNIST and CIFAR10, we show that models can be effectively improved by removing the influential instances suggested by the proposed method.
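The idea of measuring influence via training history can be illustrated naively: replay training with one instance left out and see how much the validation loss improves. This toy sketch is not the paper's estimator (which avoids retraining by retracing recorded SGD steps), but it shows the quantity being estimated; the data and model here are hypothetical.

```python
# Naive leave-one-out influence: retrain without each instance and
# measure the drop in validation loss. A corrupted label should be
# flagged as the most influential instance.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 100.0])            # last label is corrupted
x_val, y_val = np.array([5.0]), np.array([10.0])  # clean validation point

def train(mask, steps=200, lr=0.01):
    """Fit y = w*x by gradient descent on the instances where mask is True."""
    w = 0.0
    xs, ys = x[mask], y[mask]
    for _ in range(steps):
        grad = np.mean(2 * (w * xs - ys) * xs)
        w -= lr * grad
    return w

def val_loss(w):
    return float(np.mean((w * x_val - y_val) ** 2))

base = val_loss(train(np.ones(4, dtype=bool)))
influence = []
for i in range(4):
    mask = np.ones(4, dtype=bool)
    mask[i] = False
    # Positive influence = validation loss drops when instance i is removed.
    influence.append(base - val_loss(train(mask)))

print(influence)   # index 3 (the corrupted point) dominates
```

The paper's contribution is obtaining such scores for SGD-trained models without the K retrainings this brute-force version needs.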


LIA: Latently Invertible Autoencoder with Adversarial Learning

arXiv.org Machine Learning

Deep generative models play an increasingly important role in machine learning and computer vision. However, two fundamental issues hinder real-world applications of these techniques: the learning difficulty of variational inference in the Variational AutoEncoder (VAE) and the absence of a mechanism for encoding samples in the Generative Adversarial Network (GAN). In this paper, we address both issues in one framework by proposing a novel algorithm named Latently Invertible Autoencoder (LIA). A deep invertible network and its inverse mapping are symmetrically embedded in the latent space of a VAE. The partial encoder first transforms inputs into feature vectors, and the invertible network then reshapes the distribution of these feature vectors to approach a prior. The decoder proceeds in the reverse order of the complete encoder's composite mappings. A two-stage, stochasticity-free training scheme is devised to train LIA via adversarial learning: we first train a standard GAN whose generator is the decoder of LIA, and then train an autoencoder adversarially by detaching the invertible network from LIA. Experiments conducted on the FFHQ dataset validate the effectiveness of LIA for inference and generation tasks.
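The key ingredient is an invertible network whose inverse is exact, so latent codes can be mapped to the prior space and back without loss. A minimal sketch of one such building block, an additive coupling layer in the NICE style (this illustrates invertibility only, not the LIA architecture; the inner network is an arbitrary untrained MLP):

```python
# Additive coupling layer: split z in half, shift one half by a function
# of the other. The inverse subtracts the same shift, so the map is
# exactly invertible even though m() itself is not.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 2)), rng.normal(size=2)

def m(z1):
    """Arbitrary nonlinear map; it need not be invertible itself."""
    return np.tanh(z1 @ W1 + b1) @ W2 + b2

def forward(z):
    z1, z2 = z[:, :2], z[:, 2:]
    return np.concatenate([z1, z2 + m(z1)], axis=1)

def inverse(y):
    y1, y2 = y[:, :2], y[:, 2:]
    return np.concatenate([y1, y2 - m(y1)], axis=1)

z = rng.normal(size=(5, 4))
z_rec = inverse(forward(z))
print(np.max(np.abs(z - z_rec)))   # recoverable up to float round-off
```

Stacking such layers (with the split alternated) gives a deep invertible network of the kind LIA embeds in its latent space.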


Introduction to Adversarial Autoencoders

#artificialintelligence

Generative Adversarial Networks (GANs) shook up the deep learning world. When they first appeared in 2014, they offered a fresh approach to generative modeling and opened the door to new neural network architectures. Since the standard GAN architecture is composed of two neural networks, we can swap in different designs for each of them and create new and shiny architectures. The idea is to build an appropriate model for your problem and generate data that can be used in a real-world business scenario. So far, we have seen how to implement the standard GAN and the Deep Convolutional GAN (combining CNN concepts with GAN concepts), but the zoo of GAN architectures grows on a daily basis.
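The two-network composition described above can be sketched structurally: a generator maps noise to samples, a discriminator maps samples to a real/fake probability, and swapping either component (say, for a CNN) yields a new member of the GAN zoo. The toy networks below are untrained and purely illustrative:

```python
# Bare-bones GAN skeleton: generator G (noise -> sample) and
# discriminator D (sample -> P(real)), plus the adversarial objectives.
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random (untrained) MLP parameters."""
    return [(rng.normal(scale=0.1, size=(a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x, out_sigmoid=False):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return 1 / (1 + np.exp(-x)) if out_sigmoid else x

G = mlp([8, 16, 2])   # noise (8-d) -> sample (2-d); swap for a CNN, etc.
D = mlp([2, 16, 1])   # sample (2-d) -> P(real)

z = rng.normal(size=(32, 8))
fake = forward(G, z)                         # generated batch
p_real = forward(D, fake, out_sigmoid=True)  # discriminator's verdict

# Opposing objectives: D wants p_real -> 0 on fakes, G wants p_real -> 1.
d_loss = -np.mean(np.log(1 - p_real))
g_loss = -np.mean(np.log(p_real))
print(fake.shape, p_real.shape)
```

Everything that varies across the GAN zoo lives in the two `mlp([...])` lines and the loss definitions; the adversarial structure stays the same.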


Towards robust audio spoofing detection: a detailed comparison of traditional and learned features

arXiv.org Machine Learning

Automatic speaker verification, like every other biometric system, is vulnerable to spoofing attacks. Using only a few minutes of recorded voice from a genuine client of a speaker verification system, attackers can develop a variety of spoofing attacks that might trick such systems. Detecting these attacks using the audio cues present in the recordings is an important challenge. Most existing spoofing detection systems depend on knowing which spoofing technique was used. With this research, we aim to overcome this limitation by examining robust audio features, both traditional and learned through an autoencoder, that generalize over different types of replay spoofing. Furthermore, we provide a detailed account of all the steps necessary to set up state-of-the-art audio feature extraction, pre-processing, and post-processing, so that the (non-audio-expert) machine learning researcher can implement such systems. Finally, we evaluate the performance of our robust replay spoofing detection system with a wide variety of combinations of both extracted and machine-learned audio features on the 'out in the wild' ASVspoof 2017 dataset, which contains a variety of new spoofing configurations. Since our focus is on examining which features ensure robustness, we base our system on a traditional Gaussian Mixture Model-Universal Background Model. We then systematically investigate the relative contribution of each feature set. The fused models, based on the known audio features and the machine-learned features respectively, have comparable performance, with an Equal Error Rate (EER) of 12. The final best-performing model, which obtains an EER of 10.8, is a hybrid that contains both known and machine-learned features, revealing the importance of incorporating both types of features when developing a robust spoofing prediction model.
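The scoring backbone used here can be sketched in miniature: fit a generative model per class on feature vectors and classify by log-likelihood ratio. For brevity this sketch uses a single Gaussian per class as a stand-in for the paper's GMM-UBM, and synthetic 2-D "features" in place of real audio features:

```python
# Log-likelihood-ratio classification with one Gaussian per class
# (a single-component stand-in for GMM-UBM scoring).
import numpy as np

rng = np.random.default_rng(0)
genuine = rng.normal(loc=[0, 0], scale=1.0, size=(300, 2))
spoof = rng.normal(loc=[3, 3], scale=1.0, size=(300, 2))

def fit(X):
    mu = X.mean(axis=0)
    cov = np.cov(X.T) + 1e-6 * np.eye(2)   # ridge for numerical safety
    return mu, np.linalg.inv(cov), np.log(np.linalg.det(cov))

def loglik(X, model):
    mu, prec, logdet = model
    d = X - mu
    quad = np.einsum('ij,jk,ik->i', d, prec, d)
    return -0.5 * (quad + logdet + 2 * np.log(2 * np.pi))

g_model, s_model = fit(genuine), fit(spoof)

test_g = rng.normal(loc=[0, 0], size=(100, 2))
test_s = rng.normal(loc=[3, 3], size=(100, 2))
X_test = np.vstack([test_g, test_s])
y_test = np.array([0] * 100 + [1] * 100)   # 1 = spoof

llr = loglik(X_test, s_model) - loglik(X_test, g_model)  # > 0 -> spoof
acc = np.mean((llr > 0).astype(int) == y_test)
print(acc)
```

In the paper's setting the same LLR scoring is applied to frame-level audio features under mixture models; the robustness question is which features feed this scorer.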


A Novel Deep Neural Network Based Approach for Sparse Code Multiple Access

arXiv.org Machine Learning

Sparse code multiple access (SCMA) is a non-orthogonal multiple access (NOMA) scheme aimed at supporting the high spectral efficiency and ubiquitous access requirements of 5G wireless communication networks. Conventional SCMA approaches face considerable challenges in designing low-complexity, high-accuracy decoding algorithms and in constructing optimal codebooks. Fortunately, recent deep learning technologies hold significant potential for solving many communication engineering problems. Inspired by this, we explore ways to improve SCMA performance with deep learning methods. We propose and train a deep neural network (DNN) called DL-SCMA that learns to decode SCMA-modulated signals corrupted by additive white Gaussian noise (AWGN). Putting encoding and decoding together, an autoencoder called AE-SCMA is established and trained to generate optimal SCMA codewords and reconstruct the original bits. Furthermore, by manipulating the mapping vectors, the autoencoder is able to generalize SCMA, yielding a proposed dense code multiple access (DCMA) scheme. Simulations show that the DNN SCMA decoder significantly outperforms the conventional message passing algorithm (MPA) in terms of bit error rate (BER), symbol error rate (SER), and computational complexity, and that AE-SCMA also performs better by constructing better SCMA codebooks. The performance of deep-learning-aided DCMA is superior to that of SCMA.
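The encode-channel-decode pipeline that AE-SCMA learns end-to-end can be illustrated with fixed components: a hand-crafted codebook in place of the trained encoder, an AWGN channel, and nearest-codeword decoding in place of the trained decoder. This is an illustration of the pipeline, not of SCMA codebooks themselves:

```python
# Message -> codeword -> AWGN channel -> minimum-distance decoding,
# with a fixed orthogonal codebook standing in for a learned encoder.
import numpy as np

rng = np.random.default_rng(0)
codebook = np.eye(4)                          # message index -> 4-d codeword

msgs = rng.integers(0, 4, size=2000)          # random 2-bit messages
tx = codebook[msgs]                           # "encoder"
rx = tx + rng.normal(scale=0.3, size=tx.shape)  # AWGN channel

# "Decoder": pick the nearest codeword in Euclidean distance.
dists = ((rx[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
decoded = dists.argmin(axis=1)
ser = np.mean(decoded != msgs)                # symbol error rate
print(ser)
```

In the autoencoder formulation both the codebook and the decision rule are replaced by trainable networks and optimized jointly against the channel noise, which is how AE-SCMA arrives at better codebooks than hand-designed ones.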


Deep Learning-Based Quantization of L-Values for Gray-Coded Modulation

arXiv.org Machine Learning

In this work, a deep learning-based quantization scheme for log-likelihood ratio (L-value) storage is introduced. We analyze the dependency between the average magnitudes of different L-values from the same quadrature amplitude modulation (QAM) symbol and show that they follow a consistent ordering. Based on this, we design a deep autoencoder that jointly compresses and separately reconstructs each L-value, allowing the use of a weighted loss function that aims to reconstruct low-magnitude inputs more accurately. Our method is competitive with state-of-the-art maximum mutual information quantization schemes, reducing the required memory footprint by up to a factor of two, with a performance loss smaller than 0.1 dB at less than two effective bits per L-value, or smaller than 0.04 dB at 2.25 effective bits. We experimentally show that our proposed method is a universal compression scheme: after training on an LDPC-coded Rayleigh fading scenario, we can reuse the same network, without further training, on other channel models and codes while preserving the same performance benefits.
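The effect of a weighted loss can be shown with a much simpler compressor than the paper's autoencoder: tune one clipping level for a 2-bit uniform quantizer under a plain loss versus a loss weighted toward low magnitudes. The weight 1/(|L|+1) and the Laplacian data are illustrative choices, not the paper's:

```python
# Weighting the loss toward low-magnitude L-values pulls the tuned
# quantizer toward finer resolution near zero, improving reconstruction
# of exactly those (uncertain) inputs.
import numpy as np

rng = np.random.default_rng(0)
L = rng.laplace(scale=4.0, size=20000)        # synthetic L-values

def quantize(x, clip, bits=2):
    levels = 2 ** bits
    step = 2 * clip / levels
    q = np.clip(np.round((x + clip) / step - 0.5), 0, levels - 1)
    return (q + 0.5) * step - clip

def tuned_clip(weights):
    """Grid-search the clipping level minimizing a weighted MSE."""
    clips = np.linspace(0.5, 20.0, 100)
    errs = [np.mean(weights * (quantize(L, c) - L) ** 2) for c in clips]
    return clips[int(np.argmin(errs))]

c_plain = tuned_clip(np.ones_like(L))             # ordinary MSE
c_weight = tuned_clip(1.0 / (np.abs(L) + 1.0))    # favor small |L|

small = np.abs(L) < 2.0                           # low-magnitude subset
err_plain = np.mean((quantize(L, c_plain)[small] - L[small]) ** 2)
err_weight = np.mean((quantize(L, c_weight)[small] - L[small]) ** 2)
print(c_plain, c_weight, err_plain, err_weight)
```

The weighted objective accepts larger errors on high-magnitude L-values (which the decoder treats as confident anyway) in exchange for fidelity where it matters.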


Learning data representation using modified autoencoder for the integrative analysis of multi-omics data

arXiv.org Machine Learning

In integrative analyses of omics data, it is often of interest to extract from one data type an embedding that best reflects its relations with another data type. This task is traditionally addressed by linear methods such as canonical correlation analysis and partial least squares. However, the information one data type carries about the other may not be linear in form. Deep learning provides a convenient alternative for extracting nonlinear information. Here we develop Autoencoder-based Integrative Multi-omics data Embedding (AIME), a method to extract such information. Using a real gene expression-methylation dataset, we show that AIME extracts meaningful information that the linear approach could not find. The R implementation is available at http://web1.sph.emory.edu/users/tyu8/AIME/.
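The motivation can be seen in a two-line numeric example: plain correlation, the building block of CCA and PLS, can score a strong nonlinear relation between two variables as near zero, while a nonlinear transform recovers it. The variables below are synthetic stand-ins for the two omics data types:

```python
# A quadratic relation is invisible to linear correlation but obvious
# after a nonlinear feature transform.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=5000)                  # "data type 1"
y = x ** 2 + rng.normal(scale=0.05, size=5000)     # "data type 2"

lin = np.corrcoef(x, y)[0, 1]          # linear association: near zero
nonlin = np.corrcoef(x ** 2, y)[0, 1]  # after a nonlinear transform: near one
print(lin, nonlin)
```

An autoencoder-based embedding, as in AIME, learns such transforms from data instead of requiring them to be specified in advance.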