Gradient Trader Part 1: The Surprising Usefulness of Autoencoders

@machinelearnbot

This post is about a simple tool in the deep learning toolbox: the autoencoder, which can be applied to multi-dimensional financial time series. Autoencoding is the practice of copying input to output, i.e. learning the identity function. The network has an internal state called the latent space, which is used to represent the input; usually this dimension is chosen to be smaller than the input (an architecture called undercomplete).
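As a concrete sketch, an undercomplete autoencoder might look like the following in Keras. The 64-dimensional input (e.g. a flattened window of a multi-dimensional time series) and the 8-dimensional latent space are illustrative assumptions, not values from the post:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 64   # assumed: a flattened window of a multi-dimensional time series
latent_dim = 8   # latent space deliberately smaller than the input (undercomplete)

inputs = keras.Input(shape=(input_dim,))
encoded = layers.Dense(latent_dim, activation="relu")(inputs)    # encoder
decoded = layers.Dense(input_dim, activation="linear")(encoded)  # decoder

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Train the network to copy its input to its output (the identity function).
x = np.random.randn(1000, input_dim).astype("float32")  # placeholder data
autoencoder.fit(x, x, epochs=10, batch_size=32)
```

Because the latent layer is narrower than the input, the network cannot memorize; it must learn a compressed representation that preserves the structure of the data.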


Applied Deep Learning - Part 3: Autoencoders – Towards Data Science

#artificialintelligence

Welcome to Part 3 of the Applied Deep Learning series. Part 1 was a hands-on introduction to artificial neural networks, covering both the theory and applications with a lot of code examples and visualizations. In Part 2 we applied deep learning to real-world datasets, covering the three most commonly encountered problems as case studies: binary classification, multiclass classification, and regression. Now we will start diving into specific deep learning architectures, starting with the simplest: autoencoders. The code for this article is available here as a Jupyter notebook; feel free to download it and try it out yourself.


Glossary of Deep Learning: Autoencoder – Deeper Learning – Medium

#artificialintelligence

An autoencoder is a neural network capable of unsupervised feature learning. Neural networks are typically used for supervised learning problems, trying to predict a target vector y from input vectors x. An autoencoder network, however, tries to predict x from x, without the need for labels. The challenge here is recreating the essence of the original input from compressed, noisy, or corrupted data. The idea behind the autoencoder is to build a network with a narrow hidden layer between the encoder and decoder that serves as a compressed representation of the input data.
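The "noisy or corrupted data" case corresponds to a denoising autoencoder: corrupt x, then train the network to recover the clean x. A minimal sketch, assuming flattened 28x28 images, a 32-unit bottleneck, and Gaussian corruption at noise level 0.2 (all illustrative choices):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim, bottleneck = 784, 32  # assumed: flattened 28x28 images

inputs = keras.Input(shape=(input_dim,))
h = layers.Dense(bottleneck, activation="relu")(inputs)     # narrow hidden layer
outputs = layers.Dense(input_dim, activation="sigmoid")(h)  # reconstruction in [0, 1]

denoiser = keras.Model(inputs, outputs)
denoiser.compile(optimizer="adam", loss="binary_crossentropy")

x_clean = np.random.rand(1000, input_dim).astype("float32")  # placeholder data
x_noisy = np.clip(x_clean + 0.2 * np.random.randn(*x_clean.shape), 0.0, 1.0)

# The target is the clean input, not the corrupted one: predicting x from x.
denoiser.fit(x_noisy.astype("float32"), x_clean, epochs=10, batch_size=32)
```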


A Deep Learning Tutorial: From Perceptrons to Deep Networks

#artificialintelligence

We have some algorithm that's given a handful of labeled examples, say 10 images of dogs with the label 1 ("Dog") and 10 images of other things with the label 0 ("Not dog") -- note that we're mainly sticking to supervised, binary classification for this post. The algorithm "learns" to identify images of dogs and, when fed a new image, hopes to produce the correct label (1 if it's an image of a dog, and 0 otherwise). This setting is incredibly general: your data could be symptoms and your labels illnesses; or your data could be images of handwritten characters and your labels the actual characters they represent.
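For contrast with the unsupervised autoencoders above, here is a minimal supervised binary classifier in the same style: inputs x, labels y in {0, 1}. The 128-dimensional feature vectors and the random placeholder data are assumptions for illustration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 128
x = np.random.randn(20, n_features).astype("float32")  # 20 labeled examples
y = np.array([1] * 10 + [0] * 10, dtype="float32")     # 10 "Dog", 10 "Not dog"

clf = keras.Sequential([
    keras.Input(shape=(n_features,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # outputs P(dog)
])
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
clf.fit(x, y, epochs=20, batch_size=4)

# At inference time, threshold the predicted probability at 0.5.
new_image_features = np.random.randn(1, n_features).astype("float32")
label = int(clf.predict(new_image_features)[0, 0] > 0.5)  # 1 = dog, 0 = not dog
```

The autoencoder replaces the label y with the input x itself, which is what makes it unsupervised.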


Auto-Encoder: What Is It? And What Is It Used For? (Part 1)

#artificialintelligence

An autoencoder is an unsupervised artificial neural network that learns how to efficiently compress and encode data, then learns how to reconstruct the data from the reduced encoded representation back to a representation that is as close to the original input as possible. By design, an autoencoder reduces data dimensionality by learning to ignore noise in the data. A typical illustration pairs an input image from the MNIST dataset with the autoencoder's reconstruction; the encoded representation in between is the lowest-dimensional description of the input data. Training then uses backpropagation to minimize the network's reconstruction loss.
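A training sketch on MNIST, where backpropagation minimizes the mean squared reconstruction error between the input and its reconstruction. The 32-unit encoding and the layer sizes are assumptions, not values from the article:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Load MNIST and flatten each 28x28 image to a 784-dimensional vector in [0, 1].
(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

inputs = keras.Input(shape=(784,))
code = layers.Dense(32, activation="relu")(inputs)     # reduced encoded representation
recon = layers.Dense(784, activation="sigmoid")(code)  # reconstruction of the input

autoencoder = keras.Model(inputs, recon)
autoencoder.compile(optimizer="adam", loss="mse")  # reconstruction loss

# Backpropagation minimizes ||x - decoder(encoder(x))||^2 over the training set.
autoencoder.fit(x_train, x_train, epochs=5, batch_size=128)
```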