Deep Learning: Recurrent Neural Networks in Python

#artificialintelligence

Like the course I just released on Hidden Markov Models, Recurrent Neural Networks are all about learning sequences. But whereas Markov models are limited by the Markov assumption, recurrent neural networks are not; as a result they are more expressive and more powerful, and they have driven progress on tasks that had stalled for decades. So what's going to be in this course, and how will it build on the previous neural network courses and Hidden Markov Models? In the first section of the course we are going to add the concept of time to our neural networks. I'll introduce you to the Simple Recurrent Unit, also known as the Elman unit. We are going to revisit the XOR problem, but we're going to extend it so that it becomes the parity problem. You'll see that regular feedforward neural networks have trouble solving this problem, but recurrent networks will work, because the key is to treat the input as a sequence.
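To make the "concept of time" concrete, here is a minimal sketch of an Elman unit's forward pass over a bit sequence. The weight names (Wx, Wh, Wo) and sizes are illustrative assumptions, not the course's actual code; the point is that the hidden state is carried from one time step to the next, which is what lets a recurrent network accumulate parity.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 1, 4  # input and hidden sizes (illustrative)

Wx = rng.normal(size=(D, H))   # input-to-hidden weights
Wh = rng.normal(size=(H, H))   # hidden-to-hidden (recurrent) weights
bh = np.zeros(H)
Wo = rng.normal(size=(H, 1))   # hidden-to-output weights
bo = np.zeros(1)

def forward(bits):
    """Run the Elman unit over a sequence of bits, one step at a time."""
    h = np.zeros(H)  # hidden state: the network's memory across time steps
    for b in bits:
        x = np.array([float(b)])
        # New state depends on the current input AND the previous state.
        h = np.tanh(x @ Wx + h @ Wh + bh)
    logit = h @ Wo + bo
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid: probability of odd parity

print(forward([1, 0, 1, 1]))  # untrained output; training would fit Wx, Wh, Wo
```

A feedforward network sees all bits at once and must memorize every input pattern, while the recurrence only has to learn one state-update rule that it reuses at every step.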



Is deep learning a Markov chain in disguise?

@machinelearnbot

Andrej Karpathy's post "The Unreasonable Effectiveness of Recurrent Neural Networks" made a splash last year. The basic premise is that you can train a recurrent neural network to learn language features character-by-character. But is the resulting model any different from a Markov chain built for the same purpose? I implemented a character-by-character Markov chain in R to find out. First, let's play a variation of the Imitation Game with generated text from Karpathy's tinyshakespeare dataset.
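The author's experiment was in R, but the idea fits in a few lines of any language: count, for each character, which characters follow it, then sample from those counts. A minimal Python sketch in the same spirit (the corpus and names here are illustrative, not the author's code):

```python
from collections import defaultdict, Counter
import random

corpus = "to be or not to be that is the question"

# For each character, count which characters follow it in the corpus.
transitions = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    transitions[a][b] += 1

def generate(seed, n, rng=random.Random(42)):
    """Sample n characters, each drawn from the distribution of
    characters observed after the previous one."""
    out = seed
    for _ in range(n):
        counts = transitions[out[-1]]
        chars, weights = zip(*counts.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

print(generate("t", 20))
```

Unlike an RNN, this model conditions only on the single previous character (the Markov assumption), so its "memory" cannot span brackets, quotes, or long-range grammar.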



Unsupervised Machine Learning: Hidden Markov Models in Python

#artificialintelligence

- Understand and enumerate the various applications of Markov Models and Hidden Markov Models
- Understand how Markov Models work
- Write a Markov Model in code
- Apply Markov Models to any sequence of data
- Understand the mathematics behind Markov chains
- Apply Markov models to language
- Apply Markov models to website analytics
- Understand how Google's PageRank works
- Understand Hidden Markov Models
- Write a Hidden Markov Model in code
- Write a Hidden Markov Model using Theano
- Understand how gradient descent, which is normally used in deep learning, can be used for HMMs
- Familiarity with probability and statistics
- Understand Gaussian mixture models
- Be comfortable with Python and Numpy

The Hidden Markov Model, or HMM, is all about learning sequences. A lot of the data that would be very useful for us to model comes in sequences: stock prices are sequences of prices, language is a sequence of words, and credit scoring involves sequences of borrowing and repaying money, which we can use to predict whether or not you're going to default.
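One of the applications listed above, Google's PageRank, is itself a Markov chain: a "random surfer" who follows links (or occasionally jumps to a random page), whose stationary distribution ranks the pages. A minimal power-iteration sketch on a toy three-page web (the link structure is an illustrative assumption; 0.85 is the standard damping factor):

```python
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0]}  # page -> pages it links to (toy example)
n = 3
d = 0.85  # damping: probability of following a link vs. jumping anywhere

# Column-stochastic transition matrix: M[j, i] = P(move from page i to page j).
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

r = np.full(n, 1.0 / n)  # start from the uniform distribution
for _ in range(100):
    r = (1 - d) / n + d * M @ r  # one step of the random-surfer Markov chain

print(r)  # stationary distribution; page 2, with two inbound links, ranks highest
```

The same power-iteration idea — repeatedly applying the transition matrix until the distribution stops changing — is how the stationary distribution of any Markov chain can be computed.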