The Infinite Factorial Hidden Markov Model

Neural Information Processing Systems

We introduce a new probability distribution over a potentially infinite number of binary Markov chains, which we call the Markov Indian buffet process. This process extends the IBP to allow temporal dependencies in the hidden variables. We use this stochastic process to build a nonparametric extension of the factorial hidden Markov model. After working out an inference scheme which combines slice sampling and dynamic programming, we demonstrate how the infinite factorial hidden Markov model can be used for blind source separation.
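
To make the setting concrete, the following is a minimal sketch of the finite factorial HMM generative process that the paper extends to infinitely many chains: several independent binary Markov chains whose states are mixed linearly to produce observations, as in blind source separation. The chain count, transition probabilities, and linear-Gaussian emission model below are illustrative assumptions, not the paper's exact construction.

import numpy as np

rng = np.random.default_rng(0)

K, T = 4, 200                    # number of binary chains and sequence length (illustrative)
a = rng.uniform(0.01, 0.2, K)    # per-chain P(off -> on)
b = rng.uniform(0.7, 0.99, K)    # per-chain P(on -> on)
W = rng.normal(0.0, 1.0, K)      # per-chain emission weights (the hidden "sources")

S = np.zeros((T, K), dtype=int)  # S[t, k] = state of chain k at time t
for t in range(1, T):
    p_on = np.where(S[t - 1] == 1, b, a)  # transition probability of being "on"
    S[t] = rng.random(K) < p_on

# Linear-Gaussian observations: every active chain contributes its weight.
y = S @ W + rng.normal(0.0, 0.1, T)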


Unsupervised Machine Learning Hidden Markov Models in Python

#artificialintelligence

Created by Lazy Programmer Inc. The Hidden Markov Model, or HMM, is all about learning sequences. A lot of the data that would be very useful for us to model comes in sequences: stock prices are sequences of prices, language is a sequence of words, and credit scoring involves sequences of borrowing and repaying money, which we can use to predict whether or not you're going to default.


Market forecasting using Hidden Markov Models

arXiv.org Machine Learning

In this paper we use Hidden Markov Models (HMMs) to forecast the price of EUR/USD futures, working with daily closing prices and log returns. The aim of our work is to understand how HMMs describe different financial time series depending on their structure. We then analyse the forecasting methods presented in the previous literature, highlighting their pros and cons.
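
As a rough illustration of this kind of pipeline (not the paper's exact method), one can fit a Gaussian HMM to log returns with the hmmlearn library; the number of hidden states, the synthetic price series, and all parameter values below are assumptions for the sketch.

import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

# Synthetic stand-in for daily EUR/USD futures closing prices.
prices = np.cumprod(1 + 0.001 * np.random.default_rng(1).standard_normal(500))
log_returns = np.diff(np.log(prices)).reshape(-1, 1)

model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=200)
model.fit(log_returns)               # trained with Baum-Welch under the hood
states = model.predict(log_returns)  # most likely regime per day (Viterbi)
print(model.means_.ravel())          # per-state mean log return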


Forecasting with the Baum-Welch Algorithm and Hidden Markov Models

@machinelearnbot

Leonard Baum and Lloyd Welch designed a probabilistic modelling algorithm to detect patterns in hidden Markov processes. They built upon the theory of probabilistic functions of a Markov chain and the Expectation–Maximization (EM) algorithm, an iterative method for finding maximum likelihood or maximum a-posteriori estimates of parameters in statistical models where the model depends on unobserved latent variables. The Baum–Welch algorithm initially proved to be a remarkable code-breaking and speech-recognition tool, but it also has applications in business, finance, the sciences and elsewhere. The algorithm finds the unknown parameters of a Hidden Markov Model: the maximum likelihood estimate of the parameters given a set of observed feature vectors. It is a two-step process, repeated until convergence: (1) compute a-posteriori probabilities of the hidden states under the current model, and (2) re-estimate the model parameters from those probabilities, as sketched below.
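
The following is a minimal numpy sketch of one such iteration for a discrete-observation HMM; the function name and variable layout are illustrative, not taken from the original papers.

import numpy as np

def baum_welch_step(obs, A, B, pi):
    """One EM iteration: E-step posteriors, then parameter re-estimation.

    obs: (T,) int array of observed symbols
    A:   (N, N) state-transition matrix
    B:   (N, M) emission matrix
    pi:  (N,) initial state distribution
    """
    obs = np.asarray(obs)
    T, N = len(obs), A.shape[0]

    # Step 1 (E): scaled forward-backward recursions.
    alpha = np.zeros((T, N))
    c = np.zeros(T)  # per-step scaling factors
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]

    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1]) / c[t + 1]

    gamma = alpha * beta                   # P(state at t | all observations)
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = (alpha[:-1, :, None] * A[None] *  # P(state pair at t, t+1 | obs)
          (B[:, obs[1:]].T * beta[1:])[:, None, :])
    xi /= xi.sum(axis=(1, 2), keepdims=True)

    # Step 2 (M): re-estimate pi, A, B from the posteriors.
    pi_new = gamma[0]
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    B_new = np.zeros_like(B)
    for k in range(B.shape[1]):
        B_new[:, k] = gamma[obs == k].sum(axis=0)
    B_new /= gamma.sum(axis=0)[:, None]
    return A_new, B_new, pi_new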