Forecasting with the Baum-Welch Algorithm and Hidden Markov Models

@machinelearnbot

Leonard Baum and Lloyd Welch designed a probabilistic modelling algorithm to detect patterns in hidden Markov processes. They built upon the theory of probabilistic functions of a Markov chain and the Expectation–Maximization (EM) algorithm, an iterative method for finding maximum likelihood or maximum a posteriori estimates of parameters in statistical models where the model depends on unobserved latent variables. The Baum–Welch algorithm initially proved to be a remarkable code-breaking and speech recognition tool, but it also has applications in business, finance, the sciences and other fields. The algorithm finds the unknown parameters of a Hidden Markov Model: the maximum likelihood estimate of the parameters given a set of observed feature vectors. It is a two-step process: 1. computing a posteriori probabilities under the current model; and 2. re-estimating the model parameters.
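
To make the two steps concrete, here is a minimal NumPy sketch of one Baum–Welch iteration for a discrete-emission HMM. The function and variable names are illustrative choices rather than code from the article, and the forward-backward recursions are left unscaled, so the sketch is only numerically safe for short sequences.

```python
import numpy as np

def baum_welch_step(obs, A, B, pi):
    """One Baum-Welch iteration for a discrete-emission HMM.

    obs : sequence of observation indices (length T)
    A   : (K, K) transition matrix, B : (K, M) emission matrix, pi : (K,) initial distribution
    Returns re-estimated (A, B, pi). No scaling is applied, so this is only
    suitable for short sequences.
    """
    obs = np.asarray(obs)
    T, K = len(obs), A.shape[0]

    # Step 1: a posteriori probabilities via the forward-backward recursions.
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    gamma = alpha * beta                                   # P(state_t = k | obs)
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = (alpha[:-1, :, None] * A[None, :, :] *
          (B[:, obs[1:]].T * beta[1:])[:, None, :])        # P(state_t = i, state_{t+1} = j | obs)
    xi /= xi.sum(axis=(1, 2), keepdims=True)

    # Step 2: re-estimate the model parameters from the expected counts.
    pi_new = gamma[0]
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    B_new = np.zeros_like(B)
    for m in range(B.shape[1]):
        B_new[:, m] = gamma[obs == m].sum(axis=0)
    B_new /= gamma.sum(axis=0)[:, None]
    return A_new, B_new, pi_new
```

Iterating this step from a sensible initialization never decreases the likelihood of the observed sequence, which is the EM guarantee the algorithm inherits.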


Investigation of commuting Hamiltonian in quantum Markov network

arXiv.org Artificial Intelligence

Graphical models have various applications in science and engineering, including physics, bioinformatics and telecommunications. Using graphical models requires complex computations to evaluate marginal functions, so a number of powerful methods have been developed for this purpose, including the mean field approximation and the belief propagation algorithm. Quantum graphical models have recently been developed in the context of quantum information and computation and quantum statistical physics, made possible by generalizing classical probability theory to quantum theory. The main goal of this paper is to provide a preliminary generalization of the Markov network, a type of graphical model, to the quantum case and to apply it in quantum statistical physics. We investigate the Markov network and the role of commuting Hamiltonian terms in conditional independence through simple examples from quantum statistical physics.
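
As a purely classical illustration of the marginal computations mentioned above, the sketch below runs exact sum-product belief propagation on a chain-structured Markov network; it is not the quantum generalization the paper develops, and the potential and message names are illustrative.

```python
import numpy as np

def chain_marginals(psi, phi):
    """Sum-product (belief propagation) marginals on a chain Markov network.

    psi : list of n node potentials, each of shape (K,)
    phi : list of n-1 pairwise potentials, phi[i] of shape (K, K) coupling nodes i and i+1
    Exact on chains (and trees); loopy graphs need approximate schemes instead.
    """
    n = len(psi)
    fwd = [np.ones_like(psi[0]) for _ in range(n)]   # messages passed left to right
    bwd = [np.ones_like(psi[0]) for _ in range(n)]   # messages passed right to left
    for i in range(1, n):
        m = (psi[i - 1] * fwd[i - 1]) @ phi[i - 1]
        fwd[i] = m / m.sum()
    for i in range(n - 2, -1, -1):
        m = phi[i] @ (psi[i + 1] * bwd[i + 1])
        bwd[i] = m / m.sum()
    beliefs = [psi[i] * fwd[i] * bwd[i] for i in range(n)]
    return [b / b.sum() for b in beliefs]
```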


Partially Observed Maximum Entropy Discrimination Markov Networks

Neural Information Processing Systems

Learning graphical models with hidden variables can offer semantic insights into complex data and lead to salient structured predictors without relying on expensive, sometimes unattainable fully annotated training data. While likelihood-based methods have been extensively explored, to our knowledge, learning structured prediction models with latent variables based on the max-margin principle remains largely an open problem. In this paper, we present a partially observed Maximum Entropy Discrimination Markov Network (PoMEN) model that attempts to combine the advantages of the Bayesian and margin-based paradigms for learning Markov networks from partially labeled data. PoMEN leads to an averaging prediction rule that resembles a Bayes predictor and is more robust to overfitting, but is also built on desirable discriminative laws resembling those of the M$^3$N. We develop an EM-style algorithm utilizing existing convex optimization algorithms for the M$^3$N as a subroutine.
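
The abstract does not spell out the algorithm, so the sketch below only conveys the general flavour of EM-style learning with a max-margin subroutine: a hard-assignment, latent-SVM-style toy with a plain subgradient hinge-loss update. It is an assumption-laden stand-in, not PoMEN's averaging rule and not an M$^3$N solver.

```python
import numpy as np

def latent_max_margin(X_views, y, lam=0.1, epochs=100, lr=0.05):
    """Toy latent-variable max-margin learner (hard-EM / latent-SVM flavour).

    X_views : (n, L, d) array -- each example offers L candidate latent feature vectors
    y       : (n,) labels in {-1, +1}
    Alternates a hard E-step (pick the best-scoring latent view) with one
    subgradient M-step on the regularized hinge loss.
    """
    n, L, d = X_views.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # E-step: latent assignment = view that best supports the label under current w.
        scores = X_views @ w                              # (n, L)
        z = np.argmax(y[:, None] * scores, axis=1)
        X = X_views[np.arange(n), z]                      # (n, d) selected features
        # M-step: subgradient step on lam*||w||^2 + mean hinge loss.
        margins = y * (X @ w)
        viol = margins < 1.0
        grad = 2 * lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        w -= lr * grad
    return w
```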


Deep Learning: Recurrent Neural Networks in Python

#artificialintelligence

Like the course I just released on Hidden Markov Models, Recurrent Neural Networks are all about learning sequences - but whereas Markov Models are limited by the Markov assumption, Recurrent Neural Networks are not. As a result, they are more expressive and more powerful than anything we've seen before, and they have made progress on tasks that had seen none in decades. So what's going to be in this course, and how will it build on the previous neural network courses and Hidden Markov Models? In the first section of the course we are going to add the concept of time to our neural networks. I'll introduce you to the Simple Recurrent Unit, also known as the Elman unit. We are going to revisit the XOR problem, but we're going to extend it so that it becomes the parity problem - you'll see that regular feedforward neural networks will have trouble solving it, but recurrent networks will work, because the key is to treat the input as a sequence.
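
For orientation, here is a rough NumPy sketch of the forward pass of a Simple Recurrent (Elman) unit over a bit sequence; the weights are random placeholders (the course trains them), and all names and shapes are mine rather than the course's.

```python
import numpy as np

def elman_forward(x_seq, Wx, Wh, Wo, bh, bo):
    """Forward pass of a simple (Elman) recurrent unit over one sequence.

    x_seq : (T, d_in) inputs; the hidden state is carried across time steps,
    which is what lets the network treat parity as a sequence problem.
    """
    h = np.zeros(Wh.shape[0])
    outputs = []
    for x_t in x_seq:
        h = np.tanh(Wx @ x_t + Wh @ h + bh)                    # recurrent hidden update
        outputs.append(1 / (1 + np.exp(-(Wo @ h + bo))))       # sigmoid output per step
    return np.array(outputs), h

# Illustrative shapes only: 1-bit input, a few hidden units, 1 output.
rng = np.random.default_rng(0)
d_in, d_h = 1, 4
Wx, Wh, Wo = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)), rng.normal(size=(1, d_h))
bh, bo = np.zeros(d_h), np.zeros(1)
bits = rng.integers(0, 2, size=10).reshape(-1, 1).astype(float)
probs, _ = elman_forward(bits, Wx, Wh, Wo, bh, bo)
```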


Model-based clustering with Hidden Markov Model regression for time series with regime changes

arXiv.org Machine Learning

This paper introduces a novel model-based clustering approach for time series that exhibit regime changes. It consists of a mixture of polynomial regressions governed by hidden Markov chains. The underlying hidden process for each cluster activates several polynomial regimes successively over time. Parameter estimation is performed by the maximum likelihood method through a dedicated Expectation-Maximization (EM) algorithm. The proposed approach is evaluated on simulated time series and on real-world time series from a railway diagnosis application. Comparisons with existing approaches for time series clustering, including the standard EM for Gaussian mixtures, $K$-means clustering, the standard mixture of regression models and the mixture of Hidden Markov Models, demonstrate the effectiveness of the proposed approach.
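
The full model couples polynomial regression with per-series hidden Markov dynamics; as a much-reduced illustration of the EM machinery involved, the sketch below fits a plain mixture of polynomial regressions to a set of series on a shared time grid. The regime-switching hidden Markov chain and the noise-variance update are deliberately omitted, and all names and defaults are illustrative.

```python
import numpy as np

def poly_regression_mixture(t, Y, n_clusters=2, degree=3, n_iter=50, sigma2=1.0):
    """EM for a mixture of polynomial regressions over whole time series.

    t : (T,) shared time grid, Y : (n, T) matrix with one series per row.
    Unlike the paper's model, there is no hidden Markov chain switching
    regimes within a series, and the noise variance sigma2 is held fixed.
    """
    n, T = Y.shape
    V = np.vander(t, degree + 1)                       # (T, degree+1) design matrix
    rng = np.random.default_rng(0)
    coeffs = rng.normal(size=(n_clusters, degree + 1))
    weights = np.full(n_clusters, 1.0 / n_clusters)

    for _ in range(n_iter):
        # E-step: posterior probability of each cluster for each series (Gaussian noise).
        fits = V @ coeffs.T                            # (T, K) cluster mean curves
        sq_err = ((Y[:, None, :] - fits.T[None]) ** 2).sum(axis=2)   # (n, K)
        log_resp = np.log(weights)[None, :] - 0.5 * sq_err / sigma2
        log_resp -= log_resp.max(axis=1, keepdims=True)
        resp = np.exp(log_resp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: mixing weights and a weighted least-squares polynomial per cluster.
        weights = resp.mean(axis=0)
        for k in range(n_clusters):
            wk = resp[:, k]
            coeffs[k] = np.linalg.solve(wk.sum() * (V.T @ V), V.T @ (wk @ Y))
    return coeffs, weights, resp
```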