Undirected Networks


How AI could help you learn sign language

#artificialintelligence

Sign languages aren't easy to learn and are even harder to teach. They use not just hand gestures but also mouthings, facial expressions and body posture to communicate meaning. This complexity means professional teaching programmes are still rare and often expensive. But this could all change soon, with a little help from artificial intelligence (AI). My colleagues and I are working on software for teaching yourself sign languages in an automated, intuitive way.


Marketing Analytics through Markov Chain – Data Science Central

#artificialintelligence

Imagine you are a company selling a fast-moving consumer good in the market. Let's assume that the customer follows a given journey to make the final purchase: these are the states the customer can be in at any point in the purchase journey. Now, how do we find out which state the customers will be in after 6 months? A Markov chain comes to the rescue! Let's first understand what a Markov chain is. Then let's delve a little deeper.
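The article's figure with the concrete journey states isn't reproduced here, but the six-month question it poses comes down to repeatedly applying the one-step transition matrix. A minimal sketch, assuming a made-up three-state journey and a hypothetical monthly transition matrix (none of these numbers come from the article):

```python
import numpy as np

# Hypothetical purchase-journey states, for illustration only.
states = ["Aware", "Considering", "Purchased"]

# Assumed monthly transition matrix: P[i, j] = probability of moving
# from state i to state j in one month.
P = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.0, 0.1, 0.9],
])

# Everyone starts in "Aware".
x0 = np.array([1.0, 0.0, 0.0])

# Distribution over states after 6 months: x0 @ P^6
x6 = x0 @ np.linalg.matrix_power(P, 6)
print(dict(zip(states, np.round(x6, 3))))
```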


Deep Learning meets Physics: Restricted Boltzmann Machines Part I

#artificialintelligence

In my opinion, RBMs have one of the easiest architectures of all neural networks. As can be seen in Fig. 1, the absence of an output layer is apparent. But as will be shown later, an output layer won't be needed, since predictions are made differently than in regular feedforward neural networks. Energy is a term that may not be associated with deep learning at first.
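The energy the author refers to is the standard RBM energy function for binary units, $E(v, h) = -a^\top v - b^\top h - v^\top W h$. A minimal sketch of evaluating it; the weights, biases, and configurations below are random placeholders, not a trained model (and Fig. 1 is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = rng.normal(size=(n_visible, n_hidden))  # visible-hidden weights (placeholder values)
a = rng.normal(size=n_visible)              # visible biases
b = rng.normal(size=n_hidden)               # hidden biases

def energy(v, h):
    """Standard binary-RBM energy: E(v, h) = -a.v - b.h - v^T W h."""
    return -a @ v - b @ h - v @ W @ h

v = rng.integers(0, 2, size=n_visible)  # a binary visible configuration
h = rng.integers(0, 2, size=n_hidden)   # a binary hidden configuration
print(energy(v, h))
```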


Generating Haiku with Deep Learning – Towards Data Science

#artificialintelligence

I've done previous work on haiku generation. This generator uses Markov chains trained on a corpus of non-haiku poetry, generates haiku one word at a time, and ensures the 5-7-5 structure by backspacing when all of the possible next words would violate it. This isn't unlike what I do when I'm writing a haiku: I try things, count out the syllables, find they don't work, and go back. It feels more like brute force than something that actually understands what it means to write a haiku.
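A rough sketch of that backspacing idea, with a toy bigram table and syllable dictionary standing in for the Markov model trained on a poetry corpus; this is illustrative only, not the author's actual generator:

```python
import random

# Toy stand-ins for the trained Markov model and the syllable counter.
SYLLABLES = {"autumn": 2, "moonlight": 2, "rain": 1, "river": 2, "quiet": 2,
             "cold": 1, "falls": 1, "wind": 1, "whispers": 2}
next_words = {
    None: ["autumn", "quiet", "cold"],
    "autumn": ["moonlight", "rain", "wind"],
    "quiet": ["river", "rain"],
    "cold": ["wind", "rain"],
    "moonlight": ["falls", "whispers"],
    "river": ["falls", "whispers"],
    "wind": ["whispers", "falls"],
    "rain": ["falls", "whispers"],
    "falls": ["cold", "quiet"],
    "whispers": ["cold", "quiet"],
}

def generate_line(budget, prev=None):
    """Generate one line with exactly `budget` syllables, backtracking
    ("backspacing") when every candidate next word would overshoot."""
    if budget == 0:
        return []
    candidates = [w for w in next_words.get(prev, []) if SYLLABLES[w] <= budget]
    random.shuffle(candidates)
    for word in candidates:
        rest = generate_line(budget - SYLLABLES[word], word)
        if rest is not None:
            return [word] + rest
    return None  # dead end: the caller backtracks and tries another word

for budget in (5, 7, 5):
    print(" ".join(generate_line(budget)))
```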


Python: Advanced Guide to Artificial Intelligence - PDF eBook Now just $5

#artificialintelligence

This Learning Path is your complete guide to quickly getting to grips with popular machine learning algorithms. You'll be introduced to the most widely used algorithms in supervised, unsupervised, and semi-supervised machine learning, and learn how to use them in the best possible manner. Ranging from Bayesian models to MCMC algorithms to Hidden Markov models, this Learning Path will teach you how to extract features from your dataset and perform dimensionality reduction using Python-based libraries. You'll then use TensorFlow and Keras to build deep learning models, using concepts such as transfer learning, generative adversarial networks, and deep reinforcement learning. Next, you'll learn the advanced features of TensorFlow 1.x…


On Learning Markov Chains

Neural Information Processing Systems

The problem of estimating an unknown discrete distribution from its samples is a fundamental problem in statistical learning. Over the past decade, it has attracted significant research effort and has been solved for a variety of divergence measures. Surprisingly, an equally important problem, estimating an unknown Markov chain from its samples, is still far from understood. We consider two problems related to the min-max risk (expected loss) of estimating an unknown $k$-state Markov chain from its $n$ sequential samples: predicting the conditional distribution of the next sample with respect to the KL-divergence, and estimating the transition matrix with respect to a natural loss induced by KL or a more general $f$-divergence measure. For the first measure, we determine the min-max prediction risk to within a linear factor in the alphabet size, showing it is $\Omega(k\log\log n/n)$ and $O(k^2\log\log n/n)$. For the second, if the transition probabilities can be arbitrarily small, then only trivial uniform risk upper bounds can be derived. We therefore consider transition probabilities that are bounded away from zero, and resolve the problem for essentially all sufficiently smooth $f$-divergences, including KL-, $L_2$-, chi-squared, Hellinger, and Alpha-divergences.
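As a concrete illustration of the estimation problem (a naive add-constant baseline, not the paper's minimax-optimal estimators), estimating a transition matrix from a single sequential sample path might look like this:

```python
import numpy as np

def estimate_transition_matrix(path, k, alpha=1.0):
    """Add-constant (Laplace-smoothed) estimate of a k-state chain's
    transition matrix from one sequential sample path.
    Baseline illustration only; not the estimator analyzed in the paper."""
    counts = np.full((k, k), alpha)
    for i, j in zip(path[:-1], path[1:]):
        counts[i, j] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Toy example: a 3-state chain observed for n = 12 steps.
path = [0, 1, 1, 2, 0, 0, 1, 2, 2, 1, 0, 1]
print(estimate_transition_matrix(path, k=3))
```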


What is Hidden in the Hidden Markov Models? – Acing AI – Medium

#artificialintelligence

Hidden Markov Models, or HMMs, are the most common models used for dealing with temporal data. They also frequently come up in different ways in a data science interview, usually without the word HMM written over them. In such a scenario it is necessary to recognize the problem as an HMM problem by knowing the characteristics of HMMs. In a Hidden Markov Model we construct an inference model based on the assumptions of a Markov process. This means that the future state depends only on the immediately previous state and not on the states before that.
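A tiny illustration of that assumption: in the toy HMM below (all numbers invented), each new hidden state is drawn using only the current one, and each observation depends only on the hidden state that emitted it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy HMM: hidden weather (0 = rainy, 1 = sunny) emitting an
# observable activity (0 = stay in, 1 = go out). Numbers are made up.
A = np.array([[0.7, 0.3],   # hidden-state transition matrix
              [0.4, 0.6]])
B = np.array([[0.8, 0.2],   # emission matrix: P(observation | hidden state)
              [0.3, 0.7]])

state, hidden, observed = 0, [], []
for _ in range(8):
    # Markov assumption: the next hidden state depends only on the current one.
    state = rng.choice(2, p=A[state])
    hidden.append(state)
    observed.append(rng.choice(2, p=B[state]))

print("hidden:  ", hidden)
print("observed:", observed)
```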


Kalman filter demystified: from intuition to probabilistic graphical model to real case in financial markets

arXiv.org Machine Learning

In this paper, we revisit Kalman filter theory. After giving the intuition on a simplified financial markets example, we revisit the maths underlying it. We then show that the Kalman filter can be presented in a very different fashion using graphical models. This enables us to establish the connection between the Kalman filter and Hidden Markov Models. We then look at their application in financial markets and provide various intuitions about their applicability to such complex systems. Although this paper is written as a self-contained work connecting the Kalman filter to Hidden Markov Models, and hence revisits well-known and established results, it contains new results and brings additional contributions to the field. First, leveraging the link between the Kalman filter and HMMs, it gives new inference algorithms for extended Kalman filters. Second, it presents an alternative to the traditional EM-algorithm estimation of parameters, using CMA-ES optimization. Third, it examines the application of the Kalman filter and its Hidden Markov Model version to financial markets, providing various dynamics assumptions and tests. We conclude by connecting the Kalman filter approach to trend-following technical analysis systems and showing its superior performance for trend-following detection.
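For readers who want the bare mechanics behind that intuition, here is a minimal one-dimensional predict/update cycle for a random-walk state observed with noise; the noise variances and dynamics are placeholders, not the market models studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random-walk state observed with Gaussian noise (placeholder parameters).
q, r = 1e-3, 1e-1        # process and observation noise variances
x_est, p_est = 0.0, 1.0  # initial state estimate and its variance

true_x, filtered = 0.0, []
for _ in range(50):
    true_x += rng.normal(scale=np.sqrt(q))      # latent trend
    z = true_x + rng.normal(scale=np.sqrt(r))   # noisy observation

    # Predict step: propagate the estimate and inflate its uncertainty.
    x_pred, p_pred = x_est, p_est + q
    # Update step: blend prediction and observation via the Kalman gain.
    k_gain = p_pred / (p_pred + r)
    x_est = x_pred + k_gain * (z - x_pred)
    p_est = (1 - k_gain) * p_pred
    filtered.append(x_est)

print(filtered[-5:])
```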


Predictive Learning on Sign-Valued Hidden Markov Trees

arXiv.org Machine Learning

We provide high-probability sample complexity guarantees for exact structure recovery and accurate predictive learning using noise-corrupted samples from an acyclic (tree-shaped) graphical model. The hidden variables follow a tree-structured Ising model distribution, whereas the observable variables are generated by a binary symmetric channel taking the hidden variables as its input. This model arises naturally in a variety of applications, such as physics, biology, computer science, and finance. The noiseless structure learning problem was studied earlier by Bresler and Karzand (2018); this paper quantifies how noise in the hidden model impacts the sample complexity of structure learning and predictive distributional inference by proving upper and lower bounds on the sample complexity. Quite remarkably, for any tree with $p$ vertices and probability of incorrect recovery $\delta>0$, the number of necessary samples remains of logarithmic order, as in the noiseless case, i.e., $\mathcal{O}(\log(p/\delta))$, for both aforementioned tasks. We also present a new equivalent of Isserlis' Theorem for sign-valued tree-structured distributions, yielding a new low-complexity algorithm for higher-order moment estimation.
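A small sketch of the data model described above: hidden ±1 spins sampled down a toy tree, each then passed through a binary symmetric channel. The tree, edge correlation, and flip probability are arbitrary placeholders, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tree on p = 5 vertices, given as child -> parent (vertex 0 is the root).
parent = {1: 0, 2: 0, 3: 1, 4: 1}
rho = 0.8   # edge correlation: P(child spin == parent spin) = (1 + rho) / 2
q = 0.1     # binary symmetric channel flip probability

# Sample hidden spins top-down: each child copies its parent's spin
# with probability (1 + rho) / 2, otherwise flips it.
hidden = {0: rng.choice([-1, 1])}
for child, par in parent.items():
    same = rng.random() < (1 + rho) / 2
    hidden[child] = hidden[par] if same else -hidden[par]

# Observable spins: each hidden spin is flipped independently with probability q.
observed = {v: (-s if rng.random() < q else s) for v, s in hidden.items()}
print(hidden, observed)
```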