Undirected Networks


Deep Learning: Recurrent Neural Networks in Python

#artificialintelligence

The Recurrent Neural Network (RNN) has been used to obtain state-of-the-art results in sequence modeling, including time series analysis, forecasting and natural language processing (NLP). Learn why RNNs beat old-school machine learning algorithms like Hidden Markov Models. Topics include: the basics of machine learning and neurons (just a review to get you warmed up!); neural networks for classification and regression (also a review); and how to predict stock prices and stock returns with LSTMs in TensorFlow 2 (hint: it's not what you think!). All of the materials required for this course can be downloaded and installed for FREE.
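As a hedged illustration of the kind of model the course describes, here is a minimal sketch of an LSTM forecaster in TensorFlow 2 / Keras. The synthetic "returns" series, window length and layer sizes are my own assumptions for the sake of a runnable example, not the course's code.

```python
import numpy as np
import tensorflow as tf

# Synthetic "returns" series stands in for real stock data (assumption).
rng = np.random.default_rng(0)
series = rng.normal(0.0, 0.01, size=2000).astype("float32")

# Turn the series into (lookback window, next value) training pairs.
T = 20  # lookback window length (assumed)
X = np.stack([series[i:i + T] for i in range(len(series) - T)])[..., None]
y = series[T:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),  # predict the next return
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# One-step-ahead forecast from the most recent window.
print(model.predict(series[-T:].reshape(1, T, 1), verbose=0))
```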


EM Algorithm

#artificialintelligence

The EM (Expectation-Maximisation) algorithm is the go-to algorithm whenever we have to do parameter estimation with hidden variables, such as in hidden Markov chains. For some reason, it is often poorly explained, and students end up confused about what exactly we are maximising in the E-step and the M-step. Here is my attempt at a (hopefully) clear, step-by-step explanation of exactly how the EM algorithm works.
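To make the E-step and M-step concrete, here is a minimal sketch of EM for a two-component 1-D Gaussian mixture in NumPy. The mixture setting, data and variable names are illustrative assumptions; the article itself treats the general algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data drawn from two Gaussians (assumption, for illustration only).
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

# Initial guesses for mixing weights, means and variances.
pi, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: re-estimate parameters by maximising the expected complete-data log-likelihood.
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(pi, mu, var)
```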


Unsupervised Machine Learning Hidden Markov Models in Python

#artificialintelligence

The Hidden Markov Model, or HMM, is all about learning sequences. A lot of the data that would be most useful for us to model comes in sequences. Stock prices are sequences of prices. Language is a sequence of words. Credit scoring involves sequences of borrowing and repaying money, and we can use those sequences to predict whether or not you're going to default.
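As a concrete companion to that idea, here is a minimal sketch of the HMM forward algorithm in NumPy, which scores how likely an observed sequence is under a given model. The two-state parameters below are invented purely for illustration and do not come from the course.

```python
import numpy as np

# Illustrative 2-state HMM over 2 observation symbols (parameters are assumptions).
start = np.array([0.6, 0.4])            # initial state distribution
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])          # state transition matrix
emit = np.array([[0.9, 0.1],
                 [0.2, 0.8]])           # P(observation | state)

def sequence_likelihood(obs):
    """Forward algorithm: P(obs_1..T), summing over all hidden state paths."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return alpha.sum()

print(sequence_likelihood([0, 0, 1, 0]))
```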


Deep Learning: Recurrent Neural Networks in Python

#artificialintelligence

Deep Learning: Recurrent Neural Networks in Python: GRU, LSTM, and more modern deep learning, machine learning, and data science for sequences. Created by Lazy Programmer Inc. Like the course I just released on Hidden Markov Models, Recurrent Neural Networks are all about learning sequences - but whereas Markov Models are limited by the Markov assumption, Recurrent Neural Networks are not. As a result, they are more expressive and more powerful, and they have made progress on tasks that had stalled for decades. So what's going to be in this course, and how will it build on the previous neural network courses and Hidden Markov Models? In the first section of the course we are going to add the concept of time to our neural networks. I'll introduce you to the Simple Recurrent Unit, also known as the Elman unit. We are going to revisit the XOR problem, but we're going to extend it so that it becomes the parity problem - you'll see that regular feedforward neural networks have trouble solving this problem, but recurrent networks will work, because the key is to treat the input as a sequence.
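The parity task mentioned above is small enough to sketch directly. Below is a minimal, hedged example in TensorFlow 2 / Keras that trains a simple (Elman-style) recurrent unit to output the running parity of a bit sequence; the sequence length, layer size and training settings are my assumptions, not the course's actual code.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
T = 12                                   # sequence length (assumed)
X = rng.integers(0, 2, size=(4096, T, 1)).astype("float32")
# Target at each step is the parity (XOR) of all bits seen so far.
y = (np.cumsum(X[..., 0], axis=1) % 2)[..., None].astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, 1)),
    # SimpleRNN is Keras's Elman-style recurrent unit.
    tf.keras.layers.SimpleRNN(8, return_sequences=True),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=20, batch_size=64, verbose=0)

bits = np.array([[1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]], dtype="float32")[..., None]
print(model.predict(bits, verbose=0).round().squeeze())   # running parity of the bits
```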


On Solving a Stochastic Shortest-Path Markov Decision Process as Probabilistic Inference

#artificialintelligence

We propose solving the general Stochastic Shortest-Path Markov Decision Process (SSP MDP) as probabilistic inference. In an SSP MDP, the horizon is indefinite and unknown a priori. SSP MDPs generalize finite- and infinite-horizon MDPs and are widely used in the artificial intelligence community. Furthermore, we discuss online and offline methods for planning under uncertainty. Additionally, we highlight some of the differences between solving an MDP with the dynamic programming approaches widely used in the artificial intelligence community and the approaches used in the active inference community.
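For readers unfamiliar with the dynamic programming baseline the abstract contrasts against, here is a minimal sketch of value iteration on a tiny, hand-made SSP-style chain MDP. The states, costs and transition probabilities are invented for illustration and are not from the paper.

```python
import numpy as np

# Toy SSP: states 0..3, state 3 is the absorbing goal (zero cost).
# trans[a, s, s'] = P(s' | s, a); two actions: "safe" (slow) and "risky" (may slip back).
n_states, n_actions, goal = 4, 2, 3
trans = np.zeros((n_actions, n_states, n_states))
cost = np.ones((n_actions, n_states))
cost[:, goal] = 0.0
for s in range(goal):
    trans[0, s, s + 1] = 1.0               # safe action: always move forward, cost 1
    trans[1, s, min(s + 2, goal)] = 0.6     # risky action: jump two states ahead...
    trans[1, s, 0] = 0.4                    # ...or slip back to the start
trans[:, goal, goal] = 1.0                  # goal is absorbing

V = np.zeros(n_states)
for _ in range(200):
    # Bellman backup: expected cost-to-go under the best action in each state.
    Q = cost + np.einsum("asn,n->as", trans, V)
    V = Q.min(axis=0)

print("expected cost-to-go:", V, "greedy policy:", Q.argmin(axis=0))
```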


Beginners Guide to Boltzmann Machine

#artificialintelligence

Deep learning implements structured machine learning algorithms by making use of artificial neural networks. These algorithms help the machine learn by itself and develop the ability to establish new parameters that help it make and execute decisions. Deep learning is considered a subset of machine learning and uses multi-layered artificial neural networks, which enables it to deliver high accuracy in tasks such as speech recognition, object detection and language translation, among other modern use cases. One of the most intriguing deep learning models in the domain of artificial intelligence has been the Boltzmann Machine. In this article, we will try to understand what exactly a Boltzmann Machine is, how it can be implemented and what it is used for.


Boltzmann Machine

#artificialintelligence

Training problem: given a set of binary data vectors, the machine must learn to predict the output vectors with high probability. The first step is to determine which layer-connection weights yield the lowest values of the cost function relative to all the other possible binary vectors. The Boltzmann machine accomplishes this by continuously updating its own weights as each feature is processed, instead of treating the weights as fixed values.
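Since weight updates are central here, below is a minimal sketch of contrastive-divergence (CD-1) training for a restricted Boltzmann machine in NumPy. The restriction to a bipartite visible-hidden architecture, the layer sizes and the random toy data are my assumptions to keep the example short; they are not details from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(0, 0.01, size=(n_visible, n_hidden))   # connection weights
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)    # visible / hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

data = rng.integers(0, 2, size=(500, n_visible)).astype(float)  # toy binary vectors

for epoch in range(20):
    for v0 in data:
        # Positive phase: hidden probabilities given the data vector.
        ph0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < ph0).astype(float)
        # Negative phase (one Gibbs step): reconstruct visibles, then hiddens again.
        pv1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b_h)
        # CD-1 update: move weights toward the data statistics, away from the model's.
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        b_v += lr * (v0 - v1)
        b_h += lr * (ph0 - ph1)

print(W.round(2))
```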


Variational Recurrent Neural Networks -- VRNNs

#artificialintelligence

First of all, why VRNN? It is the result of an attempt to include latent random variables in the hidden state of an RNN by combining it with elements of the variational autoencoder. Learning generative models for sequences is a very challenging task. Significant work in this direction comes from Dynamic Bayesian Networks (DBNs) such as Hidden Markov Models (HMMs) and Kalman filters, but the dominance of DBN-based approaches has recently been overturned by interest in recurrent neural network-based approaches. RNNs are special in the sense that they can handle both variable-length inputs and outputs, and by training an RNN to predict the next output in a sequence, given all the previous outputs, it can be used to model a joint probability distribution over sequences. RNNs possess both a richly distributed internal state representation and flexible non-linear transition functions (which determine the evolution of the internal hidden state), giving them high expressive power; as a consequence, RNNs have gained significant popularity as generative models for highly structured sequential data such as natural speech. By highly structured data, the authors mean that the data is characterized by two properties.
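To make the "predict the next output to model a joint distribution" point concrete, here is a minimal sketch in NumPy of scoring a binary sequence with a tiny hand-rolled next-step RNN via the chain rule. The network size and the random, untrained weights are assumptions for illustration only; in practice the parameters come from next-step prediction training.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 4
# Untrained toy parameters (assumption); real models learn these from data.
Wxh, Whh = rng.normal(size=(1, hidden)), rng.normal(size=(hidden, hidden))
Why, bh, by = rng.normal(size=(hidden, 1)), np.zeros(hidden), np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sequence_log_prob(seq):
    """log p(x_1..T) = sum_t log p(x_t | x_<t), read off a next-step RNN."""
    h = np.zeros(hidden)
    x_prev = np.zeros(1)          # dummy "start" input before the first symbol
    logp = 0.0
    for x_t in seq:
        h = np.tanh(x_prev @ Wxh + h @ Whh + bh)        # recurrent state update
        p_one = sigmoid(h @ Why + by)[0]                # P(next symbol is 1 | history)
        logp += np.log(p_one if x_t == 1 else 1 - p_one)
        x_prev = np.array([float(x_t)])
    return logp

print(sequence_log_prob([1, 0, 0, 1, 1]))
```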


Congratulations to the #IJCAI2021 best paper award winners

AIHub

The IJCAI-2021 awards were announced during the opening ceremony of the International Joint Conference on Artificial Intelligence (IJCAI-21). The honours included the 2021 AIJ classic paper award, the AIJ prominent paper award, and the IJCAI-JAIR best paper prize. The classic paper award recognizes outstanding papers, exceptional in their significance and impact, that were published at least 15 years ago in the journal Artificial Intelligence (AIJ). The winning paper brought partially observable Markov decision processes (POMDPs) from the field of operational research to the field of AI. It provides an excellent account of the theory behind POMDPs, which demystified the field for a generation of researchers and popularised their use in both AI and robotics.


Quantum adaptive agents with efficient long-term memories

arXiv.org Artificial Intelligence

Central to the success of adaptive systems is their ability to interpret signals from their environment and respond accordingly -- they act as agents interacting with their surroundings. Such agents typically perform better when able to execute increasingly complex strategies. This comes with a cost: the more information the agent must recall from its past experiences, the more memory it will need. Here we investigate the power of agents capable of quantum information processing. We uncover the most general form a quantum agent needs to adopt to maximise memory compression advantages, and provide a systematic means of encoding their memory states. We show that these encodings can exhibit extremely favourable scaling advantages relative to memory-minimal classical agents when information must be retained about events increasingly far into the past.