Gentle Introduction to Models for Sequence Prediction with Recurrent Neural Networks - Machine Learning Mastery

#artificialintelligence

Sequence prediction is a problem that involves using historical sequence information to predict the next value or values in the sequence. The sequence may be symbols, like letters in a sentence, or real values, like those in a time series of prices. Sequence prediction may be easiest to understand in the context of time series forecasting, since that problem is already widely understood. In this post, you will discover the standard sequence prediction models that you can use to frame your own sequence prediction problems. Recurrent Neural Networks, like Long Short-Term Memory (LSTM) networks, are designed for sequence prediction problems.
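
As a concrete illustration of the many-to-one framing described above, here is a minimal sketch of next-step time series prediction with an LSTM. It is not taken from the post; it assumes the Keras API, and the window length, layer sizes and toy sine-wave series are arbitrary illustrative choices.

    # Minimal sketch (assumed Keras API): predict the next value of a series
    # from a window of the previous 10 values.
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    def make_windows(series, window=10):
        # Slice a 1-D series into (samples, window, 1) inputs and next-step targets.
        X, y = [], []
        for i in range(len(series) - window):
            X.append(series[i:i + window])
            y.append(series[i + window])
        return np.array(X)[..., np.newaxis], np.array(y)

    series = np.sin(np.linspace(0, 50, 500))      # toy time series of real values
    X, y = make_windows(series, window=10)

    model = Sequential([
        LSTM(32, input_shape=(10, 1)),            # encode the input sequence
        Dense(1),                                 # predict the next real value
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, verbose=0)
    next_value = model.predict(X[-1:])            # forecast the next step

The same framing extends to many-to-many prediction by returning sequences from the LSTM and predicting several future steps at once.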


Recurrent Auto-Encoder Model for Large-Scale Industrial Sensor Signal Analysis

arXiv.org Machine Learning

A recurrent auto-encoder model summarises sequential data through an encoder structure into a fixed-length vector and then reconstructs the original sequence through the decoder structure. The summarised vector can be used to represent time series features. In this paper, we propose relaxing the dimensionality of the decoder output so that it performs partial reconstruction. The fixed-length vector therefore represents features in the selected dimensions only. In addition, we propose using a rolling fixed-window approach to generate training samples from unbounded time series data. The change of time series features over time can be summarised as a smooth trajectory path. The fixed-length vectors are further analysed using additional visualisation and unsupervised clustering techniques. The proposed method can be applied in large-scale industrial processes for sensor signal analysis, where clusters of the vector representations can reflect the operating states of the industrial system.
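
The sketch below is my own illustration of the idea, not the authors' code: an LSTM encoder compresses a rolling window of multivariate sensor readings into a fixed-length vector, and the decoder reconstructs only a chosen subset of the dimensions (partial reconstruction). The Keras API, window length, latent size and random data are all assumptions.

    # Sketch of a recurrent auto-encoder with partial reconstruction (assumed Keras API).
    import numpy as np
    from tensorflow.keras.models import Model
    from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense

    window, n_sensors, n_selected, latent = 30, 8, 3, 16   # illustrative dimensions

    inputs = Input(shape=(window, n_sensors))
    code = LSTM(latent)(inputs)                        # fixed-length context vector
    decoded = RepeatVector(window)(code)               # feed the code at every time step
    decoded = LSTM(latent, return_sequences=True)(decoded)
    outputs = TimeDistributed(Dense(n_selected))(decoded)  # reconstruct selected dims only

    autoencoder = Model(inputs, outputs)
    encoder = Model(inputs, code)                      # used later to extract features
    autoencoder.compile(optimizer="adam", loss="mse")

    # Rolling fixed-window sampling from an unbounded stream (stride 1 here).
    stream = np.random.randn(1000, n_sensors)
    X = np.stack([stream[i:i + window] for i in range(len(stream) - window)])
    autoencoder.fit(X, X[:, :, :n_selected], epochs=2, verbose=0)
    features = encoder.predict(X)                      # one fixed-length vector per window

The per-window vectors in features are what would then be visualised or clustered to reveal operating states.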


Backpropagation in Neural Networks: How it Helps?

#artificialintelligence

Neural networks have shown significant advancements in recent years. From facial recognition in smartphone Face ID to self-driving cars, the applications of neural networks have influenced every industry. This subset of machine learning is composed of node layers: an input layer, one or more hidden layers, and an output layer. Nodes are interconnected, much like neurons in the human brain, and each has an associated weight and threshold. If the output value of a node exceeds the specified threshold, the node is activated and relays data to the next layer of the network. There are various activation functions, such as the threshold function, the piecewise linear function, and the sigmoid function.
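
To make the description concrete, here is a small illustration (not from the article) of the three activation functions named above and of a single node firing when its weighted input exceeds its threshold; the weights, inputs and threshold are made-up values.

    # Three common activation functions and a single-node forward pass.
    import numpy as np

    def threshold_fn(z, theta=0.0):
        # Step function: fire (1) only when the input reaches the threshold.
        return np.where(z >= theta, 1.0, 0.0)

    def piecewise_linear(z):
        # Linear between -1 and 1, clipped outside that range.
        return np.clip(z, -1.0, 1.0)

    def sigmoid(z):
        # Smooth squashing to (0, 1); its differentiability is what backpropagation relies on.
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.5, -0.2, 0.8])    # inputs to the node
    w = np.array([0.4, 0.7, -0.1])    # associated weights
    b = 0.05                          # bias term
    z = np.dot(w, x) + b              # weighted sum of inputs
    print(threshold_fn(z, theta=0.1), piecewise_linear(z), sigmoid(z))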


Generalization and Scaling in Reinforcement Learning

Neural Information Processing Systems

In associative reinforcement learning, an environment generates input vectors, a learning system generates possible output vectors, and a reinforcement function computes feedback signals from the input-output pairs. The task is to discover and remember input-output pairs that generate rewards. Especially difficult cases occur when rewards are rare, since the expected time for any algorithm can grow exponentially with the size of the problem. Nonetheless, if a reinforcement function possesses regularities, and a learning algorithm exploits them, learning time can be reduced below that of non-generalizing algorithms. This paper describes a neural network algorithm called complementary reinforcement back-propagation (CRBP), and reports simulation results on problems designed to offer differing opportunities for generalization.
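
The paper's CRBP algorithm is a full back-propagation network; the sketch below is only a simplified, single-layer rendering of the complementary-reinforcement idea as described in the abstract: stochastic output units are trained toward the emitted output when it is rewarded, and toward its bit-wise complement when it is not. The toy reinforcement function, learning rate and sizes are made up for illustration.

    # Simplified single-layer version of complementary reinforcement learning.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out, lr = 4, 4, 0.5
    W = rng.normal(scale=0.1, size=(n_out, n_in))      # single-layer weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def reinforcement(x, y):
        # Toy reinforcement function with a regularity: reward copying the input.
        return 1.0 if np.array_equal(x, y) else 0.0

    for trial in range(5000):
        x = rng.integers(0, 2, size=n_in).astype(float)  # environment's input vector
        p = sigmoid(W @ x)                               # output probabilities
        y = (rng.random(n_out) < p).astype(float)        # stochastic output vector
        r = reinforcement(x, y)
        target = y if r > 0 else 1.0 - y                 # complement the output on failure
        # Delta-rule update toward the (possibly complemented) output pattern.
        W += lr * np.outer(target - p, x)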

