time series


shobrook/sequitur

#artificialintelligence

sequitur implements three autoencoder architectures in PyTorch, along with a predefined training loop. Each autoencoder learns to represent input sequences as lower-dimensional, fixed-size vectors. This can be useful for finding patterns among sequences, clustering sequences, or converting sequences into inputs for other algorithms. First, you need to prepare a set of example sequences to train an autoencoder on; if each example in your training set is a sequence of ten 5x5 matrices, then each example would be a tensor with shape [10, 5, 5].
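
A minimal usage sketch, assuming the quick_train helper and LSTM_AE model named in the repo's README (these names and signatures are taken from that README and may change between versions):

```python
import torch
from sequitur import quick_train        # training helper named in the README
from sequitur.models import LSTM_AE     # one of the three architectures

# 100 examples, each a sequence of ten 5x5 matrices, i.e. shape [10, 5, 5].
# LSTM_AE expects [seq_len, num_features], so each 5x5 frame is flattened to
# 25 features; per the README, CONV_LSTM_AE accepts the matrices directly.
train_set = [torch.randn(10, 5, 5).flatten(start_dim=1) for _ in range(100)]

# Train an autoencoder that compresses each sequence into a 16-dim vector
encoder, decoder, encodings, losses = quick_train(LSTM_AE, train_set, encoding_dim=16)

z = encoder(train_set[0])        # fixed-size embedding of one sequence
x_hat = decoder(z, seq_len=10)   # approximate reconstruction (README signature)
```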


Top 10 Python Tools For Time Series Analysis

#artificialintelligence

A time series is a sequence of numerical data points in successive order, and time series analysis is the technique of analysing the available data to predict the future outcome of an application. At present, time series analysis has been utilised in a number of applications, including stock market analysis, economic forecasting, pattern recognition, and sales forecasting. Here is a list, in no particular order, of the top ten Python tools for time series analysis.

About: Arrow is a Python library that offers a human-friendly approach to creating, manipulating, formatting and converting dates, times and timestamps. The library implements and updates the datetime type, plugging gaps in functionality and providing an intelligent module API that supports many common creation scenarios (see the sketch below).

About: Cesium is an open-source library that allows users to extract features from raw time series data, build machine learning models from these features, and generate predictions for new data.
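
To give a sense of Arrow's human-friendly API, here is a minimal sketch using calls from Arrow's documentation:

```python
import arrow

now = arrow.utcnow()                         # current UTC timestamp
local = now.to('Europe/Madrid')              # timezone conversion
parsed = arrow.get('2021-03-15 10:30', 'YYYY-MM-DD HH:mm')

print(local.format('YYYY-MM-DD HH:mm'))      # formatted output
print(parsed.humanize())                     # e.g. "3 years ago", relative to now
```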


Predict just about anything with Google Earth Engine. Part I

#artificialintelligence

Would you like to be able to develop and prepare the data you need to pose, explore, and answer the most pressing and complex questions in your field of research? This course concerns itself with one of the most demanding and least covered parts of developing a predictive model for precision agriculture, or just about anything else: sampling. When studying machine learning through video tutorials, you normally access somebody else's dataset and learn how to apply algorithms to it. But how were those neat datasets created? This course details how to use, and adapt to your unique needs, some tools I developed for sampling just about any spatially explicit variable through the Google Earth Engine platform.
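
For a sense of what sampling through the Earth Engine Python API looks like, here is a minimal sketch (not the course's own tools; the dataset ID, band, and region below are placeholders):

```python
import ee

ee.Initialize()  # requires a prior `earthengine authenticate`

# Placeholder variable to sample: SRTM elevation
image = ee.Image('USGS/SRTMGL1_003').select('elevation')
region = ee.Geometry.Rectangle([-5.0, 40.0, -4.0, 41.0])  # hypothetical study area

# Draw 500 random sample points within the region at 30 m resolution
samples = image.sample(region=region, scale=30, numPixels=500, geometries=True)
print(samples.first().getInfo())
```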


Bayesian Change Point Detection under Complex Time Series in Python Machine Learning Client for SAP HANA

#artificialintelligence

A complex time series in real life usually has many change points within it. When dealing with such data, simply applying a traditional seasonality test may not produce a convincing decomposition. In this blog post, we show how to use Bayesian Change Point Detection in the Python machine learning client for SAP HANA (hana-ml) to detect those change points and decompose the target time series. A real-world time series may not decompose neatly into a monotonic trend and regular seasonal waves; on the contrary, those components may themselves contain a great many inner change points.
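
A minimal sketch of the hana-ml side, assuming the BCPD class under hana_ml.algorithms.pal.tsa.changepoint and its fit_predict signature (verify both against the hana-ml documentation for your release):

```python
from hana_ml import dataframe
from hana_ml.algorithms.pal.tsa.changepoint import BCPD  # assumed module path

# Connect to a SAP HANA instance (placeholder credentials)
conn = dataframe.ConnectionContext('host', 30015, 'USER', 'PASSWORD')
df = conn.table('MY_TIME_SERIES')  # assumed columns: ID (key), VALUE (target)

# Detect up to 5 trend change points and 5 seasonal change points
bcpd = BCPD(max_tcp=5, max_scp=5)
tcp, scp, period, components = bcpd.fit_predict(data=df, key='ID', endog='VALUE')
print(tcp.collect())  # trend change points, fetched as a pandas DataFrame
```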


Am I fit for this physical activity? Neural embedding of physical conditioning from inertial sensors

arXiv.org Artificial Intelligence

Inertial Measurement Unit (IMU) sensors are becoming increasingly ubiquitous in everyday devices such as smartphones and fitness watches. As a result, the array of health-related applications that tap into these data has been growing, as has the importance of designing accurate prediction models for tasks such as human activity recognition (HAR). However, one important task that has received little attention is the prediction of an individual's heart rate during a physical activity using IMU data. This could be used, for example, to determine which activities are safe for a person without having them actually perform the activities. We propose a neural architecture for this task composed of convolutional and LSTM layers, similar to state-of-the-art techniques for the closely related task of HAR. However, our model includes a convolutional network that extracts, based on sensor data from a previously executed activity, a physical conditioning embedding (PCE) of the individual, which is used as the LSTM's initial hidden state. We evaluate the proposed model, dubbed PCE-LSTM, when predicting the heart rate of 23 subjects performing a variety of physical activities from IMU-sensor data available in public datasets (PAMAP2, PPG-DaLiA). For comparison, we use as baselines the only model specifically proposed for this task and an adapted state-of-the-art model for HAR. PCE-LSTM yields over 10% lower mean absolute error. We demonstrate empirically that this error reduction is in part due to the use of the PCE. Lastly, we use two datasets (PPG-DaLiA, WESAD) to show that PCE-LSTM can also be successfully applied when photoplethysmography (PPG) sensors are available to rectify heart rate measurement errors caused by movement, outperforming the state-of-the-art deep learning baselines by more than 30%.
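
The core architectural idea is a convolutional encoder whose output embedding seeds the LSTM's initial hidden state. A minimal PyTorch sketch of that wiring (layer sizes and channel counts are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

class PCELSTM(nn.Module):
    """Sketch: conv encoder -> physical conditioning embedding -> LSTM h0."""
    def __init__(self, n_channels=6, hidden=64):
        super().__init__()
        # Convolutional encoder over IMU data from a previously executed activity
        self.pce = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, hidden),
        )
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # per-step heart-rate estimate

    def forward(self, past_imu, current_imu):
        # past_imu: [B, C, T_past]; current_imu: [B, T, C]
        h0 = self.pce(past_imu).unsqueeze(0)   # PCE as initial hidden state
        c0 = torch.zeros_like(h0)
        out, _ = self.lstm(current_imu, (h0, c0))
        return self.head(out).squeeze(-1)      # [B, T] heart-rate series

model = PCELSTM()
hr = model(torch.randn(8, 6, 500), torch.randn(8, 200, 6))  # dummy batch
```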


An Experimental Review on Deep Learning Architectures for Time Series Forecasting

arXiv.org Artificial Intelligence

In recent years, deep learning techniques have outperformed traditional models in many machine learning tasks. Deep neural networks have been successfully applied to time series forecasting problems, a very important topic in data mining. They have proved to be an effective solution given their capacity to automatically learn the temporal dependencies present in time series. However, selecting the most suitable type of deep neural network and its parametrization is a complex task that requires considerable expertise. Therefore, there is a need for deeper studies of the suitability of existing architectures for different forecasting tasks. In this work, we address two main challenges: a comprehensive review of the latest work using deep learning for time series forecasting, and an experimental study comparing the performance of the most popular architectures. The comparison involves a thorough analysis of seven types of deep learning models in terms of accuracy and efficiency. We evaluate the rankings and distribution of results obtained with the proposed models under many different architecture configurations and training hyperparameters. The datasets used comprise more than 50,000 time series divided into 12 different forecasting problems. By training more than 38,000 models on these data, we provide the most extensive deep learning study for time series forecasting to date. Among all studied models, the results show that long short-term memory (LSTM) networks and convolutional neural networks (CNNs) are the best alternatives, with LSTMs obtaining the most accurate forecasts. CNNs achieve comparable performance with less variability across parameter configurations, while also being more efficient.


Simplifying data: IBM's AutoAI automates time series forecasting

#artificialintelligence

Creating AI models is not a walk in the park. So why not get AI to… build AI? It sounds simple, but with the ever-growing variety of models, data scientists first need tools that automate the model-building process. In time series forecasting – models that predict future values of a time series based on past data or features – the problem is even harder. There are simply too many domains that generate time series data, each with different and complex modeling approaches. We think we can help.


Learning Time Series from Scale Information

arXiv.org Machine Learning

A sequentially obtained dataset usually exhibits different behavior at different data resolutions/scales. Instead of inferring from the data at each scale individually, it is often more informative to interpret the data as an ensemble of time series at different scales. This naturally motivates a new concept, which we refer to as scale-based inference. The basic idea is that more accurate predictions can be made by exploiting the scale information of a time series. We first propose a nonparametric predictor based on $k$-nearest neighbors with an optimally chosen $k$ for a single time series. Building on that, we focus on a specific but important type of scale information: the resolution/sampling rate of time series data. We then propose an algorithm to sequentially predict time series using past data at various resolutions. We prove that, asymptotically, the algorithm produces a mean prediction error no larger than that of the best possible algorithm at any single resolution, under optimally chosen parameters. Finally, we establish general formulations for scale inference and provide further motivating examples. Experiments on both synthetic and real data illustrate the potential applicability of our approaches to a wide range of time series models.
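
A simple version of the single-series building block, a $k$-nearest-neighbors predictor over recent windows, can be sketched as follows (window length and $k$ are illustrative, not the optimally chosen values from the paper):

```python
import numpy as np

def knn_predict(series, window=5, k=3):
    """Predict the next value by averaging what followed the k past
    windows most similar to the most recent window."""
    x = np.asarray(series, dtype=float)
    query = x[-window:]
    # Candidate windows x[s:s+window] whose successor x[s+window] is known
    candidates = [(np.linalg.norm(x[s:s + window] - query), x[s + window])
                  for s in range(len(x) - window)]
    candidates.sort(key=lambda pair: pair[0])
    return float(np.mean([succ for _, succ in candidates[:k]]))

rng = np.random.default_rng(0)
t = np.arange(200)
y = np.sin(0.2 * t) + 0.1 * rng.standard_normal(200)
print(knn_predict(y))  # one-step-ahead forecast
```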


Modeling Multivariate Cyber Risks: Deep Learning Dating Extreme Value Theory

arXiv.org Machine Learning

Modeling cyber risks has been an important but challenging task in the domain of cyber security, mainly because of the high dimensionality and heavy tails of risk patterns. Those obstacles have hindered the development of statistical modeling of multivariate cyber risks. In this work, we propose a novel approach for modeling multivariate cyber risks that relies on deep learning and extreme value theory. The proposed model not only delivers highly accurate point predictions via deep learning but also provides satisfactory high-quantile predictions via extreme value theory. A simulation study shows that the proposed model captures multivariate cyber risks well and delivers satisfactory prediction performance. Empirical evidence based on real honeypot attack data likewise shows that the proposed model has very satisfactory prediction performance.
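
The deep learning component is not reproduced here, but the extreme-value step, fitting a generalized Pareto distribution to threshold exceedances to estimate high quantiles, can be sketched with SciPy (the threshold choice and data are illustrative):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
# Stand-in for heavy-tailed residuals left after a point-prediction model
residuals = rng.standard_t(df=3, size=5000)

u = np.quantile(residuals, 0.95)            # threshold: empirical 95th percentile
exceedances = residuals[residuals > u] - u
shape, loc, scale = genpareto.fit(exceedances, floc=0)

# Peaks-over-threshold estimate of the 99.9% quantile:
# P(X > x) = P(X > u) * (1 - F_GPD(x - u))
p_u = (residuals > u).mean()
q = 0.999
tail_q = u + genpareto.ppf(1 - (1 - q) / p_u, shape, loc=0, scale=scale)
print(f"estimated {q:.1%} quantile: {tail_q:.2f}")
```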


Forecasting reconciliation with a top-down alignment of independent level forecasts

arXiv.org Machine Learning

Hierarchical forecasting with intermittent time series is a challenge in both research and empirical studies. Overall forecasting performance is heavily affected by the forecasting accuracy of the intermittent time series at the bottom levels. In this paper, we present a forecast reconciliation approach that treats the bottom-level forecasts as latent in order to ensure higher forecasting accuracy on the upper levels of the hierarchy. We employ a pure deep learning forecasting approach, N-BEATS, for the continuous time series at the top levels, and a widely used tree-based algorithm, LightGBM, for the intermittent time series at the bottom level. The hierarchical-forecasting-with-alignment approach is simple and straightforward to implement in practice, and it sheds light on an orthogonal direction for forecast reconciliation. When it is difficult to find an optimal reconciliation, allowing suboptimal forecasts at a lower level can retain high overall performance. The approach in this empirical study was developed by the first author during the M5 Forecasting Accuracy competition, where it ranked second. The approach is business oriented and could be beneficial for business strategic planning.
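
One natural instantiation of such a top-down alignment is proportional rescaling: trust the top-level forecast and rescale the bottom-level forecasts so they aggregate to it. A minimal sketch, with placeholder arrays standing in for the N-BEATS and LightGBM outputs:

```python
import numpy as np

def top_down_align(top_forecast, bottom_forecasts, eps=1e-8):
    """Rescale bottom-level forecasts so that, at each horizon step,
    they sum exactly to the (trusted) top-level forecast."""
    bottom = np.asarray(bottom_forecasts, dtype=float)  # [n_series, horizon]
    top = np.asarray(top_forecast, dtype=float)         # [horizon]
    totals = bottom.sum(axis=0)                         # current bottom-level sums
    ratio = top / np.maximum(totals, eps)               # per-step scaling factor
    return bottom * ratio

# Placeholder forecasts: N-BEATS for the total, LightGBM per bottom series
top = np.array([100.0, 110.0, 95.0])
bottom = np.array([[40.0, 42.0, 39.0],
                   [35.0, 38.0, 30.0],
                   [20.0, 25.0, 22.0]])
aligned = top_down_align(top, bottom)
print(aligned.sum(axis=0))  # matches `top` up to floating point
```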