
time series


Getting started with Time Series using Pandas

#artificialintelligence

This article aims to introduce some standard techniques used in time-series analysis and to walk through the iterative steps required to manipulate and visualize time-series data. The example data concerns Maruti Suzuki India Limited, formerly known as Maruti Udyog Limited, an automobile manufacturer in India and a 56.21%-owned subsidiary of the Japanese car and motorcycle manufacturer Suzuki Motor Corporation. Fire up the editor of your choice and type in the following code to import the required libraries and data. The data has been taken from Kaggle.
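A minimal sketch of the kind of setup the article describes, assuming a hypothetical Kaggle CSV named MARUTI.csv with Date and Close columns (the actual file and column names may differ):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Parse the date column and use it as the index so pandas treats the
    # data as a time series.
    df = pd.read_csv("MARUTI.csv", parse_dates=["Date"], index_col="Date")

    # Resample daily closing prices to monthly means and plot the result.
    df["Close"].resample("M").mean().plot(title="Monthly mean closing price")
    plt.show()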


An Intuitive Study of Time Series Analysis

#artificialintelligence

Time series data are a set of observations on the values that a variable takes at different times; such data may be collected at regular time intervals, for example daily stock prices, monthly money-supply figures, or annual GDP. Time series data have a natural temporal ordering. This makes time series analysis distinct from other common data analysis problems, in which there is no natural order to the observations. In simple words, data collected according to time are called time series data. On the other hand, data collected by observing many subjects at the same point in time are called cross-sectional data. A time series is a set of observations measured at time or space intervals and arranged in chronological order.
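To make the distinction concrete, here is a small illustrative sketch (the numbers are made up): a time series carries a natural temporal order in its index, while a cross-sectional sample does not.

    import pandas as pd

    # Time series: one variable observed at regular time intervals
    # (natural temporal ordering).
    ts = pd.Series([101.2, 102.5, 101.9],
                   index=pd.date_range("2021-01-01", periods=3, freq="D"),
                   name="daily_stock_price")

    # Cross-sectional: many subjects observed at the same point in time
    # (no natural order).
    cs = pd.DataFrame({"subject": ["firm_A", "firm_B", "firm_C"],
                       "value": [1.4, 2.3, 0.9]})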


Machine Learning Applied to Time Series

#artificialintelligence

Typically, the most distinctive feature is prediction: training on the available data sets to estimate an event that is likely to happen in the future. Objectives can of course change with sectoral expectations, but we can emphasize what is common. Handling the focal point separately from the general picture in most cases makes the whole easier to understand, because a suitable estimate that keeps the composition valid can be constructed this way. An example would be to show the curve of birth and death rates by year in a single graph.


Interpretable Models for Granger Causality Using Self-explaining Neural Networks

arXiv.org Machine Learning

Exploratory analysis of time series data can yield a better understanding of complex dynamical systems. Granger causality is a practical framework for analysing interactions in sequential data, applied in a wide range of domains. In this paper, we propose a novel framework for inferring multivariate Granger causality under nonlinear dynamics based on an extension of self-explaining neural networks. This framework is more interpretable than other neural-network-based techniques for inferring Granger causality, since in addition to relational inference, it also allows detecting signs of Granger-causal effects and inspecting their variability over time. In comprehensive experiments on simulated data, we show that our framework performs on par with several powerful baseline methods at inferring Granger causality and that it achieves better performance at inferring interaction signs. The results suggest that our framework is a viable and more interpretable alternative to sparse-input neural networks for inferring Granger causality.
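The paper's self-explaining-network framework is not reproduced here, but the classic linear Granger causality test (the kind of baseline such methods are compared against) can be sketched with statsmodels; the simulated data and coefficients below are illustrative assumptions:

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    n = 500
    x = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        # y depends on lagged x, so x Granger-causes y (with a positive sign).
        y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + 0.1 * rng.standard_normal()

    # Column order is (effect, cause): test whether x Granger-causes y.
    results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)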


Discrete Graph Structure Learning for Forecasting Multiple Time Series

arXiv.org Machine Learning

Time series forecasting is an extensively studied subject in statistics, economics, and computer science. Exploration of the correlation and causation among the variables in a multivariate time series shows promise in enhancing the performance of a time series model. When using deep neural networks as forecasting models, we hypothesize that exploiting the pairwise information among multiple (multivariate) time series also improves their forecast. If an explicit graph structure is known, graph neural networks (GNNs) have been demonstrated as powerful tools to exploit the structure. In this work, we propose learning the structure simultaneously with the GNN if the graph is unknown. We cast the problem as learning a probabilistic graph model through optimizing the mean performance over the graph distribution. The distribution is parameterized by a neural network so that discrete graphs can be sampled differentiably through reparameterization. Empirical evaluations show that our method is simpler, more efficient, and better performing than a recently proposed bilevel learning approach for graph structure learning, as well as a broad array of forecasting models, either deep or non-deep learning based, and graph or non-graph based.
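One standard way to realize "discrete graphs sampled differentiably through reparameterization" is the Gumbel-softmax trick; the following PyTorch sketch illustrates that building block under assumed names and sizes, not the paper's actual model:

    import torch
    import torch.nn.functional as F

    n_nodes = 5  # hypothetical number of (multivariate) time series

    # Edge logits over {edge, no-edge} for every ordered node pair; in the
    # paper's setting these would be produced by a neural network.
    logits = torch.nn.Parameter(torch.zeros(n_nodes, n_nodes, 2))

    # Gumbel-softmax reparameterization: returns one-hot samples while keeping
    # a gradient path to the logits (hard=True uses straight-through gradients).
    sample = F.gumbel_softmax(logits, tau=0.5, hard=True)
    adjacency = sample[..., 0]  # channel 0 = "edge present": a 0/1 matrix

    # adjacency can now feed a GNN forecaster, and a forecasting loss can be
    # backpropagated through the sampling step to update the logits.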


The Connection between Discrete- and Continuous-Time Descriptions of Gaussian Continuous Processes

arXiv.org Machine Learning

Learning the continuous equations of motion from discrete observations is a common task in all areas of physics. However, not every discretization of a Gaussian continuous-time stochastic process can be adopted in parametric inference. We show that discretizations yielding consistent estimators have the property of 'invariance under coarse-graining' and correspond to fixed points of a renormalization group map on the space of autoregressive moving average (ARMA) models (for linear processes). This result explains why combining differencing schemes for derivative reconstruction with local-in-time inference approaches does not work for time series analysis of second- or higher-order stochastic differential equations, even if the corresponding integration schemes may be acceptably good for numerical simulations.
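A concrete instance of the invariance-under-coarse-graining property, checked numerically under assumed parameters: subsampling a stationary AR(1) with coefficient a at every second step yields again an AR(1), with coefficient a**2, so the AR(1) family is a fixed point of this coarse-graining map.

    import numpy as np

    rng = np.random.default_rng(0)
    a, n = 0.9, 200_000

    # Simulate a stationary AR(1): x_t = a * x_{t-1} + e_t.
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + rng.standard_normal()

    def lag1_coeff(z):
        # Sample lag-1 autocorrelation, which equals the AR(1) coefficient.
        z = z - z.mean()
        return np.dot(z[1:], z[:-1]) / np.dot(z, z)

    print(lag1_coeff(x))       # close to a     (0.9)
    print(lag1_coeff(x[::2]))  # close to a**2  (0.81)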


TC-DTW: Accelerating Multivariate Dynamic Time Warping Through Triangle Inequality and Point Clustering

arXiv.org Artificial Intelligence

Dynamic time warping (DTW) plays an important role in time series analytics. Despite the large body of research on speeding up univariate DTW, methods for multivariate DTW have not improved much in the last two decades; the most popular algorithm used today is still the one developed seventeen years ago. This paper presents a solution that, as far as we know, for the first time consistently outperforms the classic multivariate DTW algorithm across dataset sizes, series lengths, data dimensions, temporal window sizes, and machines. The new solution, named TC-DTW, introduces the triangle inequality and point clustering into the design of lower bound calculations for multivariate DTW. In experiments on DTW-based nearest neighbor finding, the new solution avoids as much as 98% (60% on average) of DTW distance calculations and yields up to 25X (7.5X average) speedups.
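TC-DTW itself is not reproduced here; as a point of reference, the classic dependent multivariate DTW that it accelerates can be sketched in a few lines (an O(mn) dynamic program; names and the optional Sakoe-Chiba window are illustrative):

    import numpy as np

    def multivariate_dtw(a, b, window=None):
        # a, b: arrays of shape (length, dim). Classic dependent DTW: one
        # warping path shared by all dimensions, Euclidean point distances.
        la, lb = len(a), len(b)
        w = max(window if window is not None else max(la, lb), abs(la - lb))
        D = np.full((la + 1, lb + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, la + 1):
            for j in range(max(1, i - w), min(lb, i + w) + 1):
                cost = np.sqrt(np.sum((a[i - 1] - b[j - 1]) ** 2))
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[la, lb]

    rng = np.random.default_rng(0)
    print(multivariate_dtw(rng.standard_normal((50, 3)),
                           rng.standard_normal((60, 3)), window=10))

TC-DTW leaves this recursion intact and instead prunes nearest-neighbor candidates with cheaper lower bounds built from the triangle inequality and point clustering.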


Exponential Kernels with Latency in Hawkes Processes: Applications in Finance

arXiv.org Machine Learning

The Tick library allows researchers in market microstructure to simulate and learn Hawkes processes in high-frequency data, with optimized parametric and non-parametric learners. One challenge, however, is to account for the correct causality of order book events in the presence of latency: the only way one order book event can influence another is if the time difference between them (by the central order book timestamps) is greater than the minimum time needed for an event to (i) be published in the order book, (ii) reach the trader responsible for the second event, (iii) influence the decision (processing time at the trader), and (iv) for the second event to reach the order book and be processed. For this we can use exponential kernels shifted to the right by the latency amount. We derive the expression for the log-likelihood to be minimized in the one-dimensional and multidimensional cases, and test the method on simulated data and real data. On real data we find that, although not all decays are the same, the latency itself determines most of the decays. We also show how the decays are related to the latency. Code is available on GitHub at https://github.com/MarcosCarreira/Hawkes-With-Latency.
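A sketch of the shifted-kernel idea in plain numpy (parameter names follow common Hawkes conventions rather than the paper's notation, and the numbers are illustrative):

    import numpy as np

    def shifted_exp_kernel(t, alpha, beta, latency):
        # phi(t) = alpha * beta * exp(-beta * (t - latency)) for t > latency,
        # and 0 otherwise: no influence before the latency has elapsed.
        t = np.asarray(t, dtype=float)
        return np.where(t > latency,
                        alpha * beta * np.exp(-beta * (t - latency)), 0.0)

    def intensity(t, events, mu, alpha, beta, latency):
        # Conditional intensity of a 1-D Hawkes process:
        # lambda(t) = mu + sum over past events t_i < t of phi(t - t_i).
        past = np.asarray(events, dtype=float)
        past = past[past < t]
        return mu + shifted_exp_kernel(t - past, alpha, beta, latency).sum()

    print(intensity(1.0, [0.1, 0.5, 0.9],
                    mu=0.2, alpha=0.5, beta=3.0, latency=0.05))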


Challenges and approaches to time-series forecasting in data center telemetry: A Survey

arXiv.org Artificial Intelligence

Time-series forecasting has been an important research domain for many years. Its applications include ECG prediction, sales forecasting, weather forecasting, and even COVID-19 spread prediction. These applications have motivated many researchers to search for an optimal forecasting approach, but the modeling approach also changes as the application domain changes. This work focuses on reviewing forecasting approaches for telemetry data collected at data centers. Forecasting of telemetry data is a critical feature of network and data center management products. However, the available forecasting approaches range from simple linear statistical models to high-capacity deep learning architectures. In this paper, we summarize and evaluate the performance of well-known time series forecasting techniques. We hope this evaluation provides a comprehensive summary that can inform innovation in forecasting approaches for telemetry data.
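As one example from the simple statistical end of that range, a Holt-Winters exponential smoothing baseline can be fit with statsmodels; the synthetic hourly telemetry below is an assumption for illustration:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # Synthetic CPU-utilization-like telemetry with a 24-hour cycle.
    idx = pd.date_range("2021-01-01", periods=24 * 14, freq="H")
    rng = np.random.default_rng(0)
    y = pd.Series(50 + 10 * np.sin(2 * np.pi * np.arange(len(idx)) / 24)
                  + rng.normal(0, 2, len(idx)), index=idx)

    # Additive trend and 24-hour additive seasonality.
    model = ExponentialSmoothing(y, trend="add", seasonal="add",
                                 seasonal_periods=24)
    forecast = model.fit().forecast(24)  # next day, hour by hour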


General Hannan and Quinn Criterion for Common Time Series

arXiv.org Machine Learning

A common solution in model selection is to choose the model minimizing a penalized criterion that is the sum of two terms: the first is the empirical risk (least squares, likelihood), which measures goodness of fit, and the second is an increasing function of model complexity, which penalizes large models and controls the bias. A challenging task when designing a penalized criterion is therefore the specification of the penalty term. Considering leading model selection criteria (BIC, AIC, Cp, and HQ, to name a few), one can see that the penalty term is the product of the model dimension with a sequence specific to the criterion. Indeed, a criterion is designed according to the goal one would like to achieve. The classical properties for model selection criteria include consistency, efficiency (oracle inequality, asymptotic optimality), and adaptivity in the minimax sense.
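The penalty structure described above can be made explicit. In standard textbook form (stated here for illustration; the paper studies a generalized HQ-type criterion), the leading criteria all take the shape "empirical risk plus model dimension k times a criterion-specific sequence":

    \mathrm{AIC}(k) = -2\log\hat{L}_k + 2k
    \mathrm{BIC}(k) = -2\log\hat{L}_k + k\log n
    \mathrm{HQ}(k)  = -2\log\hat{L}_k + 2c\,k\log\log n, \quad c > 1

Here \hat{L}_k is the maximized likelihood of a model with k parameters and n is the sample size; BIC's \log n sequence targets consistency, AIC's constant 2 targets efficiency, and HQ's \log\log n is the slowest-growing sequence that still yields strong consistency.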