Chaotic Time Series


A Novel Approach for Estimating Largest Lyapunov Exponents in One-Dimensional Chaotic Time Series Using Machine Learning

Velichko, A., Belyaev, M., Boriskov, P.

arXiv.org Artificial Intelligence

Understanding and quantifying chaos from data remains challenging. We present a data-driven method for estimating the largest Lyapunov exponent (LLE) from one-dimensional chaotic time series using machine learning. A predictor is trained to produce out-of-sample, multi-horizon forecasts; the LLE is then inferred from the exponential growth of the geometrically averaged forecast error (GMAE) across the horizon, which serves as a proxy for trajectory divergence. We validate the approach on four canonical 1D maps (logistic, sine, cubic, and Chebyshev), achieving R^2_pos > 0.99 against reference LLE curves with series as short as M = 450. Among baselines, KNN yields the closest fits (KNN-R is comparable; RF shows larger deviations). By design the estimator targets positive exponents: in periodic/stable regimes it returns values indistinguishable from zero. Noise robustness is assessed by adding zero-mean white measurement noise and summarizing performance versus the average SNR over parameter sweeps: accuracy saturates for SNR_m > 30 dB and collapses below 27 dB, a conservative sensor-level benchmark. The method is simple, computationally efficient, and model-agnostic, requiring only stationarity and the presence of a dominant positive exponent. It offers a practical route to LLE estimation in experimental settings where only scalar time-series measurements are available, with extensions to higher-dimensional and irregularly sampled data left for future work.
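
A minimal sketch of the GMAE-based estimate described in this abstract, assuming a KNN regressor trained per horizon on delay vectors; the embedding size, neighbour count, and horizon used here are illustrative choices, not the authors' exact settings:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def logistic_series(r, n=450, x0=0.41, burn=100):
    # Iterate x -> r * x * (1 - x), discarding a transient.
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def lle_from_gmae(series, emb=2, horizon=6, train_frac=0.7, k=4):
    # Delay vectors X[i] = (x_i, ..., x_{i+emb-1}); one KNN model per horizon.
    n = len(series) - emb - horizon
    X = np.array([series[i:i + emb] for i in range(n)])
    split = int(train_frac * n)
    log_gmae = []
    for h in range(1, horizon + 1):
        y = series[emb + h - 1: emb + h - 1 + n]   # value h steps past the window
        model = KNeighborsRegressor(n_neighbors=k).fit(X[:split], y[:split])
        err = np.abs(model.predict(X[split:]) - y[split:])
        log_gmae.append(np.mean(np.log(err + 1e-15)))  # log of geometric mean error
    # LLE proxy: slope of log(GMAE) versus forecast horizon (nats per iteration).
    return np.polyfit(np.arange(1, horizon + 1), log_gmae, 1)[0]

print(f"estimated LLE: {lle_from_gmae(logistic_series(4.0)):.2f}")
```

For the fully chaotic logistic map (r = 4) the reference LLE is ln 2 ≈ 0.693; the fitted slope only approximates this, since the forecast error saturates at long horizons.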


Scaling Law for Large-Scale Pre-Training Using Chaotic Time Series and Predictability in Financial Time Series

Takemoto, Yuki

arXiv.org Artificial Intelligence

Time series forecasting plays a critical role in decision-making across diverse fields including meteorology, traffic, electricity, economics, and finance. Predicting returns on financial instruments is an especially challenging problem. Some researchers have proposed time series foundation models applicable to various forecasting tasks. In parallel, based on the recognition that real-world time series exhibit chaotic properties, methods have been developed to artificially generate synthetic chaotic time series, construct diverse datasets, and train models on them. In this study, we propose a methodology for modeling financial time series by generating artificial chaotic time series and applying resampling techniques to simulate financial data, which we then use as training samples. Increasing the resampling interval to extend the predictive horizon, we conducted large-scale pre-training using 10 billion training samples for each case. We then created test datasets for multiple timeframes from actual Bitcoin trade data and performed zero-shot prediction without re-training the pre-trained model. Evaluating the profitability of a simple trading strategy based on these predictions demonstrated significant performance improvements over autocorrelation models. During large-scale pre-training, we observed a scaling-law-like phenomenon: for chaotic time series, a given level of predictive performance can be maintained at extended predictive horizons by increasing the number of training samples exponentially. If this scaling law proves robust and holds across various chaotic models, it suggests the potential to predict near-future events by investing substantial computational resources. Future research should focus on further large-scale training and on verifying the applicability of this scaling law to diverse chaotic models.
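
The data-generation idea can be sketched as follows, under the assumption that "resampling" means taking every k-th point of a synthetic chaotic series and treating its increments as returns; the paper's exact pipeline is not specified here, so the windowing details are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(n, r=3.99):
    # Synthetic chaotic source series from the logistic map.
    x = rng.uniform(0.1, 0.9)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def make_samples(series, interval, context=32):
    # Coarser resampling intervals emulate longer predictive horizons.
    s = series[::interval]
    returns = np.diff(s)                     # treat increments as "returns"
    X = np.lib.stride_tricks.sliding_window_view(returns[:-1], context)
    y = returns[context:]                    # next return after each context window
    return X, y

X, y = make_samples(logistic(200_000), interval=16)
print(X.shape, y.shape)   # (context, target) pairs ready for pre-training
```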


Predicting Chaotic System Behavior using Machine Learning Techniques

Rao, Huaiyuan, Zhao, Yichen, Lai, Qiang

arXiv.org Artificial Intelligence

Time series data have attracted significant attention across various fields in the natural and social sciences because of their potential applications. The analysis and prediction of time series data have been the focus of extensive research over the past few decades [1]-[5]. Chaotic time series are among the most complex because even small perturbations in initial values can lead to significant variations in their behavior. Due to this sensitivity to initial conditions, predicting chaotic behavior is a challenging task. RCs have been developed from the original Echo State Network (ESN)-based form to nonlinear vector autoregression (NVAR), also called NG-RC [18]. An NVAR machine is created where the feature vector is composed of time-delayed observations of the dynamical system, along with nonlinear functions of these observations. It requires no random matrices, has fewer metaparameters, and provides interpretable results that reflect the nature of the nonlinear model. In addition, it is 33 to 162 times less costly to simulate than a typical RC.
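
A minimal sketch of an NVAR/NG-RC feature map as just described: linear time-delay features plus their quadratic monomials, fed to ridge regression for one-step prediction. The number of delays, the polynomial degree, and the regularization strength are illustrative assumptions:

```python
import numpy as np
from itertools import combinations_with_replacement
from sklearn.linear_model import Ridge

def nvar_features(series, k=3):
    # Linear part: k time-delayed observations per sample.
    n = len(series) - k
    lin = np.array([series[i:i + k] for i in range(n)])
    # Nonlinear part: all degree-2 monomials of the linear features.
    quad = np.array([[v[i] * v[j] for i, j in combinations_with_replacement(range(k), 2)]
                     for v in lin])
    return np.hstack([np.ones((n, 1)), lin, quad])  # constant + linear + quadratic

# Demo: one-step prediction of a chaotic logistic-map series.
x = np.empty(2000)
x[0] = 0.3
for i in range(1999):
    x[i + 1] = 3.9 * x[i] * (1 - x[i])
k = 3
F, y = nvar_features(x, k), x[k:]      # feature row i predicts x[i + k]
model = Ridge(alpha=1e-6).fit(F[:1500], y[:1500])
print(f"one-step test MAE: {np.abs(model.predict(F[1500:]) - y[1500:]).mean():.2e}")
```

Because the logistic map is itself quadratic in the last observation, the quadratic feature dictionary captures it almost exactly, which is the interpretability argument made for NVAR.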


A New Self-organizing Interval Type-2 Fuzzy Neural Network for Multi-Step Time Series Prediction

Yao, Fulong, Zhao, Wanqing, Forshaw, Matthew, Song, Yang

arXiv.org Artificial Intelligence

This paper proposes a new self-organizing interval type-2 fuzzy neural network with multiple outputs (SOIT2FNN-MO) for multi-step time series prediction. Differing from the traditional six-layer IT2FNN, a nine-layer network is developed to improve prediction accuracy, uncertainty handling, and model interpretability. First, a new co-antecedent layer and a modified consequent layer are devised to improve the interpretability of the fuzzy model for multi-step predictions. Second, a new transformation layer is designed to address the potential vanishing of rule firing strengths caused by high-dimensional inputs. Third, a new link layer is proposed to build temporal connections between multi-step predictions. Furthermore, a two-stage self-organizing mechanism is developed to generate the fuzzy rules automatically: the first stage creates the rule base from scratch and performs the initial optimization, while the second stage fine-tunes all network parameters. Finally, various simulations are carried out on chaotic and microgrid time series prediction problems, demonstrating the superiority of our approach in terms of prediction accuracy, uncertainty handling, and model interpretability.
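
For reference, the generic multi-output, direct multi-step setup that such networks address can be sketched with an ordinary regressor; this is not the proposed SOIT2FNN-MO, and the window size, horizon, and noisy-sine stand-in for the chaotic benchmarks are arbitrary choices:

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.ensemble import RandomForestRegressor

def multi_step_pairs(series, window=8, horizon=3):
    # Each input window of length `window` is paired with the next `horizon` values.
    n = len(series) - window - horizon + 1
    X = np.array([series[i:i + window] for i in range(n)])
    Y = np.array([series[i + window:i + window + horizon] for i in range(n)])
    return X, Y

t = np.arange(3000)
series = np.sin(0.07 * t) + 0.1 * np.random.default_rng(1).standard_normal(3000)
X, Y = multi_step_pairs(series)
model = MultiOutputRegressor(RandomForestRegressor(n_estimators=50, random_state=0))
model.fit(X[:2500], Y[:2500])
print(np.abs(model.predict(X[2500:]) - Y[2500:]).mean(axis=0))  # MAE per step
```

Producing all horizon steps jointly, as here, is what the paper's link layer refines by adding temporal connections between the step-wise outputs.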


Temporal Convolution Derived Multi-Layered Reservoir Computing

Viehweg, Johannes, Walther, Dominik, Mäder, Patrick

arXiv.org Artificial Intelligence

The prediction of time series is a challenging task relevant to applications as diverse as analyzing financial data, forecasting flow dynamics, and understanding biological processes. Chaotic time series that depend on a long history pose an especially difficult problem. While machine learning has proven a promising approach for predicting such time series, deep recurrent neural networks demand long training times and large amounts of training data. Reservoir computing, alternatively, comes with high uncertainty, typically requiring many random initializations and extensive hyper-parameter tuning. In this paper, we focus on the reservoir computing approach and propose a new mapping of input data into the reservoir's state space. Furthermore, we incorporate this method into two novel network architectures, increasing the parallelizability, depth, and predictive capabilities of the neural network while reducing the dependence on randomness. For the evaluation, we approximate a set of time series from the Mackey-Glass equation, exhibiting non-chaotic as well as chaotic behavior, and compare our approaches with echo state networks and gated recurrent units in regard to their predictive capabilities. For the chaotic time series, we observe an error reduction of up to $85.45\%$ and up to $87.90\%$ in contrast to echo state networks and gated recurrent units, respectively. Furthermore, we also observe substantial improvements for non-chaotic time series of up to $99.99\%$ in contrast to existing approaches.
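
A baseline echo state network of the kind the paper compares against, run on a crudely Euler-integrated Mackey-Glass series; this is a minimal sketch with standard choices (random reservoir scaled to a fixed spectral radius, ridge readout), not the proposed temporal-convolution reservoir:

```python
import numpy as np

rng = np.random.default_rng(42)

def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, p=10, dt=1.0):
    # Euler integration of dx/dt = beta*x(t-tau)/(1+x(t-tau)^p) - gamma*x(t).
    hist = int(tau / dt)
    x = np.full(n + hist, 1.2)
    for i in range(hist, n + hist - 1):
        x[i + 1] = x[i] + dt * (beta * x[i - hist] / (1 + x[i - hist] ** p) - gamma * x[i])
    return x[hist:]

def esn_one_step_mae(u, n_res=300, rho=0.9, ridge=1e-6, washout=100, split=1500):
    W_in = rng.uniform(-0.5, 0.5, n_res)
    W = rng.standard_normal((n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))     # set spectral radius
    states = np.zeros((len(u), n_res))
    s = np.zeros(n_res)
    for t in range(len(u) - 1):
        s = np.tanh(W @ s + W_in * u[t])
        states[t + 1] = s
    X, y = states[washout:], u[washout:]   # states[t] has seen u[:t]; predict u[t]
    A = X[:split]
    w_out = np.linalg.solve(A.T @ A + ridge * np.eye(n_res), A.T @ y[:split])
    return np.abs(X[split:] @ w_out - y[split:]).mean()

print(f"ESN one-step test MAE: {esn_one_step_mae(mackey_glass(3000)):.3e}")
```

The high-uncertainty criticism in the abstract shows up here directly: the result depends on the random draws of W_in and W, so such baselines are usually reported over many initializations.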


Evaluating generation of chaotic time series by convolutional generative adversarial networks

Tanaka, Yuki, Yamaguti, Yutaka

arXiv.org Artificial Intelligence

To understand the ability and limitations of convolutional neural networks to generate time series that mimic complex temporal signals, we trained a generative adversarial network consisting of deep convolutional networks to generate chaotic time series, and we used nonlinear time series analysis to evaluate the generated series. A numerical measure of determinism and the Lyapunov exponent, a measure of trajectory instability, showed that the generated time series reproduce the chaotic properties of the original time series well. However, error distribution analyses showed that large errors appeared at a low but non-negligible rate. Such errors would not be expected if the error distribution were exponential.
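
The Lyapunov-style check used for such evaluations can be sketched with a simplified nearest-neighbour divergence estimate in the spirit of Rosenstein's method; the embedding size, exclusion window, and follow-up length here are illustrative, and a real analysis would be more careful:

```python
import numpy as np

def divergence_rate(series, emb=3, follow=10, exclude=20):
    # Embed, find each point's nearest non-temporal neighbour, and track how
    # fast the two trajectories separate; the slope estimates the largest LE.
    n = len(series) - emb - follow
    E = np.array([series[i:i + emb] for i in range(n)])
    logs, count = np.zeros(follow), 0
    for i in range(n):
        d = np.linalg.norm(E - E[i], axis=1)
        d[max(0, i - exclude):i + exclude] = np.inf   # skip temporal neighbours
        j = int(np.argmin(d))
        if not np.isfinite(d[j]) or d[j] == 0:
            continue
        for h in range(follow):
            logs[h] += np.log(abs(series[i + emb + h] - series[j + emb + h]) + 1e-12)
        count += 1
    return np.polyfit(np.arange(follow), logs / count, 1)[0]

x = np.empty(2000)
x[0] = 0.2
for i in range(1999):
    x[i + 1] = 4.0 * x[i] * (1 - x[i])
print(f"estimated divergence rate: {divergence_rate(x):.2f} (compare ln 2 ~ 0.69)")
```

Applying the same estimator to GAN-generated series and to the training series, and comparing the two rates, is the kind of consistency check the abstract describes.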


Dynamic Modelling of Chaotic Time Series with Neural Networks

Neural Information Processing Systems

The auditory system of the barn owl contains several spatial maps. In young barn owls raised with optical prisms over their eyes, these auditory maps are shifted to stay in register with the visual map, suggesting that the visual input imposes a frame of reference on the auditory maps. However, the optic tectum, the first site of convergence of visual with auditory information, is not the site of plasticity for the shift of the auditory maps; the plasticity occurs instead in the inferior colliculus, which contains an auditory map and projects into the optic tectum. We explored a model of the owl remapping in which learning is driven by a global reinforcement signal whose delivery is controlled by visual foveation. A Hebbian learning rule gated by reinforcement learned to appropriately adjust the auditory maps. In addition, reinforcement learning preferentially adjusted the weights in the inferior colliculus, as in the owl brain, even though the weights were allowed to change throughout the auditory system.


Effect of temporal resolution on the reproduction of chaotic dynamics via reservoir computing

Tsuchiyama, Kohei, Röhm, André, Mihana, Takatomo, Horisaki, Ryoichi, Naruse, Makoto

arXiv.org Artificial Intelligence

Reservoir computing is a machine learning paradigm that uses a structure called a reservoir, which has nonlinearities and short-term memory. In recent years, reservoir computing has expanded to new functions such as the autonomous generation of chaotic time series, in addition to time series prediction and classification. Novel possibilities have also been demonstrated, such as inferring the existence of previously unseen attractors. Sampling, however, has a strong influence on such functions. Sampling is indispensable in a physical reservoir computer that uses an existing physical system as a reservoir, because an external digital system is usually required for the data input. This study analyzes the effect of sampling on the ability of reservoir computing to autonomously regenerate chaotic time series. We found, as expected, that excessively coarse sampling degrades system performance, but also that excessively dense sampling is unsuitable. Based on quantitative indicators that capture the local and global characteristics of attractors, we identify a suitable window of sampling frequencies and discuss its underlying mechanisms.
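
A toy probe of this sampling-window effect: simulate the Lorenz x-coordinate at fine resolution, subsample at several factors, and inspect the lag-1 autocorrelation of each subsampled series. Very dense sampling yields nearly redundant consecutive samples, very coarse sampling nearly decorrelated ones; the paper's local and global attractor indicators are more sophisticated than this single statistic:

```python
import numpy as np

def lorenz_x(n, dt=0.005, sigma=10.0, rho=28.0, beta=8 / 3):
    # Euler integration of the Lorenz system; return the x-coordinate only.
    x, y, z = 1.0, 1.0, 1.0
    out = np.empty(n)
    for i in range(n):
        dx, dy, dz = sigma * (y - x), x * (rho - z) - y, x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = x
    return out

u = lorenz_x(200_000)
for factor in (1, 10, 100, 1000):
    s = u[::factor]
    s = s - s.mean()
    r1 = (s[:-1] * s[1:]).mean() / (s * s).mean()
    print(f"subsampling x{factor:5d}: lag-1 autocorrelation = {r1:+.3f}")
```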


Adaptive Anomaly Detection in Chaotic Time Series with a Spatially Aware Echo State Network

Heim, Niklas, Avery, James E.

arXiv.org Machine Learning

This work builds an automated anomaly detection method for chaotic time series, and more concretely for turbulent, high-dimensional ocean simulations. We solve this task by extending the Echo State Network with spatially aware input maps, such as convolutions, gradients, and cosine transforms, as well as a spatially aware loss function. The spatial ESN is used to create predictions that reduce the detection problem to thresholding of the prediction error. We benchmark our detection framework on tasks of increasing difficulty to show its generality before applying it to raw climate model output in the region of the Japanese ocean current Kuroshio, which exhibits a bimodality that is not easily detected by the naked eye. The code is available as an open-source Python package, Torsk, at https://github.com/nmheim/torsk, where we also provide supplementary material and programs that reproduce the results shown in this paper.
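
The final detection step, reduced to its essence: once a predictor exists, anomalies are flagged wherever the prediction error exceeds a threshold. The sketch below uses a trivial persistence forecast and a quantile threshold as placeholders for the spatial ESN and its loss, which are far richer:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(5000)
signal = np.sin(0.02 * t) + 0.05 * rng.standard_normal(5000)
signal[3000:3020] += 1.5               # injected anomaly

prediction = np.roll(signal, 1)        # persistence forecast: x_hat[t] = x[t-1]
error = np.abs(signal - prediction)[1:]
threshold = np.quantile(error[:2000], 0.999)   # calibrate on anomaly-free prefix
flags = np.where(error > threshold)[0] + 1     # time indices exceeding threshold
print("flagged indices near:", flags[:10])
```

The better the predictor captures the normal dynamics, the sharper the separation between nominal and anomalous errors, which is why the paper invests in the spatially aware reservoir rather than in the thresholding itself.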


Classification of chaotic time series with deep learning

Boullé, Nicolas, Dallas, Vassilios, Nakatsukasa, Yuji, Samaddar, D.

arXiv.org Machine Learning

We use deep neural networks to classify time series generated by discrete and continuous dynamical systems according to their chaotic behaviour. To circumvent the lack of precise models for some of the most challenging real-life applications, we train different neural networks on a data set from a dynamical system with a basic or low-dimensional phase space and then use these networks to classify time series of a dynamical system with a more intricate or high-dimensional phase space. We illustrate this extrapolation approach using the logistic map, the sine-circle map, the Lorenz system, and the Kuramoto-Sivashinsky equation. We observe that the proposed convolutional neural network with large kernel size outperforms state-of-the-art neural networks for time series classification and is able to classify time series as chaotic or non-chaotic with high accuracy.

Introduction: Data, and in particular time series, are generated from numerous observations and experiments across different scientific fields, such as atmospheric and oceanic sciences for climate predictions, nuclear fusion for control and safety, and biology and medicine for diagnosis. Fourier transforms, radial basis function approximation, and standard numerical techniques have been extensively applied to perform short- and long-term predictions of chaotic time series [1, 2, 3, 4]. On the other hand, the spectacular success of machine learning and deep learning techniques in image classification [5, 6], which have recently surpassed human-level performance on the ImageNet data set [7], has inspired the development of neural network techniques for time series forecasting [8, 9] and classification [10]. Recently, deep learning approaches have been used to solve partial differential equations in high dimensions [11, 12, 13] and to identify hidden physics models from experimental data [14, 15, 16, 17].
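
A scaled-down version of this classification setup: label logistic-map series as periodic or chaotic by the control parameter and train a small 1D CNN with a large kernel to separate them. The architecture and hyper-parameters are illustrative and far smaller than the paper's network (requires PyTorch):

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(3)

def logistic_series(r, n=128):
    # Logistic-map trajectory from a random start, transient discarded.
    x = rng.uniform(0.2, 0.8)
    for _ in range(100):
        x = r * x * (1 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

# Periodic (r = 3.5) vs chaotic (r = 3.9); real labels would come from LEs.
X = np.stack([logistic_series(3.5) for _ in range(200)]
             + [logistic_series(3.9) for _ in range(200)])
y = np.array([0] * 200 + [1] * 200)
perm = rng.permutation(400)
X = torch.tensor(X[perm], dtype=torch.float32).unsqueeze(1)  # (N, 1, 128)
y = torch.tensor(y[perm])

model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=32, padding=16),  # large kernel, as in the paper
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(200):                 # full-batch training on 300 series
    opt.zero_grad()
    loss = loss_fn(model(X[:300]), y[:300])
    loss.backward()
    opt.step()
with torch.no_grad():
    acc = (model(X[300:]).argmax(1) == y[300:]).float().mean().item()
print(f"held-out accuracy: {acc:.2f}")
```

The paper's extrapolation experiments go further, training on one system and testing on another; this sketch only reproduces the within-system chaotic/non-chaotic split.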