Error-feedback Stochastic Configuration Strategy on Convolutional Neural Networks for Time Series Forecasting Machine Learning

Despite the superiority of convolutional neural networks demonstrated in time series modeling and forecasting, the design of the network architecture and the tuning of its hyper-parameters have not been fully explored. Inspired by the iterative construction strategy for building a random multilayer perceptron, we propose a novel Error-feedback Stochastic Configuration (ESC) strategy to construct a random Convolutional Neural Network (ESC-CNN) for time series forecasting, which builds the network architecture adaptively. The ESC strategy incrementally adds random filters and neurons of the error-feedback fully connected layer such that they steadily compensate for the prediction error during the construction process, and a filter selection strategy is introduced to ensure that ESC-CNN holds the universal approximation property, providing helpful information at each iteration for the prediction. The performance of ESC-CNN is justified on its prediction accuracy for one-step-ahead and multi-step-ahead forecasting tasks. Comprehensive experiments on a synthetic dataset and two real-world datasets show that the proposed ESC-CNN not only outperforms state-of-the-art random neural networks, but also exhibits strong predictive power in comparison to trained Convolutional Neural Networks and Long Short-Term Memory models, demonstrating the effectiveness of ESC-CNN in time series forecasting. Time series forecasting, especially computational intelligence enabled time series forecasting, is of great importance for a learning system in dynamic environments, and plays a vital role in applications such as finance [1]-[3], energy [4]-[6], traffic [7]-[9], and electric load [10]-[12].
Recently, convolutional neural networks (CNNs) have been successfully applied to time series forecasting tasks, benefiting from their strength in extracting local features via multiple convolutional filters and in learning representations through fully connected layers [13]-[16].
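The error-feedback construction idea described above can be illustrated with a minimal sketch. The snippet below is not the paper's ESC-CNN; it is a simplified stochastic-configuration-style loop on random tanh units (standing in for random filters), where each new unit is chosen from a pool of candidates to best reduce the current residual, and output weights are refit by least squares. All names and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for flattened time-series windows.
X = rng.normal(size=(200, 8))
y = np.sin(X @ rng.normal(size=8)) + 0.1 * rng.normal(size=200)

def grow_by_residual(X, y, n_units=20, n_candidates=30):
    """Incrementally add random tanh units, each picked to best reduce
    the current residual (a stochastic-configuration-style loop)."""
    H = np.empty((X.shape[0], 0))        # hidden outputs added so far
    residual = y.copy()
    beta = np.zeros(0)
    for _ in range(n_units):
        best_h, best_score = None, -np.inf
        for _ in range(n_candidates):
            w = rng.uniform(-1, 1, X.shape[1])
            b = rng.uniform(-1, 1)
            h = np.tanh(X @ w + b)
            # Score: squared correlation of the candidate with the residual.
            score = (h @ residual) ** 2 / (h @ h)
            if score > best_score:
                best_h, best_score = h, score
        H = np.column_stack([H, best_h])
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # refit output weights
        residual = y - H @ beta                        # error feedback
    return H, beta, residual

H, beta, residual = grow_by_residual(X, y)
rmse = np.sqrt(np.mean(residual ** 2))
```

Because the output weights are refit by least squares after each addition, the residual norm is non-increasing as units accumulate, which mirrors the "steadily compensate the prediction error" behavior described in the abstract.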

GRU-ODE-Bayes: Continuous Modeling of Sporadically-Observed Time Series

Neural Information Processing Systems

Modeling real-world multidimensional time series can be particularly challenging when these are sporadically observed (i.e., sampling is irregular both in time and across dimensions), such as in the case of clinical patient data. To address these challenges, we propose (1) a continuous-time version of the Gated Recurrent Unit, building upon the recent Neural Ordinary Differential Equations (Chen et al., 2018), and (2) a Bayesian update network that processes the sporadic observations. We bring these two ideas together in our GRU-ODE-Bayes method. We then demonstrate that the proposed method encodes a continuity prior for the latent process and that it can exactly represent the Fokker-Planck dynamics of complex processes driven by a multidimensional stochastic differential equation. Additionally, empirical evaluation shows that our method outperforms the state of the art on both synthetic data and real-world data, with applications in healthcare and climate forecasting.
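The two-part design described in the abstract can be sketched in a few lines: evolve a latent state with a continuous-time GRU-style ODE between observation times, then apply a discrete GRU-style update whenever an observation arrives. This is a toy Euler-integration sketch with untrained random weights, not the paper's trained model or its Bayesian update network; the weight names and dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
D, H = 2, 4   # observation and hidden dimensions (toy sizes)

# Random weights for an (untrained) GRU-style cell; purely illustrative.
Wz, Wg = rng.normal(scale=0.1, size=(H, H)), rng.normal(scale=0.1, size=(H, H))
Uz, Ug = rng.normal(scale=0.1, size=(H, D)), rng.normal(scale=0.1, size=(H, D))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ode_step(h, dt):
    """Continuous-time GRU drift, dh/dt = (1 - z) * (g - h), one Euler step."""
    z = sigmoid(Wz @ h)
    g = np.tanh(Wg @ h)
    return h + dt * (1.0 - z) * (g - h)

def jump_update(h, x):
    """Discrete GRU-style update applied when an observation x arrives."""
    z = sigmoid(Wz @ h + Uz @ x)
    g = np.tanh(Wg @ h + Ug @ x)
    return (1.0 - z) * h + z * g

# Sporadic observations at irregular times.
obs = [(0.3, rng.normal(size=D)), (1.1, rng.normal(size=D)), (1.4, rng.normal(size=D))]
h, t, dt = np.zeros(H), 0.0, 0.05
for t_obs, x in obs:
    while t + dt <= t_obs:       # evolve the latent state between observations
        h = ode_step(h, dt)
        t += dt
    h = jump_update(h, x)        # incorporate the new observation
```

The ODE drift keeps the latent trajectory continuous between irregular observation times, which is the "continuity prior" the abstract refers to; the jump update plays the role of the observation-processing network.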

ML Time Series Modeling


Throughout my data science journey I have learned so many different modeling techniques, but I just had not found my niche yet, and I had been patiently waiting for the right machine learning model to come along and sweep me off my feet. I had been working in the business, finance, and fintech space for the past few years, and my love for business forecasting and predictions to help maximize shareholders' wealth had to stay center stage. Then one day, I discovered the machine learning Time Series Model! I was hooked on day one. The fact that I can train and test a model on past performance to help forecast the future is very interesting to me.
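The "train on the past, test on the future" idea mentioned above boils down to a chronological split rather than a random one. A minimal sketch, using a made-up seasonal series and a naive seasonal baseline (both illustrative assumptions, not from the post):

```python
import numpy as np

# Toy monthly series: a yearly seasonal pattern plus a gentle upward trend.
series = np.sin(np.arange(48) * 2 * np.pi / 12) + 0.05 * np.arange(48)

# Chronological split: train on the past, test on the future.
# (Never shuffle a time series before splitting.)
split = int(len(series) * 0.8)
train, test = series[:split], series[split:]

# Naive seasonal baseline: predict the value from 12 steps earlier.
preds = series[split - 12 : split - 12 + len(test)]
mae = np.mean(np.abs(preds - test))
```

Any real forecasting model should beat a naive baseline like this on the held-out future; otherwise the model is not adding value.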

PSO-MISMO Modeling Strategy for Multi-Step-Ahead Time Series Prediction Machine Learning

Multi-step-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multi-step-ahead time series prediction, exhibiting advantages over the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this study proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons as in the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate the corresponding sub-models. This provides considerable flexibility in model construction, which has been validated with simulated and real datasets.
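The core MISMO idea of splitting a multi-step horizon across several sub-models can be sketched as follows. This is not the paper's PSO-MISMO (the PSO search over divides and the neural sub-models are omitted); it uses linear least-squares sub-models and a hand-picked unequal partition of the horizon, all of which are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_windows(series, n_lags, horizon):
    """Sliding windows: inputs of length n_lags, targets of length horizon."""
    X, Y = [], []
    for i in range(len(series) - n_lags - horizon + 1):
        X.append(series[i : i + n_lags])
        Y.append(series[i + n_lags : i + n_lags + horizon])
    return np.array(X), np.array(Y)

series = np.sin(np.arange(300) * 0.2) + 0.05 * rng.normal(size=300)
n_lags, horizon = 12, 6
X, Y = make_windows(series, n_lags, horizon)

# MISMO-style divide: split the 6-step horizon across sub-models with
# unequal chunk sizes [2, 1, 3] (the PSO search over divides is omitted).
chunks, start, preds = [2, 1, 3], 0, []
Xb = np.column_stack([X, np.ones(len(X))])        # linear models with bias
for size in chunks:
    Yc = Y[:, start : start + size]
    B, *_ = np.linalg.lstsq(Xb, Yc, rcond=None)   # one sub-model per chunk
    preds.append(Xb @ B)
    start += size
pred = np.hstack(preds)                            # full multi-step forecast
rmse = np.sqrt(np.mean((pred - Y) ** 2))
```

With chunk sizes all equal to 1 this reduces to the direct strategy, and with a single chunk covering the whole horizon it reduces to the plain MIMO strategy; MISMO sits between the two, and PSO-MISMO searches over such partitions automatically.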