

On Multivariate Financial Time Series Classification

Bournassenko, Grégory

arXiv.org Artificial Intelligence

This article investigates the use of Machine Learning and Deep Learning models in multivariate time series analysis within financial markets. It compares small-data and big-data approaches, focusing on their distinct challenges and the benefits of scaling. Traditional methods such as SVMs are contrasted with modern architectures like ConvTimeNet. The results underscore the importance of leveraging and deeply understanding Big Data in the analysis and prediction of financial time series.


ConvTimeNet: A Deep Hierarchical Fully Convolutional Model for Multivariate Time Series Analysis

Cheng, Mingyue, Yang, Jiqian, Pan, Tingyue, Liu, Qi, Li, Zhi

arXiv.org Artificial Intelligence

This paper introduces ConvTimeNet, a novel deep hierarchical fully convolutional network designed to serve as a general-purpose model for time series analysis. The key design of this network is twofold, designed to overcome the limitations of traditional convolutional networks. Firstly, we propose an adaptive segmentation of time series into sub-series-level patches, treating these as fundamental modeling units. This setting avoids the sparse semantics associated with raw point-level time steps. Secondly, we design a fully convolutional block by skillfully integrating deepwise and pointwise convolution operations, following the advanced building block style employed in Transformer encoders.

From the introduction: Over a significant period in the past, the convolutional network [He et al., 2016; Zheng et al., 2014; Middlehurst et al., 2023] has played a crucial role in time series analysis, largely due to its inherent properties that strike an excellent balance between computational efficiency and representation quality. Dating back over the past years, many representative works [Bagnall et al., 2017] of time series analysis typically employ convolutional networks as the backbone. For instance, the temporal convolutional network (TCN [Bai et al., 2018]) and its variants are widely used in modeling temporal variation dependence for the time series forecasting task. Furthermore, a large number of works (such as InceptionTime [Ismail Fawaz et al., 2020], MiniRocket [Dempster et al., 2021], and MCNN [Cui et al., 2016]) have also been proposed, employing convolutional networks to identify informative patterns.
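The two design elements described in the abstract (patch-level segmentation of the series, and a block mixing per-channel "deepwise" convolution with pointwise channel-mixing convolution) can be sketched in minimal NumPy form. This is an illustrative assumption of how such a block might operate on a (channels, timesteps) array, not the paper's actual implementation; all function names and shapes here are hypothetical.

```python
import numpy as np

def segment_into_patches(x, patch_len, stride):
    """Slice a multivariate series x of shape (channels, timesteps)
    into overlapping sub-series patches: (channels, n_patches, patch_len)."""
    _, T = x.shape
    starts = range(0, T - patch_len + 1, stride)
    return np.stack([x[:, s:s + patch_len] for s in starts], axis=1)

def depthwise_conv1d(x, kernels):
    """Per-channel ('deepwise') convolution: each channel of x (C, T)
    is filtered by its own kernel from kernels (C, K); output (C, T-K+1)."""
    C, T = x.shape
    K = kernels.shape[1]
    out = np.empty((C, T - K + 1))
    for c in range(C):
        # reverse the kernel so np.convolve performs cross-correlation
        out[c] = np.convolve(x[c], kernels[c][::-1], mode="valid")
    return out

def pointwise_conv(x, weights):
    """1x1 convolution mixing channels at each time step:
    weights (C_out, C_in) applied to x (C_in, T) -> (C_out, T)."""
    return weights @ x
```

The depthwise step captures temporal patterns independently per channel; the pointwise step then mixes information across channels, mirroring the depthwise-separable factorization the abstract alludes to.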


ConvTimeNet: A Pre-trained Deep Convolutional Neural Network for Time Series Classification

Kashiparekh, Kathan, Narwariya, Jyoti, Malhotra, Pankaj, Vig, Lovekesh, Shroff, Gautam

arXiv.org Machine Learning

Training deep neural networks often requires careful hyper-parameter tuning and significant computational resources. In this paper, we propose ConvTimeNet (CTN): an off-the-shelf deep convolutional neural network (CNN) trained on diverse univariate time series classification (TSC) source tasks. Once trained, CTN can be easily adapted to new TSC target tasks via a small amount of fine-tuning using labeled instances from the target tasks. We note that the length of convolutional filters is a key aspect when building a pre-trained model that can generalize to time series of different lengths across datasets. To achieve this, we incorporate filters of multiple lengths in all convolutional layers of CTN to capture temporal features at multiple time scales. We consider all 65 datasets with time series of lengths up to 512 points from the UCR TSC Benchmark for training and testing transferability of CTN: we train CTN on a randomly chosen subset of 24 datasets using a multi-head approach with a different softmax layer for each training dataset, and study generalizability and transferability of the learned filters on the remaining 41 TSC datasets. We observe significant gains in classification accuracy as well as computational efficiency when using pre-trained CTN as a starting point for subsequent task-specific fine-tuning compared to existing state-of-the-art TSC approaches. We also provide qualitative insights into the working of CTN by: i) analyzing the activations and filters of the first convolution layer, suggesting the filters in CTN are generically useful, ii) analyzing the impact of the design decision to incorporate filters of multiple lengths, and iii) finding regions of time series that affect the final classification decision via occlusion sensitivity analysis.
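The multi-length filter idea described above can be sketched in minimal NumPy form: several filter banks of different kernel lengths are applied to the same series with same-style padding so their outputs align and can be stacked as parallel feature channels. This is an illustrative assumption about the mechanism, not CTN's actual architecture; the function name and the bank structure are hypothetical.

```python
import numpy as np

def multiscale_conv1d(x, filter_banks):
    """Apply filter banks of several kernel lengths to a univariate
    series x of shape (timesteps,). filter_banks maps a kernel length
    to an array of shape (n_filters, length). Each output is padded to
    the input length, so the result stacks to (total_filters, timesteps)."""
    outputs = []
    for length, bank in filter_banks.items():
        # 'same'-style zero padding keeps every scale's output aligned
        pad_left = (length - 1) // 2
        pad_right = length - 1 - pad_left
        xp = np.pad(x, (pad_left, pad_right))
        for k in bank:
            # reverse the kernel so np.convolve performs cross-correlation
            outputs.append(np.convolve(xp, k[::-1], mode="valid"))
    return np.stack(outputs)
```

Because all scales produce equal-length outputs, short filters (fine temporal detail) and long filters (coarse trends) coexist in one layer, which is what lets a single pre-trained model cope with datasets whose characteristic time scales differ.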