Don't overfit the history -- Recursive time series data augmentation

Aboussalah, Amine Mohamed, Kwon, Min-Jae, Patel, Raj G, Chi, Cheng, Lee, Chi-Guhn

arXiv.org Artificial Intelligence 

The recent success of machine learning (ML) algorithms depends on the availability of large amounts of data and prodigious computing power, which in practice are not always available. In real-world applications, it is often impossible to sample indefinitely, and ideally we would like the ML model to make good decisions with a limited number of samples. To overcome these issues, we can exploit additional information, such as structure or invariance in the data, that helps ML algorithms learn efficiently and focus on the features most important for solving the task. In ML, the exploitation of structure in the data has been handled using four different yet complementary approaches: 1) architecture design, 2) transfer learning, 3) data representation, and 4) data augmentation. Our focus in this work is on data augmentation in the context of time series learning. Time series representations do not expose the full information of the underlying dynamical system [1] in a way that ML models can easily recognize. For instance, financial time series contain patterns at various scales that can be learned to improve performance.
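To make the data augmentation idea concrete, the minimal Python sketch below generates perturbed copies of a training series using two standard, generic augmentations: additive Gaussian jitter and random magnitude scaling. This is an illustrative assumption, not the recursive augmentation method proposed in the paper; the function augment_series and its parameters are hypothetical names chosen for the example.

    import numpy as np

    def augment_series(x, n_aug=4, sigma=0.02, scale_range=(0.9, 1.1), seed=None):
        """Return n_aug perturbed copies of a univariate time series.

        Generic illustration of time series data augmentation (jitter +
        magnitude scaling); NOT the recursive method from this paper.
        """
        rng = np.random.default_rng(seed)
        augmented = []
        for _ in range(n_aug):
            scale = rng.uniform(*scale_range)             # random magnitude scaling
            noise = rng.normal(0.0, sigma, size=x.shape)  # additive Gaussian jitter
            augmented.append(scale * x + noise)
        return augmented

    # Example: enlarge the training set for a noisy sine wave.
    t = np.linspace(0, 4 * np.pi, 256)
    series = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.shape)
    train_set = [series] + augment_series(series, n_aug=4, seed=0)
    print(len(train_set), "series of length", len(series))

The intuition behind such augmentations is that small perturbations of the observed history act as additional samples from the underlying dynamical system, discouraging the model from memorizing (overfitting) the single realized trajectory.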
