Contrastive learning of strong-mixing continuous-time stochastic processes

Bingbin Liu, Pradeep Ravikumar, Andrej Risteski

arXiv.org (Machine Learning)

One of the paradigms for learning from unlabeled data that has seen substantial recent work across application domains is "self-supervised learning". These methods supervise the training process with information inherent to the data, without requiring human annotations, and have been applied across computer vision, natural language processing, reinforcement learning, and scientific domains. Despite their popularity, they are still not well understood, either theoretically or empirically, and often require extensive trial and error to find the right pairing of architecture and learning method. In particular, it is often hard to pin down what exactly these methods are trying to learn, and harder still to determine their statistical and algorithmic complexity. The specific family of self-supervised approaches we focus on in this work is contrastive learning, which constructs different types of tuples by exploiting structure in the data and trains the model to identify the tuple type. For example, in vision, Chen et al. (2020) apply two random augmentations (e.g. random cropping or color distortion) to the same image and train the model to recognize the two augmented views as coming from the same image.
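To make the tuple-classification setup concrete, below is a minimal PyTorch sketch of an InfoNCE-style contrastive objective of the kind popularized by Chen et al. (2020). The encoder, the noise-based stand-in "augmentation", the batch shapes, and the temperature are toy placeholders for illustration; they are not the continuous-time construction studied in this paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss: row i of z1 and row i of z2
    form a positive pair; all other rows act as negatives."""
    # Normalize embeddings so dot products are cosine similarities.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))    # positive pairs lie on the diagonal
    return F.cross_entropy(logits, labels)

# Stand-in "augmentation": additive Gaussian noise. Real vision pipelines
# use random cropping, color distortion, etc., as in Chen et al. (2020).
def augment(x):
    return x + 0.1 * torch.randn_like(x)

# Toy encoder and batch; minimizing the loss pushes the two augmented
# views of each image to score higher with each other than with the rest.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
images = torch.randn(8, 3, 32, 32)
loss = info_nce_loss(encoder(augment(images)), encoder(augment(images)))
loss.backward()
```

In this sketch the "tuple types" are implicit in the cross-entropy: each row of the similarity matrix is a classification problem over which of the B candidate views is the positive one.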
