Video Representation Learning with Joint-Embedding Predictive Architectures
Katrina Drozdov, Ravid Shwartz-Ziv, Yann LeCun
arXiv.org Artificial Intelligence
Video representation learning is an increasingly important topic in machine learning research. We present Video JEPA with Variance-Covariance Regularization (VJ-VCR): a joint-embedding predictive architecture for self-supervised video representation learning that employs variance and covariance regularization to avoid representation collapse. We show that hidden representations from our VJ-VCR contain abstract, high-level information about the input data. Specifically, they outperform representations obtained from a generative baseline on downstream tasks that require understanding the underlying dynamics of moving objects in the videos. Additionally, we explore different ways to incorporate latent variables into the VJ-VCR framework that capture information about uncertainty about the future in non-deterministic settings.
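The abstract's core mechanism is variance-covariance regularization applied to the predictor's embeddings to prevent collapse. The paper's exact loss is not given here, so the following is a minimal sketch of a VICReg-style variance and covariance penalty; the function name, hyperparameters, and tensor shapes are illustrative assumptions, not taken from the paper.

```python
# Sketch of a variance-covariance regularizer (VICReg-style), assumed to be
# close in spirit to the regularization VJ-VCR applies to video embeddings.
# All names and default values here are illustrative, not from the paper.
import torch


def vc_regularization(z: torch.Tensor, gamma: float = 1.0, eps: float = 1e-4):
    """Compute variance and covariance penalties for embeddings z of shape (N, D).

    - Variance term: a hinge pushing each embedding dimension's batch std
      above a threshold gamma, so dimensions do not collapse to a constant.
    - Covariance term: penalizes off-diagonal entries of the batch covariance
      matrix, decorrelating embedding dimensions.
    """
    z = z - z.mean(dim=0)  # center each dimension over the batch

    # Variance hinge: relu(gamma - std) is zero once std exceeds gamma.
    std = torch.sqrt(z.var(dim=0) + eps)
    var_loss = torch.relu(gamma - std).mean()

    # Off-diagonal covariance penalty, normalized by embedding dimension.
    n, d = z.shape
    cov = (z.T @ z) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = (off_diag ** 2).sum() / d

    return var_loss, cov_loss
```

In a JEPA-style training loop these two terms would typically be added (with weighting coefficients) to the prediction loss between the predicted and target embeddings.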
Dec-14-2024