Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning
Johnathan Xie, Yoonho Lee, Annie S. Chen, Chelsea Finn
–arXiv.org Artificial Intelligence
Self-supervised learning excels at learning representations from large amounts of unlabeled data and has demonstrated success across multiple data modalities. Yet extending self-supervised learning to new modalities is non-trivial because existing methods are tailored to each domain, for example through domain-specific augmentations that reflect the invariances of the target task. While masked modeling is promising as a domain-agnostic framework for self-supervised learning because it does not rely on input augmentations, its mask sampling procedure remains domain-specific. We present Self-guided Masked Autoencoders (SMA), a fully domain-agnostic masked modeling method. SMA trains an attention-based model with a masked modeling objective while learning which positions to mask, without any domain-specific assumptions. We evaluate SMA on three self-supervised learning benchmarks in protein biology, chemical property prediction, and particle physics. We find that SMA learns representations without domain-specific knowledge and achieves state-of-the-art performance on all three benchmarks.
Feb-22-2024
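The core idea in the abstract, choosing which positions to mask from the model's own attention rather than from domain-specific heuristics, can be illustrated with a minimal sketch. This is not the authors' implementation: the class name `AttentionGuidedMAE`, the separate attention module used for scoring, the top-k masking rule, and all hyperparameters are assumptions made for the example.

```python
# Hypothetical sketch: attention-guided mask sampling for a masked-autoencoder
# objective on pre-tokenized inputs from any modality.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGuidedMAE(nn.Module):
    def __init__(self, dim=128, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        # Stand-in attention module used only to score tokens; in a real system
        # the scores could come from the encoder's own attention maps.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2
        )
        self.decoder = nn.Linear(dim, dim)          # reconstruct token embeddings
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def sample_mask(self, x):
        # Aggregate the attention each token receives; the most-attended tokens
        # are masked, so the model must reconstruct the inputs it relies on most.
        with torch.no_grad():
            _, attn_weights = self.attn(x, x, x, need_weights=True)  # (B, T, T)
            scores = attn_weights.sum(dim=1)                          # (B, T)
        num_masked = int(self.mask_ratio * x.size(1))
        masked_idx = scores.topk(num_masked, dim=1).indices
        mask = torch.zeros(x.shape[:2], dtype=torch.bool, device=x.device)
        mask.scatter_(1, masked_idx, True)
        return mask

    def forward(self, x):
        mask = self.sample_mask(x)                                    # (B, T) bool
        corrupted = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        recon = self.decoder(self.encoder(corrupted))
        # MAE-style loss: reconstruct only the masked positions.
        return F.mse_loss(recon[mask], x[mask])


if __name__ == "__main__":
    model = AttentionGuidedMAE()
    tokens = torch.randn(8, 256, 128)   # (batch, tokens, dim) from any tokenizer
    loss = model(tokens)
    loss.backward()
    print(f"masked-reconstruction loss: {loss.item():.4f}")
```

Because the mask depends only on the model's attention over generic token embeddings, the same sampling procedure applies unchanged to protein sequences, molecules, or particle-physics data once they are tokenized.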