Context-driven self-supervised visual learning: Harnessing the environment as a data source
Lizhen Zhu, James Z. Wang, Wonseuk Lee, Brad Wyble
arXiv.org Artificial Intelligence
Visual learning often occurs in a specific context, where an agent acquires skills through exploration and tracking of its location in a consistent environment. The historical spatial context of the agent provides a similarity signal for self-supervised contrastive learning. We present a unique approach, termed Environmental Spatial Similarity (ESS), that complements existing contrastive learning methods. Using images from simulated, photorealistic environments as an experimental setting, we demonstrate that ESS outperforms traditional instance discrimination approaches. Moreover, sampling additional data from the same environment substantially improves accuracy and provides new augmentations. ESS enables remarkable proficiency in room classification and spatial prediction tasks, especially in unfamiliar environments. This learning paradigm has the potential to enable rapid visual learning in agents operating in new environments with unique visual characteristics. Potentially transformative applications range from robotics to space exploration. Our proof of concept demonstrates improved efficiency over methods that rely on extensive, disconnected datasets.
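The core idea above, using an agent's recorded spatial positions as a similarity signal for contrastive learning, can be illustrated with a minimal sketch. This is not the authors' implementation; the pair-selection radius, the InfoNCE-style loss, and all function names here are illustrative assumptions.

```python
import numpy as np


def spatial_positive_pairs(positions, radius):
    """Treat image pairs captured within `radius` of each other as
    positives, by analogy with using environmental spatial context
    as a similarity signal (an assumption, not the paper's exact rule)."""
    positions = np.asarray(positions, dtype=float)
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if np.linalg.norm(positions[i] - positions[j]) <= radius:
                pairs.append((i, j))
    return pairs


def info_nce(z, pairs, temperature=0.1):
    """Generic InfoNCE-style contrastive loss over L2-normalized
    embeddings `z`, with positives given by spatially close `pairs`."""
    z = np.asarray(z, dtype=float)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    losses = []
    for i, j in pairs:
        # exclude self-similarity from the denominator
        logits = np.delete(sim[i], i)
        prob = np.exp(sim[i, j]) / np.exp(logits).sum()
        losses.append(-np.log(prob))
    return float(np.mean(losses))


# Toy usage: two nearby views are positives; a distant view is not.
positions = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
pairs = spatial_positive_pairs(positions, radius=1.0)
embeddings = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
loss = info_nce(embeddings, pairs)
```

The design choice mirrors standard instance discrimination, except that positives come from spatial proximity in the environment rather than from augmented views of a single image.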
Jan-25-2024