CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory
Nur Muhammad Mahi Shafiullah, Chris Paxton, Lerrel Pinto, Soumith Chintala, Arthur Szlam
We propose CLIP-Fields, an implicit scene model that can be used for a variety of tasks, such as segmentation, instance identification, semantic search over space, and view localization. CLIP-Fields learns a mapping from spatial locations to semantic embedding vectors. Importantly, we show that this mapping can be trained with supervision coming only from web-image and web-text trained models such as CLIP, Detic, and Sentence-BERT, and thus uses no direct human supervision. Compared to baselines such as Mask-RCNN, our method performs better on few-shot instance identification and semantic segmentation on the HM3D dataset while using only a fraction of the labeled examples. Finally, we show that with CLIP-Fields as a scene memory, robots can perform semantic navigation in real-world environments. Our code and demonstration videos are available here: https://mahis.life/clip-fields
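The core idea, a learned map from 3D coordinates to normalized semantic embeddings supervised only by web-trained models, can be sketched in a few lines of PyTorch. The snippet below is an illustrative approximation, not the authors' released code: the paper's model uses a multiresolution hash-grid encoder and separate image- and language-aligned heads, while here a plain MLP and a generic InfoNCE-style contrastive loss stand in, and the names `SemanticField` and `contrastive_step` are hypothetical.

```python
# Sketch of a CLIP-Fields-style semantic field (assumptions noted above;
# this is not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticField(nn.Module):
    """Maps 3D points (x, y, z) to unit-norm semantic embedding vectors."""
    def __init__(self, embed_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # Normalize so field outputs can be compared to CLIP-style
        # embeddings by cosine similarity.
        return F.normalize(self.net(xyz), dim=-1)

def contrastive_step(field: SemanticField,
                     xyz: torch.Tensor,
                     target_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """One weakly supervised step: pull each point's predicted embedding
    toward the target derived from a web-trained model (e.g. a CLIP patch
    embedding or a Sentence-BERT embedding of a Detic label)."""
    pred = field(xyz)                          # (B, D)
    target = F.normalize(target_emb, dim=-1)   # (B, D)
    logits = pred @ target.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(xyz.shape[0])        # diagonal = matching pairs
    return F.cross_entropy(logits, labels)

# Usage sketch: back-project pixels to 3D points using depth and camera
# pose, then attach each point's web-model embedding as its target.
field = SemanticField()
opt = torch.optim.Adam(field.parameters(), lr=1e-3)
xyz = torch.randn(32, 3)        # stand-in for back-projected scene points
target = torch.randn(32, 512)   # stand-in for CLIP/Sentence-BERT targets
opt.zero_grad()
loss = contrastive_step(field, xyz, target)
loss.backward()
opt.step()
```

Under this reading, semantic search over space at query time reduces to encoding a text query with the same language model and returning the points whose field embeddings have the highest cosine similarity to the query embedding.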
arXiv.org Artificial Intelligence
May 22, 2023
- Genre:
- Research Report (0.50)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Natural Language > Text Processing (1.00)
- Representation & Reasoning (1.00)
- Robots (1.00)
- Vision (1.00)