Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation
William Shen, Ge Yang, Alan Yu, Jansen Wong, Leslie Pack Kaelbling, Phillip Isola
arXiv.org Artificial Intelligence
Self-supervised and language-supervised image models contain rich knowledge of the world that is important for generalization. Many robotic tasks, however, require a detailed understanding of 3D geometry, which is often lacking in 2D image features. This work bridges this 2D-to-3D gap for robotic manipulation by leveraging distilled feature fields to combine accurate 3D geometry with rich semantics from 2D foundation models. We present a few-shot learning method for 6-DOF grasping and placing that harnesses these strong spatial and semantic priors to achieve in-the-wild generalization to unseen objects. Using features distilled from a vision-language model, CLIP, we demonstrate a way to designate novel objects for manipulation via free-text natural language, and show that it generalizes to unseen expressions and novel categories of objects.
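The core idea behind language-guided designation is to compare a CLIP text embedding of the query against the distilled per-point features of the 3D scene. The sketch below illustrates that step only; it is not the authors' implementation, and the feature field and text embedding here are random placeholders standing in for the outputs of the distilled feature field and a CLIP text encoder.

```python
# Minimal sketch of language-guided target selection, assuming per-point
# features distilled from CLIP and a CLIP text embedding of the query.
import numpy as np


def cosine_similarity(features: np.ndarray, text_embedding: np.ndarray) -> np.ndarray:
    """Cosine similarity between each per-point feature and the text embedding."""
    f = features / np.linalg.norm(features, axis=-1, keepdims=True)
    t = text_embedding / np.linalg.norm(text_embedding)
    return f @ t


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical stand-ins: in practice these would come from the distilled
    # feature field (one feature per 3D point) and a CLIP text encoder
    # applied to a free-text query such as "red mug".
    point_features = rng.normal(size=(10_000, 512))
    text_embedding = rng.normal(size=(512,))

    sims = cosine_similarity(point_features, text_embedding)
    target_points = np.argsort(sims)[-100:]  # highest-scoring points as the target region
    print("selected", target_points.shape[0], "candidate points")
```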
Dec-29-2023
- Country:
  - Europe > United Kingdom > England (0.14)
  - North America > United States (0.46)
- Genre:
  - Research Report (0.64)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning
      - Neural Networks (0.46)
      - Statistical Learning (0.68)
    - Natural Language (1.00)
    - Robots (1.00)
    - Vision (1.00)