Leveraging Large (Visual) Language Models for Robot 3D Scene Understanding
William Chen, Siyi Hu, Rajat Talak, Luca Carlone
arXiv.org Artificial Intelligence
Abstract: Semantic 3D scene understanding is a problem of critical importance in robotics. As robots still lack the common-sense knowledge about household objects and locations that an average human has, we investigate the use of pre-trained language models to impart common sense for scene understanding. We introduce and compare a wide range of scene classification paradigms that leverage language only (zero-shot, embedding-based, and structured-language) or vision and language (zero-shot and fine-tuned). We find that the best approaches in both categories yield $\sim 70\%$ room classification accuracy, exceeding the performance of pure-vision and graph classifiers. We also find that such methods demonstrate notable generalization and transfer capabilities stemming from their use of language.
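To make the embedding-based, language-only paradigm concrete, the sketch below classifies a room from the labels of objects detected in it by comparing sentence embeddings of an object description against candidate room labels. This is only an illustrative sketch, not the authors' implementation: the embedding model (`all-MiniLM-L6-v2` via the sentence-transformers library), the prompt templates, and the room/object lists are all assumptions chosen for the example.

```python
# Illustrative sketch of embedding-based room classification from object labels.
# Assumes the sentence-transformers package; model choice and prompts are arbitrary.
from sentence_transformers import SentenceTransformer, util

ROOM_LABELS = ["kitchen", "bathroom", "bedroom", "living room", "office"]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model works


def classify_room(object_labels: list[str]) -> str:
    """Pick the room label whose embedding is closest to a description of the objects."""
    # Describe the room by the objects observed in it (hypothetical prompt template).
    query = "A room containing " + ", ".join(object_labels) + "."
    query_emb = model.encode(query, convert_to_tensor=True)

    # Embed each candidate room label with a matching template.
    room_texts = [f"A room called a {room}." for room in ROOM_LABELS]
    room_embs = model.encode(room_texts, convert_to_tensor=True)

    # Cosine similarity between the object description and each room label.
    scores = util.cos_sim(query_emb, room_embs)[0]
    return ROOM_LABELS[int(scores.argmax())]


if __name__ == "__main__":
    print(classify_room(["refrigerator", "stove", "sink", "microwave"]))  # likely "kitchen"
```

In practice one would sweep over prompt templates and embedding models, since the paper reports that the choice of language representation noticeably affects accuracy.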
Nov-8-2023
- Country:
- North America > United States > Massachusetts (0.14)
- Genre:
- Research Report (0.82)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning
- Neural Networks > Deep Learning (0.46)
- Performance Analysis > Accuracy (0.34)
- Natural Language > Large Language Model (1.00)
- Robots (1.00)
- Vision (1.00)