RoboLLM: Robotic Vision Tasks Grounded on Multimodal Large Language Models
Zijun Long, George Killick, Richard McCreadie, Gerardo Aragon Camarasa
arXiv.org Artificial Intelligence
Robotic vision applications often require a wide range of visual perception tasks, such as object detection, segmentation, and identification. While there have been substantial advances in each of these individual tasks, integrating specialized models into a unified vision pipeline presents significant engineering challenges and costs. Recently, Multimodal Large Language Models (MLLMs) have emerged as novel backbones for various downstream tasks. We argue that leveraging the pre-training capabilities of MLLMs enables a simplified framework, mitigating the need for task-specific encoders. Specifically, the large-scale pretrained knowledge in MLLMs allows for easier fine-tuning to downstream robotic vision tasks and yields superior performance. We introduce the RoboLLM framework, equipped with a BEiT-3 backbone, to address all visual perception tasks in the ARMBench challenge, a large-scale robotic manipulation dataset collected in real-world warehouse scenarios. RoboLLM not only outperforms existing baselines but also substantially reduces the engineering burden associated with model selection and tuning. The source code is publicly available at https://github.com/longkukuhi/armbench.
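The abstract's central design idea is a single pretrained multimodal backbone shared across detection, segmentation, and identification heads, rather than separate task-specific encoders. The sketch below illustrates that "one encoder, many lightweight heads" pattern in generic PyTorch; it is not the authors' released implementation, and every name here (ToyBackbone, UnifiedVisionModel, the toy heads) is hypothetical, with a small stand-in encoder in place of the actual BEiT-3 backbone.

```python
# Minimal sketch (not the RoboLLM code): one shared encoder feeding small task heads.
# ToyBackbone stands in for a pretrained MLLM encoder such as BEiT-3.
import torch
import torch.nn as nn


class ToyBackbone(nn.Module):
    """Placeholder for a large pretrained multimodal image encoder."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=4, padding=3), nn.GELU(),
            nn.Conv2d(64, embed_dim, kernel_size=3, stride=2, padding=1), nn.GELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # Returns a (batch, embed_dim) global image embedding.
        return self.net(images)


class UnifiedVisionModel(nn.Module):
    """Shared backbone plus lightweight heads for identification, detection, segmentation."""
    def __init__(self, backbone: nn.Module, embed_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.id_head = nn.Linear(embed_dim, num_classes)   # object identification logits
        self.box_head = nn.Linear(embed_dim, 4)             # toy single-box regression
        self.seg_head = nn.Sequential(                      # toy coarse foreground mask
            nn.Linear(embed_dim, 32 * 32), nn.Unflatten(1, (1, 32, 32)),
        )

    def forward(self, images: torch.Tensor) -> dict:
        feats = self.backbone(images)
        return {
            "class_logits": self.id_head(feats),
            "boxes": self.box_head(feats).sigmoid(),  # normalised (cx, cy, w, h)
            "masks": self.seg_head(feats),
        }


if __name__ == "__main__":
    model = UnifiedVisionModel(ToyBackbone(256), embed_dim=256, num_classes=10)
    out = model(torch.randn(2, 3, 224, 224))
    print({k: tuple(v.shape) for k, v in out.items()})
```

In this pattern, fine-tuning to a new robotic vision task amounts to attaching another small head to the shared embedding, which is the engineering simplification the abstract argues for.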
Oct-16-2023
- Genre:
- Research Report (0.83)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning
- Neural Networks > Deep Learning (0.46)
- Statistical Learning (0.66)
- Natural Language > Large Language Model (0.70)
- Robots (1.00)
- Vision (1.00)