Resolving Positional Ambiguity in Dialogues by Vision-Language Models for Robot Navigation
Kuan-Lin Chen, Tzu-Ti Wei, Li-Tzu Yeh, Elaine Kao, Yu-Chee Tseng, Jen-Jee Chen
–arXiv.org Artificial Intelligence
We consider an autonomous navigation robot that accepts human commands in natural language to provide services in an indoor environment. Such commands may include time, position, object, and action components. However, we observe that the positional components usually refer to objects in the environment and may carry different degrees of positional ambiguity. For example, the command "Go to the chair!" is ambiguous when a room contains multiple chairs of the same type. To disambiguate such commands, we employ a large language model and a large vision-language model to conduct multiple turns of conversation with the user. We propose a two-level approach that uses a vision-language model to map the meaning expressed in natural language to a unique object ID in images, and then maps that object ID to a 3D depth map, allowing the robot to navigate from its current position to the target position. To the best of our knowledge, this is the first work linking foundation models to the positional ambiguity issue.
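As a rough illustration of the two-level mapping described in the abstract, the sketch below first resolves a command to a unique object ID among annotated detections, asking a clarifying question when several objects match (one more dialogue turn), and then converts that ID to a 3D target using a depth map and pinhole camera intrinsics. All names, detections, and camera parameters here are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of the two-level grounding pipeline, under assumed interfaces.
# ObjectDetection, resolve_object_id, and object_id_to_3d are hypothetical names;
# the paper's actual prompts, models, and APIs are not reproduced here.

from dataclasses import dataclass

@dataclass
class ObjectDetection:
    object_id: int   # unique ID assigned to each detected object in the image
    label: str       # e.g. "chair"
    pixel: tuple     # (u, v) image coordinates of the object's center


def resolve_object_id(command, detections):
    """Level 1: map the language command to a unique object ID.

    If several detections match the referred object type, the command is
    ambiguous; a real system would hand the candidates to an LLM/VLM to
    generate a clarifying question for another dialogue turn.
    """
    candidates = [d for d in detections if d.label in command.lower()]
    if not candidates:
        return None, "I could not find that object. Could you describe it differently?"
    if len(candidates) == 1:
        return candidates[0].object_id, None
    question = f"I see {len(candidates)} {candidates[0].label}s. Which one do you mean?"
    return None, question


def object_id_to_3d(object_id, detections, depth_map,
                    fx=525.0, fy=525.0, cx=320.0, cy=240.0):
    """Level 2: map the unique object ID to a 3D point using the depth map
    and assumed pinhole intrinsics, giving the navigation target."""
    det = next(d for d in detections if d.object_id == object_id)
    u, v = det.pixel
    z = depth_map[v][u]              # depth in meters at the object's pixel
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)


if __name__ == "__main__":
    # Toy scene: two chairs and one table, with a flat 640x480 depth map.
    detections = [
        ObjectDetection(1, "chair", (100, 240)),
        ObjectDetection(2, "chair", (500, 240)),
        ObjectDetection(3, "table", (320, 300)),
    ]
    depth_map = [[2.0] * 640 for _ in range(480)]

    obj_id, clarification = resolve_object_id("Go to the chair!", detections)
    if clarification:
        print("Robot asks:", clarification)   # triggers another dialogue turn
    else:
        print("Navigation target:", object_id_to_3d(obj_id, detections, depth_map))
```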
Sep-30-2024
- Genre:
  - Research Report (0.40)
- Technology:
  - Information Technology > Artificial Intelligence
  - Machine Learning > Neural Networks > Deep Learning (0.48)
  - Natural Language > Large Language Model (0.92)
  - Robots (1.00)
  - Vision (1.00)