LGR: LLM-Guided Ranking of Frontiers for Object Goal Navigation
Mitsuaki Uno, Kanji Tanaka, Daiki Iwata, Yudai Noda, Shoya Miyazaki, Kouki Terashima
–arXiv.org Artificial Intelligence
Object Goal Navigation (OGN) is a fundamental task for robots and AI, with key applications such as mobile robot image databases (MRID). In particular, mapless OGN is essential in scenarios involving unknown or dynamic environments. This study aims to enhance recent modular mapless OGN systems by leveraging the commonsense reasoning capabilities of large language models (LLMs). Specifically, we address the challenge of determining the visiting order in frontier-based exploration by framing it as a frontier ranking problem. Our approach is grounded in recent findings that, while LLMs cannot determine the absolute value of a frontier, they excel at evaluating the relative value between multiple frontiers viewed within a single image, using the view image as context. We dynamically manage the frontier list by adding and removing elements, using an LLM as a ranking model. The ranking results are represented as reciprocal rank vectors, which are ideal for multi-view, multi-query information fusion.

Object Goal Navigation (OGN) is a task in which a robot explores and locates a user-specified object within a workspace, widely studied in robotics and artificial intelligence [1]. If object locations are pre-recorded on a map, the most efficient method is to retrieve the object from the mobile robot image database [2]-[4]. However, in unknown environments or when map information is unreliable, mapless OGN is essential. Existing OGN methods include end-to-end approaches, which directly generate action commands from sensor data [5], but these require extensive training data and high computational costs.
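The reciprocal-rank-vector representation mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the frontier IDs, the two example rankings, and summation as the fusion rule (standard reciprocal rank fusion) are assumptions for demonstration. Each LLM query over one view returns a best-first ordering of the frontiers visible in that view; each ordering is converted to a reciprocal-rank vector over the full frontier list, and vectors from multiple views or queries are fused additively to pick the next frontier to visit.

```python
from collections import defaultdict

def reciprocal_rank_vector(ranking, frontier_ids):
    """Convert one LLM ranking (a best-first list of frontier IDs)
    into a reciprocal-rank vector over all tracked frontiers.
    Frontiers absent from this view's ranking score 0."""
    rr = {fid: 0.0 for fid in frontier_ids}
    for rank, fid in enumerate(ranking, start=1):
        if fid in rr:
            rr[fid] = 1.0 / rank
    return rr

def fuse_and_select(rank_vectors):
    """Fuse reciprocal-rank vectors from multiple views/queries by
    summation and return the frontier with the highest fused score."""
    fused = defaultdict(float)
    for rv in rank_vectors:
        for fid, score in rv.items():
            fused[fid] += score
    return max(fused, key=fused.get)

# Hypothetical example: three tracked frontiers, two LLM rankings
# obtained from different views.
frontiers = ["f1", "f2", "f3"]
v1 = reciprocal_rank_vector(["f2", "f1", "f3"], frontiers)
v2 = reciprocal_rank_vector(["f2", "f3"], frontiers)  # f1 not in this view
print(fuse_and_select([v1, v2]))  # f2 fuses to 1.0 + 1.0 = 2.0, the maximum
```

Because a frontier missing from one view simply contributes zero from that view, the representation degrades gracefully when views overlap only partially, which is what makes it convenient for multi-view, multi-query fusion.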
Mar-26-2025