Fast-SmartWay: Panoramic-Free End-to-End Zero-Shot Vision-and-Language Navigation
Xiangyu Shi, Zerui Li, Yanyuan Qiao, Qi Wu
–arXiv.org Artificial Intelligence
Recent advances in Vision-and-Language Navigation in Continuous Environments (VLN-CE) have leveraged multimodal large language models (MLLMs) to achieve zero-shot navigation. However, existing methods often rely on panoramic observations and two-stage pipelines involving waypoint predictors, which introduce significant latency and limit real-world applicability. In this work, we propose Fast-SmartWay, an end-to-end zero-shot VLN-CE framework that eliminates the need for panoramic views and waypoint predictors. Our approach uses only three frontal RGB-D images combined with natural language instructions, enabling MLLMs to directly predict actions. To enhance decision robustness, we introduce an Uncertainty-Aware Reasoning module that integrates (i) a Disambiguation Module for avoiding local optima, and (ii) a Future-Past Bidirectional Reasoning mechanism for globally coherent planning. Experiments in both simulated and real-robot environments show that our method significantly reduces per-step latency while achieving competitive or superior performance compared to panoramic-view baselines. These results demonstrate the practicality and effectiveness of Fast-SmartWay for real-world zero-shot embodied navigation.
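The single-stage pipeline described above (three frontal RGB-D views plus an instruction, fed directly to an MLLM that emits a discrete action, with no waypoint predictor in between) can be sketched roughly as follows. This is a minimal illustrative sketch only: the action space, prompt format, and parsing logic below are all hypothetical placeholders, not the paper's actual implementation.

```python
# Hypothetical sketch of an end-to-end, panoramic-free VLN-CE step.
# All names (ACTIONS, Observation, build_prompt, predict_action) are
# illustrative assumptions, not taken from the paper.
from dataclasses import dataclass
from typing import List

ACTIONS = ["FORWARD", "TURN_LEFT", "TURN_RIGHT", "STOP"]  # assumed action space

@dataclass
class Observation:
    """Three frontal RGB-D views (left, center, right), as placeholders."""
    rgb: List[str]    # e.g. paths or encoded images for the 3 frontal views
    depth: List[str]  # matching depth maps

def build_prompt(instruction: str, obs: Observation) -> str:
    # Single-stage prompting: no panoramic stitching, no waypoint predictor.
    views = ", ".join(f"view_{i}" for i in range(len(obs.rgb)))
    return (f"Instruction: {instruction}\n"
            f"Frontal RGB-D views: {views}\n"
            f"Choose one action from {ACTIONS}.")

def predict_action(mllm_reply: str) -> str:
    # Reduce the MLLM's free-form reply to a discrete action token.
    for action in ACTIONS:
        if action in mllm_reply.upper():
            return action
    return "STOP"  # conservative fallback when the reply is ambiguous

# Usage with a mocked MLLM reply (a real system would query the model here):
obs = Observation(rgb=["l.png", "c.png", "r.png"], depth=["l.d", "c.d", "r.d"])
prompt = build_prompt("Walk to the kitchen and stop by the fridge.", obs)
action = predict_action("I should move FORWARD toward the hallway.")
print(action)  # FORWARD
```

Because each step needs only one MLLM call on three frontal frames, the per-step latency advantage over panoramic, two-stage pipelines follows directly from this structure.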
Nov-4-2025