AnywhereVLA: Language-Conditioned Exploration and Mobile Manipulation
Konstantin Gubernatorov, Artem Voronov, Roman Voronov, Sergei Pasynkov, Stepan Perminov, Ziang Guo, Dzmitry Tsetserukou
arXiv.org Artificial Intelligence
We address natural-language pick-and-place in unseen, unpredictable indoor environments with AnywhereVLA, a modular framework for mobile manipulation. A user text prompt serves as the entry point and is parsed into a structured task graph that conditions classical LiDAR- and camera-based SLAM, metric-semantic mapping, and a task-aware frontier exploration policy. An approach planner then selects visibility- and reachability-aware pre-grasp base poses. For interaction, a compact SmolVLA manipulation head is fine-tuned on platform pick-and-place trajectories for the SO-101 arm by TheRobotStudio, grounding local visual context and sub-goals into grasp and place proposals. The full system runs onboard on consumer-grade hardware, with a Jetson Orin NX for perception and the VLA and an Intel NUC for SLAM, exploration, and control, sustaining real-time operation. We evaluated AnywhereVLA in a multi-room lab under static scenes and normal human motion. In this setting, the system achieves a $46\%$ overall task success rate while maintaining throughput on embedded compute. By combining a classical navigation stack with a fine-tuned VLA manipulation head, the system inherits the reliability of geometry-based navigation and the agility and task generalization of language-conditioned manipulation.
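The abstract's task-aware frontier exploration can be illustrated with a minimal sketch. The scoring below is a plausible formulation, not the paper's: the names `Frontier`, `score_frontier`, the weights, and the combination of information gain, semantic task relevance, and travel cost are all assumptions for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Frontier:
    # Hypothetical frontier record: position, unexplored area behind it,
    # and a task-relevance score from the semantic map (0..1).
    x: float
    y: float
    unknown_cells: int
    semantic_score: float

def score_frontier(f: Frontier, robot_xy, w_gain=1.0, w_sem=2.0, w_dist=0.5):
    """Task-aware utility: reward unexplored area and task-relevant
    semantics, penalize travel distance. Weights are illustrative."""
    dist = math.hypot(f.x - robot_xy[0], f.y - robot_xy[1])
    return w_gain * f.unknown_cells / 100.0 + w_sem * f.semantic_score - w_dist * dist

def select_frontier(frontiers, robot_xy):
    # Greedy selection of the highest-utility frontier to visit next.
    return max(frontiers, key=lambda f: score_frontier(f, robot_xy))
```

Under this sketch, a nearby frontier whose surrounding semantics match the prompted object can outrank a larger but task-irrelevant unexplored region.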
Sep-26-2025
- Country:
- Asia > Russia (0.04)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.04)
- Genre:
- Research Report (0.67)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Natural Language (1.00)
- Representation & Reasoning (1.00)
- Robots (1.00)