WMPO: World Model-based Policy Optimization for Vision-Language-Action Models
Fangqi Zhu, Zhengyang Yan, Zicong Hong, Quanxin Shou, Xiao Ma, Song Guo
Vision-Language-Action (VLA) models have shown strong potential for general-purpose robotic manipulation, but their reliance on expert demonstrations limits their ability to learn from failures and perform self-corrections. Reinforcement learning (RL) addresses these limitations through self-improving interaction with the physical environment, but suffers from high sample complexity on real robots. We introduce World-Model-based Policy Optimization (WMPO), a principled framework for on-policy VLA RL that requires no interaction with the real environment. In contrast to widely used latent world models, WMPO focuses on pixel-based predictions, which align the "imagined" trajectories with the VLA features pretrained on web-scale images. Crucially, WMPO enables the policy to perform on-policy GRPO (Group Relative Policy Optimization), which provides stronger performance than the commonly used off-policy methods. Extensive experiments in both simulation and real-robot settings demonstrate that WMPO (i) substantially improves sample efficiency, (ii) achieves stronger overall performance, (iii) exhibits emergent behaviors such as self-correction, and (iv) demonstrates robust generalization and lifelong-learning capabilities.
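The abstract names GRPO as the on-policy optimizer run over the world model's imagined rollouts. As a rough illustration of the group-relative advantage at GRPO's core, here is a minimal NumPy sketch; this is not the paper's implementation, and the function name, group size, and binary rewards are illustrative assumptions.

```python
import numpy as np

def grpo_advantages(returns: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Group-relative advantage: each rollout's return, normalized by the
    mean and std of its group (rollouts sampled for the same task prompt)."""
    return (returns - returns.mean()) / (returns.std() + eps)

# Hypothetical example: G = 8 "imagined" world-model rollouts for one task,
# each scored with a binary success reward on the predicted pixel frames.
group_returns = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0])
advantages = grpo_advantages(group_returns)
print(advantages)  # above-average rollouts get positive advantage
```

In standard GRPO these advantages then weight a PPO-style clipped surrogate objective on the policy's action log-probabilities, so above-average rollouts in a group are reinforced and below-average ones suppressed, without training a separate value critic.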
arXiv.org Artificial Intelligence
Nov-13-2025
- Genre:
  - Research Report (1.00)
- Industry:
  - Education > Educational Setting (0.34)
- Technology:
  - Information Technology > Artificial Intelligence
    - Cognitive Science > Problem Solving (0.97)
    - Machine Learning (1.00)
    - Representation & Reasoning (1.00)
    - Robots (1.00)