Pixel Motion as Universal Representation for Robot Control
Kanchana Ranasinghe, Xiang Li, E-Ro Nguyen, Cristina Mata, Jongwoo Park, Michael S. Ryoo
arXiv.org Artificial Intelligence
We present LangToMo, a vision-language-action framework structured as a dual-system architecture that uses pixel motion forecasts as intermediate representations. The high-level System 2, an image diffusion model, generates text-conditioned pixel motion sequences from a single frame to guide robot control. Pixel motion (a universal, interpretable, and motion-centric representation) can be extracted from videos in a weakly supervised manner, enabling diffusion model training on any video-caption data. Treating the generated pixel motion as a learned universal representation, the low-level System 1 module translates it into robot actions via motion-to-action mapping functions, which can be either hand-crafted or learned with minimal supervision. System 2 operates as a high-level policy applied at sparse temporal intervals, while System 1 acts as a low-level policy at dense temporal intervals. This hierarchical decoupling enables flexible, scalable, and generalizable robot control in both unsupervised and supervised settings, bridging the gap between language, motion, and action. Check out https://kahnchana.github.io/LangToMo
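The abstract's dual-system design (a diffusion model producing pixel motion at sparse intervals, and a motion-to-action mapping consuming it at dense intervals) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the zero-valued placeholder flow fields, and the mean-displacement action mapping are all assumptions chosen for clarity.

```python
import numpy as np

def system2_pixel_motion(frame, instruction, horizon=8):
    """Stand-in for the high-level System 2: a text-conditioned image
    diffusion model that, from a single RGB frame and an instruction,
    forecasts a sequence of dense pixel-motion fields (one optical-flow-like
    (H, W, 2) array per future step). Here we return zero fields as a
    placeholder for the real model's output."""
    h, w, _ = frame.shape
    return [np.zeros((h, w, 2)) for _ in range(horizon)]

def system1_action(flow):
    """Stand-in for the low-level System 1: a hand-crafted
    motion-to-action mapping. This toy version reduces a dense flow field
    to the mean pixel displacement, treated as a 2-D velocity command."""
    return flow.reshape(-1, 2).mean(axis=0)

# Hierarchical loop: System 2 replans sparsely, System 1 acts densely.
frame = np.zeros((64, 64, 3))                 # dummy observation
flows = system2_pixel_motion(frame, "pick up the block", horizon=4)
actions = [system1_action(f) for f in flows]  # one action per dense step
```

The key design point the sketch mirrors is the decoupling: System 2 can be trained on any video-caption data (pixel motion needs only weak supervision to extract), while System 1 stays cheap enough to run at every control step.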
Aug-29-2025
- Country:
- Europe > Netherlands
- South Holland > Delft (0.04)
- North America > United States
- New York > Suffolk County > Stony Brook (0.04)
- South America > Chile
- Genre:
- Research Report (0.82)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Robots (1.00)