OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents
This paper presents OmniJARVIS, a novel Vision-Language-Action (VLA) model for open-world instruction-following agents in Minecraft. Unlike prior works that either emit textual goals to separate controllers or produce control commands directly, OmniJARVIS takes a different path, aiming for both strong reasoning and efficient decision-making through unified tokenization of multimodal interaction data.
Neural Information Processing Systems
May-30-2025, 19:43:13 GMT
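As a rough illustration of what "unified tokenization of multimodal interaction data" can mean in practice, the sketch below interleaves instruction, memory, observation, and behavior tokens into a single autoregressive sequence that one transformer could model by next-token prediction. All names, offsets, and the vocabulary layout here are hypothetical stand-ins, not the paper's actual components or API.

```python
# Illustrative sketch only: TextTokenizer-style ids, modality offsets, and the
# InteractionStep layout are assumptions, not OmniJARVIS's implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class InteractionStep:
    """One step of an interaction trajectory: what the agent saw and did."""
    observation_tokens: List[int]   # discrete tokens for the visual observation
    behavior_tokens: List[int]      # discrete tokens summarizing the action chunk


def build_unified_sequence(
    instruction_tokens: List[int],
    memory_tokens: List[int],
    steps: List[InteractionStep],
    obs_offset: int = 50_000,       # assumed offsets that keep each modality's
    act_offset: int = 60_000,       # vocabulary disjoint in one shared id space
    bos: int = 1,
    eos: int = 2,
) -> List[int]:
    """Interleave task, memory, observation, and behavior tokens into a single
    sequence so one autoregressive model can both reason (language tokens)
    and act (behavior tokens) via next-token prediction."""
    seq = [bos] + instruction_tokens + memory_tokens
    for step in steps:
        seq += [t + obs_offset for t in step.observation_tokens]
        seq += [t + act_offset for t in step.behavior_tokens]
    return seq + [eos]


if __name__ == "__main__":
    # Toy example: a two-step trajectory for an instruction like "chop a tree".
    demo = build_unified_sequence(
        instruction_tokens=[101, 102, 103],
        memory_tokens=[201],
        steps=[
            InteractionStep(observation_tokens=[5, 9, 4], behavior_tokens=[7, 3]),
            InteractionStep(observation_tokens=[6, 2, 8], behavior_tokens=[1, 4]),
        ],
    )
    print(len(demo), demo[:8])
```

The key design choice this sketch tries to convey is that keeping the modality vocabularies disjoint within one token space lets a single sequence model emit reasoning text and behavior tokens from the same output head, rather than routing goals to a separate low-level controller.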
- Country:
  - Asia > China (0.14)
  - North America > United States > California (0.14)
- Genre:
  - Research Report > Experimental Study (0.93)
- Industry:
  - Leisure & Entertainment > Games > Computer Games (0.93)
  - Materials > Metals & Mining (1.00)
- Technology:
  - Information Technology > Artificial Intelligence:
    - Machine Learning > Neural Networks > Deep Learning (0.68)
    - Natural Language > Chatbot (1.00)
    - Natural Language > Large Language Model (1.00)
    - Representation & Reasoning > Agents (0.93)
    - Robots (0.68)
    - Vision (0.68)