Multitask Multimodal Prompted Training for Interactive Embodied Task Completion
Pantazopoulos, Georgios, Nikandrou, Malvina, Parekh, Amit, Hemanthage, Bhathiya, Eshghi, Arash, Konstas, Ioannis, Rieser, Verena, Lemon, Oliver, Suglia, Alessandro
Interactive and embodied tasks pose at least two fundamental challenges to existing Vision & Language (VL) models: 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation. To tackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a unified encoder-decoder model that reasons over images and trajectories and casts action prediction as multimodal text generation. By unifying all tasks as text generation, EMMA learns a language of actions that facilitates transfer across tasks. Unlike previous modular approaches with independently trained components, we use a single multitask model where each task contributes to goal completion. EMMA performs on par with similar models on several VL benchmarks and sets a new state-of-the-art performance (36.81% success rate) on Dialog-guided Task Completion (DTC), a benchmark to evaluate dialog-guided agents in the Alexa Arena.
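The core design choice described above is that every task, including action prediction, is handled by a single multimodal encoder-decoder that emits text. Below is a minimal PyTorch sketch of that idea; the dimensions, vocabulary size, visual feature format, and the example action string are illustrative assumptions, not EMMA's actual configuration.

```python
import torch
import torch.nn as nn

class MultimodalEncoderDecoder(nn.Module):
    """Minimal sketch: cast action prediction as multimodal text generation.
    All hyperparameters here are assumptions for illustration only."""

    def __init__(self, vocab_size=32000, d_model=512, n_heads=8,
                 n_layers=4, visual_dim=2048):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Project pre-extracted visual features (e.g. per-frame or per-region
        # vectors) into the same embedding space as the text tokens.
        self.visual_proj = nn.Linear(visual_dim, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=n_heads,
            num_encoder_layers=n_layers, num_decoder_layers=n_layers,
            batch_first=True,
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, text_tokens, visual_feats, target_tokens):
        # Encoder input: task prompt / dialogue tokens concatenated with
        # projected visual observations from the trajectory.
        text_emb = self.token_emb(text_tokens)        # (B, T_txt, D)
        vis_emb = self.visual_proj(visual_feats)      # (B, T_vis, D)
        encoder_input = torch.cat([text_emb, vis_emb], dim=1)

        # Decoder predicts the action as a token sequence, e.g. a string
        # like "goto table <frame_token_2>" (hypothetical action format).
        decoder_input = self.token_emb(target_tokens)
        hidden = self.transformer(encoder_input, decoder_input)
        return self.lm_head(hidden)                   # (B, T_act, vocab)


if __name__ == "__main__":
    model = MultimodalEncoderDecoder()
    text = torch.randint(0, 32000, (2, 16))     # prompt + dialogue tokens
    frames = torch.randn(2, 8, 2048)            # 8 visual observations
    actions = torch.randint(0, 32000, (2, 6))   # target action tokens
    logits = model(text, frames, actions)
    print(logits.shape)                         # torch.Size([2, 6, 32000])
```

Because actions, dialogue responses, and VL task outputs all share one output space (text), the same weights can be trained on all tasks jointly, which is what enables the cross-task transfer claimed in the abstract.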
arXiv.org Artificial Intelligence
Nov-7-2023
- Country:
- Asia > Middle East
- Europe (0.92)
- Genre:
- Research Report (0.64)
- Industry:
- Education (0.46)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning > Neural Networks
- Deep Learning (0.46)
- Natural Language > Large Language Model (0.93)
- Representation & Reasoning (1.00)
- Robots (1.00)
- Vision (1.00)