ImageChain: Advancing Sequential Image-to-Text Reasoning in Multimodal Large Language Models
Danae Sánchez Villegas, Ingo Ziegler, Desmond Elliott
arXiv.org Artificial Intelligence
Reasoning over sequences of images remains a challenge for multimodal large language models (MLLMs). While recent models incorporate multi-image data during pre-training, they still struggle to recognize sequential structures, often treating images independently. This work introduces ImageChain, a framework that enhances MLLMs with sequential reasoning capabilities over image data by modeling visual sequences as a multi-turn conversation. In ImageChain, images are interleaved with corresponding textual descriptions to form a controlled dialogue that explicitly captures temporal dependencies and narrative progression. Our method optimizes for the task of next-scene description, where the model generates a context-aware description of an upcoming scene based on preceding visual and textual cues. We demonstrate that our approach improves performance on the next-scene description task -- achieving an average improvement from 3.7% to 19% in SimRate, a metric that quantifies semantic similarity to human-annotated ground truths. Moreover, ImageChain achieves robust zero-shot out-of-domain performance in applications ranging from comics to robotics. Extensive experiments validate that instruction-tuning in a multimodal, multi-turn conversation design is key to bridging the gap between static image understanding and temporally-aware reasoning.
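The core idea described above, interleaving images with their textual descriptions as a multi-turn dialogue that ends with a next-scene query, can be sketched as follows. This is a minimal illustration only: the function name `build_conversation` and the `<image>` placeholder token are assumptions for the sketch, not the paper's actual API or prompt format.

```python
# Illustrative sketch (not the paper's implementation): format an image
# sequence as a multi-turn conversation for next-scene description.
# Each image is represented by a placeholder token; the real pipeline
# would pass actual image features to the MLLM.

IMAGE_TOKEN = "<image>"  # assumed placeholder, model-specific in practice

def build_conversation(descriptions):
    """Interleave image placeholders with scene descriptions as dialogue
    turns; the final user turn asks for the upcoming scene, so the model
    must condition on all preceding visual and textual context."""
    turns = []
    for i, desc in enumerate(descriptions):
        turns.append({"role": "user",
                      "content": f"{IMAGE_TOKEN}\nDescribe scene {i + 1}."})
        turns.append({"role": "assistant", "content": desc})
    # Next-scene description objective: the model generates this answer.
    turns.append({"role": "user",
                  "content": f"{IMAGE_TOKEN}\nDescribe the next scene."})
    return turns

conv = build_conversation(["A robot picks up a cup.",
                           "The robot moves toward the table."])
print(len(conv))  # 2 scenes -> 4 interleaved turns + 1 query turn = 5
```

Structuring the sequence as alternating user/assistant turns, rather than a flat list of images, is what makes the temporal ordering explicit to an instruction-tuned chat model.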
Feb-26-2025