MMPlanner: Zero-Shot Multimodal Procedural Planning with Chain-of-Thought Object State Reasoning
Afrina Tabassum, Bin Guo, Xiyao Ma, Hoda Eldardiry, Ismini Lourentzou
arXiv.org Artificial Intelligence
Multimodal Procedural Planning (MPP) aims to generate step-by-step instructions that combine text and images, with the central challenge of preserving object-state consistency across modalities while producing informative plans. Existing approaches often leverage large language models (LLMs) to refine textual steps; however, visual object-state alignment and systematic evaluation remain largely underexplored. We present MMPlanner, a zero-shot MPP framework that introduces Object State Reasoning Chain-of-Thought (OSR-CoT) prompting to explicitly model object-state transitions and generate accurate multimodal plans. To assess plan quality, we design LLM-as-a-judge protocols for planning accuracy and cross-modal alignment, and further propose a visual step-reordering task to measure temporal coherence. Experiments on RECIPEPLAN and WIKIPLAN show that MMPlanner achieves state-of-the-art performance, improving textual planning by +6.8%, cross-modal alignment by +11.9%, and visual step ordering by +26.7%.
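The abstract describes OSR-CoT as zero-shot prompting that makes an LLM reason explicitly about object-state transitions at each plan step. A minimal sketch of what such a prompt builder could look like is below; the function name, prompt wording, and structure are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (not the paper's code): assembling a zero-shot
# OSR-CoT-style prompt that asks an LLM to track, for every step,
# which objects are involved and their states before and after.

def build_osr_cot_prompt(goal: str, steps: list[str]) -> str:
    """Build a zero-shot prompt eliciting object-state reasoning.

    The prompt asks the model to name the objects in each step,
    state them before/after the step, and revise the step so it
    stays consistent with those states (hypothetical phrasing).
    """
    lines = [
        f"Goal: {goal}",
        "For each step below, list the objects involved, their state",
        "before the step, and their state after the step. Then rewrite",
        "the step so it is consistent with those object states.",
        "",
    ]
    for i, step in enumerate(steps, start=1):
        lines.append(f"Step {i}: {step}")
    lines.append("")
    lines.append("Reason step by step about object-state transitions.")
    return "\n".join(lines)

prompt = build_osr_cot_prompt(
    "Make a cup of tea",
    ["Boil water in a kettle", "Pour water over the tea bag"],
)
print(prompt)
```

In a full pipeline, the returned string would be sent to an LLM, and the revised, state-consistent steps would then condition the image-generation stage; those downstream calls depend on the specific model APIs and are omitted here.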
Sep-29-2025