InfiGUI-G1: Advancing GUI Grounding with Adaptive Exploration Policy Optimization

Yuhang Liu, Zeyu Liu, Shuanghe Zhu, Pengxiang Li, Congkai Xie, Jiasheng Wang, Xavier Hu, Xiaotian Han, Jianbo Yuan, Xinyao Wang, Shengyu Zhang, Hongxia Yang, Fei Wu

arXiv.org Artificial Intelligence 

The emergence of Multimodal Large Language Models (MLLMs) has propelled the development of autonomous agents that operate on Graphical User Interfaces (GUIs) using pure visual input. A fundamental challenge is robustly grounding natural language instructions. This requires precise spatial alignment, which accurately locates the coordinates of each element, and, more critically, correct semantic alignment, which matches the instruction to the functionally appropriate UI element. Although Reinforcement Learning with Verifiable Rewards (RLVR) has proven effective at improving spatial alignment for these MLLMs, we find that inefficient exploration bottlenecks semantic alignment, preventing models from learning difficult semantic associations. To address this exploration problem, we present Adaptive Exploration Policy Optimization (AEPO), a new policy optimization framework. AEPO employs a multi-answer generation strategy to enforce broader exploration, which is then guided by a theoretically grounded Adaptive Exploration Reward (AER) function derived from first principles of efficiency, η = U/C. Our AEPO-trained models, InfiGUI-G1-3B and InfiGUI-G1-7B, establish new state-of-the-art results across multiple challenging GUI grounding benchmarks, achieving significant relative improvements of up to 9.0% over the naive RLVR baseline on benchmarks designed to test generalization and semantic understanding. Resources are available at https://github.
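
To make the efficiency principle concrete, the sketch below shows one plausible reading of a reward of the form η = U/C for a multi-answer grounding rollout: utility is 1 if any sampled click hits the target element, and cost grows with the number of answers generated. The function names (`aer_reward`, `point_in_bbox`), the hit test, and the specific utility/cost choices are illustrative assumptions, not the paper's actual AER.

```python
from typing import List, Tuple

def point_in_bbox(point: Tuple[float, float],
                  bbox: Tuple[float, float, float, float]) -> bool:
    """Check whether a predicted (x, y) click lands inside the ground-truth box."""
    x, y = point
    x0, y0, x1, y1 = bbox
    return x0 <= x <= x1 and y0 <= y <= y1

def aer_reward(predictions: List[Tuple[float, float]],
               gt_bbox: Tuple[float, float, float, float]) -> float:
    """Hypothetical efficiency-style reward eta = U / C for a multi-answer rollout.

    Utility U is 1.0 if any sampled answer hits the target element, else 0.0;
    cost C is the number of answers generated. Producing many guesses is thus
    only worthwhile when one of them is actually correct, which discourages
    blind spamming while still rewarding broader exploration.
    This is an illustrative reading of eta = U / C, not the paper's exact AER.
    """
    cost = max(len(predictions), 1)          # C: exploration cost
    utility = 1.0 if any(point_in_bbox(p, gt_bbox) for p in predictions) else 0.0
    return utility / cost                    # eta = U / C

# Usage: three candidate clicks for one instruction; the second hits the target.
preds = [(0.12, 0.40), (0.55, 0.31), (0.80, 0.90)]
target = (0.50, 0.25, 0.60, 0.35)  # normalized (x0, y0, x1, y1)
print(aer_reward(preds, target))   # 1/3 ≈ 0.333
```

Under this toy scoring, a single correct answer earns the full reward while a correct answer buried among many guesses earns proportionally less, which mirrors the abstract's claim that AER trades off exploration breadth against its cost.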