OThink-MR1: Stimulating multimodal generalized reasoning capabilities through dynamic reinforcement learning
Liu, Zhiyuan, Zhang, Yuting, Liu, Feng, Zhang, Changwang, Sun, Ying, Wang, Jun
Multimodal Large Language Models (MLLMs) have gained significant traction for their ability to process diverse input data types and generate coherent, contextually relevant outputs across various applications. While supervised fine-tuning (SFT) has been the predominant approach to enhancing MLLM capabilities through task-specific optimization, it often falls short in fostering crucial generalized reasoning abilities. Although reinforcement learning (RL) has the potential to address these limitations, it faces two issues: (1) its generalization capabilities in multimodal tasks remain underexplored, and (2) its training constraints, such as a constant Kullback-Leibler coefficient or the clamp strategy, easily lead to suboptimal bottlenecks. To address these issues, we introduce OThink-MR1, a framework that extends RL to MLLMs, enabling them to achieve deeper understanding and reasoning across multimodal tasks. We design a dynamic Kullback-Leibler strategy that significantly enhances RL performance, surpassing SFT in same-task evaluation. We are also the first to reveal that RL exhibits remarkable cross-task generalization: models post-trained with RL on one multimodal task can be effectively transferred to other tasks. Finally, extensive experiments demonstrate the strong generalized reasoning ability of our proposed OThink-MR1.
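The abstract contrasts a dynamic Kullback-Leibler strategy with the constant KL penalty commonly used in RL post-training. The snippet below is a minimal sketch of how a dynamically weighted KL term could enter a GRPO-style per-sample objective, assuming a simple linear schedule over training steps; the schedule, the coefficient range, and all names (`dynamic_kl_coeff`, `grpo_style_loss`) are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: dynamic (schedule-based) KL weighting in a GRPO-style objective.
# All constants and names are illustrative assumptions, not OThink-MR1's
# exact algorithm.

def dynamic_kl_coeff(step: int, total_steps: int,
                     beta_min: float = 0.0, beta_max: float = 0.04) -> float:
    """Interpolate the KL weight from beta_min to beta_max over training.

    A small weight early in training lets the policy explore away from the
    reference model; a larger weight later keeps it close to the reference,
    stabilizing convergence. Replaces a constant KL coefficient.
    """
    frac = min(max(step / max(total_steps, 1), 0.0), 1.0)
    return beta_min + frac * (beta_max - beta_min)


def grpo_style_loss(advantage: float, ratio: float, kl_to_ref: float,
                    step: int, total_steps: int) -> float:
    """Per-sample policy loss where the KL penalty weight depends on the step."""
    beta = dynamic_kl_coeff(step, total_steps)
    policy_term = -advantage * ratio   # maximize advantage-weighted likelihood ratio
    kl_term = beta * kl_to_ref         # penalize drift from the reference policy
    return policy_term + kl_term


if __name__ == "__main__":
    # The KL penalty grows as training progresses.
    for s in (0, 500, 1000):
        print(s, round(dynamic_kl_coeff(s, total_steps=1000), 4))
```

Under this kind of schedule the early phase behaves like weakly constrained exploration and the late phase like KL-regularized fine-tuning, which is one plausible reading of why a dynamic strategy could avoid the bottleneck the abstract attributes to a fixed KL or clamp constraint.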
arXiv.org Artificial Intelligence
Mar-20-2025