Vision-Language Models Can Self-Improve Reasoning via Reflection
Kanzhi Cheng, Yantao Li, Fangzhi Xu, Jianbing Zhang, Hao Zhou, Yang Liu
arXiv.org Artificial Intelligence
Chain-of-thought (CoT) prompting has been shown to improve the reasoning capability of large language models (LLMs). However, due to the complexity of multimodal scenarios and the difficulty of collecting high-quality CoT data, CoT reasoning in multimodal LLMs has been largely overlooked. To this end, we propose a simple yet effective self-training framework, R3V, which iteratively enhances the model's Vision-language Reasoning by Reflecting on CoT Rationales. Our framework consists of two interleaved parts: (1) iteratively bootstrapping positive and negative solutions for reasoning datasets, and (2) reflecting on rationales to learn from mistakes. Specifically, we introduce the self-refine and self-select losses, enabling the model to refine flawed rationales and derive the correct answer by comparing rationale candidates. Experiments on a wide range of vision-language tasks show that R3V consistently improves multimodal LLM reasoning, achieving a relative improvement of 23 to 60 percent over GPT-distilled baselines. Additionally, our approach supports self-reflection on generated solutions, further boosting performance through test-time computation.
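The bootstrapping step described in the abstract could be sketched as follows. This is a minimal illustration assuming a simple data format (`rationale`/`answer` dicts and answer-matching against a gold label); the function and field names are hypothetical and not the authors' actual implementation, which involves training losses over these constructed examples.

```python
def bootstrap(question, gold_answer, candidates):
    """Illustrative sketch: partition sampled CoT rationales into positive and
    negative solutions, then build self-refine and self-select training examples.

    `candidates` is a list of dicts like {"rationale": str, "answer": str}
    sampled from the current model (format is an assumption for this sketch).
    """
    # Positive solutions reach the gold answer; negatives do not.
    positives = [c for c in candidates if c["answer"] == gold_answer]
    negatives = [c for c in candidates if c["answer"] != gold_answer]

    # Self-refine: learn to rewrite a flawed rationale into a correct one,
    # pairing each negative rationale with a positive target.
    refine_examples = [
        {"input": (question, neg["rationale"]), "target": pos["rationale"]}
        for neg in negatives
        for pos in positives[:1]
    ]

    # Self-select: learn to pick the correct answer when shown a mix of
    # rationale candidates (only meaningful if both kinds are present).
    select_examples = (
        [{"input": (question, [c["rationale"] for c in candidates]),
          "target": gold_answer}]
        if positives and negatives else []
    )

    return refine_examples, select_examples
```

For example, with two sampled candidates where only the first reaches the gold answer, the sketch yields one self-refine pair and one self-select example per question; iterating this collection and training loop is what makes the framework self-improving.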
Oct-30-2024