GRPO-RM: Fine-Tuning Representation Models via GRPO-Driven Reinforcement Learning
Yanchen Xu, Ziheng Jiao, Hongyuan Zhang, Xuelong Li
–arXiv.org Artificial Intelligence
Group Relative Policy Optimization (GRPO), a reinforcement learning method used to fine-tune large language models (LLMs), has proved effective in practical applications such as DeepSeek-R1. This raises the question of whether GRPO can be generalized to representation learning models. In this paper, we propose Group Relative Policy Optimization for Representation Models (GRPO-RM) and investigate the performance of a GRPO-like policy in post-training representation models. Specifically, our method establishes a predefined output set that functionally replaces token-sequence sampling in LLMs, thereby generating the output group essential for GRPO's probability-driven optimization. In addition, a specialized reward function is designed to accommodate the properties of representation models. Extensive experiments on various real-world datasets validate the effectiveness of the proposed method.
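The group-based optimization the abstract refers to rests on GRPO's group-relative baseline: each candidate output's reward is normalized against the mean and standard deviation of its group. The sketch below illustrates only that normalization step; the function name and reward values are illustrative and not taken from the paper.

```python
import math

def group_relative_advantages(rewards):
    """Compute A_i = (r_i - mean(r)) / std(r) within one output group.

    This is the group-relative baseline used by GRPO-style objectives;
    in GRPO-RM the group would come from a predefined output set rather
    than sampled token sequences (illustrative sketch, not the paper's code).
    """
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var) or 1.0  # guard against constant-reward groups
    return [(r - mean) / std for r in rewards]

# Example: a group of four candidate outputs with scalar rewards.
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Outputs with above-average reward receive positive advantages and are reinforced; the group itself serves as the baseline, so no separate value network is needed.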
Nov-20-2025