Prototypical Reward Network for Data-Efficient RLHF
Zhang, Jinghan; Wang, Xiting; Jin, Yiqiao; Chen, Changyu; Zhang, Xinhao; Liu, Kunpeng
–arXiv.org Artificial Intelligence
The reward model for Reinforcement Learning from Human Feedback (RLHF) has proven effective in fine-tuning Large Language Models (LLMs). However, collecting human feedback for RLHF can be resource-intensive and lead to scalability issues for LLMs and complex tasks. Our proposed framework, Proto-RM, leverages prototypical networks to enhance reward models under limited human feedback. By enabling stable and reliable structural learning from fewer samples, Proto-RM significantly improves LLMs' adaptability and accuracy in interpreting human preferences. Extensive experiments on various datasets demonstrate that Proto-RM improves the performance of reward models and LLMs in human feedback tasks, achieving comparable and often better results than traditional methods in data-limited scenarios while requiring significantly less data. This research offers a promising direction for enhancing the efficiency of reward models and optimizing the fine-tuning of language models under restricted feedback conditions.
Jul-7-2024
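To make the idea of a prototype-based reward model concrete, here is a minimal PyTorch sketch of how a prototypical reward head and a pairwise preference loss could look. This is an illustrative assumption, not the paper's Proto-RM implementation: the names `PrototypicalRewardHead`, `hidden_dim`, and `num_prototypes` are hypothetical, and the head simply scores a pooled LLM embedding by its distance to a small set of learnable prototypes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypicalRewardHead(nn.Module):
    """Illustrative prototype-based reward head (assumed design, not the paper's exact architecture).

    A pooled prompt+response embedding is compared against a small set of
    learnable prototype vectors; the reward is a learned combination of the
    resulting similarity scores.
    """

    def __init__(self, hidden_dim: int, num_prototypes: int = 8):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, hidden_dim))
        self.scorer = nn.Linear(num_prototypes, 1)

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        # embedding: (batch, hidden_dim) pooled representation of a prompt+response.
        # Distance to each prototype, turned into similarity weights as in prototypical networks.
        dists = torch.cdist(embedding, self.prototypes)   # (batch, num_prototypes)
        sims = F.softmax(-dists, dim=-1)
        return self.scorer(sims).squeeze(-1)              # scalar reward per sample


def pairwise_preference_loss(reward_chosen: torch.Tensor,
                             reward_rejected: torch.Tensor) -> torch.Tensor:
    # Standard Bradley-Terry style reward-model loss on human preference pairs.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()


if __name__ == "__main__":
    head = PrototypicalRewardHead(hidden_dim=768)
    chosen = torch.randn(4, 768)    # stand-ins for LLM embeddings of preferred responses
    rejected = torch.randn(4, 768)  # stand-ins for embeddings of dispreferred responses
    loss = pairwise_preference_loss(head(chosen), head(rejected))
    loss.backward()
    print(float(loss))
```

The intuition matches the abstract's claim about data efficiency: because the prototypes summarize the structure of the feedback data, the head has few parameters to fit and can in principle learn a stable preference signal from a limited number of labeled pairs.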