Interpreting Learned Feedback Patterns in Large Language Models

Neural Information Processing Systems 

Reinforcement learning from human feedback (RLHF) is widely used to train large language models (LLMs). However, it remains unclear whether LLMs accurately learn the preferences underlying human feedback data.