A Survey on Personalized and Pluralistic Preference Alignment in Large Language Models
Zhouhang Xie, Junda Wu, Yiran Shen, Yu Xia, Xintong Li, Aaron Chang, Ryan Rossi, Sachin Kumar, Bodhisattwa Prasad Majumder, Jingbo Shang, Prithviraj Ammanabrolu, Julian McAuley
arXiv.org Artificial Intelligence
Personalized preference alignment for large language models (LLMs), the process of tailoring LLMs to individual users' preferences, is an emerging research direction spanning the areas of NLP and personalization. In this survey, we present an analysis of works on personalized alignment and modeling for LLMs. We introduce a taxonomy of preference alignment techniques, covering training-time, inference-time, and user-modeling-based methods. We analyze and discuss the strengths and limitations of each group of techniques, and then cover evaluation, benchmarks, and open problems in the field.
Apr-10-2025