Personalized LLM for Generating Customized Responses to the Same Query from Different Users
Zeng, Hang, Niu, Chaoyue, Wu, Fan, Lv, Chengfei, Chen, Guihai
–arXiv.org Artificial Intelligence
Existing work on large language model (LLM) personalization assigns different responding roles to the LLM but overlooks the diversity of questioners. In this work, we propose a new form of questioner-aware LLM personalization, generating different responses even for the same query from different questioners. We design a dual-tower model architecture with a cross-questioner general encoder and a questioner-specific encoder. We further apply contrastive learning with multi-view augmentation, pulling close the dialogue representations of the same questioner while pulling apart those of different questioners. To mitigate the impact of question diversity on questioner-contrastive learning, we cluster the dialogues based on question similarity and restrict the scope of contrastive learning within each cluster. We also build MQDialog, a multi-questioner dataset drawn from English and Chinese scripts and WeChat records, containing 173 questioners and 12 responders. Extensive evaluation across different metrics shows a significant improvement in the quality of personalized response generation.
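The cluster-restricted questioner-contrastive objective described above can be sketched as an InfoNCE-style loss: for each dialogue embedding, the candidate set is limited to other dialogues in the same question cluster, with same-questioner dialogues as positives. This is a minimal illustration; the paper's exact loss, augmentation scheme, and encoder outputs are not specified here, so the function below is an assumption of one plausible formulation.

```python
import numpy as np

def cluster_contrastive_loss(embeddings, questioner_ids, cluster_ids, temperature=0.1):
    """Sketch of a questioner-contrastive loss restricted within question clusters.

    embeddings:     (N, d) array of dialogue representations
    questioner_ids: (N,) array; positives share the same questioner
    cluster_ids:    (N,) array; contrast only among dialogues in the same cluster
    """
    # L2-normalize so the dot product is cosine similarity.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(z)
    same_cluster = cluster_ids[None, :] == cluster_ids[:, None]
    same_quest = questioner_ids[None, :] == questioner_ids[:, None]
    not_self = ~np.eye(n, dtype=bool)

    losses = []
    for i in range(n):
        valid = same_cluster[i] & not_self[i]   # candidates: same cluster, not itself
        pos = same_quest[i] & valid             # positives: same questioner
        if pos.any():
            logits = sim[i][valid]
            # Numerically stable log-softmax over the within-cluster candidates.
            m = logits.max()
            log_prob = logits - (m + np.log(np.exp(logits - m).sum()))
            losses.append(-log_prob[pos[valid]].mean())
    return float(np.mean(losses)) if losses else 0.0
```

Restricting the candidate set to a cluster of similar questions means the loss contrasts *who is asking* rather than *what is asked*, which is the stated motivation for the clustering step.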
Dec-16-2024