EmbSum: Leveraging the Summarization Capabilities of Large Language Models for Content-Based Recommendations
Chiyu Zhang, Yifei Sun, Minghao Wu, Jun Chen, Jie Lei, Muhammad Abdul-Mageed, Rong Jin, Angli Liu, Ji Zhu, Sem Park, Ning Yao, Bo Long
Content-based recommendation systems play a crucial role in delivering personalized content to users in the digital world. In this work, we introduce EmbSum, a novel framework that enables offline pre-computation of user and candidate-item representations while capturing interactions within the user engagement history. Using a pretrained encoder-decoder model and poly-attention layers, EmbSum derives a User Poly-Embedding (UPE) and a Content Poly-Embedding (CPE) to calculate relevance scores between users and candidate items. EmbSum actively learns from long user engagement histories by generating user-interest summaries with supervision from a large language model (LLM). The effectiveness of EmbSum is validated on two datasets from different domains, where it surpasses state-of-the-art (SoTA) methods with higher accuracy and fewer parameters. Additionally, the model's ability to generate summaries of user interests is a valuable by-product that enhances its usefulness for personalized content recommendation.
arXiv.org Artificial Intelligence
May-19-2024
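To make the poly-attention idea in the abstract concrete, the sketch below shows one plausible reading: a small set of learnable context codes attends over encoder hidden states to produce multiple pooled embeddings (a UPE for the user history, a CPE for a candidate item), and a relevance score is reduced from their pairwise inner products. This is a minimal illustration under stated assumptions, not the paper's exact architecture; the class and function names, the number of codes, and the max-pooling reduction are all hypothetical choices.

```python
import torch
import torch.nn as nn


class PolyAttention(nn.Module):
    """Sketch of a poly-attention layer: K learnable context codes attend
    over a sequence of hidden states and yield K pooled embeddings."""

    def __init__(self, hidden_dim: int, num_codes: int):
        super().__init__()
        # Learnable query codes; each code produces one pooled embedding.
        self.codes = nn.Parameter(torch.randn(num_codes, hidden_dim) * 0.02)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim), e.g. encoder outputs
        # over the user engagement history or a candidate item's text.
        attn = torch.softmax(self.codes @ hidden_states.transpose(1, 2), dim=-1)
        return attn @ hidden_states  # (batch, num_codes, hidden_dim)


def relevance_score(upe: torch.Tensor, cpe: torch.Tensor) -> torch.Tensor:
    """Score a user against a candidate item from their poly-embeddings.

    upe: (batch, K_u, d) user poly-embedding; cpe: (batch, K_c, d) content
    poly-embedding. Max-pooling over all pairwise inner products is one
    plausible reduction, assumed here for illustration.
    """
    scores = upe @ cpe.transpose(1, 2)            # (batch, K_u, K_c)
    return scores.flatten(1).max(dim=-1).values   # (batch,)
```

In such a setup, the UPE and CPE can be pre-computed offline and only the cheap pairwise scoring runs at serving time, which matches the offline pre-computation goal described in the abstract.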