LLM-Enhanced Reranking for Complementary Product Recommendation
arXiv.org Artificial Intelligence
Complementary product recommendation, which aims to suggest items that are used together to enhance customer value, is a crucial yet challenging task in e-commerce. While existing graph neural network (GNN) approaches have made significant progress in capturing complex product relationships, they often struggle with the accuracy-diversity tradeoff, particularly for long-tail items. This paper introduces a model-agnostic approach that leverages Large Language Models (LLMs) to enhance the reranking of complementary product recommendations. Unlike previous works that use LLMs primarily for data preprocessing and graph augmentation, our method applies LLM-based prompting strategies directly to rerank candidate items retrieved from existing recommendation models, eliminating the need for model retraining. Through extensive experiments on public datasets, we demonstrate that our approach effectively balances accuracy and diversity in complementary product recommendations, with at least 50% lift in accuracy metrics and 2% lift in diversity metrics on average for the top recommended items across datasets.
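The reranking step described above can be sketched in a minimal form: build a prompt listing the candidate items retrieved by an existing recommender, ask the LLM to reorder them, and parse the ranked indices back into item IDs. The function names, the prompt wording, and the example items below are illustrative assumptions, not from the paper; the actual LLM call is stubbed since it would require an external API.

```python
# Hypothetical sketch of LLM-based reranking for complementary products.
# The LLM call itself is stubbed; all names here are illustrative.

def build_rerank_prompt(query_item: str, candidates: list[str]) -> str:
    """Build a prompt asking the LLM to rerank candidates as complements."""
    lines = [
        f"A customer is buying: {query_item}.",
        "Rerank these candidate items from most to least complementary,",
        "balancing relevance with diversity. Answer with the numbers only,",
        "comma-separated.",
    ]
    for i, item in enumerate(candidates, 1):
        lines.append(f"{i}. {item}")
    return "\n".join(lines)

def parse_ranking(response: str, candidates: list[str]) -> list[str]:
    """Map the LLM's comma-separated indices back to item names."""
    order = []
    for tok in response.split(","):
        tok = tok.strip()
        if tok.isdigit() and 1 <= int(tok) <= len(candidates):
            idx = int(tok) - 1
            if idx not in order:
                order.append(idx)
    # Keep any candidates the LLM omitted, in original retrieval order.
    order += [i for i in range(len(candidates)) if i not in order]
    return [candidates[i] for i in order]

candidates = ["phone case", "laptop", "screen protector", "charging cable"]
prompt = build_rerank_prompt("smartphone", candidates)
# A real system would send `prompt` to an LLM; here we stub its reply.
reranked = parse_ranking("3, 1, 4", candidates)
print(reranked)  # ['screen protector', 'phone case', 'charging cable', 'laptop']
```

Because parsing falls back to the original retrieval order for omitted or malformed indices, a degenerate LLM response degrades gracefully to the base recommender's ranking, which is consistent with the model-agnostic, no-retraining framing above.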
Dec-2-2025