More diverse more adaptive: Comprehensive Multi-task Learning for Improved LLM Domain Adaptation in E-commerce
Tong Piao, Pei Tang, Zhipeng Zhang, Jiaqi Li, Qiao Liu, Zufeng Wu
–arXiv.org Artificial Intelligence
In recent years, Large Language Models (LLMs) have been widely applied across domains owing to their strong domain adaptation capabilities. Prior work suggests that diverse, multi-modal data can enhance LLMs' domain adaptation performance, but this hypothesis remains insufficiently validated in the e-commerce sector. To address this gap, we propose a comprehensive e-commerce multi-task framework and design empirical experiments to examine the impact of diverse data and tasks on LLMs from two perspectives: "capability comprehensiveness" and "task comprehensiveness." Specifically, we observe significant improvements in LLM performance as tasks from new major capability areas are progressively introduced and as subtasks are continuously added within each capability area. Furthermore, we find that increasing model capacity amplifies the benefits of diversity, suggesting a synergistic relationship between model capacity and data diversity. Finally, we validate the best-performing model from our empirical experiments in the KDD Cup 2024, achieving a rank of 5 in Task 1. This outcome demonstrates the significance of our research for advancing LLMs in the e-commerce domain.
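The two axes the abstract names can be pictured as a training-mixture sweep. The sketch below is purely illustrative: the capability areas and subtask names are hypothetical stand-ins (the paper's actual task list is not given here), and the function only shows how widening either axis grows the multi-task mixture.

```python
# Hypothetical capability areas and subtasks for an e-commerce multi-task
# mixture; the names are illustrative, not the paper's actual task list.
CAPABILITIES = {
    "understanding": ["product_classification", "attribute_extraction"],
    "generation": ["title_rewriting", "review_summarization"],
    "reasoning": ["price_comparison", "recommendation_explanation"],
}

def build_mixture(num_capabilities, subtasks_per_capability):
    """Widen the training mixture along the two axes the abstract describes:
    capability comprehensiveness (how many major capability areas are
    included) and task comprehensiveness (how many subtasks per area)."""
    areas = list(CAPABILITIES)[:num_capabilities]
    return [
        (area, task)
        for area in areas
        for task in CAPABILITIES[area][:subtasks_per_capability]
    ]

# The ablations sweep from a narrow mixture to a wide one.
narrow = build_mixture(1, 1)  # one area, one subtask
wide = build_mixture(3, 2)    # all areas, all subtasks
print(len(narrow), len(wide))  # 1 6
```

Each (area, task) pair would correspond to one supervised dataset in the mixture; the paper's observation is that performance improves as either dimension of this grid is filled in, and more so for larger models.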
Apr-14-2025
- Country:
- Asia > China
- Shanghai > Shanghai (0.05)
- Sichuan Province > Chengdu (0.06)
- Europe > Spain
- Catalonia > Barcelona Province > Barcelona (0.07)
- North America > United States
- New York > New York County > New York City (0.05)
- Genre:
- Research Report > New Finding (0.35)
- Industry:
- Information Technology > Services > e-Commerce Services (1.00)