Targeted Distillation for Sentiment Analysis
Yice Zhang, Guangyu Xie, Jingjie Lin, Jianzhu Bao, Qianlong Wang, Xi Zeng, Ruifeng Xu
This paper presents a compact model that achieves strong sentiment analysis capabilities through targeted distillation from advanced large language models (LLMs). Our methodology decouples the distillation target into two key components: sentiment-related knowledge and task alignment. To transfer these components, we propose a two-stage distillation framework. The first stage, knowledge-driven distillation (KnowDist), transfers sentiment-related knowledge to enhance fundamental sentiment analysis capabilities. The second stage, in-context learning distillation (ICLDist), transfers task-specific prompt-following abilities to optimize task alignment. For evaluation, we introduce SentiBench, a comprehensive sentiment analysis benchmark comprising 3 task categories across 12 datasets. Experiments on this benchmark demonstrate that our model effectively balances model size and performance, and is highly competitive with existing small-scale LLMs.
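The abstract does not specify implementation details, but the two-stage recipe it describes can be sketched as sequence-level distillation: fine-tune a small student on teacher-generated text, first on sentiment knowledge (KnowDist), then on prompt-response demonstrations (ICLDist). The sketch below is illustrative only; GPT-2 as the student, the hand-written placeholder corpora, and the `distill` helper are all assumptions, not the paper's actual setup.

```python
# A minimal sketch of two-stage targeted distillation, assuming GPT-2 as the
# student and plain next-token cross-entropy as the distillation objective.
# The corpus strings below are placeholders standing in for teacher-LLM outputs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
student = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)

# Stage 1 (KnowDist): sentiment-related knowledge elicited from the teacher,
# which the student absorbs via ordinary language modeling.
knowledge_corpus = [
    "Review: 'The battery dies in an hour.' The phrase 'dies in an hour' "
    "expresses dissatisfaction with battery life, so the sentiment is negative.",
]

# Stage 2 (ICLDist): few-shot prompts paired with teacher responses, teaching
# the student to follow task-specific prompt formats.
icl_corpus = [
    "Classify the sentiment.\n"
    "Text: great value -> positive\n"
    "Text: totally broken -> negative\n"
    "Text: works as advertised -> positive",
]

def distill(corpus, epochs=1):
    """Fine-tune the student on teacher-generated sequences (hypothetical helper)."""
    student.train()
    for _ in range(epochs):
        for text in corpus:
            batch = tokenizer(text, return_tensors="pt", truncation=True)
            # Causal-LM loss: the model shifts the labels internally.
            loss = student(**batch, labels=batch["input_ids"]).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

distill(knowledge_corpus)  # stage 1: transfer sentiment knowledge
distill(icl_corpus)        # stage 2: transfer task alignment
```

Running the stages in this order mirrors the decoupling the abstract describes: general sentiment capability first, prompt-following alignment second.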
arXiv.org Artificial Intelligence
Mar-5-2025