SageLM: A Multi-aspect and Explainable Large Language Model for Speech Judgement
Yuan Ge, Junxiang Zhang, Xiaoqian Liu, Bei Li, Xiangnan Ma, Chenglong Wang, Kaiyang Ye, Yangfan Du, Linfeng Zhang, Yuxin Huang, Tong Xiao, Zhengtao Yu, JingBo Zhu
arXiv.org Artificial Intelligence
Speech-to-Speech (S2S) Large Language Models (LLMs) are foundational to natural human-computer interaction, enabling end-to-end spoken dialogue systems. However, evaluating these models remains a fundamental challenge. We propose SageLM, an end-to-end, multi-aspect, and explainable speech LLM for comprehensive evaluation of S2S LLMs. First, unlike cascaded approaches that disregard acoustic features, SageLM jointly assesses both semantic and acoustic dimensions. Second, it leverages rationale-based supervision to enhance explainability and guide model learning, achieving superior alignment with evaluation outcomes compared to rule-based reinforcement learning methods. Third, we introduce SpeechFeedback, a synthetic preference dataset, and employ a two-stage training paradigm to mitigate the scarcity of speech preference data. Trained on both semantic and acoustic dimensions, SageLM achieves an 82.79% agreement rate with human evaluators, outperforming cascaded and SLM-based baselines by at least 7.42% and 26.20%, respectively.
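The headline metric above is an agreement rate between the model judge and human evaluators. As a minimal sketch (not the paper's code; labels and function name are illustrative assumptions), agreement over pairwise preference items can be computed like this:

```python
# Hedged sketch: agreement rate between a model judge and human annotators
# over pairwise preference items. Verdicts are assumed to be one of
# "A" (first response preferred), "B" (second preferred), or "tie".

def agreement_rate(human_labels, model_labels):
    """Fraction of items where the model's verdict matches the human one."""
    if len(human_labels) != len(model_labels):
        raise ValueError("label lists must be the same length")
    matches = sum(h == m for h, m in zip(human_labels, model_labels))
    return matches / len(human_labels)

# Toy example with five judged items; four verdicts agree.
human = ["A", "B", "A", "tie", "B"]
model = ["A", "B", "B", "tie", "B"]
print(f"{agreement_rate(human, model):.2%}")  # prints 80.00%
```

An 82.79% agreement rate on this definition would mean the judge's verdict matched the human verdict on roughly 83 of every 100 evaluated items.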
Nov-11-2025
- Country:
- Asia
- Europe
- Italy > Calabria
- Catanzaro Province > Catanzaro (0.04)
- Spain > Catalonia
- Barcelona Province > Barcelona (0.04)
- United Kingdom > England
- Cambridgeshire > Cambridge (0.04)
- North America
- Canada > Ontario
- Toronto (0.04)
- United States > Florida
- Miami-Dade County > Miami (0.04)
- Genre:
- Research Report > New Finding (0.46)