Step-Audio-EditX Technical Report
Chao Yan, Boyong Wu, Peng Yang, Pengfei Tan, Guoqiang Hu, Li Xie, Yuxin Zhang, Xiangyu Zhang, Fei Tian, Xuerui Yang, Daxin Jiang, Shuchang Zhou, Gang Yu
arXiv.org Artificial Intelligence
We present Step-Audio-EditX, the first open-source LLM-based audio model excelling at expressive and iterative audio editing encompassing emotion, speaking style, and paralinguistics alongside robust zero-shot text-to-speech (TTS) capabilities. Our core innovation lies in leveraging only large-margin synthetic data, which circumvents the need for embedding-based priors or auxiliary modules. This large-margin learning approach enables both iterative control and high expressivity across voices, and represents a fundamental pivot from the conventional focus on representation-level disentanglement. Evaluation results demonstrate that Step-Audio-EditX surpasses both MiniMax-2.6-hd and Doubao-Seed-TTS-2.0 in emotion editing and other fine-grained control tasks.
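The iterative editing workflow described above can be pictured as repeated refinement passes over one clip, each targeting a single attribute (emotion, style, or paralinguistics). The sketch below is purely illustrative: `synthesize` and `edit` are hypothetical stand-ins, not the actual Step-Audio-EditX API, which this listing does not specify.

```python
# Hypothetical sketch of the iterative editing loop; `synthesize` and
# `edit` are illustrative stand-ins, not the real Step-Audio-EditX API.

def synthesize(text: str, voice: str) -> dict:
    """Stand-in zero-shot TTS call: returns a mock 'audio' record."""
    return {"text": text, "voice": voice, "edits": []}

def edit(audio: dict, attribute: str, target: str) -> dict:
    """Stand-in editing call: records one (attribute, target) edit."""
    return {**audio, "edits": audio["edits"] + [(attribute, target)]}

# Iterative control: each pass refines one attribute of the same clip.
clip = synthesize("Hello there!", voice="reference_speaker")
for attribute, target in [("emotion", "joyful"),
                          ("style", "whisper"),
                          ("paralinguistics", "laughter")]:
    clip = edit(clip, attribute, target)

print(clip["edits"])
# → [('emotion', 'joyful'), ('style', 'whisper'), ('paralinguistics', 'laughter')]
```

The point of the loop is that edits compose: each call operates on the output of the previous one, which is the "iterative control" property the abstract highlights.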
Nov-20-2025