ECTSpeech: Enhancing Efficient Speech Synthesis via Easy Consistency Tuning
Zhu, Tao, Yu, Yinfeng, Wang, Liejun, Sun, Fuchun, Zheng, Wendong
arXiv.org Artificial Intelligence
Diffusion models have demonstrated remarkable performance in speech synthesis, but typically require multi-step sampling, resulting in low inference efficiency. Recent studies address this issue by distilling diffusion models into consistency models, enabling efficient one-step generation. However, these approaches introduce additional training costs and rely heavily on the performance of pre-trained teacher models. In this paper, we propose ECTSpeech, a simple and effective one-step speech synthesis framework that, for the first time, incorporates the Easy Consistency Tuning (ECT) strategy into speech synthesis. By progressively tightening consistency constraints on a pre-trained diffusion model, ECTSpeech achieves high-quality one-step generation while significantly reducing training complexity. In addition, we design a multi-scale gate module (MSGate) to enhance the denoiser's ability to fuse features at different scales. Experimental results on the LJSpeech dataset demonstrate that ECTSpeech achieves audio quality comparable to state-of-the-art methods under single-step sampling, while substantially reducing the model's training cost and complexity.
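The core idea of ECT, as the abstract describes it, is to fine-tune a pre-trained diffusion model under a consistency constraint whose time gap is progressively tightened during training. A minimal toy sketch of that idea is below; the schedule shape, the toy noising process, and all function names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def dt_schedule(step, total_steps, dt_max=1.0, dt_min=1e-3):
    """Progressively tighten the consistency gap: the time gap Delta-t
    shrinks (here geometrically) from dt_max toward dt_min as training
    proceeds. The exact schedule used by ECT may differ."""
    frac = step / max(total_steps - 1, 1)
    return dt_max * (dt_min / dt_max) ** frac

def consistency_loss(f, theta, x0, t, dt, rng):
    """One consistency-style objective evaluation: the model's output at
    time t should match its own output at the earlier time t - dt on the
    same noising trajectory (the earlier output is treated as a fixed,
    stop-gradient target). Toy linear noising process for illustration."""
    eps = rng.standard_normal(x0.shape)
    x_t = x0 + t * eps               # noised sample at time t
    r = max(t - dt, 0.0)
    x_r = x0 + r * eps               # same noise realization, earlier time
    target = f(theta, x_r, r)        # fixed target (no gradient in practice)
    pred = f(theta, x_t, t)
    return np.mean((pred - target) ** 2)
```

As the gap shrinks toward zero, the constraint approaches the continuous-time consistency condition, which is what allows a single-step sampler at inference time.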
Oct 8, 2025