Audio-Conditioned Diffusion LLMs for ASR and Deliberation Processing
Mengqi Wang, Zhan Liu, Zengrui Jin, Guangzhi Sun, Chao Zhang, Philip C. Woodland
arXiv.org Artificial Intelligence
Diffusion-based large language models (DLLMs) have recently attracted growing interest as an alternative to autoregressive decoders. In this work, we present an empirical study on using the diffusion-based large language model LLaDA for automatic speech recognition (ASR). We first investigate its use as an external deliberation-based processing module for Whisper-LLaMA transcripts. By leveraging the bidirectional attention and denoising capabilities of LLaDA, we explore random masking, low-confidence masking, and semi-autoregressive strategies, showing that Whisper-LLaDA substantially reduces WER compared with the baseline. On LibriSpeech, the best cascade system achieves 2.25%/4.94% WER on test-clean/test-other, representing a 12.3% relative improvement over the Whisper-LLaMA baseline on the test-other split. In contrast, a plain-text LLaDA without acoustic features fails to improve accuracy, highlighting the importance of audio-conditioned embeddings. We further evaluate Whisper-LLaDA as a standalone decoder for ASR with diffusion-based and semi-autoregressive decoding. Most experimental configurations achieve faster inference than the Whisper-LLaMA baseline, although recognition accuracy is slightly lower. These findings offer an empirical view of diffusion-based LLMs for ASR and point to promising directions for improvements.
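The low-confidence masking strategy mentioned above can be sketched as follows: tokens in the first-pass hypothesis whose confidence is lowest are replaced with a mask token and then re-predicted by the diffusion decoder. This is a minimal illustrative sketch, not the paper's implementation; the function and variable names (`low_confidence_mask`, `confidences`, `MASK`) are hypothetical, and the actual system conditions the re-prediction on audio-derived embeddings.

```python
# Hedged sketch of low-confidence masking for deliberation: mask the
# least-confident tokens of a first-pass ASR hypothesis so a diffusion
# LLM can re-predict them. Names here are illustrative, not from the paper.

MASK = "[MASK]"

def low_confidence_mask(tokens, confidences, mask_ratio=0.3):
    """Mask the lowest-confidence fraction of a hypothesis transcript."""
    assert len(tokens) == len(confidences)
    k = max(1, int(len(tokens) * mask_ratio))
    # Indices of the k least-confident tokens.
    worst = set(sorted(range(len(tokens)), key=lambda i: confidences[i])[:k])
    return [MASK if i in worst else t for i, t in enumerate(tokens)]

# Example: a first-pass hypothesis with per-token confidences.
hyp = ["the", "cat", "sat", "on", "the", "mat"]
conf = [0.99, 0.42, 0.95, 0.97, 0.98, 0.55]
print(low_confidence_mask(hyp, conf, mask_ratio=0.34))
# → ['the', '[MASK]', 'sat', 'on', 'the', '[MASK]']
```

In the cascade described above, the masked positions would then be filled by the audio-conditioned LLaDA denoiser, optionally over several mask-and-refill iterations.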
Oct-10-2025