Confidence-Modulated Speculative Decoding for Large Language Models

Sen, Jaydip, Dasgupta, Subhasis, Waghela, Hetvi

arXiv.org Artificial Intelligence 

-- Speculative decoding has emerged as an effective approach for accelerating autoregressive inference by parallelizing token generation through a draft-then-verify paradigm. However, existing methods rely on static drafting lengths and rigid verification criteria, limiting their adaptability across varying model uncertainties and input complexities. This paper proposes an information-theoretic framework for speculative decoding based on confidence-modulated drafting. By leveraging entropy and margin-based uncertainty measures over the drafter's output distribution, the proposed method dynamically adjusts the number of speculatively generated tokens at each iteration. This adaptive mechanism reduces rollback frequency, improves resource utilization, and maintains output fidelity. Additionally, the verification process is modulated using the same confidence signals, enabling more flexible acceptance of drafted tokens without sacrificing generation quality. Experiments on machine translation and summarization tasks demonstrate significant speedups over standard speculative decoding while preserving or improving BLEU and ROUGE scores. The proposed approach offers a principled, plug-in method for efficient and robust decoding in large language models under varying conditions of uncertainty.

Keywords -- Speculative Decoding, Autoregressive Models, Confidence Estimation, Adaptive Inference, Entropy-Based Drafting, Sequence Generation, Large Language Models (LLMs), Information-Theoretic Decoding.

The task of sequence generation lies at the heart of numerous applications in natural language processing, including machine translation, text summarization, dialogue generation, and code synthesis. In the overwhelming majority of these applications, autoregressive (AR) decoding remains the dominant paradigm for generating sequences from a probabilistic language model [1-2].
Autoregressive models, particularly those based on the Transformer architecture, operate by predicting each token conditioned on the entire history of previously generated tokens. This left-to-right decoding strategy, though optimal in terms of likelihood estimation, suffers from a fundamental limitation: the inherently sequential nature of generation prohibits efficient parallelization, severely hindering inference throughput, especially in latency-sensitive deployment scenarios.
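The confidence-modulated drafting idea described in the abstract can be sketched in a few lines: compute entropy and top-1/top-2 margin over the drafter's next-token distribution, blend them into a confidence score, and map that score to a draft length. This is a minimal illustrative sketch, not the paper's exact schedule; the function names, the linear mapping, and the equal-weight blend of the two signals are assumptions made here for concreteness.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def margin(probs):
    """Gap between the top-1 and top-2 probabilities."""
    top = sorted(probs, reverse=True)
    return top[0] - (top[1] if len(top) > 1 else 0.0)

def draft_length(probs, k_min=1, k_max=8):
    """Map drafter confidence to a speculative draft length.

    Low entropy / large margin -> draft more tokens before verifying;
    high entropy / small margin -> draft fewer, reducing rollbacks.
    The equal-weight blend and linear mapping below are illustrative
    choices, not taken from the paper.
    """
    h_max = math.log(len(probs))  # entropy of a uniform distribution
    conf_entropy = 1.0 - entropy(probs) / h_max if h_max > 0 else 1.0
    conf = 0.5 * conf_entropy + 0.5 * margin(probs)  # blend both signals
    return max(k_min, min(k_max, round(k_min + conf * (k_max - k_min))))
```

A peaked distribution (e.g. one token carrying 97% of the mass) yields a draft length near `k_max`, while a uniform distribution collapses to `k_min`, so the drafter speculates aggressively only when it is confident.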