Advancing Decoding Strategies: Enhancements in Locally Typical Sampling for LLMs

Sen, Jaydip, Sengupta, Saptarshi, Dasgupta, Subhasis

arXiv.org Artificial Intelligence 

This chapter explores advancements in decoding strategies for large language models (LLMs), focusing on enhancing the Locally Typical Sampling (LTS) algorithm. Traditional decoding methods, such as top-k and nucleus sampling, often struggle to balance fluency, diversity, and coherence in text generation. To address these challenges, Adaptive Semantic-Aware Typicality Sampling (ASTS) is proposed as an improved version of LTS, incorporating dynamic entropy thresholding, multi-objective scoring, and reward-penalty adjustments. ASTS ensures contextually coherent and diverse text generation while maintaining computational efficiency. Its performance is evaluated across multiple benchmarks, including story generation and abstractive summarization, using metrics such as perplexity, MAUVE, and diversity scores. Experimental results demonstrate that ASTS outperforms existing sampling techniques by reducing repetition, enhancing semantic alignment, and improving fluency.

Keywords: Locally Typical Sampling, Adaptive Semantic-Aware Typicality Sampling (ASTS), Decoding Strategies, Large Language Models (LLMs), Entropy-Based Sampling, Multi-Objective Scoring.
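As background for the enhancements discussed above, the baseline Locally Typical Sampling procedure can be sketched as follows: at each step, tokens whose surprisal (negative log probability) is closest to the distribution's entropy are kept, accumulating until a target probability mass is reached. This is a minimal NumPy illustration of that filtering step, not the ASTS method itself; the function name and the `mass` parameter are illustrative choices.

```python
import numpy as np

def locally_typical_filter(logits, mass=0.9):
    """Return the token indices kept by a locally typical sampling step:
    tokens ranked by how close their surprisal -log p is to the
    distribution's entropy H, accumulated until `mass` probability is kept."""
    logits = np.asarray(logits, dtype=np.float64)
    # stable softmax over the vocabulary logits
    z = logits - logits.max()
    probs = np.exp(z) / np.exp(z).sum()
    surprisal = -np.log(probs + 1e-12)
    entropy = float((probs * surprisal).sum())
    # rank tokens by "typicality": closeness of surprisal to entropy
    order = np.argsort(np.abs(surprisal - entropy))
    cum = np.cumsum(probs[order])
    # smallest prefix of typical tokens whose total mass reaches `mass`
    cutoff = int(np.searchsorted(cum, mass)) + 1
    return order[:cutoff]

kept = locally_typical_filter([3.0, 2.0, 1.0, 0.0], mass=0.9)
```

The next token would then be sampled from the renormalized distribution over `kept`. The dynamic entropy thresholding in ASTS can be read as replacing the fixed `mass` cutoff with a context-dependent one.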