Advancing Decoding Strategies: Enhancements in Locally Typical Sampling for LLMs
Jaydip Sen, Saptarshi Sengupta, Subhasis Dasgupta
arXiv.org Artificial Intelligence
This chapter explores advancements in decoding strategies for large language models (LLMs), focusing on enhancing the Locally Typical Sampling (LTS) algorithm. Traditional decoding methods, such as top-k and nucleus sampling, often struggle to balance fluency, diversity, and coherence in text generation. To address these challenges, Adaptive Semantic-Aware Typicality Sampling (ASTS) is proposed as an improved version of LTS, incorporating dynamic entropy thresholding, multi-objective scoring, and reward-penalty adjustments. ASTS ensures contextually coherent and diverse text generation while maintaining computational efficiency. Its performance is evaluated across multiple benchmarks, including story generation and abstractive summarization, using metrics such as perplexity, MAUVE, and diversity scores. Experimental results demonstrate that ASTS outperforms existing sampling techniques by reducing repetition, enhancing semantic alignment, and improving fluency.

Keywords: Locally Typical Sampling, Adaptive Semantic-Aware Typicality Sampling (ASTS), Decoding Strategies, Large Language Models (LLMs), Entropy-Based Sampling, Multi-Objective Scoring.
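For context, the baseline LTS procedure that ASTS builds on can be sketched as follows. This is a minimal NumPy illustration of locally typical sampling (keep the tokens whose surprisal is closest to the distribution's entropy until their cumulative mass reaches a threshold tau, then sample from the renormalized set); it does not include the dynamic-entropy, multi-objective, or reward-penalty extensions the abstract attributes to ASTS, whose details are not given here. The function name and the choice of tau are illustrative assumptions.

```python
import numpy as np

def locally_typical_sample(logits, tau=0.95, rng=None):
    """Illustrative baseline Locally Typical Sampling step (not ASTS).

    Keeps the tokens whose surprisal (-log p) is closest to the entropy
    of the next-token distribution until their cumulative probability
    reaches `tau`, then samples from the renormalized typical set.
    """
    rng = rng or np.random.default_rng()
    # Softmax over the vocabulary (shifted for numerical stability).
    shifted = logits - logits.max()
    probs = np.exp(shifted) / np.exp(shifted).sum()
    # Entropy of the next-token distribution.
    entropy = -(probs * np.log(probs + 1e-12)).sum()
    # Rank tokens by |surprisal - entropy|: smaller is "more typical".
    scores = np.abs(-np.log(probs + 1e-12) - entropy)
    order = np.argsort(scores)
    # Smallest typical set whose cumulative probability reaches tau.
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, tau)) + 1
    keep = order[:cutoff]
    kept = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=kept))
```

A smaller tau prunes more aggressively toward tokens of near-average surprisal, which is what distinguishes typical sampling from nucleus sampling's pure probability-mass cutoff.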
Jun-12-2025