Decoding Decoded: Understanding Hyperparameter Effects in Open-Ended Text Generation
Esteban Garces Arias, Meimingwei Li, Christian Heumann, Matthias Aßenmacher
–arXiv.org Artificial Intelligence
Decoding strategies for generative large language models (LLMs) are a critical but often underexplored aspect of text generation tasks. Guided by specific hyperparameters, these strategies aim to transform the raw probability distributions produced by language models into coherent, fluent text. In this study, we undertake a large-scale empirical assessment of a range of decoding methods, open-source LLMs, textual domains, and evaluation protocols to determine how hyperparameter choices shape the outputs. Our experiments include both factual (e.g., news) and creative (e.g., fiction) domains, and incorporate a broad suite of automatic evaluation metrics alongside human judgments. Through extensive sensitivity analyses, we distill practical recommendations for selecting and tuning hyperparameters, noting that optimal configurations vary across models and tasks. By synthesizing these insights, this study provides actionable guidance for refining decoding strategies, enabling researchers and practitioners to achieve higher-quality, more reliable, and context-appropriate text generation outcomes.
Dec-14-2024
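The abstract describes decoding strategies as hyperparameter-guided transformations of a model's raw next-token probabilities into text. As a rough illustration (not taken from the paper), the sketch below shows how two common decoding hyperparameters, temperature and the top-p (nucleus) cutoff, are typically applied to raw logits before sampling; the function name and the toy values are hypothetical.

```python
# Minimal sketch, assuming temperature + top-p (nucleus) sampling over raw logits.
# All names and values are illustrative, not taken from the paper.
import numpy as np

def sample_next_token(logits, temperature=0.7, top_p=0.9, rng=None):
    """Apply temperature scaling and nucleus (top-p) filtering, then sample a token id."""
    rng = rng or np.random.default_rng()
    # Temperature scaling: values < 1 sharpen the distribution, values > 1 flatten it.
    scaled = logits / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Nucleus filtering: keep the smallest set of tokens whose cumulative mass >= top_p.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1
    keep = order[:cutoff]
    kept_probs = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=kept_probs))

# Toy example: a 5-token vocabulary with logits from a hypothetical model.
logits = np.array([2.0, 1.5, 0.3, -1.0, -2.5])
print(sample_next_token(logits, temperature=0.7, top_p=0.9))
```

Varying the two parameters in this sketch illustrates the kind of sensitivity the study measures: lower temperatures and smaller top-p values concentrate sampling on the most probable tokens, while higher values admit more diverse continuations.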