Reranking Laws for Language Generation: A Communication-Theoretic Perspective
Neural Information Processing Systems
To ensure large language models (LLMs) are used safely, one must reduce their propensity to hallucinate or to generate unacceptable answers. A simple and widely used strategy is to first let the LLM generate multiple hypotheses and then employ a reranker to choose the best one. In this paper, we draw a parallel between this strategy and the use of redundancy to decrease the error rate in noisy communication channels.
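The generate-then-rerank strategy described above can be sketched as a best-of-N selection loop. This is a minimal illustration, not the paper's method: the generator and the reranker score function below are hypothetical placeholders standing in for an LLM sampler and a trained reranker.

```python
import random
from typing import Callable, List

def best_of_n(generate: Callable[[], str],
              score: Callable[[str], float],
              n: int) -> str:
    """Draw n hypotheses from the generator (the 'redundancy') and
    return the one the reranker scores highest."""
    hypotheses: List[str] = [generate() for _ in range(n)]
    return max(hypotheses, key=score)

# Toy usage: sample from a fixed pool of candidate answers and use a
# length-based stand-in for a reranker score.
pool = ["short answer", "a longer, more detailed answer", "mid answer"]
pick = best_of_n(lambda: random.choice(pool), score=len, n=10)
```

In the paper's communication-theoretic framing, drawing more hypotheses plays the role of adding redundancy to a noisy channel: as n grows, the chance that at least one acceptable hypothesis is available for the reranker to select increases.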