Reranking Laws for Language Generation: A Communication-Theoretic Perspective
António Farinhas, Haau-Sing Li, André F. T. Martins
Instituto Superior Técnico, Universidade de Lisboa
Neural Information Processing Systems
To ensure large language models (LLMs) are used safely, one must reduce their propensity to hallucinate or to generate unacceptable answers. A simple and often-used strategy is to first let the LLM generate multiple hypotheses and then employ a reranker to choose the best one. In this paper, we draw a parallel between this strategy and the use of redundancy to decrease the error rate in noisy communication channels.
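The strategy described above can be sketched in a few lines. This is a minimal, self-contained illustration, not the paper's implementation: `generate_hypotheses` and `rerank_score` are hypothetical placeholders standing in for an actual LLM sampler and a learned reranker (e.g., a reward model or quality estimator).

```python
import random

def generate_hypotheses(prompt, n):
    # Hypothetical stand-in for sampling n candidate answers from an LLM.
    return [f"candidate-{i} for {prompt!r}" for i in range(n)]

def rerank_score(hypothesis):
    # Hypothetical reranker; in practice this would be a reward model
    # or quality estimator scoring each hypothesis.
    return random.random()

def best_of_n(prompt, n=8):
    # Generate n hypotheses, then return the one the reranker scores highest.
    candidates = generate_hypotheses(prompt, n)
    return max(candidates, key=rerank_score)
```

In the paper's communication-theoretic analogy, the n sampled hypotheses play the role of redundant transmissions over a noisy channel, and the reranker plays the role of the decoder that recovers the intended message.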