Reranking Laws for Language Generation: A Communication-Theoretic Perspective
António Farinhas, Haau-Sing Li, André F. T. Martins
Instituto Superior Técnico, Universidade de Lisboa

Neural Information Processing Systems 

To ensure large language models (LLMs) are used safely, one must reduce their propensity to hallucinate or to generate unacceptable answers. A simple and widely used strategy is to first let the LLM generate multiple hypotheses and then employ a reranker to choose the best one. In this paper, we draw a parallel between this strategy and the use of redundancy to decrease the error rate in noisy communication channels.
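The generate-then-rerank strategy described above can be sketched as a best-of-n selection loop. In this minimal sketch, `generate` and `rerank_score` are hypothetical stand-ins for an LLM sampler and a reranker (e.g. a reward or quality-estimation model); they are not APIs from the paper.

```python
import random


def generate(prompt: str, n: int) -> list[str]:
    # Hypothetical stand-in for sampling n hypotheses from an LLM.
    return [f"{prompt} -> hypothesis {i}" for i in range(n)]


def rerank_score(prompt: str, hypothesis: str) -> float:
    # Hypothetical stand-in for a reranker that scores how acceptable
    # a hypothesis is; here a deterministic pseudo-random score.
    rng = random.Random(hash((prompt, hypothesis)) % (2**32))
    return rng.random()


def best_of_n(prompt: str, n: int = 8) -> str:
    # Sample n hypotheses, then return the one the reranker prefers.
    # Larger n adds redundancy, analogous to repetition over a noisy channel.
    hypotheses = generate(prompt, n)
    return max(hypotheses, key=lambda h: rerank_score(prompt, h))
```

Increasing `n` trades compute for reliability, which is the redundancy-versus-error-rate trade-off the paper analyzes.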
