Reranking Laws for Language Generation: A Communication-Theoretic Perspective

Neural Information Processing Systems 

To ensure large language models (LLMs) are used safely, one must reduce their propensity to hallucinate or to generate unacceptable answers. A simple and widely used strategy is to let the LLM first generate multiple hypotheses and then employ a reranker to choose the best one.
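The generate-then-rerank strategy described above can be sketched minimally as follows. This is an illustrative assumption, not the paper's implementation: `generate_hypotheses` stands in for LLM sampling and `reranker_score` for a trained reranker (e.g. a reward model or verifier), both of which are hypothetical names introduced here.

```python
def generate_hypotheses(prompt: str, n: int) -> list[str]:
    # Hypothetical stand-in for sampling n candidate answers from an LLM.
    return [f"{prompt} :: candidate {i}" for i in range(n)]

def reranker_score(prompt: str, hypothesis: str) -> float:
    # Hypothetical reranker: assigns a quality score to a hypothesis.
    # A real system would use a trained reward model or verifier here;
    # this placeholder just prefers longer answers, for illustration.
    return float(len(hypothesis))

def generate_and_rerank(prompt: str, n: int = 5) -> str:
    # Generate n hypotheses, then return the one the reranker scores highest.
    hypotheses = generate_hypotheses(prompt, n)
    return max(hypotheses, key=lambda h: reranker_score(prompt, h))
```

With a real sampler and reranker plugged in, increasing `n` trades compute for a better chance that at least one acceptable hypothesis is generated and selected.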
