Calibrating Reasoning in Language Models with Internal Consistency

Neural Information Processing Systems 

Large language models (LLMs) have demonstrated impressive capabilities on a variety of reasoning tasks, aided by techniques such as chain-of-thought (CoT) prompting, which elicits verbalized reasoning.
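As a minimal illustration of the chain-of-thought prompting mentioned above, the sketch below builds a zero-shot CoT prompt by appending a step-by-step cue to a question. The helper name `build_cot_prompt` and the cue phrase are illustrative assumptions, not taken from the paper.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question with a zero-shot CoT cue that asks the model
    to verbalize intermediate reasoning before answering.
    Hypothetical helper for illustration only."""
    return (
        f"Q: {question}\n"
        "A: Let's think step by step."
    )

# Example: the resulting prompt would be sent to an LLM, which then
# produces verbalized reasoning steps followed by a final answer.
prompt = build_cot_prompt(
    "If a train travels 60 km in 1.5 hours, what is its average speed?"
)
print(prompt)
```

In practice the cue phrase and prompt layout vary by model and task; this only shows the general shape of eliciting verbalized reasoning.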
