Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?

Neural Information Processing Systems 

Our findings on the NoRa dataset reveal a prevalent vulnerability to such noise among current LLMs, with existing robustness methods such as self-correction and self-consistency showing only limited efficacy.
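The self-consistency baseline mentioned above samples several chain-of-thought reasoning paths and keeps the most frequent final answer. A minimal sketch of that voting step is below; the function name and example values are illustrative, not the paper's actual code.

```python
from collections import Counter

def self_consistency_vote(sampled_answers):
    """Aggregate sampled chain-of-thought answers by majority vote.

    Self-consistency draws multiple reasoning paths from the model and
    returns the most common final answer. Note that noisy rationales in
    the prompt can bias every sampled path the same way, which is one
    reason such voting offers only limited robustness in this setting.
    """
    if not sampled_answers:
        raise ValueError("need at least one sampled answer")
    (answer, _count), = Counter(sampled_answers).most_common(1)
    return answer

# Example: three of five sampled paths agree on "12".
print(self_consistency_vote(["12", "12", "15", "12", "9"]))  # -> 12
```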
