On the Noise Robustness of In-Context Learning for Text Generation

Neural Information Processing Systems 

Large language models (LLMs) have shown impressive performance on downstream tasks via in-context learning (ICL), which relies heavily on the quality of demonstrations selected from a large set of annotated examples.
