On the Noise Robustness of In-Context Learning for Text Generation

Neural Information Processing Systems 

Large language models (LLMs) have shown impressive performance on downstream tasks via in-context learning (ICL), which relies heavily on the quality of demonstrations selected from a large set of annotated examples.
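As background, the ICL setup the abstract refers to can be sketched as follows: a few demonstrations (input-output pairs) are selected from an annotated pool and prepended to the test query to form the prompt. This is a minimal illustrative sketch, not the paper's method; the function names and the word-overlap similarity heuristic are assumptions standing in for whatever selection strategy is actually used.

```python
# Minimal sketch of in-context learning (ICL) prompt construction.
# All names and the selection heuristic are illustrative assumptions.

def select_demonstrations(pool, query, k=2):
    """Pick the k pool examples most similar to the query.

    Word overlap is a stand-in for a real similarity measure
    (e.g. embedding-based retrieval)."""
    query_words = set(query.lower().split())

    def overlap(example):
        x, _ = example
        return len(set(x.lower().split()) & query_words)

    return sorted(pool, key=overlap, reverse=True)[:k]


def build_icl_prompt(demonstrations, query):
    """Concatenate selected demonstrations and the query into one prompt."""
    parts = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)


pool = [
    ("translate cat to French", "chat"),
    ("translate dog to French", "chien"),
    ("summarize the report", "short summary"),
]
demos = select_demonstrations(pool, "translate bird to French", k=2)
prompt = build_icl_prompt(demos, "translate bird to French")
```

Noisy annotations in the pool (e.g. a wrong output paired with a selected input) flow directly into the prompt under this scheme, which is why demonstration quality matters for ICL.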
