Effective Demonstration Annotation for In-Context Learning via Language Model-Based Determinantal Point Process
Peng Wang, Xiaobin Wang, Chao Lou, Shengyu Mao, Pengjun Xie, Yong Jiang
–arXiv.org Artificial Intelligence
In-context learning (ICL) is a few-shot learning paradigm that involves learning mappings through input-output pairs and appropriately applying them to new instances. Despite the remarkable ICL capabilities demonstrated by Large Language Models (LLMs), existing works are highly dependent on large-scale labeled support sets, which is not always feasible in practical scenarios. To refine this approach, we focus primarily on an innovative selective annotation mechanism, which precedes the standard demonstration retrieval. We introduce the Language Model-based Determinantal Point Process (LM-DPP), which simultaneously considers the uncertainty and diversity of unlabeled instances for optimal selection. Consequently, this yields a subset for annotation that strikes a trade-off between the two factors. We apply LM-DPP to various language models, including GPT-J, LLaMA, and GPT-3. Experimental results on 9 NLU and 2 generation datasets demonstrate that LM-DPP can effectively select canonical examples. Further analysis reveals that LLMs benefit most significantly from subsets with low uncertainty and high diversity.
Aug-4-2024
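
The selective annotation step described in the abstract can be sketched in code. The following Python snippet is a minimal illustration under stated assumptions, not the paper's implementation: it assumes the LM-DPP kernel decomposes as L = diag(q) S diag(q), where S is a similarity matrix over instance embeddings (capturing diversity) and q is a per-instance quality score derived from language-model uncertainty (low perplexity mapped to a high score), and it uses naive greedy MAP inference to pick the annotation subset. The function names, the perplexity transform, and the greedy routine are illustrative assumptions.

    import numpy as np

    def greedy_dpp_select(similarity, quality, k):
        # Build a DPP kernel L = diag(q) S diag(q): the determinant of a
        # principal submatrix of L is large when the chosen items have high
        # quality (low LM uncertainty) and low mutual similarity (diversity).
        L = quality[:, None] * similarity * quality[None, :]
        n = L.shape[0]
        selected = []
        for _ in range(k):
            best_i, best_logdet = None, -np.inf
            for i in range(n):
                if i in selected:
                    continue
                idx = selected + [i]
                sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
                # Skip candidates whose addition makes the submatrix
                # numerically singular or non-positive.
                if sign > 0 and logdet > best_logdet:
                    best_i, best_logdet = i, logdet
            if best_i is None:
                break  # no remaining candidate improves the subset
            selected.append(best_i)
        return selected

    # Hypothetical usage (the names below are assumptions, not the paper's API):
    #   perplexity = score_with_lm(unlabeled_texts)   # lower = more certain
    #   quality = np.exp(-(perplexity - perplexity.mean()) / perplexity.std())
    #   emb = embed(unlabeled_texts)                  # (n, d), L2-normalized rows
    #   subset = greedy_dpp_select(emb @ emb.T, quality, k=100)

The selected indices would then be sent for annotation and used as the support set for standard demonstration retrieval at inference time.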