Structured Prompting: Scaling In-Context Learning to 1,000 Examples
Yaru Hao, Yutao Sun, Li Dong, Zhixiong Han, Yuxian Gu, Furu Wei
arXiv.org Artificial Intelligence
Large language models have exhibited an intriguing in-context learning capability, achieving promising zero- and few-shot performance without updating any parameters. However, conventional in-context learning is usually restricted by input-length constraints, making it unable to absorb supervision from a large number of examples. To go beyond the few-shot regime, we introduce structured prompting, which breaks the length limit and scales in-context learning to thousands of examples. Specifically, demonstration examples are separately encoded with well-designed position embeddings and then jointly attended to by the test example through a rescaled attention mechanism. As a result, the number of exemplars scales with linear rather than quadratic complexity with respect to input length. Experimental results on a diverse set of tasks show that our approach improves end-task performance and reduces evaluation variance relative to conventional in-context learning as the number of demonstration examples increases. Code has been released at https://aka.ms/structured-prompting.
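To make the mechanism concrete, below is a minimal NumPy sketch of the joint, rescaled attention step described in the abstract. The 1/M rescaling factor, the function name, and all shapes are illustrative assumptions based on the abstract alone, not the paper's exact equations; in the real model each demonstration group would be encoded by the full transformer, with the resulting keys and values cached.

```python
import numpy as np

def rescaled_attention(q, demo_keys, demo_values, test_keys, test_values):
    """Single-head attention for one test-example query vector q of shape (d,).

    demo_keys / demo_values: lists of M arrays, each of shape (len_i, d).
    Each demonstration group is assumed to have been encoded independently
    (reusing the same position ids), so encoding cost grows linearly with M
    rather than quadratically with the total concatenated length.

    Assumption: demonstration contributions are rescaled by 1/M so that all
    M groups together carry roughly the attention mass of a single context.
    This is a hypothetical form of the rescaling, for illustration only.
    """
    M = len(demo_keys)
    d = q.shape[-1]
    keys = np.concatenate(demo_keys + [test_keys], axis=0)
    values = np.concatenate(demo_values + [test_values], axis=0)
    logits = keys @ q / np.sqrt(d)            # (total_len,)
    weights = np.exp(logits - logits.max())   # unnormalized softmax terms
    n_demo = sum(k.shape[0] for k in demo_keys)
    weights[:n_demo] /= M                     # rescale demonstration mass
    weights /= weights.sum()
    return weights @ values                   # (d,) attended output

# Tiny usage example with random arrays standing in for encoder outputs.
rng = np.random.default_rng(0)
d, M = 16, 8
demo_k = [rng.normal(size=(12, d)) for _ in range(M)]
demo_v = [rng.normal(size=(12, d)) for _ in range(M)]
test_k = rng.normal(size=(4, d))
test_v = rng.normal(size=(4, d))
out = rescaled_attention(rng.normal(size=d), demo_k, demo_v, test_k, test_v)
print(out.shape)  # (16,)
```

Because each group is encoded on its own, adding more demonstrations only adds more cached keys and values to attend over; the quadratic self-attention cost is paid within each group, not across the full concatenation, which is consistent with the linear-complexity claim in the abstract.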
Dec-13-2022