Auto-ICL: In-Context Learning without Human Supervision
Jinghan Yang, Shuming Ma, Furu Wei
arXiv.org Artificial Intelligence
In the era of Large Language Models (LLMs), human-computer interaction has shifted toward natural language, offering unprecedented flexibility. Even so, LLMs still depend heavily on well-structured prompts to perform effectively under In-Context Learning. Vanilla In-Context Learning relies on human-provided contexts, such as labeled examples, explicit instructions, or other guiding mechanisms that shape the model's outputs. To remove this dependence on human supervision, our study presents a universal framework named Automatic In-Context Learning (Auto-ICL). Upon receiving a user's request, the model independently generates its own examples, including labels, instructions, or reasoning paths, and then leverages this self-produced context to solve the given problem. The approach is universally adaptable and can be applied in any setting where vanilla In-Context Learning applies. We demonstrate that our method yields strong performance across a range of tasks, comparing favorably with existing methods.
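To make the two-step procedure concrete, here is a minimal Python sketch of the idea described above. The `complete` callable, the prompt wording, and the function names are illustrative assumptions, not the paper's actual templates or API.

```python
from typing import Callable

def auto_icl(query: str, complete: Callable[[str], str], num_examples: int = 3) -> str:
    """Sketch of Auto-ICL: answer `query` using self-generated in-context examples.

    `complete` is a hypothetical stand-in for any LLM completion backend.
    """
    # Step 1: ask the model to produce its own few-shot examples,
    # including labels and step-by-step reasoning, for the task at hand.
    gen_prompt = (
        f"Task: {query}\n"
        f"Write {num_examples} example question-answer pairs for this kind of task, "
        "showing step-by-step reasoning before each final answer."
    )
    self_context = complete(gen_prompt)

    # Step 2: prepend the self-produced context and answer the original query,
    # exactly as vanilla In-Context Learning would with human-provided examples.
    final_prompt = f"{self_context}\n\nQuestion: {query}\nAnswer:"
    return complete(final_prompt)
```

Any completion backend can be plugged in as `complete`, e.g. a thin wrapper around an API client, so no human-written demonstrations are required at inference time.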
Nov 15, 2023