Zero-to-Strong Generalization: Eliciting Strong Capabilities of Large Language Models Iteratively without Gold Labels
Chaoqun Liu, Qin Chao, Wenxuan Zhang, Xiaobao Wu, Boyang Li, Anh Tuan Luu, Lidong Bing
arXiv.org Artificial Intelligence
Large Language Models (LLMs) have demonstrated remarkable performance through supervised fine-tuning or in-context learning with gold labels. However, this paradigm is limited by the availability of gold labels: in certain scenarios, LLMs may need to perform tasks that are too complex for humans to provide such labels. To tackle this challenge, this study explores whether unlabeled data alone can elicit strong model capabilities. We propose a new paradigm termed zero-to-strong generalization: we iteratively prompt LLMs to annotate unlabeled data and retain high-quality labels through filtering. Surprisingly, we observe that this iterative process gradually unlocks LLMs' potential on downstream tasks. Our experiments on a wide range of classification and reasoning tasks confirm the effectiveness of the proposed framework. Our analysis indicates that this paradigm is effective for both in-context learning and fine-tuning, and across various model sizes.
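A minimal sketch of the iterative self-labeling loop described in the abstract, assuming a hypothetical `annotate` callable that wraps an LLM call and returns a predicted label with a confidence score. The confidence-threshold filter and the round count are illustrative assumptions, not the paper's actual filtering criterion or hyperparameters.

```python
from typing import Callable, Iterable, List, Tuple

def zero_to_strong(
    annotate: Callable[[str, List[Tuple[str, str]]], Tuple[str, float]],
    unlabeled: Iterable[str],
    n_rounds: int = 3,
    threshold: float = 0.9,
) -> List[Tuple[str, str]]:
    """Iteratively self-label data starting from zero gold labels,
    keeping only high-confidence predictions as demonstrations
    for the next round.

    `annotate(x, demos)` is a hypothetical LLM wrapper; the paper's
    actual prompting and filtering details may differ.
    """
    demos: List[Tuple[str, str]] = []  # zero-shot start: no gold labels
    for _ in range(n_rounds):
        # Prompt the LLM on each example, conditioned on the current
        # (self-labeled) demonstrations.
        labeled = [(x, *annotate(x, demos)) for x in unlabeled]
        # Filtering step: retain only high-quality (high-confidence) labels.
        demos = [(x, y) for x, y, conf in labeled if conf >= threshold]
    return demos

# The returned demonstrations can then seed in-context prompts or serve
# as a fine-tuning set, matching the two settings the abstract reports.
```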
Sep-18-2024