MAmmoTH2: Scaling Instructions from the Web
Xiang Yue
Neural Information Processing Systems
Instruction tuning improves the reasoning abilities of large language models (LLMs), with data quality and scalability being crucial factors. Most instruction tuning data come from human crowdsourcing or GPT-4 distillation. We propose a paradigm for efficiently harvesting 10 million naturally occurring instruction-response pairs from the pre-training web corpus to enhance LLM reasoning. Our approach involves (1) recalling relevant documents, (2) extracting instruction-response pairs, and (3) refining the extracted pairs with open-source LLMs. Fine-tuning base LLMs on this dataset, we build the MAmmoTH2 models, which significantly boost performance on reasoning benchmarks.
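The abstract outlines a three-step harvesting pipeline (recall, extract, refine). Below is a minimal Python sketch of that loop under stated assumptions: the helper names (`recall_documents`, `extract_pairs`, `refine_pair`), the classifier-threshold recall, and the prompt wording are illustrative, not the paper's actual implementation.

```python
# Minimal sketch of the recall -> extract -> refine pipeline described in the
# abstract. The `classifier` and `llm` callables, prompt text, and parsing
# logic are illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass


@dataclass
class InstructionPair:
    instruction: str
    response: str


def recall_documents(corpus, classifier, threshold=0.5):
    """Step 1: keep web documents that an instruction-likeness classifier scores highly."""
    return [doc for doc in corpus if classifier(doc) >= threshold]


def extract_pairs(doc, llm):
    """Step 2: prompt an open-source LLM to pull Q&A-style pairs out of a document."""
    prompt = (
        "Extract every self-contained question and its answer from the text below.\n"
        f"Text:\n{doc}\n"
        "Return one 'Q: ... A: ...' block per pair."
    )
    raw = llm(prompt)
    pairs = []
    for block in raw.split("Q:")[1:]:
        if "A:" in block:
            question, answer = block.split("A:", 1)
            pairs.append(InstructionPair(question.strip(), answer.strip()))
    return pairs


def refine_pair(pair, llm):
    """Step 3: ask the LLM to clean up the answer and fill in missing reasoning steps."""
    prompt = (
        "Rewrite the answer with clear step-by-step reasoning, fixing any errors.\n"
        f"Question: {pair.instruction}\nAnswer: {pair.response}"
    )
    return InstructionPair(pair.instruction, llm(prompt))


def harvest(corpus, classifier, llm):
    """Run the full pipeline and return the refined instruction-tuning dataset."""
    dataset = []
    for doc in recall_documents(corpus, classifier):
        for pair in extract_pairs(doc, llm):
            dataset.append(refine_pair(pair, llm))
    return dataset
```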