What are the Essential Factors in Crafting Effective Long Context Multi-Hop Instruction Datasets? Insights and Best Practices
Zhi Chen, Qiguang Chen, Libo Qin, Qipeng Guo, Haijun Lv, Yicheng Zou, Wanxiang Che, Hang Yan, Kai Chen, Dahua Lin
arXiv.org Artificial Intelligence
Recent advancements in large language models (LLMs) with extended context windows have significantly improved tasks such as information extraction, question answering, and complex planning. To succeed at long-context tasks, substantial effort has gone into enhancing models' long-context capabilities through synthetic data. Existing methods typically use the Self-Instruct framework to generate instruction-tuning data for long-context capability improvement. However, our preliminary experiments indicate that less than 35% of the generated samples are multi-hop, and more than 40% exhibit poor quality, limiting comprehensive understanding and further research. To improve the quality of synthetic data, we propose the Multi-agent Interactive Multihop Generation (MIMG) framework, which incorporates a Quality Verification Agent, a Single-hop Question Generation Agent, a Multiple Question Sampling Strategy, and a Multi-hop Question Merger Agent. This framework raises data quality, with the proportion of high-quality, multi-hop, and diverse data exceeding 85%. Furthermore, we systematically investigate strategies for document selection, question merging, and validation through extensive experiments across various models. Our findings show that our synthetic high-quality long-context instruction data significantly enhances model performance, even surpassing models trained on larger amounts of human-annotated data. Our code is available at: https://github.com/WowCZ/LongMIT.

Recently, large language models (LLMs) with long context windows have significantly improved tasks such as information extraction, question answering, and even complex planning scenarios (Liu et al., 2024a; Bai et al., 2024b; Hu et al., 2023; 2024; Xu et al., 2024b). Research on developing long-context LLMs has predominantly focused on extending the context window (Ding et al., 2024; Jin et al., 2024; Peng et al., 2024).
Nevertheless, in practical applications, simply expanding the context window proves inadequate (Hsieh et al., 2024; Huang, 2024). There is a pressing need for training that optimizes the utilization of long contexts (Zhang et al., 2024), especially through instruction tuning (Fu et al., 2024b).
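The four-agent pipeline named in the abstract can be illustrated with a minimal sketch. This is a hypothetical rendering, not the paper's implementation: in MIMG each stage is an LLM-backed agent, whereas here every stage is a plain placeholder function so the overall data flow (generate single-hop questions, sample a subset, merge them into a multi-hop question, then verify quality) is runnable end to end.

```python
# Hypothetical sketch of the MIMG data flow; all function bodies are
# stand-ins for the paper's LLM-backed agents.

def generate_single_hop(documents):
    """Single-hop Question Generation Agent: one question per document."""
    return [f"What does document {i} say about its topic?"
            for i, _ in enumerate(documents)]

def sample_questions(questions, k=2):
    """Multiple Question Sampling Strategy: choose k questions to combine."""
    return questions[:k]

def merge_questions(questions):
    """Multi-hop Question Merger Agent: chain sampled questions into one."""
    return " and then ".join(questions)

def verify_quality(question, min_hops=2):
    """Quality Verification Agent: keep only plausibly multi-hop items."""
    return question.count(" and then ") >= min_hops - 1

def mimg_pipeline(documents):
    """Run the four stages in sequence; return None if verification fails."""
    single_hop = generate_single_hop(documents)
    sampled = sample_questions(single_hop)
    multi_hop = merge_questions(sampled)
    return multi_hop if verify_quality(multi_hop) else None

docs = ["doc about topic A", "doc about topic B"]
print(mimg_pipeline(docs))
```

In this sketch a single-document input yields no multi-hop question and is filtered out, mirroring the verification step's role of rejecting the low-quality, single-hop samples the abstract reports from Self-Instruct-style generation.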
Sep-3-2024