TL-Training: A Task-Feature-Based Framework for Training Large Language Models in Tool Use
Ye, Junjie, Wu, Yilong, Li, Sixian, Yang, Yuming, Gui, Tao, Zhang, Qi, Huang, Xuanjing, Wang, Peng, Shi, Zhongchao, Fan, Jianping, Du, Zhengyin
Large language models (LLMs) have achieved remarkable advances by leveraging tools to interact with external environments, a critical step toward generalized AI. However, the standard supervised fine-tuning (SFT) approach, which relies on large-scale datasets, often overlooks task-specific characteristics of tool use, leading to performance bottlenecks. To address this issue, we analyze three existing LLMs and uncover key insights: training data can inadvertently impede tool-use behavior, token importance is distributed unevenly, and errors in tool calls fall into a small set of distinct categories. Building on these findings, we propose TL-Training, a task-feature-based framework that mitigates the effects of suboptimal training data, dynamically adjusts token weights to prioritize key tokens during SFT, and incorporates a robust reward mechanism tailored to error categories, optimized through proximal policy optimization. We validate TL-Training by training CodeLLaMA-2-7B and evaluating it on four diverse open-source test sets. Our results demonstrate that the LLM trained with our method matches or surpasses both open- and closed-source LLMs in tool-use performance using only 1,217 training data points. Additionally, our method enhances robustness in noisy environments and improves general task performance, offering a scalable and efficient paradigm for tool-use training in LLMs. The code and data are available at https://github.com/Junjie-Ye/TL-Training.
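The abstract's idea of dynamically adjusting token weights during SFT can be pictured as a token-weighted cross-entropy loss. The sketch below is a minimal illustration under that assumption, not the paper's actual implementation: the function name `weighted_sft_loss` and the `token_weights` tensor (e.g., larger weights on tool-call names and arguments) are hypothetical.

```python
import torch
import torch.nn.functional as F


def weighted_sft_loss(logits, labels, token_weights):
    """Illustrative token-weighted cross-entropy for SFT (hypothetical helper).

    logits:        (batch, seq_len, vocab) model outputs
    labels:        (batch, seq_len) target token ids, -100 at ignored positions
    token_weights: (batch, seq_len) per-token importance weights
    """
    # Shift so each position predicts the next token, as in standard causal LM training.
    logits = logits[:, :-1, :].contiguous()
    labels = labels[:, 1:].contiguous()
    weights = token_weights[:, 1:].contiguous()

    # Per-token loss, kept unreduced so weights can be applied afterwards.
    per_token = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
        reduction="none",
    ).view(labels.shape)

    mask = (labels != -100).float()
    # Weighted average: higher-weight (key) tokens contribute more to the gradient.
    return (per_token * weights * mask).sum() / (weights * mask).sum().clamp(min=1.0)
```

In this reading, tokens deemed critical for correct tool invocation receive larger weights, while boilerplate tokens contribute less to the gradient; how weights are actually assigned is specified in the paper, not here.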
arXiv.org Artificial Intelligence
Dec-19-2024