Beyond Similarity: A Gradient-based Graph Method for Instruction Tuning Data Selection

Yang Zhao, Li Du, Xiao Ding, Yangou Ouyang, Hepeng Wang, Kai Xiong, Jinglong Gao, Zhouhao Sun, Dongliang Xu, Yang Qing, Dongchen Li, Bing Qin, Ting Liu

arXiv.org Artificial Intelligence 

Large language models (LLMs) have shown great potential across various industries due to their remarkable ability to generalize through instruction tuning. However, the limited availability of domain-specific data significantly hampers their performance on specialized tasks. While existing methods primarily focus on selecting training data from general datasets that are similar to the target domain, they often fail to consider the joint distribution of instructions, resulting in inefficient learning and suboptimal knowledge transfer. To address these challenges, we introduce G2IS (Gradient-based Graph Instruction Selection), a novel method that constructs a mixed gradient-based instruction graph to capture the joint distribution and interdependencies between instructions. By accounting for the relationships between instructions, G2IS improves domain adaptation efficiency. Additionally, we propose a gradient walk algorithm to refine the data selection process, enhancing both training effectiveness and efficiency. Our experiments demonstrate that G2IS outperforms traditional methods across various domain adaptation tasks, yielding significant performance gains, particularly in complex, data-scarce scenarios. These results underscore the potential of G2IS in advancing the development of large, domain-specific models.
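The abstract gives no implementation details, so the following is only a loose sketch of the general idea: represent each instruction by a per-example gradient feature vector, connect instructions whose gradients are similar, and select data by walking the resulting weighted graph. All function names, the cosine-similarity edge weights, and the visit-count selection rule are assumptions for illustration, not the paper's actual G2IS or gradient walk algorithm.

```python
import numpy as np

def build_instruction_graph(grads, threshold=0.5):
    """Build a weighted graph over instructions from gradient features.

    grads: (n, d) array, one (approximate) gradient vector per instruction.
    Edges connect pairs whose cosine similarity exceeds `threshold`
    (a hypothetical choice; the paper's construction may differ).
    """
    unit = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    sim = unit @ unit.T                      # pairwise cosine similarity
    np.fill_diagonal(sim, 0.0)               # no self-loops
    return np.where(sim >= threshold, sim, 0.0)

def walk_select(adj, start, k, steps=200, rng=None):
    """Select k instructions via an edge-weighted random walk.

    The walk moves to neighbors with probability proportional to edge
    weight and jumps uniformly from dead-end nodes; the k most-visited
    nodes are returned. This is a generic weighted walk, not the
    paper's gradient walk.
    """
    rng = np.random.default_rng(rng)
    n = adj.shape[0]
    visits = np.zeros(n)
    node = start
    for _ in range(steps):
        visits[node] += 1
        w = adj[node]
        total = w.sum()
        if total == 0.0:                     # dead end: restart uniformly
            node = rng.integers(n)
        else:
            node = rng.choice(n, p=w / total)
    return np.argsort(-visits)[:k]
```

For example, with three instructions whose gradient features are `[1, 0]`, `[0.9, 0.1]`, and `[0, 1]`, only the first two are connected at `threshold=0.5`, so a walk started at node 0 concentrates its visits on that pair.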