LAMDAS: LLM as an Implicit Classifier for Domain-specific Data Selection
Jian Wu, Hang Yu, Bingchang Liu, Wenjie Yang, Peng Di, Jianguo Li, Yue Zhang
arXiv.org Artificial Intelligence
Adapting large language models (LLMs) to specific domains often faces a critical bottleneck: the scarcity of high-quality, human-curated data. While large volumes of unchecked data are readily available, using them indiscriminately for fine-tuning risks introducing noise and degrading performance. Strategic data selection is thus crucial, requiring a method that is both accurate and efficient. Existing approaches, categorized as similarity-based and direct optimization methods, struggle to achieve both goals simultaneously. In this paper, we introduce LAMDAS (LLM As an iMplicit classifier for domain-specific DAta Selection), a novel approach that leverages the pre-trained LLM itself as an implicit classifier, thereby bypassing explicit feature engineering and computationally intensive optimization processes. LAMDAS reframes data selection as a one-class classification problem, identifying candidate data that "belongs" to the target domain defined by a small reference dataset. Extensive experimental results demonstrate that LAMDAS not only exceeds the performance of full-data training while using a fraction of the data but also outperforms nine state-of-the-art (SOTA) baselines across various scenarios. Furthermore, LAMDAS achieves the most compelling balance between performance gains and computational efficiency among all evaluated baselines.
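The one-class framing in the abstract can be illustrated with a toy sketch. This is not the paper's actual algorithm: the function name `select_in_domain`, the use of per-example LLM scores (e.g., log-likelihoods), and the quantile-based threshold are all assumptions made here for illustration. The idea is to calibrate a threshold on scores from the small reference dataset, then keep candidates whose scores fall inside the region the reference set defines.

```python
def select_in_domain(candidate_scores, reference_scores, quantile=0.1):
    """Hypothetical one-class selection sketch (not the paper's method).

    candidate_scores: per-example scores for the unchecked pool,
        e.g., log-likelihoods under the pre-trained LLM (assumption).
    reference_scores: scores for the small in-domain reference set.
    quantile: fraction of the lowest reference scores tolerated as
        outliers when setting the acceptance threshold.
    """
    ref = sorted(reference_scores)
    # Threshold at a low quantile of the reference scores, so candidates
    # scoring at least as high as most reference examples are accepted.
    k = int(quantile * (len(ref) - 1))
    threshold = ref[k]
    return [i for i, s in enumerate(candidate_scores) if s >= threshold]


# Toy usage: only candidate 1 scores within the reference region.
selected = select_in_domain(
    candidate_scores=[0.2, 0.9, 0.5],
    reference_scores=[0.8, 0.7, 0.95],
)
```

In this sketch, selection cost is one scoring pass over the pool plus a sort of the reference scores, which reflects the abstract's claim of avoiding expensive per-candidate optimization.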
Sep-9-2025