Swift Cross-Dataset Pruning: Enhancing Fine-Tuning Efficiency in Natural Language Understanding
arXiv.org Artificial Intelligence
Dataset pruning aims to select a subset of a dataset for efficient model training. While data efficiency in natural language processing has primarily focused on within-corpus scenarios during model pre-training, efficient dataset pruning for task-specific fine-tuning across diverse datasets remains challenging due to variability in dataset sizes, data distributions, class imbalance, and label spaces. Current cross-dataset pruning techniques for fine-tuning often rely on computationally expensive sample-ranking processes, typically requiring full-dataset training or reference models. We address this gap by proposing Swift Cross-Dataset Pruning (SCDP). Specifically, our approach uses TF-IDF embeddings with the geometric median to rapidly evaluate sample importance. We then apply dataset-size-adaptive pruning to ensure diversity: for smaller datasets, we retain samples far from the geometric median, while for larger ones, we employ distance-based stratified pruning. Experimental results on six diverse datasets spanning various tasks and scales demonstrate the effectiveness of our method while significantly reducing computational resources. Source code is available at: https://github.com/he-y/NLP-Dataset-Pruning
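The abstract's pipeline can be sketched in plain Python: embed each sample with TF-IDF, find the corpus geometric median (here via Weiszfeld's algorithm), score samples by their distance to it, and prune adaptively by dataset size. This is a minimal illustration of the idea only; the featurizer, the `small_threshold` cutoff, and the function names are assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Illustrative sketch of SCDP-style scoring as described in the abstract.
# The simple whitespace TF-IDF, the size threshold, and all names here are
# assumptions for demonstration, not the paper's actual code.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Plain TF-IDF over lowercased whitespace tokens (illustrative featurizer)."""
    tokenized = [doc.lower().split() for doc in docs]
    vocab = sorted({t for toks in tokenized for t in toks})
    df = Counter(t for toks in tokenized for t in set(toks))
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append([tf[t] / len(toks) * math.log((1 + n) / (1 + df[t]))
                     for t in vocab])
    return vecs

def geometric_median(vecs, iters=100, eps=1e-8):
    """Weiszfeld's algorithm: iteratively re-weighted mean converging to the geometric median."""
    m = [sum(col) / len(vecs) for col in zip(*vecs)]
    for _ in range(iters):
        total_w, num = 0.0, [0.0] * len(m)
        for v in vecs:
            w = 1.0 / max(math.dist(v, m), eps)  # inverse-distance weight
            total_w += w
            num = [a + w * b for a, b in zip(num, v)]
        m = [a / total_w for a in num]
    return m

def scdp_select(docs, keep_ratio=0.5, small_threshold=1000):
    """Size-adaptive pruning: keep far-from-median samples for small datasets,
    distance-stratified samples for large ones. Threshold value is an assumption."""
    vecs = tfidf_vectors(docs)
    med = geometric_median(vecs)
    dists = [math.dist(v, med) for v in vecs]
    k = max(1, int(len(docs) * keep_ratio))
    if len(docs) <= small_threshold:
        # Small dataset: retain the k samples farthest from the geometric median.
        order = sorted(range(len(docs)), key=lambda i: dists[i], reverse=True)
        return sorted(order[:k])
    # Large dataset: sort by distance, split into k strata, take one per stratum
    # so the kept subset covers the whole distance range.
    ranked = sorted(range(len(docs)), key=lambda i: dists[i])
    step = len(docs) / k
    return sorted(ranked[int(i * step)] for i in range(k))
```

A toy call such as `scdp_select(["the cat sat", "the dog ran", "the cat ran fast", "quantum entanglement physics"], keep_ratio=0.5, small_threshold=10)` keeps the two samples farthest from the median, which includes the topical outlier.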
Jan-4-2025