DRUPI: Dataset Reduction Using Privileged Information

Shaobo Wang, Yantai Yang, Shuaiyu Zhang, Chenghao Sun, Weiya Li, Xuming Hu, Linfeng Zhang

arXiv.org Artificial Intelligence 

Dataset reduction (DR) seeks to select or distill samples from large datasets into smaller subsets while preserving performance on target tasks. Existing methods primarily focus on pruning or synthesizing data in the same format as the original dataset, typically the input data and corresponding labels. However, in DR settings, we find it is possible to synthesize information beyond the data-label pair as an additional learning target to facilitate model training. In this paper, we introduce Dataset Reduction Using Privileged Information (DRUPI), which enriches DR by synthesizing privileged information alongside the reduced dataset. This privileged information can take the form of feature labels or attention labels, providing auxiliary supervision to improve model learning. Our findings reveal that effective feature labels should be neither overly discriminative nor excessively diverse; a moderate level of both proves optimal for improving the reduced dataset's efficacy. Extensive experiments on ImageNet, CIFAR-10/100, and Tiny ImageNet demonstrate that DRUPI integrates seamlessly with existing dataset reduction methods, offering significant performance gains. The code will be released after the paper is accepted.

Dataset Reduction (DR) has attracted considerable attention in recent years, with the primary aim of compressing large datasets into smaller subsets while maintaining comparable statistical performance. Existing methods for DR can be broadly classified into two main categories: coreset selection and dataset distillation. In typical real-world scenarios, training models for target tasks is generally constrained to input data (e.g., images) and their corresponding labels, as these are the most readily available elements.
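To make the auxiliary-supervision idea concrete, below is a minimal PyTorch-style sketch of one plausible instantiation: each distilled sample carries an image, a class label, and a synthesized feature label, and the student is trained with a cross-entropy term plus a feature-matching term. The abstract does not specify DRUPI's actual objective or interfaces, so all names here (`backbone`, `classifier`, `drupi_style_step`, `lambda_feat`) and the MSE form of the auxiliary loss are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def drupi_style_step(backbone, classifier, optimizer,
                     images, labels, feature_labels, lambda_feat=0.1):
    """One training step on a distilled batch with privileged feature labels.

    A sketch, not DRUPI's actual method: assumes `backbone` maps images to
    intermediate features whose shape matches `feature_labels`, and
    `classifier` maps those features to class logits.
    """
    optimizer.zero_grad()
    feats = backbone(images)                 # student's intermediate features
    logits = classifier(feats)
    ce = F.cross_entropy(logits, labels)     # standard data-label supervision
    # Auxiliary supervision: match the student's features to the synthesized
    # feature labels (the paper also considers attention labels).
    aux = F.mse_loss(feats, feature_labels)
    loss = ce + lambda_feat * aux
    loss.backward()
    optimizer.step()
    return loss.item()
```

Whether the privileged target is a full feature map or a pooled vector, and how the weight `lambda_feat` is chosen, are design choices the sketch leaves open; it only fixes the overall shape of the objective, namely task loss plus an auxiliary term supervised by the synthesized privileged information.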