Generative Dataset Distillation Based on Self-knowledge Distillation
Longzhen Li, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
arXiv.org Artificial Intelligence
Dataset distillation is an effective technique for reducing the cost and complexity of model training while maintaining performance by compressing large datasets into smaller, more efficient versions. In this paper, we present a novel generative dataset distillation method that can improve the accuracy of aligning prediction logits. Our approach integrates self-knowledge distillation to achieve more precise distribution matching between the synthetic and original data, …

Generative dataset distillation aims to condense the information from large-scale datasets into a generative model rather than a static dataset [16, 17]. Unlike traditional dataset distillation methods, which produce a smaller fixed dataset, generative dataset distillation trains a model capable of generating effective synthetic data on the fly [18]. This approach has been shown to offer better cross-architecture performance compared to traditional methods, while also providing greater flexibility in the data it generates. The generative dataset distillation process typically consists of two steps.
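The excerpt does not give the paper's concrete loss formulation, so the PyTorch sketch below only illustrates the two ingredients the abstract names: aligning prediction logits between synthetic and original data, and a self-knowledge-distillation term in which a snapshot of the model itself serves as teacher. All identifiers (kd_loss, SelfKD, temperature, momentum, alpha) are hypothetical, and the EMA-snapshot teacher is one common variant of self-knowledge distillation, not necessarily the paper's exact design.

```python
import copy
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            temperature: float = 4.0) -> torch.Tensor:
    """Temperature-scaled KL divergence between softened logit distributions."""
    log_p = F.log_softmax(student_logits / temperature, dim=-1)
    q = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p, q, reduction="batchmean") * temperature ** 2

class SelfKD:
    """Self-knowledge distillation using an EMA snapshot of the model as teacher
    (an assumed variant; the paper's exact self-teacher is not shown here)."""

    def __init__(self, model: torch.nn.Module, momentum: float = 0.99):
        self.model = model
        self.teacher = copy.deepcopy(model).eval()
        for p in self.teacher.parameters():
            p.requires_grad_(False)
        self.momentum = momentum

    @torch.no_grad()
    def update_teacher(self) -> None:
        # Exponential moving average of the student's weights.
        for p_t, p_s in zip(self.teacher.parameters(), self.model.parameters()):
            p_t.mul_(self.momentum).add_(p_s, alpha=1.0 - self.momentum)

    def loss(self,
             synthetic_images: torch.Tensor,
             real_logits: torch.Tensor,
             alpha: float = 0.5) -> torch.Tensor:
        # Student logits on generator-produced synthetic images.
        syn_logits = self.model(synthetic_images)
        with torch.no_grad():
            # Softened predictions of the model's own past self.
            teacher_logits = self.teacher(synthetic_images)
        align = kd_loss(syn_logits, real_logits)       # match logits on original data
        self_kd = kd_loss(syn_logits, teacher_logits)  # self-distillation term
        return (1.0 - alpha) * align + alpha * self_kd
```

In a generative distillation loop, a loss of this shape would presumably be backpropagated into the generator and/or student network, with update_teacher() called after each optimizer step so the self-teacher trails the student smoothly.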
Jan-7-2025