Dataset Distillation Efficiently Encodes Low-Dimensional Representations from Gradient-Based Learning of Non-Linear Tasks
Kinoshita, Yuri, Nishikawa, Naoki, Toyoizumi, Taro
Dataset distillation, a training-aware data compression technique, has recently attracted increasing attention as an effective tool for mitigating the costs of optimization and data storage. However, progress remains largely empirical: the mechanisms by which task-relevant information is extracted from the training process and efficiently encoded into synthetic data points remain elusive. In this paper, we theoretically analyze practical dataset distillation algorithms applied to the gradient-based training of two-layer neural networks of width $L$. Focusing on a non-linear task structure called the multi-index model, we prove that the low-dimensional structure of the problem is efficiently encoded into the resulting distilled data. This dataset reproduces a model with high generalization ability at a memory complexity of $\tilde{\Theta}(r^2 d + L)$, where $d$ and $r$ are the input and intrinsic dimensions of the task. To the best of our knowledge, this is one of the first theoretical works that incorporates a specific task structure, leverages its intrinsic dimensionality to quantify the compression rate, and studies dataset distillation implemented solely via gradient-based algorithms.
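The gradient-based distillation the abstract refers to can be illustrated with a generic gradient-matching loop: optimize a small synthetic set so that the loss gradient it induces matches the gradient induced by the full real dataset. The sketch below is a minimal toy version on a linear regression task, not the paper's two-layer, multi-index setting; the dimensions, learning rates, and the choice to learn only the synthetic labels are illustrative assumptions.

```python
import numpy as np

# Toy gradient-matching dataset distillation on a linear task.
# Assumption-laden sketch: linear model, noiseless labels, fixed synthetic
# inputs (only synthetic labels are learned) -- chosen so the matching
# objective is convex and the loop provably converges.

rng = np.random.default_rng(0)
d, n, m = 16, 2000, 256                 # input dim, real points, distilled points
w_true = rng.normal(size=d)             # hypothetical teacher weights
X = rng.normal(size=(n, d))
y = X @ w_true                          # real (noiseless) training set
Xs = rng.normal(size=(m, d))            # distilled inputs: kept fixed here
ys = np.zeros(m)                        # distilled labels: these are learned

def grad(Xm, ym, w):
    """Gradient of the loss 0.5 * mean((Xm @ w - ym)**2) at weights w."""
    return Xm.T @ (Xm @ w - ym) / len(ym)

# Distillation loop: at random probe weights, push the synthetic-set gradient
# toward the real-set gradient, i.e. descend ||g_syn - g_real||^2 in ys.
for t in range(4000):
    w = rng.normal(size=d)                        # random probe weights
    diff = grad(Xs, ys, w) - grad(X, y, w)
    ys -= (1.0 / (1 + t / 1000)) * (-(2 / m) * Xs @ diff)

# Train a fresh model from scratch on the m distilled points only.
w_hat = np.zeros(d)
for _ in range(300):
    w_hat -= 0.3 * grad(Xs, ys, w_hat)

# The distilled-trained model should generalize far better than predicting 0.
X_test = rng.normal(size=(500, d))
err = np.mean((X_test @ w_hat - X_test @ w_true) ** 2)
base = np.mean((X_test @ w_true) ** 2)
```

Here the m = 256 synthetic points stand in for the n = 2000 real ones; in the paper's setting the analogous compression is quantified by the $\tilde{\Theta}(r^2 d + L)$ memory complexity.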
Diversity-Driven Synthesis: Enhancing Dataset Distillation through Directed Weight Adjustment
To avoid redundancy in synthetic datasets, it is crucial that each element contains unique features and remains distinct from the others during the synthesis stage. In this paper, we provide a thorough theoretical and empirical analysis of diversity within synthesized datasets. We argue that enhancing diversity can improve the parallelizable yet isolated synthesis approach.