Dataset Distillation with Convexified Implicit Gradients
Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus
We propose a new dataset distillation algorithm using reparameterization and convexification of implicit gradients (RCIG) that substantially improves the state-of-the-art. To this end, we first formulate dataset distillation as a bi-level optimization problem. Then, we show how implicit gradients can be used effectively to compute meta-gradient updates. We further equip the algorithm with a convexified approximation that corresponds to learning on top of a frozen finite-width neural tangent kernel. Finally, we reduce the bias in the implicit gradients by parameterizing the neural network so that the final-layer parameters can be computed analytically given the body parameters. RCIG establishes a new state-of-the-art on a diverse set of dataset distillation tasks. Notably, with one image per class on resized ImageNet, RCIG achieves on average a 108% improvement over the previous state-of-the-art distillation algorithm. Similarly, we observe a 66% gain over the prior state-of-the-art on Tiny-ImageNet and a 37% gain on CIFAR-100.
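As a brief illustration of the bi-level formulation and the implicit-gradient meta-update mentioned in the abstract, the following sketch uses our own notation (D_syn, D_real, theta, L), which is not taken from the paper, and assumes the outer loss depends on the synthetic data only through the trained parameters:

\[
\mathcal{D}_{\text{syn}}^{*} \;=\; \operatorname*{arg\,min}_{\mathcal{D}_{\text{syn}}}\; \mathcal{L}_{\text{outer}}\!\left(\theta^{*}(\mathcal{D}_{\text{syn}});\, \mathcal{D}_{\text{real}}\right)
\quad \text{s.t.} \quad
\theta^{*}(\mathcal{D}_{\text{syn}}) \;=\; \operatorname*{arg\,min}_{\theta}\; \mathcal{L}_{\text{inner}}\!\left(\theta;\, \mathcal{D}_{\text{syn}}\right),
\]

with the meta-gradient obtained via the implicit function theorem rather than by unrolling inner training:

\[
\frac{d\mathcal{L}_{\text{outer}}}{d\mathcal{D}_{\text{syn}}}
\;=\; -\,\frac{\partial \mathcal{L}_{\text{outer}}}{\partial \theta}\bigg|_{\theta^{*}}
\left(\frac{\partial^{2}\mathcal{L}_{\text{inner}}}{\partial \theta\,\partial \theta^{\top}}\right)^{-1}
\frac{\partial^{2}\mathcal{L}_{\text{inner}}}{\partial \theta\,\partial \mathcal{D}_{\text{syn}}}.
\]

The inverse-Hessian term is where a convex inner problem helps; the convexified (frozen finite-width NTK) approximation described in the abstract can be read as making this inner problem well-behaved, though the exact form used in the paper is not spelled out here.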
Nov-9-2023