


A Experimental Setup

Neural Information Processing Systems

A.2 Training Settings of Teacher: We provide training settings of the teacher w.r.t. … In practice, we do not optimize the student and the generator via the plain losses in Eq. 4 and Eq. 6 … Number of steps for pretraining G; δ: the bound in Eqs. … A.4 Generator Architectures: In Table 8, we show different architectures of the generator w.r.t. … ResNetBlockY are provided in Table 9. ConvBlockX(c…) … This is because the "uncond" generator has learned to jump … The "sum" generator enables stable training of our model and gives the best accuracy and cross-entropy. The "cat" generator only yields good results at … The "uncond" generator does not encounter any problem with MAD … learn faster than the "cat" generator. An important question is "What is a reasonable upper bound …"
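The snippet above mentions three generator conditioning variants ("sum", "cat", "uncond"). The sketch below is a minimal, hypothetical PyTorch illustration of what such variants typically mean when building the generator input from a noise vector and a class label; the class name ConditionalInput, the dimensions noise_dim and num_classes, and the wiring are assumptions for illustration, not the architectures of Tables 8 and 9 in the paper.

import torch
import torch.nn as nn

class ConditionalInput(nn.Module):
    """Build the generator input from noise z and, optionally, a class label y."""

    def __init__(self, noise_dim=100, num_classes=10, mode="sum"):
        super().__init__()
        self.mode = mode
        if mode in ("sum", "cat"):
            # Assumed: the label is embedded into the same space as the noise vector.
            self.embed = nn.Embedding(num_classes, noise_dim)
        elif mode == "uncond":
            self.embed = None  # no label information is used at all
        else:
            raise ValueError(f"unknown mode: {mode}")
        self.out_dim = 2 * noise_dim if mode == "cat" else noise_dim

    def forward(self, z, y=None):
        if self.mode == "uncond":
            return z
        e = self.embed(y)
        # "sum": add the label embedding to the noise; "cat": concatenate the two.
        return z + e if self.mode == "sum" else torch.cat([z, e], dim=1)

For example, ConditionalInput(mode="cat")(torch.randn(4, 100), torch.randint(0, 10, (4,))) yields a tensor of shape (4, 200), whereas the "sum" and "uncond" modes keep the input at the noise dimension.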


Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation

Do, Kien, Le, Hung, Nguyen, Dung, Nguyen, Dang, Harikumar, Haripriya, Tran, Truyen, Rana, Santu, Venkatesh, Svetha

arXiv.org Artificial Intelligence

Data-free Knowledge Distillation (DFKD) has attracted attention recently thanks to its appealing capability of transferring knowledge from a teacher network to a student network without using training data. The main idea is to use a generator to synthesize data for training the student. As the generator gets updated, the distribution of the synthetic data will change. Such a distribution shift can be large if the generator and the student are trained adversarially, causing the student to forget the knowledge it acquired in previous steps. To alleviate this problem, we propose a simple yet effective method called Momentum Adversarial Distillation (MAD), which maintains an exponential moving average (EMA) copy of the generator and uses synthetic samples from both the generator and the EMA generator to train the student. Since the EMA generator can be considered an ensemble of the generator's old versions and often undergoes smaller changes between updates than the generator, training on its synthetic samples helps the student recall past knowledge and prevents it from adapting too quickly to new updates of the generator. Our experiments on six benchmark datasets, including large datasets such as ImageNet and Places365, demonstrate the superior performance of MAD over competing methods for handling the large distribution shift problem. Our method also compares favorably with existing DFKD methods and even achieves state-of-the-art results in some cases.
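The abstract describes the core mechanism: an EMA copy of the generator is maintained alongside the generator, and the student is trained on synthetic samples drawn from both. Below is a minimal PyTorch sketch of that idea, assuming a frozen teacher and some distillation loss kd_loss; the decay value 0.999, the equal weighting of the two sample sources, the function names ema_update and student_step, and how the noise z is sampled are all assumptions for illustration, not the paper's exact training procedure (the losses in Eqs. 4 and 6 are not reproduced here).

import torch

@torch.no_grad()
def ema_update(ema_gen, gen, decay=0.999):
    """Blend the EMA generator's weights toward the current generator's weights."""
    for p_ema, p in zip(ema_gen.parameters(), gen.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1.0 - decay)

def student_step(teacher, student, gen, ema_gen, opt_s, kd_loss, z):
    """One student update on synthetic samples from both generators."""
    x_new = gen(z)      # samples from the current (adversarially updated) generator
    x_old = ema_gen(z)  # samples from the slowly moving EMA copy
    loss = kd_loss(student(x_new), teacher(x_new).detach()) \
         + kd_loss(student(x_old), teacher(x_old).detach())
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()
    return loss.item()

# Typical wiring (assumed): the EMA generator starts as a frozen deep copy of the
# generator and is refreshed after every generator update, e.g.
#   ema_gen = copy.deepcopy(gen)
#   for p in ema_gen.parameters():
#       p.requires_grad_(False)
#   ... after each generator step: ema_update(ema_gen, gen)

Because the EMA weights change slowly, x_old keeps exposing the student to data resembling earlier generator versions, which is the mechanism the abstract credits with reducing forgetting under large distribution shifts.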