Creating a Good Teacher for Knowledge Distillation in Acoustic Scene Classification

Morocutti, Tobias, Schmid, Florian, Koutini, Khaled, Widmer, Gerhard

arXiv.org Artificial Intelligence 

The DCASE23 challenge's [1] Low-Complexity Acoustic Scene Classification task focuses on the TAU Urban Acoustic Scenes 2022 Mobile development dataset (TAU22) [2]. This dataset comprises one-second audio snippets from ten distinct acoustic scenes. To make the models deployable on edge devices, a complexity limit is enforced: models are constrained to no more than 128,000 parameters and 30 million multiply-accumulate operations (MMACs) for the inference of a one-second audio snippet. Among other model compression techniques such as quantization [3] and pruning [4], Knowledge Distillation (KD) [5-7] has proved to be a particularly well-suited technique for improving the performance of a low-complexity model in ASC.

In a standard KD setting, a low-complexity student model learns to mimic the teacher by minimizing a weighted sum of hard-label loss and distillation loss. While the soft targets are usually obtained from one or multiple, possibly complex, teacher models, the distillation loss matches the student predictions to the computed soft targets based on the Kullback-Leibler divergence. Jung et al. [8] demonstrate that soft targets in a teacher-student setup benefit the learning process, since one-hot labels do not reflect the blurred decision boundaries between different acoustic scenes. Knowledge distillation has also been a very popular method in DCASE challenge submissions.
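The weighted KD objective described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weighting factor `alpha`, the temperature `T`, and the `T**2` gradient-scaling convention are common choices assumed here for concreteness.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; a higher T yields softer distributions.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, hard_label, alpha=0.5, T=2.0):
    """Weighted sum of hard-label cross-entropy and distillation loss.

    The distillation term is the KL divergence between the teacher's and
    the student's temperature-softened predictions; scaling it by T**2 is
    a common convention that keeps gradient magnitudes comparable.
    alpha and T are illustrative hyperparameters, not values from the paper.
    """
    # Hard-label loss: cross-entropy between student output and one-hot label.
    p_student = softmax(student_logits)
    ce = -np.log(p_student[hard_label] + 1e-12)

    # Distillation loss: KL(teacher soft targets || student soft predictions).
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)))

    return alpha * ce + (1.0 - alpha) * (T ** 2) * kl

# Toy 3-class example: student and teacher logits plus the ground-truth label.
loss = kd_loss([2.0, 0.5, -1.0], [1.5, 1.0, -0.5], hard_label=0)
```

When the student matches the teacher exactly, the KL term vanishes and only the hard-label cross-entropy remains.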