Bayes Conditional Distribution Estimation for Knowledge Distillation Based on Conditional Mutual Information
Linfeng Ye, Shayan Mohajer Hamidi, Renhao Tan, En-Hui Yang
It is believed that in knowledge distillation (KD), the role of the teacher is to provide an estimate of the unknown Bayes conditional probability distribution (BCPD) to be used in the student's training process. Conventionally, this estimate is obtained by training the teacher with the maximum log-likelihood (MLL) method. To improve this estimate for KD, in this paper we introduce the concept of conditional mutual information (CMI) into the estimation of the BCPD and propose a novel estimator called the maximum CMI (MCMI) method. Specifically, in MCMI estimation, both the log-likelihood and the CMI of the teacher are maximized simultaneously when the teacher is trained. Through Eigen-CAM, it is further shown that maximizing the teacher's CMI value allows the teacher to capture more contextual information in an image cluster. Through a thorough set of experiments, we show that employing a teacher trained via MCMI estimation rather than one trained via MLL estimation in various state-of-the-art KD frameworks consistently increases the student's classification accuracy, with gains of up to 3.32%. This suggests that the teacher's BCPD estimate provided by the MCMI method is more accurate than that provided by the MLL method. In addition, we show that these improvements in the student's accuracy are more pronounced in zero-shot and few-shot settings. Notably, the student's accuracy increases by up to 5.72% when only 5% of the training samples are available to the student (few-shot), and from 0% to as high as 84% for an omitted class (zero-shot).

Knowledge distillation (KD) (Buciluǎ et al., 2006; Hinton et al., 2015) has received tremendous attention from both academia and industry in recent years as a highly effective model compression technique, and has been deployed in a variety of settings (Radosavovic et al., 2018; Furlanello et al., 2018; Xie et al., 2020). The crux of KD is to distill the knowledge of a cumbersome model (the teacher) into a lightweight model (the student). One critical component of KD that has received relatively little attention is the training of the teacher model. In fact, in most existing KD methods, the teacher is trained to maximize its own performance, even though this does not necessarily lead to an improvement in the student's performance (Cho & Hariharan, 2019; Mirzadeh et al., 2020).
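To make the MCMI objective concrete, the following is a minimal sketch (not the authors' released implementation) of a teacher training loss that combines the usual cross-entropy term (maximizing log-likelihood) with a batch-level estimate of the teacher's CMI. The class-centroid CMI estimate and the weighting factor lam below are illustrative assumptions; the paper's exact estimator and hyperparameters may differ.

# Sketch of an MCMI-style teacher loss: cross-entropy minus lam * (CMI estimate).
# The CMI I(X; Yhat | Y) is approximated per batch as the average KL divergence
# between each sample's predictive distribution and its class centroid
# (the mean predictive distribution of samples sharing that label).
import torch
import torch.nn.functional as F

def mcmi_loss(logits: torch.Tensor, labels: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    ce = F.cross_entropy(logits, labels)            # maximize log-likelihood

    log_p = F.log_softmax(logits, dim=1)            # log P(Yhat | x) per sample
    p = log_p.exp()

    cmi_terms = []
    for c in labels.unique():
        mask = labels == c
        # Class centroid: average predictive distribution over samples with label c.
        centroid = p[mask].mean(dim=0, keepdim=True).clamp_min(1e-12)
        # KL( P(Yhat | x) || centroid ) for each sample of class c.
        kl = (p[mask] * (log_p[mask] - centroid.log())).sum(dim=1)
        cmi_terms.append(kl)
    cmi = torch.cat(cmi_terms).mean()               # batch estimate of I(X; Yhat | Y)

    return ce - lam * cmi                           # maximize log-likelihood and CMI jointly

# Toy usage with random 10-class logits:
logits = torch.randn(32, 10, requires_grad=True)
labels = torch.randint(0, 10, (32,))
mcmi_loss(logits, labels).backward()

In this sketch, decreasing the loss both fits the labels and spreads each sample's predictive distribution away from its class centroid, which is one way to realize the paper's idea of jointly maximizing log-likelihood and CMI during teacher training.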
arXiv.org Artificial Intelligence
Jan-16-2024
- Country:
- North America > Canada > Ontario > Toronto (0.14)
- Genre:
- Research Report (1.00)
- Industry:
- Education (1.00)