

Paraphrasing Complex Network: Network Compression via Factor Transfer

Neural Information Processing Systems

Many researchers have sought model compression methods that reduce the size of a deep neural network (DNN) with minimal performance degradation, so that DNNs can be used in embedded systems. Among these methods, knowledge transfer trains a student network under the guidance of a stronger teacher network. In this paper, we propose a novel knowledge transfer method that uses convolutional operations to paraphrase the teacher's knowledge and to translate it for the student. This is done by two convolutional modules, called a paraphraser and a translator. The paraphraser is trained in an unsupervised manner to extract teacher factors, which are defined as paraphrased information of the teacher network. The translator, located at the student network, extracts student factors and helps the student mimic the teacher factors. We observed that a student network trained with the proposed factor transfer method outperforms ones trained with conventional knowledge transfer methods.
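To make the pipeline concrete, here is a minimal PyTorch sketch of the paraphraser/translator idea described in the abstract. The module names follow the paper's terminology, but the layer counts, the paraphrase rate k, and the exact loss form (l2-normalized factors compared under a p-norm) are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch of factor transfer; channel sizes, depths, and the
# paraphrase rate k are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Paraphraser(nn.Module):
    """Conv encoder-decoder attached to the teacher; trained unsupervised
    to reconstruct the teacher's feature map, so its bottleneck output
    serves as the 'teacher factor'."""
    def __init__(self, in_ch, k=0.5):
        super().__init__()
        mid = int(in_ch * k)
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, mid, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(mid, mid, 3, padding=1), nn.LeakyReLU(0.1))
        self.decoder = nn.Sequential(
            nn.Conv2d(mid, in_ch, 3, padding=1), nn.LeakyReLU(0.1))

    def forward(self, feat):
        factor = self.encoder(feat)   # teacher factor
        recon = self.decoder(factor)  # used only for reconstruction pretraining
        return factor, recon

class Translator(nn.Module):
    """Conv module attached to the student; maps student features into
    the same factor space so they can mimic the teacher factors."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(out_ch, out_ch, 3, padding=1))

    def forward(self, feat):
        return self.net(feat)  # student factor

def factor_transfer_loss(student_factor, teacher_factor, p=1):
    """p-norm distance between l2-normalized student and teacher factors."""
    fs = F.normalize(student_factor.flatten(1), dim=1)
    ft = F.normalize(teacher_factor.flatten(1), dim=1)
    return (fs - ft).norm(p=p, dim=1).mean()
```

In this sketch the paraphraser would first be pretrained with a reconstruction loss (e.g. MSE between recon and the teacher feature map) and then frozen, while the student minimizes its task loss plus a weighted factor_transfer_loss; the translator's output channels are chosen to match the teacher factor, e.g. Translator(student_ch, int(teacher_ch * k)).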


Diversity Matters When Learning From Ensembles

Neural Information Processing Systems

While some recent works propose to distill an ensemble model into a single model to reduce such costs, there is still a performance gap between the ensemble and distilled models.
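For context, a standard ensemble-distillation baseline looks roughly like the sketch below: the student matches the temperature-softened average of the members' predictive distributions. This is a generic formulation, not this paper's method, and the temperature T and mixing weight alpha are illustrative assumptions.

```python
# Generic ensemble-distillation loss (standard baseline, not this paper's
# specific method); T and alpha are illustrative hyperparameters.
import torch.nn.functional as F

def ensemble_distill_loss(student_logits, ensemble_logits, labels, T=4.0, alpha=0.9):
    """student_logits: (batch, classes); ensemble_logits: (members, batch, classes)."""
    # Average the members' softened predictive distributions.
    teacher_prob = F.softmax(ensemble_logits / T, dim=-1).mean(dim=0)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  teacher_prob, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

Note that averaging the member probabilities is exactly the step that discards per-member diversity, which is one way the performance gap mentioned above can arise.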


How a student becomes a teacher: learning and forgetting through Spectral methods

Neural Information Processing Systems

The above scheme proves particularly relevant when the student network is overparameterized (namely, when larger layer sizes are employed) as compared to the underlying teacher network. Under these operating conditions, it is tempting to speculate that the student's ability to handle the given task could eventually be stored in a sub-portion of the whole network.
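To fix ideas, the following is a minimal sketch of the overparameterized teacher-student protocol the snippet refers to: a small frozen teacher MLP generates the targets, and a wider student of the same depth is trained to fit them. The layer widths, activation, and optimizer here are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal teacher-student setup with an overparameterized student;
# widths, activation, and optimizer are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def mlp(widths):
    layers = []
    for w_in, w_out in zip(widths[:-1], widths[1:]):
        layers += [nn.Linear(w_in, w_out), nn.Tanh()]
    return nn.Sequential(*layers[:-1])  # no activation on the output

teacher = mlp([10, 4, 1])    # narrow hidden layer
student = mlp([10, 64, 1])   # same depth, much wider hidden layer

x = torch.randn(2048, 10)
with torch.no_grad():
    y = teacher(x)           # the fixed teacher provides the targets

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(student(x), y)
    loss.backward()
    opt.step()

# If a 4-unit teacher suffices for the task, the trained student's
# useful directions can concentrate in a low-dimensional sub-portion
# of its 64-unit layer -- the speculation quoted above.
```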

