Goto

Diversity Matters When Learning From Ensembles

Neural Information Processing Systems

While some recent works propose to distill an ensemble model into a single model to reduce such costs, there is still a performance gap between the ensemble and distilled models.
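For context, ensemble distillation typically trains a single student to match the ensemble's averaged predictive distribution. The sketch below is a minimal, generic illustration, not this paper's method; the function name, the temperature value, and the plain averaging of member predictions are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ensemble_distill_loss(student_logits, member_logits_list, temperature=2.0):
    """KL divergence from the ensemble's averaged soft labels to the student.

    `member_logits_list` holds one logit tensor per ensemble member;
    all tensors share the shape (batch, num_classes).
    """
    # Average the members' predictive distributions to form soft labels.
    soft_labels = torch.stack(
        [F.softmax(logits / temperature, dim=-1) for logits in member_logits_list]
    ).mean(dim=0)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scaling by T^2 is the standard correction that keeps gradient
    # magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_labels, reduction="batchmean") * temperature**2
```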



Towards Data-Agnostic Pruning At Initialization: What Makes a Good Sparse Mask?

Neural Information Processing Systems

While PaI methods manage to find trainable subnetworks that outperform random pruning, their performance in terms of both accuracy and computational reduction falls far short of post-training pruning, and a clear understanding of what makes PaI work is still missing.
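By way of background, a PaI method scores every weight at initialization and prunes before any training happens. A minimal data-agnostic baseline that scores weights by magnitude (an illustrative choice, not the sparse mask proposed in this paper) might look like the following sketch:

```python
import torch

def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Binary mask keeping the largest-magnitude weights at initialization.

    The bottom `sparsity` fraction of weights (by |w|) is pruned before
    training; no data is needed, so the mask is data-agnostic.
    """
    scores = weight.abs().flatten()
    k = int(sparsity * scores.numel())  # number of weights to prune
    if k == 0:
        return torch.ones_like(weight)
    threshold = torch.kthvalue(scores, k).values
    return (weight.abs() > threshold).float()

# Usage: mask a freshly initialized layer, then train weight * mask.
layer = torch.nn.Linear(256, 128)
mask = magnitude_mask(layer.weight.data, sparsity=0.9)
```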



Enhancing Knowledge Transfer for Task Incremental Learning with Data-free Subnetwork
Qiang Gao

Neural Information Processing Systems

DSN primarily seeks to transfer knowledge to each newly arriving task from the tasks learned so far by selecting the affiliated weights of a small set of neurons to activate, including neurons reused from prior tasks via neuron-wise masks. It also transfers potentially valuable knowledge back to earlier tasks via data-free replay.
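The core idea of neuron-wise masking, as opposed to weight-wise masking, can be sketched as follows. This is an illustrative toy, not DSN's implementation; the class, the method names, and the random neuron-selection rule are all assumptions. Activating an output neuron implicitly selects all of its affiliated incoming weights, and a new task's mask can start by reusing neurons from a prior task's mask.

```python
import torch

class MaskedLinear(torch.nn.Module):
    """Linear layer whose output neurons are gated by a per-task binary mask."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = torch.nn.Linear(in_features, out_features)
        self.task_masks = {}  # task id -> binary mask over output neurons

    def set_task_mask(self, task_id, keep_ratio=0.2, reuse_from=None):
        # Optionally reuse neurons from a prior task, then activate fresh ones.
        mask = torch.zeros(self.linear.out_features)
        if reuse_from is not None:
            mask = self.task_masks[reuse_from].clone()
        free = (mask == 0).nonzero(as_tuple=True)[0]
        n_new = int(keep_ratio * self.linear.out_features)
        mask[free[torch.randperm(len(free))[:n_new]]] = 1.0
        self.task_masks[task_id] = mask

    def forward(self, x, task_id):
        # Gating output neurons selects their affiliated weights as a group.
        return self.linear(x) * self.task_masks[task_id]
```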