Pruning neural networks without any data by iteratively conserving synaptic flow

Neural Information Processing Systems 

Recent works have identified, through an expensive sequence of training and pruning cycles, the existence of winning lottery tickets or sparse trainable subnetworks at initialization. This raises a foundational question: can we identify highly sparse trainable subnetworks at initialization, without ever training, or indeed without ever looking at the data?
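The data-free pruning idea in the title can be illustrated with a small sketch: score each weight by the "synaptic flow" it carries when the network is run on an all-ones input with absolute-valued weights, then iteratively prune the lowest-scoring weights and rescore. This is a minimal sketch under simplifying assumptions (fully connected linear layers, no biases, a simple exponential compression schedule); the function names `synflow_scores` and `synflow_prune` are illustrative, not from the paper.

```python
import numpy as np

def synflow_scores(weights):
    """Data-free saliency: |theta| * d R / d |theta|, where
    R = 1^T |W_L| ... |W_1| 1 (all-ones input, abs weights)."""
    # Forward pass of an all-ones vector through the abs-valued layers.
    acts = [np.ones(weights[0].shape[1])]
    for W in weights:
        acts.append(np.abs(W) @ acts[-1])
    # Backward pass: gradient of R (the summed output) w.r.t. each layer.
    grad = np.ones_like(acts[-1])
    scores = []
    for W, a in zip(reversed(weights), reversed(acts[:-1])):
        scores.insert(0, np.abs(W) * np.outer(grad, a))
        grad = np.abs(W).T @ grad
    return scores

def synflow_prune(weights, sparsity, rounds=10):
    """Iteratively prune to the target sparsity, rescoring each round
    so that flow is conserved through the surviving subnetwork."""
    masks = [np.ones_like(W) for W in weights]
    for r in range(1, rounds + 1):
        scores = synflow_scores([W * m for W, m in zip(weights, masks)])
        keep = (1 - sparsity) ** (r / rounds)  # exponential schedule
        flat = np.concatenate([s.ravel() for s in scores])
        k = int(keep * flat.size)
        thresh = np.sort(flat)[::-1][k - 1] if k > 0 else np.inf
        masks = [(s >= thresh).astype(float) for s in scores]
    return masks
```

Note that no training data appears anywhere: both the forward and backward passes use only the weight magnitudes, which is what makes the scoring data-free.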
