A Further Results on the Existence of Matching Subnetworks in BERT
Neural Information Processing Systems
In Table 2 in Section 3, we report the highest sparsities for which IMP subnetwork performance is within one standard deviation of the unpruned BERT model on each task. In Table 4 below, we report the same information under a stricter criterion: the highest sparsities at which IMP subnetworks match or exceed the performance of the unpruned BERT model on each task. Under this stricter criterion, the sparsest winning tickets are in many cases larger (i.e., found at lower sparsities): QQP goes from 90% to 70% sparsity, STS-B from 50% to 40%, QNLI from 70% to 50%, MRPC from 50% to 40%, RTE from 60% to 50%, SST-2 from 60% to 50%, CoLA from 50% to 40%, SQuAD from 40% to 20%, and MLM from 70% to 50%. As broader context for the relationship between sparsity and accuracy, Figure 11 shows the performance of IMP subnetworks across all sparsities on each task.
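The distinction between the two criteria can be made concrete with a small sketch. The helper below, with entirely illustrative scores and a hypothetical `highest_matching_sparsity` function (neither taken from the paper), selects the highest sparsity that still counts as "matching" under either the one-standard-deviation criterion or the stricter match-or-exceed criterion:

```python
# Hypothetical sketch: picking the highest "matching" sparsity for IMP
# subnetworks under two criteria. All numbers below are illustrative,
# not results from the paper.

def highest_matching_sparsity(results, baseline, baseline_std, strict):
    """Return the highest sparsity whose score still counts as matching.

    results: dict mapping sparsity (fraction of weights pruned) -> task score
    strict=False: score within one standard deviation of the unpruned baseline
    strict=True:  score must match or exceed the unpruned baseline
    """
    matching = [
        s for s, score in results.items()
        if (score >= baseline if strict else score >= baseline - baseline_std)
    ]
    return max(matching) if matching else None

# Illustrative scores for one task at increasing sparsities.
scores = {0.1: 91.2, 0.3: 91.0, 0.5: 90.9, 0.7: 90.6, 0.9: 88.1}
baseline, std = 91.0, 0.5

print(highest_matching_sparsity(scores, baseline, std, strict=False))  # 0.7
print(highest_matching_sparsity(scores, baseline, std, strict=True))   # 0.3
```

As in Table 4, the stricter criterion yields a lower highest matching sparsity (a larger winning ticket) whenever intermediate sparsities fall within one standard deviation of the baseline but slightly below it.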