$μ$-MoE: Test-Time Pruning as Micro-Grained Mixture-of-Experts

Toshiaki Koike-Akino, Jing Liu, Ye Wang

arXiv.org Artificial Intelligence 

To tackle the enormous computational demands of large foundation models, activation-aware compression techniques that require no retraining have been introduced. However, because these methods rely on calibration data, domain shift may arise on unknown downstream tasks. With a computationally efficient calibration procedure, activation-aware pruning can instead be executed adaptively for every prompt, while still reducing complexity at inference. We formulate this as a mixture of micro-experts, called $μ$-MoE. Several experiments demonstrate that $μ$-MoE can dynamically adapt to task- and prompt-dependent structured sparsity on the fly.
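As a rough illustration of the idea (not the authors' method), per-prompt activation-aware structured pruning of a linear layer can be viewed as selecting a subset of "micro-experts" (rows of the weight matrix) for each prompt, scored by a simple activation-aware saliency. All function names, the saliency choice, and the keep ratio below are illustrative assumptions.

```python
# Hypothetical sketch: per-prompt activation-aware structured pruning,
# treating each output channel (row of W) as a micro-expert.
import numpy as np

def select_micro_experts(W, X, keep_ratio=0.5):
    """Score each output channel by ||w_i|| * ||X w_i^T|| estimated
    from the prompt activations X, then keep the top fraction.
    (An assumed saliency; the paper's criterion may differ.)"""
    act = X @ W.T                        # (tokens, out_channels)
    saliency = np.linalg.norm(W, axis=1) * np.linalg.norm(act, axis=0)
    k = max(1, int(keep_ratio * W.shape[0]))
    keep = np.argsort(saliency)[-k:]     # indices of active micro-experts
    return np.sort(keep)

def pruned_forward(W, X, keep):
    """Compute only the selected rows; pruned channels output zero,
    so inference cost scales with the number of kept micro-experts."""
    out = np.zeros((X.shape[0], W.shape[0]))
    out[:, keep] = X @ W[keep].T
    return out

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))          # 8 output channels, 4 inputs
X = rng.standard_normal((5, 4))          # 5 prompt tokens (calibration)
keep = select_micro_experts(W, X, keep_ratio=0.25)
Y = pruned_forward(W, X, keep)
print(len(keep), Y.shape)                # 2 active channels, full-shape output
```

Because the selection depends only on the prompt's own activations, the active expert set changes from prompt to prompt, which is the adaptive, test-time behavior the abstract describes.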