$μ$-MoE: Test-Time Pruning as Micro-Grained Mixture-of-Experts
Toshiaki Koike-Akino, Jing Liu, Ye Wang
arXiv.org Artificial Intelligence
To tackle the enormous computational demand of large foundation models, activation-aware compression techniques that require no retraining have been introduced. However, because these methods rely on calibration data, domain shift can arise for unknown downstream tasks. With a computationally efficient calibration, activation-aware pruning can instead be executed adaptively for every prompt, while still reducing complexity at inference. We formulate this as a mixture of micro-experts, called $μ$-MoE. Experiments demonstrate that $μ$-MoE can dynamically adapt to task/prompt-dependent structured sparsity on the fly.
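The abstract does not spell out the pruning criterion, so the following is a minimal PyTorch sketch of per-prompt, activation-aware structured pruning under an assumed Wanda-style score (weight magnitude scaled by the prompt's activation norms). The names `per_prompt_channel_mask`, `masked_forward`, and the `keep_ratio` parameter are illustrative, not the paper's API; each kept channel plays the role of an active micro-expert for the current prompt.

```python
import torch
import torch.nn as nn

def per_prompt_channel_mask(linear: nn.Linear, x: torch.Tensor,
                            keep_ratio: float = 0.5) -> torch.Tensor:
    """Score each input channel of `linear` by weight magnitude times the
    prompt's activation norm (an assumed Wanda-style, activation-aware
    criterion) and keep the top fraction as active micro-experts."""
    # x: (tokens, in_features) activations gathered from the current prompt.
    act_norm = x.norm(p=2, dim=0)                         # (in_features,)
    score = (linear.weight.abs() * act_norm).sum(dim=0)   # (in_features,)
    k = max(1, int(keep_ratio * score.numel()))
    keep = torch.topk(score, k).indices
    mask = torch.zeros_like(score, dtype=torch.bool)
    mask[keep] = True
    return mask

def masked_forward(linear: nn.Linear, x: torch.Tensor,
                   mask: torch.Tensor) -> torch.Tensor:
    # Structured sparsity: only the selected input channels contribute,
    # so the matmul shrinks to the kept columns at inference time.
    return nn.functional.linear(x[:, mask], linear.weight[:, mask], linear.bias)

# Usage: calibrate on the prompt once, then reuse the mask during generation.
layer = nn.Linear(1024, 1024)
prompt_acts = torch.randn(16, 1024)   # activations from the prompt tokens
mask = per_prompt_channel_mask(layer, prompt_acts, keep_ratio=0.5)
out = masked_forward(layer, prompt_acts, mask)
```

Because the mask is recomputed from each prompt's own activations, the set of active channels (micro-experts) changes per prompt, which is one way to read the mixture-of-micro-experts view described above.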
May-27-2025
- Country:
  - North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
  - North America > United States > New Jersey (0.04)
  - Oceania > Australia > New South Wales > Sydney (0.04)
- Genre:
  - Research Report (0.82)
- Industry:
  - Education > Curriculum > Subject-Specific Education (0.46)