Latent Prototype Routing: Achieving Near-Perfect Load Balancing in Mixture-of-Experts
arXiv.org Artificial Intelligence
Mixture-of-Experts (MoE) architectures have emerged as a key strategy for scaling large language models (LLMs) efficiently. However, current MoE systems suffer from severe load imbalance: only a small subset of experts is consistently activated during training and inference, leading to significant underutilization of model capacity and computational resources. In this work, we revisit expert routing from a clustering perspective and propose Latent Prototype Routing (LPR), a novel routing framework that generalizes existing approaches while promoting balanced expert utilization without compromising downstream performance. Extensive experiments across multiple open-source MoE models -- including DeepSeek-V3, Qwen3-MoE, and Mixtral -- demonstrate that LPR reduces the Gini coefficient of expert load from 0.70 to 0.035 on average and improves the min-max expert load ratio from 1e-6 to 0.70, achieving near-perfect load balancing.
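The abstract reports balance via two standard metrics: the Gini coefficient of the expert-load distribution (0 means perfectly even, values near 1 mean a few experts absorb nearly all tokens) and the min-max load ratio (least-loaded over most-loaded expert, 1.0 at perfect balance). A minimal sketch of how these metrics are computed from a vector of per-expert token counts (the function names and example loads below are illustrative, not from the paper):

```python
def gini(loads):
    """Gini coefficient of an expert-load distribution.

    0.0 = perfectly balanced; values near 1.0 = highly skewed.
    Uses the standard sorted-rank formula:
        G = 2 * sum_i(i * x_i) / (n * sum_i x_i) - (n + 1) / n
    where x is sorted ascending and i runs from 1 to n.
    """
    x = sorted(loads)
    n = len(x)
    total = sum(x)
    weighted = sum((i + 1) * v for i, v in enumerate(x))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

def min_max_ratio(loads):
    """Ratio of least- to most-loaded expert (1.0 = perfect balance)."""
    return min(loads) / max(loads)

# Hypothetical example: 64 experts, 4 of which absorb almost all tokens.
skewed = [1000.0] * 4 + [1.0] * 60
balanced = [100.0] * 64
```

On the skewed example, `gini` is above 0.9 and `min_max_ratio` is 0.001; on the balanced one they are 0.0 and 1.0, matching the direction of improvement the paper reports (Gini 0.70 to 0.035, min-max ratio 1e-6 to 0.70).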
Jun-27-2025