CompeteSMoE -- Statistically Guaranteed Mixture of Experts Training via Competition
Nam V. Nguyen, Huy Nguyen, Quang Pham, Van Nguyen, Savitha Ramasamy, Nhat Ho
arXiv.org Artificial Intelligence
Sparse mixture of experts (SMoE) offers an appealing way to scale up model complexity beyond the means of increasing the network's depth or width. However, we argue that effective SMoE training remains challenging because of a suboptimal routing process: the experts that perform the computation do not directly contribute to the routing decision. In this work, we propose competition, a novel mechanism that routes tokens to the experts with the highest neural response. Theoretically, we show that the competition mechanism enjoys better sample efficiency than traditional softmax routing. Furthermore, we develop CompeteSMoE, a simple yet effective algorithm for training large language models that deploys a router to learn the competition policy, achieving strong performance at low training overhead. Our extensive empirical evaluations on both visual instruction tuning and language pre-training tasks demonstrate the efficacy, robustness, and scalability of CompeteSMoE compared to state-of-the-art SMoE strategies. The implementation is available at: https://github.com/Fsoft-AIC/CompeteSMoE. This work is an improved version of the previous study at arXiv:2402.02526.
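To make the competition idea concrete, below is a minimal PyTorch sketch of competition-based routing as the abstract describes it: every expert computes its response to each token, and tokens are dispatched to the top-k experts with the largest responses. The class name, hyperparameters, and the choice of output norm as the "neural response" are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code. Note also that, per the abstract, CompeteSMoE trains a lightweight router to imitate this competition policy, so the dense all-expert pass sketched here is not needed at every step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompetitionMoE(nn.Module):
    """Hypothetical sketch of competition routing: all experts respond to
    each token, and the top_k experts with the largest response norms win.
    (In CompeteSMoE proper, a learned router approximates this policy so
    the dense pass is only used to supervise the router.)"""

    def __init__(self, d_model: int, d_hidden: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model)
            )
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model). Dense pass: every expert sees every token.
        responses = torch.stack([e(x) for e in self.experts], dim=1)  # (T, E, d)
        # Assumed competition score: norm of each expert's output.
        scores = responses.norm(dim=-1)                               # (T, E)
        weights = F.softmax(scores, dim=-1)
        top_w, top_i = weights.topk(self.top_k, dim=-1)               # (T, k)
        top_w = top_w / top_w.sum(dim=-1, keepdim=True)               # renormalize
        # Gather the winning experts' outputs and combine them.
        chosen = responses.gather(
            1, top_i.unsqueeze(-1).expand(-1, -1, responses.size(-1))
        )                                                             # (T, k, d)
        return (top_w.unsqueeze(-1) * chosen).sum(dim=1)              # (T, d)


# Usage example (shapes only):
moe = CompetitionMoE(d_model=64, d_hidden=256, num_experts=8, top_k=2)
out = moe(torch.randn(10, 64))  # -> (10, 64)
```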
May-20-2025