Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization
James Oldfield
Neural Information Processing Systems
The Mixture of Experts (MoE) paradigm provides a powerful way to decompose dense layers into smaller, modular computations that are often more amenable to human interpretation, debugging, and editability. However, a major challenge lies in the computational cost of scaling the number of experts high enough to achieve fine-grained specialization. In this paper, we propose the Multilinear Mixture of Experts (µMoE) layer to address this, focusing on vision models.
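To make the factorization idea concrete, below is a minimal, hedged sketch of one plausible instantiation: a soft-gated MoE layer whose stacked expert weight tensor is held in a CP-style low-rank factorization rather than materialized explicitly, so cost scales with the rank instead of with experts × d_in × d_out. The class name `FactorizedMoELayer` and all shapes/hyperparameters are illustrative assumptions, not the paper's exact µMoE formulation.

```python
import torch
import torch.nn as nn


class FactorizedMoELayer(nn.Module):
    """Illustrative sketch of a factorized (multilinear) MoE layer.

    Instead of materializing one (d_in x d_out) weight matrix per expert,
    the implicit (n_experts, d_in, d_out) weight tensor is kept in a
    CP-style factorization, so the forward pass never forms it explicitly.
    This is an assumption-laden sketch, not the paper's exact layer.
    """

    def __init__(self, d_in: int, d_out: int, n_experts: int, rank: int):
        super().__init__()
        # CP-style factors of the implicit expert weight tensor.
        self.factor_experts = nn.Parameter(torch.randn(n_experts, rank) * 0.02)
        self.factor_in = nn.Parameter(torch.randn(d_in, rank) * 0.02)
        self.factor_out = nn.Parameter(torch.randn(rank, d_out) * 0.02)
        # Simple softmax gate over experts (dense/soft routing).
        self.gate = nn.Linear(d_in, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in)
        gates = torch.softmax(self.gate(x), dim=-1)   # (batch, n_experts)
        h_in = x @ self.factor_in                     # (batch, rank)
        h_exp = gates @ self.factor_experts           # (batch, rank)
        # Elementwise product fuses the input and expert modes over the
        # shared rank dimension, then projects to the output dimension.
        return (h_in * h_exp) @ self.factor_out       # (batch, d_out)


if __name__ == "__main__":
    layer = FactorizedMoELayer(d_in=64, d_out=32, n_experts=512, rank=16)
    y = layer(torch.randn(8, 64))
    print(y.shape)  # torch.Size([8, 32])
```

Under this factorization, the gated sum over experts reduces to two thin matrix products and an elementwise product, which is why a very large expert count remains tractable in this sketch.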