SlimCaching: Edge Caching of Mixture-of-Experts for Distributed Inference
Qian Chen, Xianhao Chen, Kaibin Huang
Mixture-of-Experts (MoE) models improve the scalability of large language models (LLMs) by activating only a small subset of relevant experts per input. However, the sheer number of expert networks in an MoE model imposes a significant storage burden on edge devices. To address this challenge, we consider a scenario where experts are dispersed across an edge network for distributed inference. Based on the popular Top-$K$ expert selection strategy, we formulate a latency minimization problem that optimizes expert caching on edge servers under storage constraints. When $K=1$, the problem reduces to a monotone submodular maximization problem with knapsack constraints, for which we design a greedy-based algorithm with a $(1 - 1/e)$-approximation guarantee. For the general case where $K \geq 1$, expert co-activation within the same MoE layer introduces non-submodularity, rendering greedy methods ineffective. To tackle this issue, we propose a successive greedy decomposition method that decomposes the original problem into a series of subproblems, each of which is solved via dynamic programming. Furthermore, we design an accelerated algorithm based on the max-convolution technique that obtains an approximate solution with a provable guarantee in polynomial time. Simulation results on various MoE models demonstrate that our method significantly reduces inference latency compared to existing baselines.
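For intuition, the sketch below shows a cost-benefit greedy for monotone submodular maximization under a knapsack (storage) constraint, the structure the $K=1$ case reduces to. This is a generic illustration rather than the paper's exact algorithm: the latency-reduction oracle `value`, the item encoding, and the partial enumeration typically needed to reach the full $(1 - 1/e)$ guarantee are all assumptions here.

```python
def greedy_knapsack_submodular(items, value, size, budget):
    """Cost-benefit greedy for monotone submodular maximization
    under a knapsack (storage) constraint.

    items  : candidate expert placements (hypothetical encoding)
    value  : monotone submodular set function, e.g. the expected
             latency reduction from caching a set of experts
    size   : dict mapping each item to its storage cost
    budget : total storage capacity of the edge server
    """
    chosen, used = set(), 0.0
    remaining = set(items)
    while remaining:
        # Pick the feasible item with the best marginal gain per unit storage.
        best, best_ratio = None, 0.0
        for it in remaining:
            if used + size[it] > budget:
                continue
            gain = value(chosen | {it}) - value(chosen)
            ratio = gain / size[it]
            if ratio > best_ratio:
                best, best_ratio = it, ratio
        if best is None:
            break  # no feasible item improves the objective
        chosen.add(best)
        used += size[best]
        remaining.discard(best)
    return chosen
```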
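Likewise, here is a minimal sketch of the $(\max, +)$ convolution primitive that the accelerated algorithm builds on, assuming per-subproblem dynamic programming profiles indexed by storage budget; how the paper quantizes values and merges profiles to keep the overall runtime polynomial is not reproduced here.

```python
import numpy as np

def max_convolve(f, g):
    """(max, +) convolution of two DP value profiles.

    f[a] = best value achievable with storage a on one subproblem,
    g[b] = best value with storage b on another; the merged profile
    h[c] = max over a + b = c of f[a] + g[b].
    """
    n, m = len(f), len(g)
    h = np.full(n + m - 1, -np.inf)
    for a in range(n):
        # Combine the budget split (a, c - a) and keep the best total value.
        np.maximum(h[a:a + m], f[a] + g, out=h[a:a + m])
    return h

# Example: merge two storage-budget profiles (index = storage units used).
# f = np.array([0.0, 1.0, 3.0]); g = np.array([0.0, 2.0])
# max_convolve(f, g) -> [0., 2., 3., 5.]
```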
arXiv.org Artificial Intelligence
Nov-25-2025