MERLOT: A Distilled LLM-based Mixture-of-Experts Framework for Scalable Encrypted Traffic Classification
Chen, Yuxuan, Li, Rongpeng, Zhao, Zhifeng, Zhang, Honggang
arXiv.org Artificial Intelligence
We present MERLOT, a scalable mixture-of-experts (MoE) based refinement of distilled large language models, optimized for encrypted traffic classification. By applying model distillation in a teacher-student paradigm, compact models derived from GPT-2-base retain high classification accuracy while minimizing computational cost. These models serve as specialized experts in an MoE architecture, dynamically selected by a gating network. Unlike generation-based methods, our approach classifies encrypted traffic directly from the final decoder token, with contextual feature embeddings as input. Experiments on 10 datasets show superior or competitive performance against state-of-the-art models while significantly reducing resource demands, underscoring the framework's effectiveness and robustness.
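The gating-and-classification step described above can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the gating matrix, toy experts, and the `moe_classify` helper are all hypothetical stand-ins assuming a softmax gate that weights each expert's class logits computed from the final decoder token's embedding.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def moe_classify(token_embedding, gate_weights, experts):
    """Hypothetical sketch of MoE classification from a final-token embedding.

    gate_weights: one weight row per expert; gate score = dot(row, embedding).
    experts: callables mapping the embedding to per-class logits.
    Returns the argmax class of the gate-weighted mixture of expert logits.
    """
    gate_scores = [sum(w * x for w, x in zip(row, token_embedding))
                   for row in gate_weights]
    gate_probs = softmax(gate_scores)
    num_classes = len(experts[0](token_embedding))
    mixed = [0.0] * num_classes
    for p, expert in zip(gate_probs, experts):
        logits = expert(token_embedding)
        mixed = [m + p * l for m, l in zip(mixed, logits)]
    return max(range(num_classes), key=lambda c: mixed[c])

# Toy usage: two experts, two classes, a 2-d "embedding".
embedding = [1.0, 0.0]
gate = [[1.0, 0.0], [0.0, 1.0]]          # expert 0 favored by this embedding
experts = [lambda e: [2.0, 0.0],          # expert 0 votes for class 0
           lambda e: [0.0, 2.0]]          # expert 1 votes for class 1
predicted = moe_classify(embedding, gate, experts)
```

In a real deployment the gate would be a learned network over the contextual feature embedding and each expert a distilled GPT-2-base model; here both are reduced to fixed vectors and lambdas to keep the mechanism visible.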
Nov-19-2024