LLM-Sketch: Enhancing Network Sketches with LLM

Yuanpeng Li, Zhen Xu, Zongwei Lv, Yannan Hu, Yong Cui, Tong Yang

arXiv.org Artificial Intelligence 

Recent studies attempt to optimize network sketches using machine learning; however, these approaches face the challenges of lacking adaptivity to dynamic networks and incurring high training costs. In this paper, we propose LLM-Sketch, based on the insight that fields beyond the flow IDs in packet headers can also help infer flow sizes. By using a two-tier data structure and separately recording large and small flows, LLM-Sketch improves accuracy while minimizing memory usage. Furthermore, it leverages fine-tuned large language models (LLMs) to reliably estimate flow sizes. We evaluate LLM-Sketch on three representative tasks, and the results demonstrate that LLM-Sketch outperforms state-of-the-art methods by achieving a 7.5× accuracy improvement.

... maintain acceptable error rates in the face of massive-scale networks and highly skewed traffic distributions [7, 15]. In practice, a small fraction of large flows typically accounts for the majority of total traffic volume, while many small flows remain numerous yet contribute only modestly. A representative example is the Count-Min Sketch (CMS) [12], which updates and queries counters based on hashed flow IDs. Although CMS is simple and memory-efficient, it faces a fundamental trade-off: counters sized for small flows undercount the large ones, while counters sized for large flows waste memory on the many small ones. Consequently, CMS cannot accurately capture the minority of large flows without significantly ...
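To make the CMS trade-off above concrete, here is a minimal Count-Min Sketch in Python. The class name and the width/depth defaults are illustrative choices, not taken from the paper:

    import hashlib

    class CountMinSketch:
        """Minimal Count-Min Sketch: depth rows of width counters each."""

        def __init__(self, width=1024, depth=4):
            self.width = width
            self.depth = depth
            self.counters = [[0] * width for _ in range(depth)]

        def _index(self, row, flow_id):
            # One hash function per row, derived by salting with the row number.
            digest = hashlib.md5(f"{row}:{flow_id}".encode()).hexdigest()
            return int(digest, 16) % self.width

        def update(self, flow_id, count=1):
            # Every flow increments one counter per row; hash collisions
            # mean several flows can share (and inflate) a counter.
            for row in range(self.depth):
                self.counters[row][self._index(row, flow_id)] += count

        def query(self, flow_id):
            # Counters only overestimate, so the row-wise minimum is the
            # tightest available estimate (never an underestimate).
            return min(self.counters[row][self._index(row, flow_id)]
                       for row in range(self.depth))

Because every counter is the same size, one width setting must serve both the few heavy flows and the many light ones, which is exactly the trade-off described above.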
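The two-tier idea from the abstract can be pictured as routing flows that a learned model predicts to be large into exact per-flow counters, while the remaining small flows share one compact sketch. The sketch below is only an illustration of that general pattern under assumed interfaces (predictor stands in for the fine-tuned LLM and is assumed to return True for flows it expects to be large); it is not the authors' actual data structure:

    class TwoTierSketch:
        """Illustrative two-tier layout: exact counters for predicted-large
        flows, a shared Count-Min Sketch for everything else."""

        def __init__(self, predictor, width=1024, depth=4):
            self.predictor = predictor  # assumed callable(header_fields) -> bool
            self.large = {}             # exact per-flow counters for large flows
            self.small = CountMinSketch(width, depth)

        def update(self, flow_id, header_fields, count=1):
            # Packet-header fields beyond the flow ID drive the prediction.
            if flow_id in self.large or self.predictor(header_fields):
                self.large[flow_id] = self.large.get(flow_id, 0) + count
            else:
                self.small.update(flow_id, count)

        def query(self, flow_id):
            # Exact answer for tracked large flows; CMS estimate otherwise.
            if flow_id in self.large:
                return self.large[flow_id]
            return self.small.query(flow_id)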
