Distillation-Enabled Knowledge Alignment Protocol for Semantic Communication in AI Agent Networks
arXiv.org Artificial Intelligence
Abstract--Future networks are envisioned to connect massive numbers of artificial intelligence (AI) agents, enabling their extensive collaboration on diverse tasks. Compared to traditional communicating entities, these agents are naturally suited to semantic communication (SC), which can significantly improve bandwidth efficiency. However, SC requires the knowledge of the communicating agents to be aligned, whereas in practice each agent holds distinct expert knowledge for its own tasks. In this paper, we propose a distillation-enabled knowledge alignment protocol (DeKAP), which distills the expert knowledge of each agent into parameter-efficient low-rank matrices, allocates them across the network, and allows agents to simultaneously maintain aligned knowledge for multiple tasks. We formulate the joint minimization of alignment loss, communication overhead, and storage cost as a large-scale integer linear programming problem and develop a highly efficient greedy algorithm. Computer simulations show that DeKAP establishes knowledge alignment with the lowest communication and computation cost among the compared conventional approaches.

Future communication networks will usher in a new era of the "Internet of Intelligence," where human beings, devices, and a wide range of artificial intelligence (AI) agents are seamlessly interconnected [1].
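The abstract describes allocating low-rank knowledge adapters across agents by greedily trading off alignment loss, communication overhead, and storage cost. A minimal toy sketch of such a greedy allocation is shown below; the data model, weights, and budget constraint are illustrative assumptions, not the paper's actual DeKAP formulation or its ILP objective.

```python
# Hypothetical greedy allocation of low-rank adapters to (agent, task) pairs.
# All names, weights, and constraints here are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    agent: str      # agent that would host the adapter
    task: str       # task whose knowledge the adapter aligns
    loss: float     # alignment loss incurred if this adapter is chosen
    comm: float     # one-off communication cost to distribute it
    size: float     # storage it occupies at the hosting agent

def greedy_allocate(cands, budget, w_loss=1.0, w_comm=0.1, w_store=0.05):
    """Pick at most one adapter per (agent, task) pair, cheapest combined
    cost first, while respecting each agent's storage budget."""
    used = {}       # agent -> storage consumed so far
    chosen = {}     # (agent, task) -> selected Candidate
    cost = lambda c: w_loss * c.loss + w_comm * c.comm + w_store * c.size
    for c in sorted(cands, key=cost):
        if (c.agent, c.task) in chosen:
            continue  # this pair is already served by a cheaper candidate
        if used.get(c.agent, 0.0) + c.size > budget[c.agent]:
            continue  # adapter would exceed the agent's storage budget
        chosen[(c.agent, c.task)] = c
        used[c.agent] = used.get(c.agent, 0.0) + c.size
    return chosen

# Usage: two tasks at one agent, whose budget admits both adapters.
cands = [
    Candidate("a1", "t1", loss=0.2, comm=1.0, size=2.0),
    Candidate("a1", "t1", loss=0.5, comm=0.5, size=1.0),
    Candidate("a1", "t2", loss=0.3, comm=1.0, size=3.0),
]
alloc = greedy_allocate(cands, budget={"a1": 5.0})
```

A single sorted pass like this is what makes the greedy heuristic cheap compared to solving the full integer program exactly.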
Sep-29-2025