KVComm: Enabling Efficient LLM Communication through Selective KV Sharing

Xiangyu Shi, Marco Chiesa, Gerald Q. Maguire Jr., Dejan Kostic

arXiv.org Artificial Intelligence 

Large Language Models (LLMs) are increasingly deployed in multi-agent systems, where effective inter-model communication is crucial. Existing communication protocols either rely on natural language, incurring high inference costs and information loss, or on hidden states, which suffer from information concentration bias and inefficiency. To address these limitations, we propose KVComm, a novel communication framework that enables efficient communication between LLMs through selective sharing of KV pairs. KVComm leverages the rich information encoded in the KV pairs while avoiding the pitfalls of hidden states. We introduce a layer-wise KV selection strategy based on attention importance scores with a Gaussian prior to identify the most informative KV pairs for communication. Extensive experiments across diverse tasks and model pairs demonstrate that KVComm achieves comparable performance to the upper-bound method, which directly merges inputs to one model without any communication, while transmitting as few as 30% of layers' KV pairs. Our study highlights the potential of KV pairs as an effective medium for inter-LLM communication, paving the way for scalable and efficient multi-agent systems.

Large Language Models (LLMs) have catalyzed a paradigm shift from isolated model capabilities towards collaborative multi-agent systems (Guo et al., 2024; Tran et al., 2025). CAMEL (Li et al., 2023), AutoGen (Wu et al., 2024), and ChatDev (Qian et al., 2023) have demonstrated the potential of LLMs to collaborate effectively in multi-agent systems, achieving impressive results across various tasks. These systems leverage the strengths of individual LLMs and enable them to work together to solve complex problems that are beyond the capabilities of a single model (Yang et al., 2024a).
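To make the layer-wise selection idea in the abstract concrete, the sketch below shows one plausible way to rank layers by an attention importance score weighted with a Gaussian prior over layer depth and keep only the top fraction of layers' KV pairs. The function name, its signature, how the per-layer scores are obtained, and the exact form and placement of the prior are illustrative assumptions, not the paper's implementation.

```python
import torch

def select_kv_layers(attn_scores, num_layers, keep_ratio=0.3,
                     prior_mean=None, prior_std=None):
    """Return indices of layers whose KV pairs would be shared.

    attn_scores: tensor of shape (num_layers,), one importance score per
    layer (e.g., mean attention mass on the tokens to be communicated).
    All names and the prior's form are assumptions for illustration.
    """
    layers = torch.arange(num_layers, dtype=torch.float32)
    mu = prior_mean if prior_mean is not None else num_layers / 2
    sigma = prior_std if prior_std is not None else num_layers / 4

    # Gaussian prior over layer depth: weights layers near mu more heavily.
    prior = torch.exp(-0.5 * ((layers - mu) / sigma) ** 2)

    # Combine the observed attention importance with the prior weight.
    weighted = attn_scores * prior

    # Keep the top fraction of layers, e.g. 30% as in the reported result.
    k = max(1, int(keep_ratio * num_layers))
    return torch.topk(weighted, k).indices.sort().values

# Usage: with a 32-layer model, share roughly 30% of layers' KV pairs.
scores = torch.rand(32)  # placeholder per-layer importance scores
shared = select_kv_layers(scores, num_layers=32, keep_ratio=0.3)
print(shared)  # indices of the layers whose KV pairs are transmitted
```

Under these assumptions, only the selected layers' KV pairs are serialized and sent to the receiving model, which is what keeps the communication cost well below transmitting full hidden states or regenerating natural-language messages.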