Federated Attention: A Distributed Paradigm for Collaborative LLM Inference over Edge Networks
Xiumei Deng, Zehui Xiong, Binbin Chen, Dong In Kim, Merouane Debbah, H. Vincent Poor
arXiv.org Artificial Intelligence
Large language models (LLMs) are proliferating rapidly at the edge, delivering intelligent capabilities across diverse application scenarios. However, their practical deployment in collaborative scenarios confronts fundamental challenges: privacy vulnerabilities, communication overhead, and computational bottlenecks. To address these challenges, we propose Federated Attention (FedAttn), which integrates the federated paradigm into the self-attention mechanism, creating a new distributed LLM inference framework that simultaneously achieves privacy protection, communication efficiency, and computational efficiency. FedAttn enables participants to perform local self-attention over their own token representations while periodically exchanging and aggregating Key-Value (KV) matrices across multiple Transformer blocks, collaboratively generating LLM responses without exposing private prompts. Further, we identify a structural duality between contextual representation refinement in FedAttn and parameter optimization in FL across private data, local computation, and global aggregation. This key insight provides a principled foundation for systematically porting federated optimization techniques to collaborative LLM inference. Building on this framework, we theoretically analyze how local self-attention computation within participants and heterogeneous token relevance among participants shape error propagation dynamics across Transformer blocks. Moreover, we characterize the fundamental trade-off between response quality and communication/computation efficiency, which is governed by the synchronization interval and the number of participants. Experimental results validate our theoretical analysis and reveal significant optimization opportunities through sparse attention and adaptive KV aggregation, highlighting FedAttn's potential to deliver scalability and efficiency in real-world edge deployments.
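The core mechanism the abstract describes, local self-attention per participant with periodic exchange and aggregation of KV matrices at a fixed synchronization interval, can be sketched in a few lines. The following toy simulation is an illustrative reading of that description, not the paper's implementation: function names, shapes, the concatenation-based KV aggregation, and the absence of residual/MLP sublayers are all simplifying assumptions.

```python
# Toy sketch of the FedAttn idea (assumed/simplified): each participant runs
# self-attention locally over its own token representations; every
# `sync_interval` Transformer blocks, participants exchange KV matrices,
# which are pooled so the next attention step attends over all contexts.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def fedattn_forward(local_tokens, num_blocks=4, sync_interval=2, seed=0):
    """local_tokens: list of (n_i, d) arrays, one per participant."""
    rng = np.random.default_rng(seed)
    d = local_tokens[0].shape[1]
    states = [x.copy() for x in local_tokens]
    for block in range(num_blocks):
        # Shared (toy) projection weights for this block.
        wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
        kv = [(h @ wk, h @ wv) for h in states]
        if (block + 1) % sync_interval == 0:
            # Synchronization round: aggregate KV matrices across participants
            # (here, simple concatenation) -- raw prompts are never shared.
            k_all = np.vstack([k for k, _ in kv])
            v_all = np.vstack([v for _, v in kv])
            states = [attention(h @ wq, k_all, v_all) for h in states]
        else:
            # Local round: attend only over the participant's own KV.
            states = [attention(h @ wq, k, v) for h, (k, v) in zip(states, kv)]
    return states

parts = [np.random.default_rng(i).standard_normal((3, 8)) for i in range(2)]
out = fedattn_forward(parts)
print([o.shape for o in out])  # each participant keeps its own token count
```

A larger `sync_interval` cuts communication (fewer KV exchanges) at the cost of response quality, which is exactly the trade-off the abstract says is governed by the synchronization interval and the number of participants.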
Nov-5-2025