Enhancing Communication Efficiency in FL with Adaptive Gradient Quantization and Communication Frequency Optimization
Tariq, Asadullah, Qayyum, Tariq, Serhani, Mohamed Adel, Sallabi, Farag, Taleb, Ikbal, Barka, Ezedin S.
arXiv.org Artificial Intelligence
Federated Learning (FL) enables participant devices to collaboratively train deep learning models without sharing their data with the server or other devices, effectively addressing data privacy and computational concerns. However, FL faces a major bottleneck: the high communication overhead of frequent model updates between devices and the server, which limits deployment in resource-constrained wireless networks. In this paper, we propose a three-fold strategy. First, an Adaptive Feature-Elimination Strategy that drops less important features while retaining high-value ones; second, Adaptive Gradient Innovation and Error Sensitivity-Based Quantization, which dynamically adjusts the quantization level for innovation-based gradient compression; and third, Communication Frequency Optimization to further improve communication efficiency. We evaluated the proposed model through extensive experiments, assessing accuracy, loss, and convergence against baseline techniques. The results show that our model achieves high communication efficiency while maintaining accuracy.
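The abstract's second component, quantization whose bit-width adapts to how much the gradient has changed ("innovation") and to accumulated quantization error, can be illustrated with a minimal sketch. This is not the paper's implementation: the bit-width rule `adaptive_bits`, its `tau` saturation constant, and the uniform quantizer are illustrative assumptions chosen to show the general idea of spending more bits when gradients are novel or error has built up.

```python
import numpy as np

def quantize(grad, num_bits):
    # Uniform quantization of a gradient vector onto 2**num_bits levels.
    levels = 2 ** num_bits - 1
    g_min, g_max = grad.min(), grad.max()
    scale = (g_max - g_min) / levels if g_max > g_min else 1.0
    q = np.round((grad - g_min) / scale)
    return q * scale + g_min

def adaptive_bits(innovation, error, min_bits=2, max_bits=8, tau=1.0):
    # Hypothetical rule: allocate more bits when the gradient differs a lot
    # from the previous round (innovation) or when accumulated quantization
    # error is large; tau controls how fast the score saturates.
    score = innovation + error
    frac = score / (score + tau)  # maps [0, inf) into [0, 1)
    return int(round(min_bits + frac * (max_bits - min_bits)))

# Toy round: innovation = distance from the previous gradient;
# no accumulated error yet on the first round.
rng = np.random.default_rng(0)
prev_grad = rng.normal(size=100)
grad = prev_grad + 0.1 * rng.normal(size=100)
innovation = float(np.linalg.norm(grad - prev_grad))
bits = adaptive_bits(innovation, error=0.0)
q_grad = quantize(grad, bits)
residual = float(np.linalg.norm(grad - q_grad))  # feeds the next round's error term
```

In a full FL loop, `residual` would be carried forward so that clients whose updates are being distorted by coarse quantization automatically earn a higher bit-width on later rounds.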
Sep-30-2025