- North America > United States > Massachusetts > Hampshire County > Amherst (0.14)
- North America > United States > Virginia (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Asia > India > Karnataka > Bengaluru (0.04)
A Limitations, Future Work, and Broader Impact

Learning on naturally heterogeneous datasets can be challenging, as the true data distributions of individual
Flow has shown the promise of per-instance personalization for improving clients' accuracy. We trained Flow and its baselines on the StackOverflow dataset for 2000 rounds. The batch size for each client in every baseline is 16, and the default learning rate is 0.1. All baselines and Flow variants were run for 1500 rounds, with 10 clients per round.
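The training setup above follows the standard federated loop: sample clients, compute local updates, aggregate on the server. The sketch below illustrates that loop; every function name in it (`sample_clients`, `local_update`, `aggregate`) is hypothetical and not from the Flow paper — only the hyperparameter values (1500 rounds, 10 clients per round, batch size 16, learning rate 0.1) come from the text.

```python
# Minimal sketch of the federated training loop implied by the setup
# above. Function names are hypothetical placeholders.
config = {"rounds": 1500, "clients_per_round": 10, "batch_size": 16, "lr": 0.1}

def run_federated_training(config, sample_clients, local_update, aggregate, model):
    for _ in range(config["rounds"]):
        # The server samples a subset of clients for this round.
        clients = sample_clients(config["clients_per_round"])
        # Each sampled client computes an update from its local data.
        updates = [local_update(model, c, config["lr"], config["batch_size"])
                   for c in clients]
        # The server aggregates (e.g., averages) the client updates.
        model = aggregate(model, updates)
    return model
```

Any concrete model, client-sampling strategy, and aggregation rule (FedAvg or otherwise) can be plugged into the three callables.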
IPFed: Identity protected federated learning for user authentication
Kaga, Yosuke, Suzuki, Yusei, Takahashi, Kenta
With the development of laws and regulations related to privacy preservation, it has become difficult to collect personal data for machine learning. In this context, federated learning, a form of distributed learning that does not share personal data, has been proposed. In this paper, we focus on federated learning for user authentication. We show that existing methods struggle to achieve both privacy preservation and high accuracy. To address these challenges, we propose IPFed, privacy-preserving federated learning that applies a random projection to the class embeddings. Furthermore, we prove that IPFed achieves learning equivalent to the state-of-the-art method. Experiments on face image datasets show that IPFed protects the privacy of personal data while maintaining the accuracy of the state-of-the-art method.
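The abstract's key mechanism — protecting class embeddings with a random projection — can be illustrated generically. The sketch below is not IPFed's exact construction: the dimensions are invented, and the embeddings are random stand-ins for learned identity prototypes. It shows why a Johnson–Lindenstrauss-style projection can hide the raw vectors while roughly preserving the geometry that similarity-based training relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 128-d face embeddings for 10 user identities,
# projected down to 64 dimensions. Illustrative numbers only.
embed_dim, proj_dim, num_users = 128, 64, 10

# One class embedding (identity prototype) per user; in a real system
# these would be learned, here they are random stand-ins.
class_embeddings = rng.normal(size=(num_users, embed_dim))

# Random projection matrix, scaled so inner products are preserved
# in expectation. If kept secret from the server, the raw
# identity-revealing embeddings are never exposed.
P = rng.normal(size=(embed_dim, proj_dim)) / np.sqrt(proj_dim)

# Only the projected embeddings leave the client.
protected = class_embeddings @ P

# Squared norms (and likewise distances/similarities) are only
# perturbed, not destroyed, so training can proceed on the
# protected embeddings.
orig = np.sum(class_embeddings**2, axis=1)
proj = np.sum(protected**2, axis=1)
```

The relative distortion shrinks as the projection dimension grows, which is the usual accuracy/protection trade-off for such schemes.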
FedDQ: Communication-Efficient Federated Learning with Descending Quantization
Qu, Linping, Song, Shenghui, Tsui, Chi-Ying
Federated learning (FL) is an emerging learning paradigm that trains models without violating users' privacy. However, large model sizes and frequent model aggregation cause a serious communication bottleneck for FL. To reduce the communication volume, techniques such as model compression and quantization have been proposed. Beyond fixed-bit quantization, existing adaptive quantization schemes use ascending-trend quantization, where the quantization level increases with the training stage. In this paper, we first investigate the impact of quantization on model convergence and show that the optimal quantization level is directly related to the range of the model updates. Since the model is expected to converge as training progresses, the range of the model updates will gradually shrink, indicating that the quantization level should decrease over the training stages. Based on this theoretical analysis, we propose a descending quantization scheme named FedDQ. Experimental results show that the proposed scheme can save up to 65.2% of the communicated bit volume and up to 68% of the communication rounds compared with existing schemes.
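The abstract's core idea — tie the quantization level to the current range of the model updates, so the bit-width descends as training converges — can be sketched as follows. The names (`bits_for_range`, `quantize`) and the target-step threshold are assumptions for illustration, not FedDQ's actual schedule.

```python
import numpy as np

def bits_for_range(update_range, target_step=1e-3, max_bits=16, min_bits=2):
    """Smallest bit-width whose quantization step is at most target_step
    for the given update range. As the range shrinks over training, the
    returned bit-width descends."""
    bits = min_bits
    while update_range / (2**bits - 1) > target_step and bits < max_bits:
        bits += 1
    return bits

def quantize(update, num_bits, rng=None):
    """Uniform stochastic quantization of a model update onto
    2**num_bits - 1 evenly spaced levels between its min and max."""
    rng = rng or np.random.default_rng(0)
    lo, hi = float(update.min()), float(update.max())
    step = (hi - lo) / (2**num_bits - 1) if hi > lo else 1.0
    normalized = (update - lo) / step
    floor = np.floor(normalized)
    # Stochastic rounding keeps the quantizer unbiased in expectation.
    q = floor + (rng.random(update.shape) < (normalized - floor))
    return q * step + lo

# As the update range shrinks over training, fewer bits suffice:
for r in (1.0, 0.3, 0.05):
    print(r, bits_for_range(r))
```

With these assumed thresholds, a range of 1.0 needs 10 bits, 0.3 needs 9, and 0.05 only 6 — the descending trend the paper argues for, with the per-element error always bounded by one quantization step.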