Rethinking Federated Graph Learning: A Data Condensation Perspective
Zhang, Hao, Li, Xunkai, Zhu, Yinlin, Hu, Lianglin
Federated graph learning (FGL) is a widely recognized technique that enables collaborative training of graph neural networks (GNNs) across multi-client graphs. However, existing approaches rely heavily on communicating model parameters or gradients for federated optimization and fail to adequately address the data heterogeneity introduced by intricate and diverse graph distributions. Although some methods attempt to share additional messages between the server and clients to improve federated convergence, they introduce significant privacy risks and increase communication overhead. To address these issues, we introduce the condensed graph as a novel optimization carrier for FGL data heterogeneity and propose a new FGL paradigm called FedGM. Specifically, we utilize a generalized condensation-graph consensus to aggregate comprehensive knowledge from distributed graphs, while minimizing communication costs and privacy risks through a single transmission of the condensed data. Extensive experiments on six public datasets consistently demonstrate the superiority of FedGM over state-of-the-art baselines, highlighting its potential as a novel FGL paradigm.
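The abstract specifies only the protocol shape (each client uploads condensed data once; the server aggregates it), not the condensation objective itself. A minimal sketch of that single-transmission protocol, using class-wise prototype averaging as a hypothetical stand-in for graph condensation (all names, sizes, and the `condense` heuristic are illustrative, not FedGM's actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

def condense(X, y, n_classes, per_class=2):
    """Stand-in for graph condensation: average class-wise chunks of node
    features into a few synthetic nodes per class. FedGM's real objective
    (generalized condensation-graph consensus) is not reproduced here."""
    feats, labels = [], []
    for c in range(n_classes):
        Xc = X[y == c]
        if len(Xc) == 0:
            continue
        # split the class into chunks and average each -> synthetic nodes
        for chunk in np.array_split(Xc, min(per_class, len(Xc))):
            feats.append(chunk.mean(axis=0))
            labels.append(c)
    return np.stack(feats), np.array(labels)

# Three clients with heterogeneous label distributions (toy node features)
clients = []
for skew in (0.8, 0.5, 0.2):
    y = (rng.random(60) < skew).astype(int)
    X = rng.normal(loc=y[:, None] * 2.0, scale=1.0, size=(60, 4))
    clients.append((X, y))

# One round, one upload: each client sends only its condensed data
parts = [condense(X, y, n_classes=2) for X, y in clients]
server_X = np.vstack([p[0] for p in parts])
server_y = np.concatenate([p[1] for p in parts])

# The server-side training set is far smaller than the union of client graphs
print(server_X.shape, server_y.shape)
```

Because only the small synthetic set crosses the network, and only once, both communication cost and exposure of raw node features are bounded by construction, which is the point the abstract makes.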
On the Role of Server Momentum in Federated Learning
Sun, Jianhui, Wu, Xidong, Huang, Heng, Zhang, Aidong
Federated Averaging (FedAvg) is known to suffer convergence issues under significant client system heterogeneity and data heterogeneity. Server momentum has been proposed as an effective mitigation. However, existing server momentum works are restrictive in the momentum formulation, do not properly schedule hyperparameters, and focus only on system-homogeneous settings, leaving the role of server momentum under-explored. In this paper, we propose a general framework for server momentum that (a) covers a large class of momentum schemes unexplored in federated learning (FL), (b) enables a popular stagewise hyperparameter scheduler, and (c) allows heterogeneous and asynchronous local computing. We provide rigorous convergence analysis for the proposed framework. To the best of our knowledge, this is the first work to thoroughly analyze the performance of server momentum with a hyperparameter scheduler and system heterogeneity. Extensive experiments validate the effectiveness of the proposed framework.
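The abstract does not give the paper's general momentum formulation; a minimal sketch of the simplest instance, heavy-ball server momentum applied to the FedAvg pseudo-gradient (FedAvgM-style), on a toy quadratic problem with heterogeneous clients (all constants and the quadratic model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def local_sgd(w, data, lr=0.1, steps=5):
    """Client update: a few SGD steps on the quadratic loss ||Xw - y||^2."""
    X, y = data
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Heterogeneous clients: each local optimum is shifted differently
dim, n_clients = 3, 4
w_true = np.ones(dim)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(20, dim))
    y = X @ (w_true + 0.3 * rng.normal(size=dim))  # per-client drift
    clients.append((X, y))

w = np.zeros(dim)
momentum = np.zeros(dim)
beta, server_lr = 0.9, 1.0

for _ in range(50):
    # Pseudo-gradient: average of (global weights - returned local weights)
    delta = np.mean([w - local_sgd(w.copy(), d) for d in clients], axis=0)
    momentum = beta * momentum + delta   # heavy-ball accumulation on server
    w = w - server_lr * momentum         # server applies the momentum buffer

print(np.round(w, 2))  # close to the (drift-averaged) true weights
```

The momentum buffer lives only on the server, so clients run unmodified local SGD; the paper's framework generalizes this single scheme to a broader momentum class with scheduled hyperparameters and asynchronous local computation.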