Sageflow: Robust Federated Learning against Both Stragglers and Adversaries (Supplementary Material)

Neural Information Processing Systems

For RFA of [5], the maximum number of iterations is set to 10. In this setup, the learning rate is decayed for all three schemes (Sageflow, RFA, FedAvg). The number of poisoned images in a batch is 20, and we do not decay the learning rate here. Figure 1 shows the performance under the no-scaled backdoor attack with only adversaries (no stragglers). The loss associated with a poisoned device increases as the scale factor grows from 0.1 to 10.
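The scale factor mentioned above can be read as the standard model-replacement form of poisoning, where the adversary amplifies its deviation from the global model. A minimal sketch under that assumption (the function name and exact form are illustrative, not the paper's code):

```python
def scaled_poison(global_params, local_params, scale):
    """Model poisoning sketch: push the adversary's update away from the
    global model, amplified by a scale factor (0.1 to 10 in the experiments).
    Assumed form: w_adv = w_global + scale * (w_local - w_global)."""
    return [g + scale * (l - g) for g, l in zip(global_params, local_params)]
```

A larger scale factor makes the submitted model deviate further from the global model, which is why the loss measured on a poisoned device grows with the scale.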



Sageflow: Robust Federated Learning against Both Stragglers and Adversaries

Neural Information Processing Systems

While federated learning (FL) allows efficient model training with local data at edge devices, among major issues still to be resolved are: slow devices known as stragglers and malicious attacks launched by adversaries. While the presence of both of these issues raises serious concerns in practical FL systems, no known schemes or combinations of schemes effectively address them at the same time. We propose Sageflow, staleness-aware grouping with entropy-based filtering and loss-weighted averaging, to handle both stragglers and adversaries simultaneously. Model grouping and weighting according to staleness (arrival delay) provides robustness against stragglers, while entropy-based filtering and loss-weighted averaging, working in a highly complementary fashion at each grouping stage, counter a wide range of adversary attacks. A theoretical bound is established to provide key insights into the convergence behavior of Sageflow. Extensive experimental results show that Sageflow outperforms various existing methods aiming to handle stragglers/adversaries.
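The abstract describes two complementary defenses applied within each staleness group: entropy-based filtering followed by loss-weighted averaging. A minimal sketch of how they might compose, assuming the server holds a small public dataset and using hypothetical hyperparameters (an entropy threshold and a loss exponent); this is an illustration of the idea, not the authors' implementation:

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Average Shannon entropy of a batch of softmax outputs.
    Poisoned models tend to produce high-entropy (unconfident) predictions
    on clean public data."""
    return float(-np.mean(np.sum(probs * np.log(probs + eps), axis=1)))

def aggregate(updates, probs_on_public, losses, ent_threshold=1.0, delta=1.0):
    """Entropy-based filtering, then loss-weighted averaging (sketch).

    updates: list of flattened model parameter arrays from one staleness group
    probs_on_public: per-model softmax outputs on the server's public dataset
    losses: per-model loss on the public dataset
    ent_threshold, delta: assumed hyperparameters, not the paper's values
    """
    # Filter out models whose public-data predictions are too uncertain.
    kept = [i for i in range(len(updates))
            if entropy(probs_on_public[i]) < ent_threshold]
    if not kept:
        return None  # no model survives the filter this round
    # Lower public loss -> larger aggregation weight (inverse-loss weighting).
    weights = np.array([1.0 / (losses[i] ** delta) for i in kept])
    weights /= weights.sum()
    return sum(w * updates[i] for w, i in zip(weights, kept))
```

The two stages are complementary: filtering removes updates that are obviously poisoned, while loss weighting down-weights subtler attacks that slip past the entropy check.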


Sageflow: Robust Federated Learning against Both Stragglers and Adversaries (Supplementary Material)

Neural Information Processing Systems

The hyperparameter settings for Sageflow are shown in Tables 1 and 2. Backdoor attack: the hyperparameter details are shown in Table 4 (hyperparameters for Sageflow with both stragglers and adversaries under the backdoor attack: dataset, γ, λ, δ, E); we specify these values in Table 5. The local batch size is set to 64. Figure 1 shows the performance under the no-scaled backdoor attack with only adversaries (no stragglers), as well as the case with both stragglers and adversaries. Some additional experiments were conducted under model poisoning with scale factor 10. The loss associated with a poisoned device increases as the scale factor grows from 0.1 to 10. Not only Sageflow but also Zeno+ can effectively defend against the attacks with only adversaries.



AEDFL: Efficient Asynchronous Decentralized Federated Learning with Heterogeneous Devices

Liu, Ji, Che, Tianshi, Zhou, Yang, Jin, Ruoming, Dai, Huaiyu, Dou, Dejing, Valduriez, Patrick

arXiv.org Artificial Intelligence

Federated Learning (FL) has made significant progress recently, enabling collaborative model training on distributed data over edge devices. Iterative gradient or model exchanges between devices and the centralized server in the standard FL paradigm suffer from severe efficiency bottlenecks on the server. While enabling collaborative training without a central server, existing decentralized FL approaches either focus on the synchronous mechanism that deteriorates FL convergence or ignore device staleness with an asynchronous mechanism, resulting in inferior FL accuracy. In this paper, we propose an Asynchronous Efficient Decentralized FL framework, i.e., AEDFL, in heterogeneous environments with three unique contributions. First, we propose an asynchronous FL system model with an efficient model aggregation method for improving the FL convergence. Second, we propose a dynamic staleness-aware model update approach to achieve superior accuracy. Third, we propose an adaptive sparse training method to reduce communication and computation costs without significant accuracy degradation. Extensive experimentation on four public datasets and four models demonstrates the strength of AEDFL in terms of accuracy (up to 16.3% higher), efficiency (up to 92.9% faster), and computation costs (up to 42.3% lower).
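A staleness-aware model update of the kind the abstract describes can be sketched as follows. The polynomial decay form and the hyperparameter `lam` are illustrative assumptions, not AEDFL's actual rule; the point is only that staler models receive smaller mixing weights:

```python
def staleness_weight(tau, lam=0.5):
    """Down-weight a model that is tau rounds stale.
    Polynomial decay with assumed hyperparameter lam (illustrative only)."""
    return (tau + 1) ** (-lam)

def staleness_aware_update(current_model, incoming_model, tau, lam=0.5):
    """Mix an incoming (possibly stale) model into the current model.

    current_model, incoming_model: lists of parameters
    tau: staleness, i.e. rounds elapsed since the sender pulled the model
    """
    alpha = staleness_weight(tau, lam)
    return [(1 - alpha) * c + alpha * m
            for c, m in zip(current_model, incoming_model)]
```

With tau = 0 the incoming model is fresh and fully replaces the mixture (alpha = 1); as tau grows, its influence decays, which is what prevents very stale devices from dragging accuracy down in an asynchronous setting.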