HASFL: Heterogeneity-aware Split Federated Learning over Edge Computing Systems

Lin, Zheng, Chen, Zhe, Chen, Xianhao, Ni, Wei, Gao, Yue

arXiv.org Artificial Intelligence 

Abstract: Split federated learning (SFL) has emerged as a promising paradigm to democratize machine learning (ML) on edge devices by enabling layer-wise model partitioning. However, existing SFL approaches suffer significantly from the straggler effect due to the heterogeneous capabilities of edge devices. To address this fundamental challenge, we propose adaptively controlling batch sizes (BSs) and model splitting (MS) for edge devices to overcome resource heterogeneity. We first derive a tight convergence bound of SFL that quantifies the impact of varied BSs and MS on learning performance. Based on the convergence bound, we propose HASFL, a heterogeneity-aware SFL framework capable of adaptively controlling BS and MS to balance communication-computing latency and training convergence in heterogeneous edge networks. Extensive experiments with various datasets validate the effectiveness of HASFL and demonstrate its superiority over state-of-the-art benchmarks.

Conventional machine learning (ML) frameworks predominantly rely on centralized learning (CL), where raw data is gathered and processed at a central server for model training. However, CL is often impractical due to its high communication latency, increased backbone traffic, and privacy risks [1]-[4]. To address these limitations, federated learning (FL) [5], [6] has emerged as a promising alternative that allows participating devices to collaboratively train a shared model by exchanging model parameters (e.g., gradients) rather than raw data, thereby protecting data privacy and reducing communication costs [7], [8]. Despite its advantages, the on-device training required by FL poses a significant deployment challenge for resource-constrained edge devices as ML models scale up [9], [10].
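The layer-wise model partitioning at the heart of SFL can be illustrated with a minimal sketch: a model is cut at some layer index, the client runs the front portion and sends the intermediate activations ("smashed data") to the server, which completes the forward pass. All function names and the toy "layers" below are illustrative assumptions, not the HASFL implementation; in HASFL the cut layer (MS) and batch size (BS) would be chosen per device based on its resources.

```python
# Minimal sketch of layer-wise model splitting in split federated learning (SFL).
# Names and toy layers are illustrative, not from the HASFL paper.

def split_model(layers, cut_layer):
    """Partition an ordered list of layers at cut_layer:
    the client keeps layers [0, cut_layer); the server runs the rest."""
    return layers[:cut_layer], layers[cut_layer:]

def forward(layers, x):
    """Apply each layer (here: a plain function) in sequence."""
    for f in layers:
        x = f(x)
    return x

# Toy "layers": simple arithmetic stand-ins for neural-network layers.
model = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

# A weaker device might pick a shallower cut (less on-device compute),
# at the cost of transmitting activations to the server earlier.
client_part, server_part = split_model(model, cut_layer=1)

smashed = forward(client_part, 5)       # client-side forward -> "smashed data"
output = forward(server_part, smashed)  # server completes the forward pass

assert output == forward(model, 5)      # splitting preserves the computation
```

The same idea applies per device: a heterogeneity-aware controller would assign each device its own `cut_layer` and batch size so that fast and slow devices finish their local computation at comparable times, mitigating stragglers.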
