When Computing Power Network Meets Distributed Machine Learning: An Efficient Federated Split Learning Framework
Yuan, Xinjing, Pu, Lingjun, Jiao, Lei, Wang, Xiaofei, Yang, Meijuan, Xu, Jingdong
arXiv.org Artificial Intelligence
In this paper, we advocate CPN-FedSL, a novel and flexible Federated Split Learning (FedSL) framework over Computing Power Network (CPN). We build a dedicated model to capture the basic settings and learning characteristics (e.g., training flow, latency and convergence). Based on this model, we introduce Resource Usage Effectiveness (RUE), a novel performance metric integrating training utility with system cost, and formulate a multivariate scheduling problem that maximizes RUE by comprehensively taking client admission, model partition, server selection, routing and bandwidth allocation into account (i.e., mixed-integer fractional programming). We design Refinery, an efficient approach that first linearizes the fractional objective and non-convex constraints, and then solves the transformed problem via a greedy-based rounding algorithm in multiple iterations. Extensive evaluations corroborate that CPN-FedSL is superior to the standard and state-of-the-art learning frameworks (e.g., FedAvg and SplitFed), and that Refinery is lightweight and significantly outperforms its variants and de facto heuristic methods under a variety of settings.
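To make the "linearize the fractional objective, then round greedily" idea concrete, here is a minimal, hypothetical sketch (not the paper's actual Refinery algorithm): a Dinkelbach-style parametric linearization of a toy ratio objective max U(x)/C(x) over binary client-admission variables, with a greedy selection step standing in for the rounding phase. All utilities, costs, weights and the capacity model are invented for illustration.

```python
# Toy sketch of linearizing a fractional objective (Dinkelbach-style)
# plus a greedy selection step. Hypothetical numbers/model throughout;
# this is NOT the paper's Refinery, only an illustration of the pattern.

def greedy_subproblem(util, cost, weight, cap, lam):
    """Greedily pick clients maximizing util[i] - lam*cost[i] under a capacity cap."""
    order = sorted(range(len(util)),
                   key=lambda i: util[i] - lam * cost[i], reverse=True)
    x, used = [0] * len(util), 0.0
    for i in order:
        if util[i] - lam * cost[i] <= 0:
            break  # remaining clients only hurt the linearized objective
        if used + weight[i] <= cap:
            x[i] = 1
            used += weight[i]
    return x

def dinkelbach(util, cost, weight, cap, iters=20, tol=1e-9):
    """Iterate: solve the linearized problem max U(x) - lam*C(x), then update lam = U(x)/C(x)."""
    lam, x = 0.0, [0] * len(util)
    for _ in range(iters):
        x = greedy_subproblem(util, cost, weight, cap, lam)
        u = sum(util[i] for i in range(len(util)) if x[i])
        c = 1.0 + sum(cost[i] for i in range(len(util)) if x[i])  # 1.0 = fixed system cost
        new_lam = u / c
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return x, lam
```

For example, with `util=[5, 4, 3]`, `cost=[1, 1, 5]`, unit weights and capacity 2, the iteration admits the first two clients and converges to a ratio of 3.0. In the paper's setting, the linearized subproblem would jointly cover partition, server selection, routing and bandwidth rather than a simple capacity constraint.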
May-22-2023