Resource Utilization Optimized Federated Learning
Zhang, Zihan, Wong, Leon, Varghese, Blesson
–arXiv.org Artificial Intelligence
Zihan Zhang, University of St Andrews, UK; Leon Wong, Rakuten Mobile, Inc., Japan; Blesson Varghese, University of St Andrews, UK

Abstract--Federated learning (FL) systems facilitate distributed machine learning across a server and multiple devices. However, FL systems have low resource utilization, limiting their practical use in the real world. This inefficiency primarily arises from two types of idle time: (i) task dependency between the server and devices, and (ii) stragglers among heterogeneous devices. This paper introduces FedOptima, a resource-optimized FL system designed to minimize both types of idle time simultaneously; existing systems do not eliminate or reduce both at the same time. First, devices operate independently of each other using asynchronous aggregation to eliminate straggler effects, and independently of the server by utilizing auxiliary networks to minimize idle time caused by task dependency. Second, the server performs centralized training using a task scheduler that ensures balanced contributions from all devices, improving model accuracy. Four state-of-the-art offloading-based and asynchronous FL methods are chosen as baselines. Experimental results show that, compared to the best results of the baselines on convolutional neural networks and transformers on multiple lab-based testbeds, FedOptima (i) achieves higher or comparable accuracy, (ii) accelerates training by 1.9x to 21.8x, (iii) reduces server and device idle time by up to 93.9% and 81.8%, respectively, and (iv) increases throughput by 1.1x to 2.0x.

Index Terms--federated learning, distributed system, resource utilization, idle time, edge computing

I. INTRODUCTION

Federated learning (FL) [1]-[3] offers distributed training across user devices as an alternative to traditional centralized machine learning. Devices train a deep neural network (DNN) on their data and send model parameters to the server.
The server aggregates these into a global model, which is then distributed to the devices for the next round. Thus, FL utilizes insight from user data via local models to train a global model without sharing the original data.

Sub-optimal resource utilization is a critical problem in FL that results in two types of idle time on the server and devices (see Section II-A). The first is due to task dependency between the server and devices: the server is idle for considerable periods when aggregating local models, since it must wait for on-device training, which is usually time-consuming, to complete. The second is due to hardware heterogeneity: stragglers, or slower devices, require more time to train, while faster devices idle as they wait for the stragglers. Two categories of methods are considered in the existing literature for reducing idle time.
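The round structure described above (local training followed by server-side aggregation) can be sketched as follows. This is a minimal illustration, not FedOptima's actual implementation: the function names and the staleness-discounted asynchronous update rule are assumptions introduced here to contrast synchronous aggregation, where a straggler stalls the whole round, with asynchronous aggregation, where each device's update is merged on arrival.

```python
import numpy as np

def fedavg_aggregate(local_models, sample_counts):
    """Synchronous aggregation: weighted average of local model
    parameters. The server must wait for *all* devices, so one
    straggler stalls the entire round -- the idle time at issue."""
    weights = np.array(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, local_models))

def async_update(global_model, device_model, staleness, base_lr=0.5):
    """Asynchronous aggregation (illustrative): merge one device's
    model as soon as it arrives, discounted by its staleness, so
    fast devices never wait for slow ones."""
    alpha = base_lr / (1.0 + staleness)  # assumed staleness discount
    return (1.0 - alpha) * global_model + alpha * device_model

# Synchronous round over two devices with 1 and 3 local samples
g = fedavg_aggregate([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [1, 3])

# Asynchronous merge of a fresh (staleness 0) device update
g2 = async_update(np.array([0.0]), np.array([1.0]), staleness=0)
```

With sample counts 1 and 3 the weights are 0.25 and 0.75, so the synchronous aggregate above is [2.5, 3.5]; the asynchronous merge with staleness 0 moves the global parameter halfway toward the device's.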
Apr-22-2025