FedTrans: Efficient Federated Learning via Multi-Model Transformation

Yuxuan Zhu, Jiachen Liu, Mosharaf Chowdhury, Fan Lai

arXiv.org Artificial Intelligence 

Federated learning (FL) aims to train machine learning (ML) models across potentially millions of edge client devices. Yet, training and customizing models for FL clients is notoriously challenging due to the heterogeneity of client data, device capabilities, and the massive scale of clients, making individualized model exploration prohibitively expensive. State-of-the-art FL solutions personalize a globally trained model or concurrently train multiple models, but they often incur suboptimal model accuracy and huge training costs. In this paper, we introduce FedTrans, a multi-model FL training framework that automatically produces and trains high-accuracy, hardware-compatible models for individual clients at scale. FedTrans begins with a basic global model, identifies accuracy bottlenecks in model architectures during training, and then employs model transformation to derive new models for heterogeneous clients on the fly. It judiciously assigns models to individual clients while performing soft aggregation on multi-model updates to minimize total training costs. Our evaluations using realistic settings show that FedTrans improves individual client model accuracy by 14% - 72% while slashing training costs by 1.6x - 20x over state-of-the-art solutions.

Federated learning (FL) is an emerging machine learning (ML) paradigm that trains ML models across potentially millions of clients (e.g., smartphones) over hundreds of … First, the heterogeneous capabilities of client devices, such as communication and computation, necessitate FL models with different complexities aligned to clients' hardware for better user experience (e.g., model training and inference latency).
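To make the training loop above concrete, the following is a minimal sketch of the soft-aggregation idea: each client's update contributes to every derived model in proportion to a soft assignment weight, rather than to a single hard-assigned model. It assumes all derived models share one flat parameter vector of equal size; the function and variable names (soft_aggregate, weights, etc.) are hypothetical illustrations, not FedTrans's actual API, and the paper's transformation and assignment policies are more involved.

    # Minimal sketch of soft aggregation across multiple models.
    # Assumption: every derived model shares one flat parameter
    # vector of equal length; names are illustrative, not FedTrans's.
    from typing import Dict, List
    import numpy as np

    def soft_aggregate(
        models: Dict[str, np.ndarray],    # model id -> current parameters
        updates: List[np.ndarray],        # one parameter update per client
        weights: List[Dict[str, float]],  # per-client soft assignment over models
    ) -> Dict[str, np.ndarray]:
        new_models = {}
        for mid, params in models.items():
            total = sum(w.get(mid, 0.0) for w in weights)
            if total == 0.0:
                new_models[mid] = params  # no client was (softly) assigned here
                continue
            # Weighted average of client updates, scaled by assignment scores.
            blended = sum(w.get(mid, 0.0) * u for u, w in zip(updates, weights))
            new_models[mid] = params + blended / total
        return new_models

    # Example: two derived models, two clients with different soft assignments.
    models = {"base": np.zeros(4), "wide": np.zeros(4)}
    updates = [np.ones(4), 2.0 * np.ones(4)]
    weights = [{"base": 0.8, "wide": 0.2}, {"base": 0.3, "wide": 0.7}]
    print(soft_aggregate(models, updates, weights))

Under these assumptions, a client softly assigned 0.8 to the base model and 0.2 to the wide model pushes most of its update into the former, which is one plausible way to share client work across concurrently trained models instead of training each model on disjoint client pools.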
