Mix2FLD: Downlink Federated Learning After Uplink Federated Distillation With Two-Way Mixup

Seungeun Oh, Jihong Park, Eunjeong Jeong, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim

arXiv.org Machine Learning 

Abstract--This letter proposes a novel communication-efficient and privacy-preserving distributed machine learning framework, coined Mix2FLD. To address uplink-downlink capacity asymmetry, local model outputs are uploaded to a server in the uplink as in federated distillation (FD), whereas global model parameters are downloaded in the downlink as in federated learning (FL). This requires a model output-to-parameter conversion at the server, after collecting additional data samples from devices.

Index Terms--Distributed machine learning, on-device learning, federated learning, federated distillation, uplink-downlink asymmetry.

Federated learning (FL) is a compelling solution that collectively trains on-device ML models using their local private data [2], [3]. As depicted in Figure 1, Mix2FLD is built upon two key algorithms: federated learning after distillation (FLD) [8] and Mixup.
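The details of the two-way Mixup are not given in this excerpt, but the Mixup building block it refers to is standard: each synthetic sample is a convex combination of two training samples and of their one-hot labels, with the mixing ratio drawn from a Beta distribution. Below is a minimal sketch of that vanilla Mixup step; the function name `mixup` and the `alpha` default are illustrative choices, not taken from the paper.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Vanilla Mixup: return a convex combination of two samples
    (x1, x2) and of their one-hot labels (y1, y2).

    The mixing ratio lam is drawn from Beta(alpha, alpha), as in the
    original Mixup formulation; alpha=1.0 gives a uniform ratio.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2   # mixed input sample
    y = lam * y1 + (1.0 - lam) * y2   # mixed (soft) label
    return x, y, lam
```

Because the output reveals neither raw sample exactly, such mixed samples are a natural candidate for the "additional data samples" that devices share with the server for the output-to-parameter conversion, though the exact mixing protocol used by Mix2FLD is not specified in this excerpt.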
