FedFNN: Faster Training Convergence Through Update Predictions in Federated Recommender Systems

Fabbri, Francesco, Liu, Xianghang, McKenzie, Jack R., Twardowski, Bartlomiej, Wijaya, Tri Kurniawan

arXiv.org Artificial Intelligence

Federated Learning (FL) has emerged as a key approach for distributed machine learning, enhancing online personalization while ensuring user data privacy. Instead of sending private data to a central server as in traditional approaches, FL decentralizes computation: devices train locally and share updates with a global server. A primary challenge in this setting is achieving fast and accurate model training, which is vital for recommender systems where delays can compromise user engagement. This paper introduces FedFNN, an algorithm that accelerates decentralized model training. In FL, only a subset of users is involved in each training epoch. FedFNN employs supervised learning to predict the weight updates of unsampled users from the updates of the sampled set. Our evaluations, using real and synthetic data, show that: (i) FedFNN trains up to 5x faster than leading methods while maintaining or improving accuracy; (ii) its performance is consistent across variations in client clustering; and (iii) FedFNN outperforms other methods in scenarios with limited client availability, converging more quickly.
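The core idea, predicting the updates of unsampled clients from those of the sampled ones, can be sketched as follows. This is a minimal toy illustration, not the paper's actual method: it assumes each client has a feature embedding and uses a plain least-squares fit as the supervised predictor; the embeddings, dimensions, and synthetic "true updates" are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, emb_dim, n_params = 20, 4, 8

# Hypothetical per-client embeddings (e.g., from usage statistics).
client_emb = rng.normal(size=(n_clients, emb_dim))

# In one round, only a subset of clients train locally and report updates.
sampled = rng.choice(n_clients, size=5, replace=False)
unsampled = np.setdiff1d(np.arange(n_clients), sampled)
true_updates = client_emb @ rng.normal(size=(emb_dim, n_params))  # toy ground truth
observed = true_updates[sampled]

# Supervised step: fit a map from embeddings to updates on the sampled set.
W, *_ = np.linalg.lstsq(client_emb[sampled], observed, rcond=None)

# Predict the updates of unsampled clients instead of waiting for them.
predicted = client_emb[unsampled] @ W

# Aggregate observed and predicted updates into one global update (FedAvg-style mean).
global_update = np.concatenate([observed, predicted]).mean(axis=0)
print(global_update.shape)  # (8,)
```

The speedup comes from the aggregation step: the server can form a full-population update every round without waiting for all clients to be sampled.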


Federated Fuzzy Neural Network with Evolutionary Rule Learning

Zhang, Leijie, Shi, Ye, Chang, Yu-Cheng, Lin, Chin-Teng

arXiv.org Artificial Intelligence

Distributed fuzzy neural networks (DFNNs) have attracted increasing attention recently due to their learning abilities in handling data uncertainties in distributed scenarios. However, it is challenging for DFNNs to handle cases in which the local data are non-independent and identically distributed (non-IID). In this paper, we propose a federated fuzzy neural network (FedFNN) with evolutionary rule learning (ERL) to cope with non-IID issues as well as data uncertainties. The FedFNN maintains a global set of rules in a server and a personalized subset of these rules for each local client. ERL is inspired by the theory of biological evolution; it encourages rule variations while activating superior rules and deactivating inferior rules for local clients with non-IID data. Specifically, ERL consists of two stages in an iterative procedure: a rule cooperation stage that updates global rules by aggregating local rules based on their activation statuses and a rule evolution stage that evolves the global rules and updates the activation statuses of the local rules. This procedure improves both the generalization and personalization of the FedFNN for dealing with non-IID issues and data uncertainties. Extensive experiments conducted on a range of datasets demonstrate the superiority of the FedFNN over state-of-the-art methods.
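The iterative two-stage procedure described above can be sketched schematically. This is a hand-wavy toy, not the paper's ERL: rules are reduced to parameter vectors, local training is a random nudge, and rule "fitness" is proxied by how many clients activate a rule; all names and constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clients, n_rules, rule_dim = 4, 6, 3

# Global rule parameters on the server; each client keeps a personalized
# activation mask selecting the subset of rules it actually uses.
global_rules = rng.normal(size=(n_rules, rule_dim))
active = rng.random((n_clients, n_rules)) > 0.3  # hypothetical activation statuses

def local_train(rules, mask):
    # Stand-in for local training: nudge only the client's active rules.
    return rules + 0.1 * rng.normal(size=rules.shape) * mask[:, None]

for _ in range(3):
    local_rules = [local_train(global_rules, active[c]) for c in range(n_clients)]

    # Rule cooperation: aggregate each rule only over the clients that activated it.
    counts = active.sum(axis=0)
    agg = np.zeros_like(global_rules)
    for c in range(n_clients):
        agg += local_rules[c] * active[c][:, None]
    global_rules = np.where(counts[:, None] > 0,
                            agg / np.maximum(counts, 1)[:, None],
                            global_rules)

    # Rule evolution: perturb rarely activated ("inferior") rules, then let
    # clients re-evaluate their activations (random proxy for local fitness).
    inferior = counts < n_clients / 2
    global_rules[inferior] += 0.5 * rng.normal(size=(inferior.sum(), rule_dim))
    active = rng.random((n_clients, n_rules)) > 0.3

print(global_rules.shape)  # (6, 3)
```

The interplay of the two stages is what handles non-IID data in the paper: cooperation shares what works across clients, while evolution keeps exploring rule variants that individual clients may activate for their own distributions.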