fedprox



On Convergence of FedProx: Local Dissimilarity Invariant Bounds, Non-smoothness and Beyond

Neural Information Processing Systems

Several popularly used FL algorithms for this setting include FedAvg (McMahan et al., 2017) and FedProx (Li et al., 2020b). We analyze its convergence behavior, expose problems, and propose alternatives more suitable for scaling up and generalization.




Neural Information Processing Systems

A central challenge in training classification models in real-world federated systems is learning with non-IID data. To cope with this, most existing works either enforce regularization in the local optimization or improve the model aggregation scheme at the server.


Federated Learning for the Design of Parametric Insurance Indices under Heterogeneous Renewable Production Losses

Niakh, Fallou

arXiv.org Machine Learning

We propose a federated learning framework for the calibration of parametric insurance indices under heterogeneous renewable energy production losses. Producers locally model their losses using Tweedie generalized linear models and private data, while a common index is learned through federated optimization without sharing raw observations. The approach accommodates heterogeneity in variance and link functions and directly minimizes a global deviance objective in a distributed setting. We implement and compare FedAvg, FedProx and FedOpt, and benchmark them against an existing approximation-based aggregation method. An empirical application to solar power production in Germany shows that federated learning recovers comparable index coefficients under moderate heterogeneity, while providing a more general and scalable framework.
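The distributed minimization of a global deviance objective described above can be illustrated with a small sketch, specializing to the Poisson case (Tweedie power p = 1) with a log link for concreteness. The function names and the FedSGD-style gradient aggregation are illustrative assumptions, not the paper's exact method; clients share only local gradients, never raw observations.

```python
import numpy as np

def local_deviance_grad(w, X, y):
    """Gradient of the local Poisson deviance (Tweedie power p = 1,
    log link) with respect to the index coefficients w: 2 * X^T (mu - y)."""
    mu = np.exp(X @ w)
    return 2.0 * X.T @ (mu - y)

def federated_deviance_fit(clients, dim, lr=0.1, rounds=500):
    """FedSGD-style minimization of the global deviance: each client
    sends its local gradient to the server, which aggregates them
    normalized by the total sample count before taking a step."""
    w = np.zeros(dim)
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        g = sum(local_deviance_grad(w, X, y) for X, y in clients)
        w -= lr * g / total
    return w
```

Because the deviance of each client is additive, the summed local gradients equal the gradient of the global deviance, so this recovers the centralized fit without pooling data.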


On Convergence of FedProx: Local Dissimilarity Invariant Bounds, Non-smoothness and Beyond

Neural Information Processing Systems

The FedProx algorithm is a simple yet powerful distributed proximal point optimization method widely used for federated learning (FL) over heterogeneous data. Despite its popularity and the remarkable success witnessed in practice, the theoretical understanding of FedProx is largely underinvestigated: its appealing convergence behavior has so far been characterized only under certain non-standard and unrealistic dissimilarity assumptions on the local functions, and the results are limited to smooth optimization problems. To remedy these deficiencies, we develop a novel local dissimilarity invariant convergence theory for FedProx and its minibatch stochastic extension through the lens of algorithmic stability. As a result, we derive several new and deeper insights into FedProx for non-convex federated optimization, including: 1) convergence guarantees invariant to certain stringent local dissimilarity conditions; 2) convergence guarantees for non-smooth FL problems; and 3) linear speedup with respect to the minibatch size and the number of sampled devices. Our theory reveals for the first time that local dissimilarity and smoothness are not must-haves for FedProx to attain favorable complexity bounds.
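The proximal local subproblem at the heart of FedProx can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: it assumes a least-squares local loss, inexact gradient-descent local solves, and sample-size-weighted server averaging, all of which are choices made here for concreteness.

```python
import numpy as np

def local_fedprox_step(w_global, X, y, mu=0.1, lr=0.05, steps=50):
    """Approximately solve the FedProx local subproblem
    min_w f_k(w) + (mu/2) * ||w - w_global||^2 by gradient descent,
    with f_k a mean-squared-error loss (illustrative choice)."""
    w = w_global.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + mu * (w - w_global)
        w -= lr * grad
    return w

def fedprox_round(w_global, clients, mu=0.1):
    """One communication round: each client inexactly solves its
    proximal subproblem; the server averages the returned models,
    weighted by local sample count."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_fedprox_step(w_global, X, y, mu=mu))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))
```

The proximal coefficient mu is what distinguishes FedProx from FedAvg: with mu = 0 the local step reduces to plain local SGD, while larger mu keeps heterogeneous clients from drifting far from the global iterate between rounds.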


Bringing Federated Learning to Space

Kim, Grace, Svoboda, Filip, Lane, Nicholas

arXiv.org Artificial Intelligence

As Low Earth Orbit (LEO) satellite constellations rapidly expand to hundreds and thousands of spacecraft, the need for distributed on-board machine learning becomes critical to address downlink bandwidth limitations. Federated learning (FL) offers a promising framework to conduct collaborative model training across satellite networks. Realizing its benefits in space naturally requires addressing space-specific constraints, from intermittent connectivity to dynamics imposed by orbital motion. This work presents the first systematic feasibility analysis of adapting off-the-shelf FL algorithms for satellite constellation deployment. We introduce a comprehensive "space-ification" framework that adapts terrestrial algorithms (FedAvg, FedProx, FedBuff) to operate under orbital constraints, producing an orbital-ready suite of FL algorithms. We then evaluate these space-ified methods through extensive parameter sweeps across 768 constellation configurations that vary cluster sizes (1-10), satellites per cluster (1-10), and ground station networks (1-13). Our analysis demonstrates that space-adapted FL algorithms efficiently scale to constellations of up to 100 satellites, achieving performance close to the centralized ideal. Multi-month training cycles can be reduced to days, corresponding to a 9X speedup through orbital scheduling and local coordination within satellite clusters. These results provide actionable insights for future mission designers, enabling distributed on-board learning for more autonomous, resilient, and data-driven satellite operations. Low Earth Orbit (LEO) satellite constellations are expanding rapidly, supporting applications in Earth observation (EO), telecommunications, and navigation. Large-scale constellations such as Planet Labs' Dove fleet, SpaceX's Starlink, and Amazon's Project Kuiper already consist of hundreds to thousands of spacecraft, representing some of the largest distributed systems ever deployed.
This unprecedented scale is driving a dramatic increase in the volume and diversity of space-based data. Earth observation missions in particular bear the brunt of this data challenge. High-resolution missions such as Landsat-8 produce 1.8 GB per scene and more than 400 TB annually [1]. At constellation scale, Planet Labs' fleet of over 200 satellites generates terabytes of imagery each day [2].



Privacy-Preserving Personalization in Education: A Federated Recommender System for Student Performance Prediction

Tertulino, Rodrigo, Almeida, Ricardo

arXiv.org Artificial Intelligence

The increasing digitalization of education presents unprecedented opportunities for data-driven personalization, but it also introduces significant challenges to student data privacy. Conventional recommender systems rely on centralized data, a paradigm often incompatible with modern data protection regulations. A novel privacy-preserving recommender system is proposed and evaluated to address this critical issue using Federated Learning (FL). The approach utilizes a Deep Neural Network (DNN) with rich, engineered features from the large-scale ASSISTments educational dataset. A rigorous comparative analysis of federated aggregation strategies was conducted, identifying FedProx as a significantly more stable and effective method for handling heterogeneous student data than the standard FedAvg baseline. The optimized federated model achieves a high-performance F1-Score of 76.28%, corresponding to 92% of the performance of a powerful, centralized XGBoost model. These findings validate that a federated approach can provide highly effective content recommendations without centralizing sensitive student data. Consequently, our work presents a viable and robust solution to the personalization-privacy dilemma in modern educational platforms.


MetaFed: Advancing Privacy, Performance, and Sustainability in Federated Metaverse Systems

Yagiz, Muhammet Anil, Cengiz, Zeynep Sude, Goktas, Polat

arXiv.org Artificial Intelligence

The rapid expansion of immersive Metaverse applications introduces complex challenges at the intersection of performance, privacy, and environmental sustainability. Centralized architectures fall short in addressing these demands, often resulting in elevated energy consumption, latency, and privacy concerns. This paper proposes MetaFed, a decentralized federated learning (FL) framework that enables sustainable and intelligent resource orchestration for Metaverse environments. MetaFed integrates (i) multi-agent reinforcement learning for dynamic client selection, (ii) privacy-preserving FL using homomorphic encryption, and (iii) carbon-aware scheduling aligned with renewable energy availability. Evaluations on MNIST and CIFAR-10 using lightweight ResNet architectures demonstrate that MetaFed achieves up to 25% reduction in carbon emissions compared to conventional approaches, while maintaining high accuracy and minimal communication overhead.