per-fedavg

Federated Learning in Open- and Closed-Loop EMG Decoding: A Privacy and Performance Perspective

Malcolm, Kai, Uribe, César, Yamagami, Momona

arXiv.org Artificial Intelligence

Invasive and non-invasive neural interfaces hold promise as high-bandwidth input devices for next-generation technologies. However, neural signals inherently encode sensitive information about an individual's identity and health, making data sharing for decoder training a critical privacy challenge. Federated learning (FL), a distributed, privacy-preserving learning framework, presents a promising solution, but it remains unexplored in closed-loop adaptive neural interfaces. Here, we introduce FL-based neural decoding and systematically evaluate its performance and privacy using high-dimensional electromyography signals in both open- and closed-loop scenarios. In open-loop simulations, FL significantly outperformed local learning baselines, demonstrating its potential for high-performance, privacy-conscious neural decoding. In contrast, closed-loop user studies required adapting FL methods to accommodate single-user, real-time interactions, a scenario not supported by standard FL. This modification resulted in local learning decoders surpassing the adapted FL approach in closed-loop performance, yet local learning still carried higher privacy risks. Our findings highlight a critical performance-privacy tradeoff in real-time adaptive applications and indicate the need for FL methods specifically designed for co-adaptive, single-user applications.
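The FL framework evaluated above rests on server-side averaging of locally trained models, as in the standard FedAvg algorithm. A minimal sketch of one such round follows; the linear model, learning rates, and synthetic client data are illustrative assumptions, not the paper's EMG decoding setup.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One FedAvg round: each client trains locally on private data,
    and the server averages the returned weights in proportion to
    local dataset size -- raw samples never leave the client."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.asarray(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# toy example: two clients whose data follow the same rule y = 2x
rng = np.random.default_rng(0)
clients = []
for n in (50, 100):
    X = rng.normal(size=(n, 1))
    clients.append((X, 2.0 * X[:, 0]))

w = np.zeros(1)
for _ in range(20):
    w = fedavg_round(w, clients)
```

After a few rounds the global model recovers the shared slope even though the server never sees either client's data, which is the privacy property the study leverages.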


A Generalized Meta Federated Learning Framework with Theoretical Convergence Guarantees

Jamali, Mohammad Vahid, Saber, Hamid, Bae, Jung Hyun

arXiv.org Artificial Intelligence

Meta federated learning (FL) is a personalized variant of FL in which multiple agents collaborate on training an initial shared model without exchanging raw data samples. The initial model should be trained so that current or new agents can easily adapt it to their local datasets after one or a few fine-tuning steps, thus improving model personalization. Conventional meta FL approaches minimize the average loss of agents on the local models obtained after one step of fine-tuning. In practice, agents may need to apply several fine-tuning steps to adapt the global model to their local data, especially under highly heterogeneous data distributions across agents. To this end, we present a generalized framework for meta FL that minimizes the average loss of agents on their local models after an arbitrary number ν of fine-tuning steps. For this generalized framework, we present a variant of the well-known federated averaging (FedAvg) algorithm and conduct a comprehensive theoretical convergence analysis to characterize the convergence speed as well as the behavior of the meta loss functions in both the exact and approximated cases. Our experiments on real-world datasets demonstrate superior accuracy and faster convergence for the proposed scheme compared to conventional approaches.
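The ν-step objective described above can be sketched with a first-order meta-gradient (as in first-order MAML variants); this is an illustrative assumption about the update rule, not the paper's exact algorithm, and the linear model and toy data are likewise hypothetical.

```python
import numpy as np

def grad(w, X, y):
    """Gradient of the mean-squared error of a linear model."""
    return X.T @ (X @ w - y) / len(y)

def adapt(w, X, y, alpha=0.05, nu=3):
    """nu fine-tuning steps on one agent's local data (inner loop)."""
    w = w.copy()
    for _ in range(nu):
        w -= alpha * grad(w, X, y)
    return w

def meta_fl_round(global_w, clients, beta=0.1, alpha=0.05, nu=3):
    """One communication round: each agent evaluates its gradient at the
    nu-step adapted model (a first-order meta-gradient approximation);
    the server averages these and takes an outer step."""
    meta_grads = [grad(adapt(global_w, X, y, alpha, nu), X, y)
                  for X, y in clients]
    return global_w - beta * np.mean(meta_grads, axis=0)

# two heterogeneous agents: y = a_i * x with different slopes a_i
rng = np.random.default_rng(1)
clients = []
for a in (1.0, 3.0):
    X = rng.normal(size=(200, 1))
    clients.append((X, a * X[:, 0]))

w = np.zeros(1)
for _ in range(100):
    w = meta_fl_round(w, clients)

# fine-tuning the meta-trained model should reduce each agent's loss
X0, y0 = clients[0]
loss_before = np.mean((X0 @ w - y0) ** 2)
loss_after = np.mean((X0 @ adapt(w, X0, y0) - y0) ** 2)
```

The meta-trained model lands between the agents' optima, so neither agent's loss is zero before adaptation; the point of the ν-step objective is that a few local fine-tuning steps then bring each agent close to its own optimum.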