Federated Reinforcement Learning in Heterogeneous Environments
arXiv.org Artificial Intelligence
Abstract--We investigate a Federated Reinforcement Learning with Environment Heterogeneity (FRL-EH) framework, where local environments exhibit statistical heterogeneity. Within this framework, agents collaboratively learn a global policy by aggregating their collective experiences while preserving the privacy of their local trajectories. To better reflect real-world scenarios, we introduce a robust FRL-EH framework by presenting a novel global objective function. This function is specifically designed to optimize a global policy that ensures robust performance across heterogeneous local environments and their plausible perturbations. We propose a tabular FRL algorithm named FedRQ and theoretically prove its asymptotic convergence to an optimal policy for the global objective function. Furthermore, we extend FedRQ to environments with continuous state spaces through the use of expectile loss, addressing the key challenge of minimizing a value function over a continuous subset of the state space.

Reinforcement Learning (RL) has demonstrated remarkable efficacy in tackling complex challenges across various domains, including gaming, robotics, intelligent networks, manufacturing, and finance [1]-[3]. However, the practical implementation of RL algorithms often encounters persistent obstacles, particularly the scarcity of training samples in large state and action spaces.
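As a rough illustration of the expectile-loss idea mentioned in the abstract: the standard expectile loss is the asymmetrically weighted squared error L_tau(u) = |tau - 1(u < 0)| * u^2. For a small tau, negative residuals are penalized far more than positive ones, so minimizing this loss pulls a function approximator toward a lower expectile of its targets, which serves as a smooth surrogate for a minimum over a continuous set of states. The sketch below is illustrative only; the function name and the choice of tau are our assumptions, not details from the paper.

```python
import numpy as np

def expectile_loss(diff, tau=0.1):
    """Asymmetric squared loss L_tau(u) = |tau - 1(u < 0)| * u^2.

    diff: residual (target - prediction), scalar or array.
    tau:  expectile level in (0, 1); tau < 0.5 down-weights
          positive residuals, so minimizing this loss biases the
          fit toward low targets (a smooth stand-in for a minimum).
    """
    # weight is (1 - tau) for negative residuals, tau for non-negative ones
    weight = np.where(diff < 0, 1.0 - tau, tau)
    return weight * diff ** 2
```

With tau = 0.5 this reduces to the usual (halved-weight) squared error; driving tau toward 0 makes the minimizer approach the pointwise minimum of the targets, which is how an expectile objective can replace an explicit min over a continuous subset of the state space.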
Jul-22-2025