TD-error





Learning to Explore in Diverse Reward Settings via Temporal-Difference-Error Maximization

Griesbach, Sebastian, D'Eramo, Carlo

arXiv.org Artificial Intelligence

Numerous heuristics and advanced approaches have been proposed for exploration in different settings for deep reinforcement learning. Noise-based exploration generally fares well with dense-shaped rewards, and bonus-based exploration with sparse rewards. However, these methods usually require additional tuning to deal with undesirable reward settings by adjusting hyperparameters and noise distributions. Rewards that actively discourage exploration, i.e., with an action cost and no other dense signal to follow, can pose a major challenge. We propose a novel exploration method, Stable Error-seeking Exploration (SEE), that is robust across dense, sparse, and exploration-adverse reward settings. To this end, we revisit the idea of maximizing the TD-error as a separate objective. Our method introduces three design choices to mitigate instability caused by far-off-policy learning, the conflict of interest of maximizing the cumulative TD-error in an episodic setting, and the non-stationary nature of TD-errors. SEE can be combined with off-policy algorithms without modifying the optimization pipeline of the original objective. In our experimental analysis, we show that a Soft Actor-Critic agent augmented with SEE performs robustly across three diverse reward settings in a variety of tasks without hyperparameter adjustments.
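To make the core idea concrete, the sketch below (an illustration only, not the authors' SEE implementation; q_net, target_q_net, and the batch layout are assumed) treats the magnitude of a critic's TD-error as the reward signal that a separate exploration objective maximizes:

```python
import torch

def td_error_reward(q_net, target_q_net, batch, gamma=0.99):
    """Return |TD-error| per transition, used as an intrinsic exploration signal."""
    s, a, r, s_next, done = batch  # tensors sampled from a replay buffer
    with torch.no_grad():
        next_q = target_q_net(s_next).max(dim=-1).values           # bootstrap value
        td_target = r + gamma * (1.0 - done) * next_q               # one-step target
        td_error = td_target - q_net(s).gather(-1, a.unsqueeze(-1)).squeeze(-1)
    return td_error.abs()  # magnitude of the TD-error as the exploration reward
```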





Deterministic Exploration via Stationary Bellman Error Maximization

Griesbach, Sebastian, D'Eramo, Carlo

arXiv.org Artificial Intelligence

Exploration is a crucial and distinctive aspect of reinforcement learning (RL) that remains a fundamental open problem. Several methods have been proposed to tackle this challenge. Commonly used methods inject random noise directly into the actions, indirectly via entropy maximization, or add intrinsic rewards that encourage the agent to steer toward novel regions of the state space. Another previously proposed idea is to use the Bellman error as a separate optimization objective for exploration. In this paper, we introduce three modifications to stabilize the latter and arrive at a deterministic exploration policy. Our separate exploration agent is informed about the state of the exploitation policy, thus enabling it to account for previous experiences. Further components are introduced to make the exploration objective agnostic toward the episode length and to mitigate instability introduced by far-off-policy learning. Our experimental results show that our approach can outperform $\varepsilon$-greedy in dense and sparse reward settings.
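A minimal sketch of the "Bellman error as a separate optimization objective" idea, assuming an auxiliary model error_net(s, a) that predicts the exploitation critic's Bellman error (a hypothetical component, not the paper's exact architecture): the deterministic exploration actor is updated by gradient ascent on that prediction.

```python
import torch

def exploration_actor_update(actor, error_net, states, optimizer):
    """One gradient-ascent step of a deterministic exploration actor on predicted Bellman error."""
    actions = actor(states)                    # deterministic exploration actions
    loss = -error_net(states, actions).mean()  # negate to ascend the predicted error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return -loss.item()
```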


DIFFER: Decomposing Individual Reward for Fair Experience Replay in Multi-Agent Reinforcement Learning

Hu, Xunhan, Zhao, Jian, Zhou, Wengang, Feng, Ruili, Li, Houqiang

arXiv.org Artificial Intelligence

Cooperative multi-agent reinforcement learning (MARL) is a challenging task, as agents must learn complex and diverse individual strategies from a shared team reward. However, existing methods struggle to distinguish and exploit important individual experiences, as they lack an effective way to decompose the team reward into individual rewards. To address this challenge, we propose DIFFER, a powerful theoretical framework for decomposing individual rewards to enable fair experience replay in MARL. By enforcing the invariance of network gradients, we establish a partial differential equation whose solution yields the underlying individual reward function. The individual TD-error can then be computed from the solved closed-form individual rewards, indicating the importance of each piece of experience in the learning task and guiding the training process. Our method elegantly achieves equivalence to the original learning framework when individual experiences are homogeneous, while adapting to achieve greater efficiency and fairness when diversity is observed. Our extensive experiments on popular benchmarks validate the effectiveness of our theory and method, demonstrating significant improvements in learning efficiency and fairness.
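As a rough illustration of how decomposed rewards could feed a fairness-aware replay buffer (a sketch only; DIFFER's closed-form decomposition is not reproduced here, and the array layout is assumed), per-agent TD-errors can be turned into replay priorities in the spirit of prioritized experience replay:

```python
import numpy as np

def individual_priorities(q_i, q_i_next, r_i, done, gamma=0.99, eps=1e-6):
    """q_i, q_i_next, r_i: arrays of shape (batch, n_agents); done: shape (batch,)."""
    td_target = r_i + gamma * (1.0 - done[:, None]) * q_i_next  # per-agent one-step target
    td_error = td_target - q_i                                   # per-agent TD-error
    return np.abs(td_error) + eps  # small eps keeps every experience sampleable
```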


DeepADMR: A Deep Learning based Anomaly Detection for MANET Routing

Yahja, Alex, Kaviani, Saeed, Ryu, Bo, Kim, Jae H., Larson, Kevin A.

arXiv.org Artificial Intelligence

We developed DeepADMR, a novel neural anomaly detector for the deep reinforcement learning (DRL)-based DeepCQ+ MANET routing policy. The performance of DRL-based algorithms such as DeepCQ+ is only verified within the trained and tested environments, hence their deployment in the tactical domain carries high risk. DeepADMR monitors unexpected behavior of the DeepCQ+ policy in real time based on temporal-difference errors (TD-errors) and detects anomalous scenarios with empirical and non-parametric cumulative-sum (CUSUM) statistics. The DeepCQ+ design, based on multi-agent weight-sharing proximal policy optimization (PPO), is slightly modified to enable real-time estimation of the TD-errors. We report DeepADMR's performance in the presence of channel disruptions, high mobility levels, and network sizes beyond those of the training environments, demonstrating its effectiveness.
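A minimal sketch of a CUSUM-style detector over a stream of TD-errors, in the spirit of the description above (the drift and threshold values are assumed, not those used by DeepADMR):

```python
def cusum_detector(td_errors, drift=0.05, threshold=5.0):
    """Flag the first step at which accumulated |TD-error| deviation exceeds a threshold."""
    s = 0.0
    for t, e in enumerate(td_errors):
        s = max(0.0, s + abs(e) - drift)  # accumulate deviation above the drift allowance
        if s > threshold:
            return t   # anomaly raised at step t
    return None        # no anomaly in the monitored window
```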