Clustering-Based Weight Orthogonalization for Stabilizing Deep Reinforcement Learning

Guoqing Ma, Yuhan Zhang, Yuming Dai, Guangfu Hao, Yang Chen, Shan Yu

arXiv.org Artificial Intelligence 

Abstract--Reinforcement learning (RL) has made significant advancements, achieving superhuman performance in a variety of tasks. However, RL agents typically operate under the assumption of environmental stationarity, which poses a great challenge to learning efficiency since many environments are inherently non-stationary. To address this issue, we introduce the Clustering Orthogonal Weight Modified (COWM) layer, which can be integrated into the policy network of any RL algorithm to mitigate non-stationarity effectively. The COWM layer stabilizes the learning process by employing clustering techniques and a projection matrix. Our approach not only improves learning speed but also reduces gradient interference, thereby enhancing overall learning efficiency. Empirically, COWM outperforms state-of-the-art methods, achieving improvements of 9% and 12.6% on the vision-based and state-based DMControl benchmarks, respectively. It also shows robustness and generality across various algorithms and tasks.

In recent years, reinforcement learning (RL) has made significant progress across various domains, ranging from gaming to robotic control, often surpassing human performance [1]-[6]. Despite these advancements, a significant issue remains: the underlying assumption of a stationary environment [7].
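This excerpt does not spell out the COWM layer's implementation. As an illustrative sketch only, the following assumes an OWM-style projection matrix P = I - A(AᵀA + αI)⁻¹Aᵀ built from a cluster's stored past inputs, with incoming inputs routed to the nearest cluster centroid; all function names, the per-cluster bookkeeping, and the routing rule here are hypothetical, not the paper's actual method.

```python
import numpy as np

def orthogonal_projector(A, alpha=1e-3):
    """P = I - A (A^T A + alpha*I)^{-1} A^T: projects onto the subspace
    orthogonal to the columns of A (here, a cluster's past inputs).
    alpha is a small regularizer for numerical stability."""
    d, k = A.shape
    return np.eye(d) - A @ np.linalg.inv(A.T @ A + alpha * np.eye(k)) @ A.T

rng = np.random.default_rng(0)
d = 8
# Hypothetical setup: two input clusters, each storing a few past inputs
clusters = {0: rng.normal(size=(d, 3)), 1: rng.normal(size=(d, 3))}
projectors = {c: orthogonal_projector(A) for c, A in clusters.items()}
centroids = {c: A.mean(axis=1) for c, A in clusters.items()}

def modified_gradient(x, grad_W):
    """Route input x to its nearest cluster, then project the weight
    gradient so the update barely disturbs that cluster's past inputs
    (reducing gradient interference between clusters)."""
    c = min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
    return grad_W @ projectors[c]  # grad_W has shape (out_dim, d)

g = rng.normal(size=(4, d))
g_proj = modified_gradient(clusters[0][:, 0], g)
# After projection, the update has near-zero effect on stored cluster-0 inputs
print(np.abs(g_proj @ clusters[0]).max())
```

With a small α, the projected gradient is nearly orthogonal to the stored inputs of the selected cluster, so weight updates driven by new data leave earlier behavior for that cluster almost untouched; clustering simply gives each input regime its own projector rather than one global one.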