Prototypical context-aware dynamics generalization for high-dimensional model-based reinforcement learning

Wang, Junjie, Mu, Yao, Li, Dong, Zhang, Qichao, Zhao, Dongbin, Zhuang, Yuzheng, Luo, Ping, Wang, Bin, Hao, Jianye

arXiv.org Artificial Intelligence 

The latent world model provides a promising way to learn policies in a compact latent space for tasks with high-dimensional observations; however, its generalization across diverse environments with unseen dynamics remains challenging. Although the recurrent structure used in current advances helps capture local dynamics, modeling only state transitions without an explicit understanding of the environmental context limits the generalization ability of the dynamics model. To address this issue, we propose a Prototypical Context-Aware Dynamics (ProtoCAD) model, which captures local dynamics through a time-consistent latent context and enables dynamics generalization in high-dimensional control tasks. ProtoCAD extracts useful contextual information with the help of prototypes clustered over the batch and benefits model-based RL in two ways: 1) it employs a temporally consistent prototypical regularizer that encourages the prototype assignments produced for different time parts of the same latent trajectory to agree, rather than comparing features directly; 2) it designs a context representation that combines the projection embedding of latent states with aggregated prototypes, which significantly improves dynamics generalization. Extensive experiments show that ProtoCAD surpasses existing methods in terms of dynamics generalization. Compared with the recurrent-based model RSSM, ProtoCAD delivers 13.2% and 26.7% better mean and median performance across all dynamics generalization tasks.

Latent world models (Ha & Schmidhuber, 2018) summarize an agent's experience from high-dimensional observations to facilitate learning complex behaviors in a compact latent space. Current advances (Hafner et al., 2019; 2020; Deng et al., 2022) leverage Recurrent Neural Networks (RNNs) to extract historical information from high-dimensional observations as compact latent representations and enable imagination in the latent space. However, modeling only latent state transitions without an explicit understanding of the environmental context limits the dynamics generalization ability of the world model. Since changes in dynamics are not directly observable and can only be inferred from the observation sequence, dynamics generalization remains challenging for tasks with high-dimensional sensor inputs.
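To make the two components described above concrete, the following is a minimal sketch in PyTorch, under assumed illustrative names (`PrototypicalContext`, `temporal_consistency_loss`): latent states are softly assigned to learnable prototypes, the context vector concatenates the projection embedding with assignment-weighted prototypes, and a symmetric cross-entropy between the assignments of two time segments of the same latent trajectory stands in for the temporal consistency regularizer. This is not the paper's exact implementation; the authors' clustering and regularization details may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypicalContext(nn.Module):
    """Illustrative prototype-based context encoder (names are assumptions)."""

    def __init__(self, latent_dim, embed_dim, num_prototypes, temperature=0.1):
        super().__init__()
        self.project = nn.Sequential(
            nn.Linear(latent_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )
        # Learnable prototype vectors shared across the batch.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, embed_dim))
        self.temperature = temperature

    def assign(self, z):
        """Projection embedding and soft prototype assignments for latents z: (B, latent_dim)."""
        e = F.normalize(self.project(z), dim=-1)
        p = F.normalize(self.prototypes, dim=-1)
        logits = e @ p.t() / self.temperature      # (B, num_prototypes)
        return e, logits.softmax(dim=-1)

    def context(self, z):
        """Context = projection embedding concatenated with aggregated prototypes."""
        e, q = self.assign(z)
        aggregated = q @ F.normalize(self.prototypes, dim=-1)  # (B, embed_dim)
        return torch.cat([e, aggregated], dim=-1)


def temporal_consistency_loss(model, z_first_half, z_second_half):
    """Encourage prototype assignments of two time parts of one trajectory to agree.

    A simple symmetric cross-entropy between the two segments' soft assignments,
    comparing assignments rather than features; a stand-in for the paper's regularizer.
    """
    _, q1 = model.assign(z_first_half)
    _, q2 = model.assign(z_second_half)
    return -0.5 * (
        (q1.detach() * q2.clamp_min(1e-8).log()).sum(-1).mean()
        + (q2.detach() * q1.clamp_min(1e-8).log()).sum(-1).mean()
    )
```

Comparing prototype assignments instead of raw embeddings is the key design choice: it lets the context be invariant to low-level feature drift within a trajectory while still distinguishing trajectories generated under different dynamics.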
