M$^3$PC: Test-time Model Predictive Control for Pretrained Masked Trajectory Model

Kehan Wen, Yutong Hu, Yao Mu, Lei Ke

arXiv.org Artificial Intelligence 

Recent work in Offline Reinforcement Learning (RL) has shown that a unified Transformer trained under a masked auto-encoding objective can effectively capture the relationships between different modalities (e.g., states, actions, rewards) within given trajectory datasets. However, this information has not been fully exploited during the inference phase, where the agent needs to generate an optimal policy rather than merely reconstruct masked components from unmasked ones. Given that a pretrained trajectory model can act as both a Policy Model and a World Model under appropriate mask patterns, we propose using Model Predictive Control (MPC) at test time to leverage the model's own predictive capability to guide its action selection. Empirical results on D4RL and RoboMimic show that our inference-phase MPC significantly improves the decision-making performance of a pretrained trajectory model without any additional parameter training. Furthermore, our framework can be adapted to Offline-to-Online (O2O) RL and Goal Reaching RL, yielding more substantial performance gains when an additional online interaction budget is provided, and better generalization when different task targets are specified.

The Masked Modeling paradigm has a simple, self-supervised training objective: predict a randomly masked subset of the original sequence. It has become a powerful technique for generation and representation learning on sequential data, e.g., language tokens (Devlin et al., 2018) or image patches (He et al., 2022). Unlike autoregressive models such as GPT (Brown et al., 2020), which condition only on the past context to the left, bidirectional models trained with this objective learn to model context from both sides, leading to richer representations and a deeper understanding of the data's underlying dependencies. Since a sequential decision-making trajectory inherently consists of a sequence of states $s$ and actions $a$, along with optional augmented properties such as return-to-go (RTG) $g$ (Chen et al., 2021) or approximate state-action value $v$ (Yamagata et al., 2023) across $T$ timesteps, the masked modeling paradigm can be adapted easily to sequential decision-making tasks. For example, in the case of Reinforcement Learning, the policy output $P(a|s)$ at each timestep can be regarded as predicting a masked action $a$ conditioned on the given states $s$.
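To make this role-switching concrete, below is a minimal, self-contained Python sketch (assumptions, not the paper's actual M$^3$PC implementation or interface): trajectories are laid out token-wise as $[s_1, a_1, \dots, s_T, a_T]$, masking actions turns the trajectory model into a policy, masking future states turns it into a world model, and a simple test-time MPC loop samples candidate actions, rolls them out, and executes the best one. The names `DummyMaskedModel`, `reconstruct`, and the proxy scoring rule are all illustrative placeholders.

```python
import numpy as np

T = 4                 # planning horizon in timesteps (illustrative)
TOKENS_PER_STEP = 2   # one (state, action) pair per timestep

def policy_mask(t):
    """Hide a_t and all later tokens: the model must infer the action (policy role)."""
    m = np.zeros(T * TOKENS_PER_STEP, dtype=bool)   # False = visible
    m[2 * t + 1:] = True
    return m

def world_model_mask(t):
    """Hide states after step t: the model must predict future dynamics (world-model role)."""
    m = np.zeros(T * TOKENS_PER_STEP, dtype=bool)
    m[2 * (t + 1)::2] = True
    return m

class DummyMaskedModel:
    """Stand-in for a pretrained bidirectional trajectory model."""
    def reconstruct(self, tokens, mask, sample=False):
        out = tokens.copy()
        # A real model would predict the masked tokens; here we fill with
        # noise (when sampling) or zeros, purely as a placeholder.
        out[mask] = np.random.randn(mask.sum()) if sample else 0.0
        return out

def mpc_action(model, tokens, t, n_candidates=8):
    """Test-time MPC sketch: sample candidate actions under the policy mask,
    roll each one out under the world-model mask, score rollouts with a
    simple proxy (sum of predicted future state tokens, illustrative only),
    and return the action of the best-scoring candidate."""
    best_a, best_score = None, -np.inf
    for _ in range(n_candidates):
        plan = model.reconstruct(tokens, policy_mask(t), sample=True)
        rollout = model.reconstruct(plan, world_model_mask(t))
        score = rollout[2 * (t + 1)::2].sum()
        if score > best_score:
            best_a, best_score = plan[2 * t + 1], score
    return best_a

# Usage (scalar state/action tokens for brevity):
model = DummyMaskedModel()
trajectory = np.zeros(T * TOKENS_PER_STEP)
print(mpc_action(model, trajectory, t=0))
```

The key design point the sketch illustrates is that no extra parameters are trained at test time: the same pretrained model answers both the "what action might I take?" query and the "what happens if I take it?" query, with only the mask pattern changing between the two calls.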