The Surprising Ineffectiveness of Pre-Trained Visual Representations for Model-Based Reinforcement Learning

Moritz Schneider, Robert Krug, Luigi Palmieri

Neural Information Processing Systems 

Visual Reinforcement Learning (RL) methods often require extensive amounts of data. In contrast to model-free RL, model-based RL (MBRL) offers a potential solution through efficient data utilization via planning. Additionally, RL often generalizes poorly to real-world tasks. Prior work has shown that incorporating pre-trained visual representations (PVRs) enhances both sample efficiency and generalization. While PVRs have been extensively studied in the context of model-free RL, their potential in MBRL remains largely unexplored.