SEMDICE: Off-policy State Entropy Maximization via Stationary Distribution Correction Estimation
Jongmin Lee, Meiqi Sun, Pieter Abbeel
In unsupervised pre-training for reinforcement learning, the agent aims to learn a prior policy for downstream tasks without relying on task-specific reward functions. We focus on state entropy maximization (SEM), where the goal is to learn a policy that maximizes the entropy of the stationary state distribution. In this paper, we introduce SEMDICE, a principled off-policy algorithm that computes a single, stationary Markovian SEM policy from an arbitrary off-policy dataset by optimizing the policy directly within the space of stationary distributions. Experimental results demonstrate that SEMDICE outperforms baseline algorithms in maximizing state entropy while achieving the best adaptation efficiency for downstream tasks among SEM-based unsupervised RL pre-training methods.
arXiv.org Artificial Intelligence
Dec-12-2025
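
The central idea, optimizing directly over stationary distributions rather than over policy parameters, can be illustrated on a small tabular MDP. The Python sketch below is not the authors' SEMDICE implementation; the MDP, its sizes, and the solver choice are illustrative assumptions. It maximizes the entropy of the state marginal of a state-action occupancy measure subject to the Bellman flow constraints, then recovers a policy from the optimal occupancy.

# Minimal sketch (assumed toy setup, not SEMDICE itself): maximize the
# entropy of the state marginal d(s) = sum_a d(s, a) over occupancy
# measures d satisfying the Bellman flow constraints, then read off the
# policy as pi(a|s) = d(s, a) / d(s).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
S, A, gamma = 4, 2, 0.95                    # hypothetical tiny MDP
P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a, s'] transition kernel
mu0 = np.ones(S) / S                        # uniform initial distribution

def neg_state_entropy(d_flat):
    d = d_flat.reshape(S, A)
    ds = d.sum(axis=1) + 1e-12              # state marginal d(s)
    return np.sum(ds * np.log(ds))          # minimizing this maximizes H(d_S)

def flow_residual(d_flat):
    # Bellman flow: sum_a d(s,a) = (1-gamma) mu0(s) + gamma sum_{s',a'} P(s|s',a') d(s',a')
    d = d_flat.reshape(S, A)
    inflow = gamma * np.einsum('sat,sa->t', P, d)
    return d.sum(axis=1) - (1 - gamma) * mu0 - inflow

d0 = np.full(S * A, 1.0 / (S * A))          # uniform starting point
res = minimize(neg_state_entropy, d0, method='SLSQP',
               bounds=[(0, None)] * (S * A),
               constraints={'type': 'eq', 'fun': flow_residual})

d_opt = res.x.reshape(S, A)
pi = d_opt / (d_opt.sum(axis=1, keepdims=True) + 1e-12)  # recovered policy
print("max state entropy:", -res.fun)
print("policy pi(a|s):\n", pi)

Because the flow constraints imply the occupancy sums to one, the optimizer searches exactly the space of stationary distributions; the paper's contribution is doing this off-policy via distribution correction estimation rather than with an explicit small-scale solver as above.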