Importance Weighted Actor-Critic for Optimal Conservative Offline Reinforcement Learning
– Neural Information Processing Systems
We propose A-Crab (Actor-Critic Regularized by Average Bellman error), a new practical algorithm for offline reinforcement learning (RL) in complex environments with insufficient data coverage. Our algorithm combines the marginalized importance sampling framework with the actor-critic paradigm, where the critic returns evaluations of the actor (policy) that are pessimistic relative to the offline data and have a small average (importance-weighted) Bellman error. Compared to existing methods, our algorithm simultaneously offers a number of advantages: (1) It achieves the optimal statistical rate of 1/√N (where N is the size of the offline dataset) in converging to the best policy covered by the offline dataset, even when combined with general function approximators.
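As a rough illustration of the "average (importance-weighted) Bellman error" mentioned above, the following is a minimal sketch in generic notation; the symbols (data distribution μ, weight function w, critic f, policy π, discount γ) are assumptions introduced here for clarity and are not taken from the listing itself:

\[
\mathcal{E}_{\mathrm{avg}}(f, w, \pi) \;=\; \mathbb{E}_{(s,a,r,s') \sim \mu}\Big[\, w(s,a)\,\big( r + \gamma\, \mathbb{E}_{a' \sim \pi(\cdot \mid s')}[f(s',a')] - f(s,a) \big) \Big],
\]

where w(s,a) plays the role of a marginalized importance weight (roughly, the ratio of the policy's state-action occupancy to the data distribution) and f is the critic's value estimate. In the algorithm as described, the critic is chosen to be pessimistic with respect to the offline data while keeping a weighted average error of this form small.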