q-exponential family for policy optimization
Lingwei Zhu, Haseeb Shah, Han Wang, Martha White
arXiv.org Artificial Intelligence
Policy optimization methods benefit from a simple and tractable policy functional, usually the Gaussian for continuous action spaces. In this paper, we consider a broader policy family that remains tractable: the q-exponential family. This family of policies is flexible, allowing the specification of both heavy-tailed policies (q > 1) and light-tailed policies (q < 1). This paper examines the interplay between q-exponential policies and several actor-critic algorithms on both online and offline problems. We find that heavy-tailed policies are more effective in general and can consistently improve on the Gaussian. In particular, we find the Student's t-distribution to be more stable than the Gaussian across settings, and that a heavy-tailed q-Gaussian for Tsallis Advantage Weighted Actor-Critic consistently performs well on offline benchmark problems. Our code is available at https://github.com/lingweizhu/qexp.
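The abstract's central contrast, heavy-tailed policies (q > 1) such as the Student's t versus the light-tailed Gaussian, can be illustrated numerically. The sketch below is not the paper's code; it simply compares how much probability mass each distribution places far from the mean, which is what makes heavy-tailed policies explore more aggressively. The specific degrees-of-freedom value is an illustrative assumption.

```python
import numpy as np
from scipy import stats

# Illustrative sketch (not the paper's implementation): compare the tail
# mass of a Gaussian policy with that of a Student's t policy, a member
# of the q-exponential family with q > 1 (heavy-tailed).
gauss = stats.norm(loc=0.0, scale=1.0)
student = stats.t(df=3, loc=0.0, scale=1.0)  # df=3 chosen for illustration

# Probability of sampling an action with |a| > 4 under each policy.
p_gauss = 2 * gauss.sf(4.0)
p_student = 2 * student.sf(4.0)

# The t policy places orders of magnitude more mass in the tails,
# so extreme actions remain plausible under it.
print(p_student > p_gauss)
```

Under a heavy-tailed policy, far-from-mean actions retain non-negligible likelihood, which is one intuition for why the paper finds such policies more stable for exploration.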
Aug-13-2024