Reinforcement Learning under State and Outcome Uncertainty: A Foundational Distributional Perspective
Preuett, Larry III, Zhang, Qiuyi, Ahmad, Muhammad Aurangzeb
arXiv.org Artificial Intelligence
In many real-world planning tasks, agents must tackle uncertainty about the environment's state and variability in the outcomes of any chosen policy. We address both forms of uncertainty as a first step toward safer algorithms in partially observable settings. Specifically, we extend Distributional Reinforcement Learning (DistRL)--which models the entire return distribution in fully observable domains--to Partially Observable Markov Decision Processes (POMDPs), allowing an agent to learn the distribution of returns for each conditional plan. Concretely, we introduce new distributional Bellman operators for partial observability and prove their convergence under the supremum p-Wasserstein metric. We also propose a finite representation of these return distributions via ψ-vectors, generalizing the classical α-vectors in POMDP solvers. Building on this, we develop Distributional Point-Based Value Iteration (DPBVI), which integrates ψ-vectors into a standard point-based backup procedure--bridging DistRL and POMDP planning. By tracking return distributions, DPBVI lays the foundation for future risk-sensitive control in domains where rare, high-impact events must be carefully managed. We provide source code to foster further research in robust decision-making under partial observability.
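The abstract's core idea of ψ-vectors generalizing α-vectors can be sketched numerically. In the sketch below, the shapes and quantile atoms are illustrative assumptions, not the paper's exact construction: an α-vector stores one expected return per state, while a hypothetical ψ-vector stores a small set of equally weighted return samples per state, so a belief induces a mixture distribution over returns whose mean recovers the α-vector value.

```python
import numpy as np

# Hedged sketch: classical alpha-vector vs. a hypothetical psi-vector.
# The quantile-atom representation below is an assumption for illustration.

n_states, n_atoms = 3, 5

# A classical alpha-vector: one expected return per state.
alpha = np.array([1.0, 0.5, -0.2])

# A psi-vector: a return distribution per state, here represented by
# n_atoms equally weighted samples (shape: n_states x n_atoms). Each row
# is chosen so its mean equals the corresponding alpha entry.
psi = np.array([
    [ 0.8,  0.9,  1.0,  1.1, 1.2],
    [ 0.1,  0.3,  0.5,  0.7, 0.9],
    [-0.5, -0.3, -0.2, -0.1, 0.1],
])

belief = np.array([0.6, 0.3, 0.1])  # a point in the belief simplex

# Expected value of the conditional plan at this belief (alpha-vector case).
v_alpha = float(alpha @ belief)

# Distributional view: the return distribution at this belief is the
# belief-weighted mixture of the per-state distributions; its mean matches
# the alpha-vector value by construction.
mixture_mean = float(belief @ psi.mean(axis=1))

print(v_alpha, mixture_mean)  # both 0.73
```

Keeping the full per-state distributions (rather than only their means) is what lets a planner later apply risk-sensitive criteria, such as penalizing plans whose mixture has heavy lower tails even when the mean is acceptable.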
Jul-8-2025