Reinforcement Learning with $ω$-Regular Objectives and Constraints
Dominik Wagner, Leon Witzman, Luke Ong
arXiv.org Artificial Intelligence
Reinforcement learning (RL) commonly relies on scalar rewards, which have limited ability to express temporal, conditional, or safety-critical goals and can lead to reward hacking. Temporal logic, expressible via the more general class of $ω$-regular objectives, addresses this by precisely specifying rich behavioural properties. Still, measuring performance by a single scalar (be it reward or satisfaction probability) masks safety-performance trade-offs that arise in settings with a tolerable level of risk. We address both limitations simultaneously by combining $ω$-regular objectives with explicit constraints, allowing safety requirements and optimisation targets to be treated separately. We develop a model-based RL algorithm based on linear programming which, in the limit, produces a policy maximising the probability of satisfying an $ω$-regular objective while also adhering to $ω$-regular constraints within specified thresholds. Furthermore, we establish a translation to constrained limit-average problems with optimality-preserving guarantees.
Nov-26-2025
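
The abstract's translation to constrained limit-average problems suggests a concrete solution route: the classical occupancy-measure linear program for constrained average-reward MDPs. Below is a minimal sketch of that LP in Python using `scipy.optimize.linprog`; the function name, the unichain assumption, and the reward/cost inputs are illustrative assumptions here, not the paper's actual algorithm or code.

```python
import numpy as np
from scipy.optimize import linprog

def constrained_limit_average_lp(P, r, costs, thresholds):
    """Occupancy-measure LP for a constrained limit-average MDP (sketch).

    P: transition kernel, shape (S, A, S); P[s, a, s'] = Pr(s' | s, a).
    r: limit-average reward, shape (S, A) (e.g. obtained from an
       omega-regular-to-limit-average translation; hypothetical input).
    costs, thresholds: cost arrays c_k of shape (S, A) and scalars theta_k;
       each long-run average cost must be at least theta_k.

    Assumes the model is known (model-based setting) and the MDP is
    unichain, so a stationary occupancy measure suffices.
    """
    S, A = r.shape
    n = S * A  # variables: x[s, a] = long-run frequency of pair (s, a)

    # Objective: maximise sum_{s,a} x[s,a] * r[s,a]  (linprog minimises).
    c_obj = -r.reshape(n)

    # Stationarity: for each s', sum_a x[s',a] = sum_{s,a} x[s,a] P[s,a,s'].
    A_eq = np.zeros((S + 1, n))
    for s_next in range(S):
        out_mask = np.zeros((S, A))
        out_mask[s_next, :] = 1.0
        A_eq[s_next] = (out_mask - P[:, :, s_next]).reshape(n)
    A_eq[S] = 1.0  # normalisation: occupancies form a distribution
    b_eq = np.zeros(S + 1)
    b_eq[S] = 1.0

    # Each constraint sum x * c_k >= theta_k becomes -c_k . x <= -theta_k.
    A_ub = np.array([-ck.reshape(n) for ck in costs])
    b_ub = np.array([-tk for tk in thresholds])

    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None))
    if not res.success:
        raise ValueError("LP infeasible: thresholds cannot all be met")
    x = res.x.reshape(S, A)

    # Read off a stationary policy pi(a|s) proportional to x[s, a];
    # states with zero occupancy get an arbitrary uniform policy.
    occ = x.sum(axis=1, keepdims=True)
    pi = np.where(occ > 0, x / np.maximum(occ, 1e-12), 1.0 / A)
    return pi, -res.fun
```

In this formulation the thresholds play the role the abstract assigns to the $ω$-regular constraints: after the translation, satisfaction probabilities become long-run averages, so each constraint appears as a single linear inequality over the occupancy measure.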