A Simple Reward-free Approach to Constrained Reinforcement Learning
In a wide range of modern reinforcement learning (RL) applications, it is not sufficient for the learning agents to only maximize a scalar reward. More importantly, they must satisfy various constraints. For instance, such constraints can be the physical limits on power consumption or motor torque in robotics tasks [27]; the budget for computation and the frequency of actions in real-time strategy games [28]; and the requirements for safety, fuel efficiency, and human comfort in autonomous driving [16]. In addition, constraints are also crucial in tasks such as dynamic pricing with limited supply [5, 4], scheduling of resources on a computer cluster [18], imitation learning [26, 35, 25], as well as reinforcement learning with fairness [12]. This huge demand in practice gives rise to a subfield, constrained RL, which focuses on designing efficient algorithms to find near-optimal policies for RL problems under linear or general convex constraints. Most constrained RL works directly combine existing techniques from the unconstrained literature, such as value iteration and optimism, with new techniques specifically designed to deal with linear constraints [9, 8, 22] or general convex constraints [7, 32]. The end product is a single new complex algorithm that is tasked with solving all the challenges of learning the dynamics, exploration, planning, and constraint satisfaction simultaneously.
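To make the distinction between linear and general convex constraints concrete, a common constrained-MDP formulation (the notation below is illustrative and assumed here, not quoted from the paper) maximizes the expected cumulative reward subject to bounds on expected cumulative costs:

\[
\max_{\pi}\; \mathbb{E}_{\pi}\Big[\sum_{h=1}^{H} r_h(s_h, a_h)\Big]
\quad \text{s.t.} \quad
\mathbb{E}_{\pi}\Big[\sum_{h=1}^{H} c_{i,h}(s_h, a_h)\Big] \le \tau_i,
\qquad i = 1, \dots, m,
\]

where each expected-cost constraint is linear in the occupancy measure induced by the policy \(\pi\); the general convex case replaces the coordinate-wise thresholds \(\tau_i\) with a convex function of the vector of expected costs.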
arXiv.org Artificial Intelligence
Jul-12-2021