Rethinking Inverse Reinforcement Learning: from Data Alignment to Task Alignment
arXiv.org Artificial Intelligence
Many imitation learning (IL) algorithms use inverse reinforcement learning (IRL) to infer a reward function that aligns with the demonstrations. However, the inferred reward function often fails to capture the underlying task objective. In this paper, we propose a novel framework for IRL-based IL that prioritizes task alignment over conventional data alignment. Our framework is a semi-supervised approach that leverages expert demonstrations as weak supervision signals to derive a set of candidate reward functions that align with the task rather than only with the data. It adopts an adversarial mechanism to train a policy against this set of reward functions, so that the policy's ability to accomplish the task is collectively validated. We provide theoretical insights into this framework's ability to mitigate task-reward misalignment and present a practical implementation. Our experimental results show that our framework outperforms conventional IL baselines in complex and transfer learning scenarios. The complete code is available at https://github.com/zwc662/PAGAR.
Oct-31-2024
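The set-based adversarial training idea from the abstract can be illustrated with a toy sketch. This is an illustrative assumption, not the paper's PAGAR algorithm: a softmax policy over a few discrete actions is trained to maximize its worst-case expected reward across a small hand-written set of candidate reward functions, so that succeeding means being validated by every candidate in the set rather than by a single inferred reward.

```python
import numpy as np

# Toy sketch (assumed setup, not the paper's method): a policy over 3
# discrete actions is trained against a *set* of candidate reward
# functions by maximizing the worst-case (minimum) expected reward.

# Hypothetical candidate reward functions: each row assigns a reward to
# each action. All candidates happen to agree that action 0 is best,
# standing in for rewards that all "align with the task".
candidate_rewards = np.array([
    [1.0, 0.2, 0.0],
    [0.9, 0.1, 0.3],
    [0.8, 0.0, 0.2],
])

logits = np.zeros(3)   # policy parameters (softmax over actions)
lr = 0.5               # step size


def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()


for _ in range(200):
    pi = softmax(logits)
    expected = candidate_rewards @ pi   # expected reward per candidate
    worst = np.argmin(expected)         # adversary picks the worst candidate
    # Gradient of the softmax policy's expected reward under that candidate:
    # d/dlogit_k E[r] = pi_k * (r_k - E[r]).
    grad = pi * (candidate_rewards[worst] - expected[worst])
    logits += lr * grad

policy = softmax(logits)
print(policy)
```

Because every candidate reward agrees on the best action, the worst-case objective still concentrates the policy on action 0; if the candidates disagreed, the minimax step would instead push the policy toward behavior acceptable to all of them.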