Toward Computationally Efficient Inverse Reinforcement Learning via Reward Shaping

Cooke, Lauren H., Klyne, Harvey, Zhang, Edwin, Laidlaw, Cassidy, Tambe, Milind, Doshi-Velez, Finale

arXiv.org Machine Learning 

Inverse reinforcement learning (IRL) is computationally challenging, with common approaches requiring the solution of multiple reinforcement learning (RL) sub-problems. This work motivates the use of potential-based reward shaping to reduce the computational burden of each RL sub-problem. It is intended as a proof of concept, and we hope it will inspire future developments toward computationally efficient IRL.
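For context (standard background on the technique named in the abstract, not a claim about this paper's specific method): potential-based reward shaping, in the sense of Ng, Harada, and Russell (1999), augments a reward function with a potential term,

    r'(s, a, s') = r(s, a, s') + γΦ(s') − Φ(s),

where Φ is any state-dependent potential function and γ is the discount factor. Shaping of this form provably leaves the set of optimal policies unchanged, which is what makes it safe to apply inside each RL sub-problem while steering the solver toward faster convergence.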
