Learning Guidance Rewards with Trajectory-space Smoothing

Neural Information Processing Systems

Long-term temporal credit assignment is an important challenge in deep reinforcement learning (RL). It refers to the agent's ability to attribute delayed consequences to the actions that caused them, even when a long time interval separates the two. Existing policy-gradient and Q-learning algorithms typically rely on dense environmental rewards that provide rich short-term supervision and help with credit assignment. However, they struggle to solve tasks with delays between an action and the corresponding rewarding feedback. To make credit assignment easier, recent works have proposed algorithms to learn dense guidance rewards that could be used in place of the sparse or delayed environmental rewards. This paper is in the same vein -- starting with a surrogate RL objective that involves smoothing in trajectory-space, we arrive at a new algorithm for learning guidance rewards. We show that the guidance rewards have an intuitive interpretation, and can be obtained without training any additional neural networks. Due to the ease of integration, we use the guidance rewards in a few popular algorithms (Q-learning, Actor-Critic, Distributional-RL) and present results in single-agent and multi-agent tasks that elucidate the benefit of our approach when the environmental rewards are sparse or delayed.
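To make the "no additional neural networks" claim concrete, one simple instantiation consistent with the abstract (and with the review's reference to IRCR below) is to redistribute each trajectory's episodic return uniformly across its transitions, normalized by the range of returns observed so far. The sketch below is illustrative only; the function name `ircr_guidance_reward` and the toy data are assumptions, not the paper's exact formulation.

```python
def ircr_guidance_reward(episode_return, return_min, return_max, eps=1e-8):
    """Assign every transition in an episode the same dense guidance reward:
    the episode's return, min-max normalized over returns seen so far.
    No learned reward model is required."""
    return (episode_return - return_min) / (return_max - return_min + eps)

# Toy usage: three sparse episodic returns observed during training.
returns_seen = [0.0, 10.0, 4.0]
r_min, r_max = min(returns_seen), max(returns_seen)

# Dense per-step guidance reward for each trajectory (uniform within a trajectory).
dense = [ircr_guidance_reward(R, r_min, r_max) for R in returns_seen]
```

Because the guidance reward depends only on scalar episodic returns, it can be dropped into the replay buffer of an off-the-shelf algorithm (Q-learning, Actor-Critic, Distributional-RL) in place of the sparse environmental reward.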


Review for NeurIPS paper: Learning Guidance Rewards with Trajectory-space Smoothing

Neural Information Processing Systems

Weaknesses: While the experimental results are promising in the episodic Mujoco tasks, it is unclear whether IRCR works in more complicated settings (e.g., hard-exploration games in Atari, sparse-reward tasks, robotic arm pick-and-place tasks, etc.). Specifically, in the Mujoco tasks considered in the paper, the episodic reward can be obtained by performing simple, repetitive patterns (i.e., keep going forward). But often, the solution to an environment may involve action sequences that rarely overlap or repeat (e.g., a robotic agent that needs to pick up a block and place it in a specified location, a maze-exploring agent that needs to pick up a key to open the exit). In such cases, the algorithm needs to correctly associate these non-repetitive events with the final return to solve the problem. I wonder if IRCR works in such cases as well.
