Multi-task Batch Reinforcement Learning with Metric Learning
We tackle the Multi-task Batch Reinforcement Learning problem. Given multiple datasets collected from different tasks, we train a multi-task policy to perform well in unseen tasks sampled from the same distribution. The task identities of the unseen tasks are not provided. To perform well, the policy must infer the task identity from collected transitions by modelling its dependency on states, actions and rewards. Because the different datasets may have state-action distributions with large divergence, the task inference module can learn to ignore the rewards and spuriously correlate \textit{only} the state-action pairs with the task identity, leading to poor test-time performance. To robustify task inference, we propose a novel application of the triplet loss.
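The core of the proposed remedy is the standard triplet margin loss applied to task embeddings: embeddings of transitions from the same task are pulled together, while embeddings from different tasks are pushed apart by at least a margin. The sketch below is illustrative only, not the authors' implementation; the function name, the use of NumPy, and the Euclidean distance are assumptions for the example.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss on task embeddings (illustrative sketch).

    anchor, positive: embeddings of transitions from the SAME task.
    negative:         embedding of a transition from a DIFFERENT task.
    The loss is zero once the negative is farther from the anchor
    than the positive by at least `margin`.
    """
    d_pos = np.linalg.norm(anchor - positive)  # same-task distance
    d_neg = np.linalg.norm(anchor - negative)  # cross-task distance
    return max(0.0, d_pos - d_neg + margin)
```

Because the loss compares distances rather than raw state-action features, minimizing it forces the task inference module to produce embeddings that separate tasks even when their state-action distributions barely overlap, discouraging the reward-ignoring shortcut described above.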
Review for NeurIPS paper: Multi-task Batch Reinforcement Learning with Metric Learning
Weaknesses: The main weakness of the method is its reliance on accurate relabelling. The paper argues that the actor-critic networks become causally confused due to (almost) disjoint task distributions, and then hopes that the reward models will not suffer from the same problem. However, it seems that the problem also affects the reward models, as a reward ensemble is used in the experiments. There is no ablation study investigating the necessity of this ensemble in the offline setting. Can you explain why you did not use the setting from 5.1 and 5.2 to evaluate this component of your model?
Review for NeurIPS paper: Multi-task Batch Reinforcement Learning with Metric Learning
Reviewers find the paper well-motivated and concisely written. While most of the techniques employed in the paper have been investigated in the literature, the work assembles a bag of good tricks to address the phenomenon the authors observed in multi-task batch RL, where agents rely on shortcuts to identify tasks and hence do not generalize. Reviewers would like to see an expanded discussion of related work, as well as stronger baselines and experimental environments to strengthen the work. Please try to incorporate this feedback when revising your draft.