Joint Representations for Reinforcement Learning with Multiple Sensors

Philipp Becker, Sebastian Markgraf, Fabian Otto, Gerhard Neumann

arXiv.org Artificial Intelligence 

Effectively combining inputs from multiple sensor modalities in reinforcement learning (RL) is an open problem. While many self-supervised representation learning approaches exist to improve performance and sample complexity for image-based RL, they usually neglect other available information, such as robot proprioception. However, using this proprioception for representation learning can help algorithms focus on relevant aspects and guide them toward finding better representations. In this work, we systematically analyze representation learning for RL from multiple sensors by building on Recurrent State Space Models. We propose a combination of reconstruction-based and contrastive losses, which allows us to choose the most appropriate method for each sensor modality. We demonstrate the benefits of joint representations, particularly with distinct loss functions for each modality, for model-free and model-based RL on complex tasks. These include tasks where the images contain distractions or occlusions, as well as a new locomotion suite. We show that combining reconstruction-based and contrastive losses for joint representation learning significantly improves performance compared to a post hoc combination of image representations and proprioception, and can also improve the quality of learned models for model-based RL.
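
To make the per-modality loss combination concrete, below is a minimal sketch in PyTorch: a contrastive (InfoNCE-style) objective ties the shared latent to image features, while a reconstruction objective ties it to proprioception. The module names, dimensions, and the use of flat feature vectors in place of the paper's CNN encoders and Recurrent State Space Model are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEncoder(nn.Module):
    # Fuses image features and proprioception into one latent state.
    # Architecture is a hypothetical stand-in for the paper's RSSM encoder.
    def __init__(self, img_dim=64, prop_dim=12, latent_dim=32):
        super().__init__()
        self.img_net = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU())
        self.prop_net = nn.Sequential(nn.Linear(prop_dim, 128), nn.ReLU())
        self.fuse = nn.Linear(256, latent_dim)

    def forward(self, img_feat, prop):
        h = torch.cat([self.img_net(img_feat), self.prop_net(prop)], dim=-1)
        return self.fuse(h)

def info_nce(z, img_emb, temperature=0.1):
    # Contrastive loss: match each latent to its own image embedding,
    # treating the other batch elements as negatives.
    z = F.normalize(z, dim=-1)
    img_emb = F.normalize(img_emb, dim=-1)
    logits = z @ img_emb.t() / temperature
    targets = torch.arange(z.size(0))
    return F.cross_entropy(logits, targets)

# Toy batch: precomputed image features and proprioceptive readings.
B = 16
img_feat = torch.randn(B, 64)
prop = torch.randn(B, 12)

enc = JointEncoder()
img_proj = nn.Linear(64, 32)   # projects image features into the contrastive target space
prop_dec = nn.Linear(32, 12)   # reconstructs proprioception from the shared latent

z = enc(img_feat, prop)
loss_contrastive = info_nce(z, img_proj(img_feat))   # contrastive loss for the image modality
loss_recon = F.mse_loss(prop_dec(z), prop)           # reconstruction loss for proprioception
loss = loss_contrastive + loss_recon
loss.backward()
print(f"contrastive={loss_contrastive.item():.3f}, recon={loss_recon.item():.3f}")

The intent of splitting the losses this way, as the abstract describes, is to pick the most appropriate objective per modality: low-dimensional proprioception is cheap to reconstruct and anchors the latent, while a contrastive objective for images sidesteps pixel-level reconstruction of distractions or occlusions.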
