Towards Self-Supervised High Level Sensor Fusion
Qadeer Khan, Torsten Schön, Patrick Wenzel
Abstract--In this paper, we present a framework to control a self-driving car by fusing raw information from RGB images and depth maps. A deep neural network architecture is used to map the vision and depth information, respectively, to steering commands. Fusing information from two sensor sources provides redundancy and fault tolerance in the presence of sensor failures: even if one input sensor fails to produce correct output, the other functioning sensor can still maneuver the car. Such redundancy is crucial in the safety-critical application of self-driving cars. Experimental results show that our method learns to use the relevant sensor information even when one of the sensors fails, without any explicit failure signal.

I. INTRODUCTION

The fusion of different sensor modalities in the context of autonomous driving is crucial for robustness against sensor failures. We consider RGB and depth camera sensors for training a control module to maneuver a self-driving car.
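The fusion idea above can be sketched as a late-fusion network: each sensor stream is encoded separately, the features are concatenated, and a shared head regresses the steering command. The following is a minimal NumPy sketch, not the paper's actual architecture; the dimensions, weights, and the `steering` function are illustrative placeholders for learned convolutional branches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions (not from the paper).
RGB_DIM, DEPTH_DIM, FEAT_DIM = 12, 8, 4

# Randomly initialized weights standing in for the learned
# encoder branches of each sensor stream.
W_rgb = rng.standard_normal((RGB_DIM, FEAT_DIM))
W_depth = rng.standard_normal((DEPTH_DIM, FEAT_DIM))
# Fusion head: concatenated features -> a single steering command.
W_head = rng.standard_normal((2 * FEAT_DIM, 1))

def steering(rgb, depth):
    """Late fusion: encode each modality, concatenate, regress steering."""
    f_rgb = np.tanh(rgb @ W_rgb)      # RGB branch features
    f_depth = np.tanh(depth @ W_depth)  # depth branch features
    fused = np.concatenate([f_rgb, f_depth])
    return float(fused @ W_head)

rgb = rng.standard_normal(RGB_DIM)
depth = rng.standard_normal(DEPTH_DIM)

s_both = steering(rgb, depth)
# Simulate an RGB sensor failure by zeroing that stream: the depth
# branch alone still yields a steering output, illustrating the
# redundancy the paper exploits.
s_depth_only = steering(np.zeros(RGB_DIM), depth)
```

Because `tanh(0) = 0`, a zeroed stream contributes nothing to the fused vector, so the output degrades gracefully to whatever the remaining branch supports; in the paper this robustness is learned rather than hard-wired.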
Feb-12-2019