Understanding Bird's-Eye View Semantic HD-Maps Using an Onboard Monocular Camera
Yigit Baran Can, Alexander Liniger, Ozan Unal, Danda Paudel, Luc Van Gool
–arXiv.org Artificial Intelligence
Autonomous navigation requires understanding the scene in the action space in order to move and to anticipate events. For agents that plan on the ground plane, such as autonomous vehicles, this translates to scene understanding in the bird's-eye view (BEV). However, the onboard cameras of autonomous cars are customarily mounted horizontally for a better view of the surroundings. In this work, we study scene understanding in the form of online estimation of semantic BEV HD-maps from the video input of a single onboard camera. We study three key aspects of this task: image-level understanding, BEV-level understanding, and the aggregation of temporal information. Based on these pillars, we propose a novel architecture that combines all three aspects. In our extensive experiments, we demonstrate that the considered aspects are complementary for HD-map understanding, and that the proposed architecture significantly surpasses the current state-of-the-art.
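The paper itself specifies the architecture; as a rough illustration of how the three pillars could fit together, here is a minimal PyTorch sketch. The class name `BEVMapEstimator`, the learned pooling standing in for a geometry-aware view transform, and the gated blend standing in for the authors' temporal-aggregation module are all assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of the three-pillar design described in the abstract.
# Module names, shapes, and simplifications are assumptions, not the
# authors' method.
import torch
import torch.nn as nn

class BEVMapEstimator(nn.Module):
    def __init__(self, feat_dim=64, num_classes=5, bev_size=(50, 50)):
        super().__init__()
        # Pillar 1: image-level understanding via a small CNN backbone.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Pillar 2: BEV-level understanding. A learned pooling onto the
        # ground-plane grid stands in for a geometry-aware view transform.
        self.to_bev = nn.AdaptiveAvgPool2d(bev_size)
        self.bev_refine = nn.Conv2d(feat_dim, feat_dim, 3, padding=1)
        # Pillar 3: temporal aggregation across video frames, simplified
        # here to a convolutional gated blend of old state and new BEV.
        self.gate = nn.Conv2d(2 * feat_dim, feat_dim, 3, padding=1)
        self.head = nn.Conv2d(feat_dim, num_classes, 1)

    def forward(self, frames):
        # frames: (T, B, 3, H, W) video clip from the onboard camera.
        state = None
        for frame in frames:
            feat = self.image_encoder(frame)                      # image-level
            bev = torch.relu(self.bev_refine(self.to_bev(feat)))  # BEV grid
            if state is None:
                state = bev
            else:
                g = torch.sigmoid(self.gate(torch.cat([state, bev], dim=1)))
                state = g * bev + (1 - g) * state                 # temporal
        return self.head(state)  # per-cell semantic HD-map logits

model = BEVMapEstimator()
clip = torch.randn(4, 1, 3, 128, 256)  # 4 frames, one camera
logits = model(clip)                   # (1, num_classes, 50, 50)
```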
Dec-5-2020
- Genre:
- Research Report (1.00)
- Industry:
- Transportation > Ground > Road (0.90)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning > Neural Networks (1.00)
- Robots (1.00)
- Vision (1.00)