TerrainMesh: Metric-Semantic Terrain Reconstruction from Aerial Images Using Joint 2D-3D Learning

Qiaojun Feng and Nikolay Atanasov

arXiv.org Artificial Intelligence 

Abstract--This paper considers outdoor terrain mapping using RGB images obtained from an aerial vehicle. While feature-based localization and mapping techniques deliver real-time vehicle odometry and sparse keypoint depth reconstruction, a dense model of the environment geometry and semantics (vegetation, buildings, etc.) is usually recovered offline with significant computation and storage. This paper develops a joint 2D-3D learning approach to reconstruct a local metric-semantic mesh at each camera keyframe maintained by a visual odometry algorithm. Given the estimated camera trajectory, the local meshes can be assembled into a global environment model that captures the terrain topology and semantics during online operation. A local mesh is reconstructed in two stages: initialization and refinement. In the initialization stage, we estimate the mesh vertex elevations by solving a least squares problem relating the vertex barycentric coordinates to the sparse keypoint depth measurements. In the refinement stage, we associate 2D image and semantic features with the 3D mesh vertices using camera projection and apply graph convolution to refine the mesh vertex spatial coordinates and semantic features based on joint 2D and 3D supervision. Quantitative and qualitative evaluations using real aerial images show the potential of our method to support environmental monitoring and surveillance applications.

[Figure 1: Example input and mesh reconstruction. The color, elevation, and semantics of the mesh are visualized in the top-right, bottom-left, and bottom-right plots.]

Unmanned aerial vehicles and other robot systems have the potential to impact environmental monitoring, security, and surveillance. This paper considers the problem of building a metric-semantic terrain model, represented as a triangular mesh, of an outdoor environment using a sequence of overhead RGB images obtained onboard a UAV. Figure 1 shows an example input and mesh reconstruction. However, range sensors and, hence, dense depth information are not available during outdoor flight. While specialized sensors and algorithms exist for real-time dense stereo matching, they are restricted to a limited depth range, much smaller than the distances commonly present in aerial images. Moreover, due to limited depth variation, depth estimation is unreliable in aerial images, where the depth variation is small compared to the absolute depth values. Recently, there has also been increasing interest in supplementing geometric reconstruction with semantic information.
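To make the initialization stage concrete: since each sparse keypoint falls inside one mesh triangle, its depth can be modeled as the barycentric combination of the three enclosing vertex elevations, and stacking all keypoints yields a sparse least-squares problem. The following is a minimal sketch of that solve; all function and variable names are my own, and the damping term stands in for whatever regularizer the paper actually uses.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

def initialize_vertex_elevations(bary, tri_vertex_ids, keypoint_depths,
                                 num_vertices, reg=1e-3):
    """Estimate mesh vertex elevations from sparse keypoint depths.

    Each keypoint k lies in one mesh triangle; its depth is modeled as
        d_k ~= sum_j bary[k, j] * z[tri_vertex_ids[k, j]]
    so all keypoints together give a sparse linear system A z ~= d.

    bary:            (K, 3) barycentric coordinates of each keypoint.
    tri_vertex_ids:  (K, 3) vertex indices of the enclosing triangle.
    keypoint_depths: (K,)   sparse depth measurements.
    """
    K = bary.shape[0]
    rows = np.repeat(np.arange(K), 3)       # one row per keypoint, 3 entries each
    cols = tri_vertex_ids.reshape(-1)
    vals = bary.reshape(-1)
    A = csr_matrix((vals, (rows, cols)), shape=(K, num_vertices))
    # Damped least squares keeps vertices without nearby keypoints well-posed.
    z = lsqr(A, keypoint_depths, damp=reg)[0]
    return z
```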
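The refinement stage couples 2D features with the 3D mesh through camera projection followed by graph convolution over the mesh edges. Below is a minimal PyTorch sketch of one such layer, assuming camera-frame vertices, a pinhole intrinsics matrix, and a normalized adjacency matrix; the layer structure, dimensions, and names are illustrative assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeshRefineLayer(nn.Module):
    """One refinement step: project vertices into the image, sample 2D
    features at the projections, run a simple graph convolution, then
    predict a vertex offset and per-vertex semantic logits."""

    def __init__(self, feat2d_dim, hidden_dim, num_classes):
        super().__init__()
        self.lin_self = nn.Linear(feat2d_dim + 3, hidden_dim)
        self.lin_nbr = nn.Linear(feat2d_dim + 3, hidden_dim)
        self.offset_head = nn.Linear(hidden_dim, 3)
        self.sem_head = nn.Linear(hidden_dim, num_classes)

    def forward(self, verts, feat2d, K, adj):
        # verts: (V, 3) camera-frame vertices; feat2d: (1, C, H, W)
        # K: (3, 3) camera intrinsics; adj: (V, V) normalized mesh adjacency.
        uv = verts @ K.T                         # perspective projection
        uv = uv[:, :2] / uv[:, 2:3]              # pixel coordinates
        H, W = feat2d.shape[-2:]
        grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                            2 * uv[:, 1] / (H - 1) - 1], dim=-1)
        sampled = F.grid_sample(feat2d, grid.view(1, 1, -1, 2),
                                align_corners=True)          # (1, C, 1, V)
        x = torch.cat([verts, sampled[0, :, 0].T], dim=-1)   # (V, C + 3)
        h = F.relu(self.lin_self(x) + self.lin_nbr(adj @ x)) # graph convolution
        return verts + self.offset_head(h), self.sem_head(h)
```

Projecting vertices to sample image features is what makes the learning "joint 2D-3D": the 2D backbone features supervise geometry and semantics on the mesh rather than on a per-pixel depth map.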
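Finally, assembling the per-keyframe meshes into the global model, given the visual odometry poses, amounts to rigid-body transforms plus face re-indexing. A sketch under assumed data layouts (all names hypothetical; the paper may merge or deduplicate overlapping vertices differently):

```python
import numpy as np

def assemble_global_mesh(local_meshes, keyframe_poses):
    """Transform per-keyframe meshes into the world frame and concatenate.

    local_meshes:   list of (verts (V, 3), faces (F, 3), sem (V,)) tuples
                    in each keyframe's camera frame.
    keyframe_poses: list of 4x4 world-from-camera transforms from
                    visual odometry.
    """
    all_verts, all_faces, all_sem, offset = [], [], [], 0
    for (verts, faces, sem), T_wc in zip(local_meshes, keyframe_poses):
        verts_w = verts @ T_wc[:3, :3].T + T_wc[:3, 3]  # R v + t per vertex
        all_verts.append(verts_w)
        all_faces.append(faces + offset)  # re-index into the global vertex list
        all_sem.append(sem)
        offset += verts.shape[0]
    return (np.concatenate(all_verts), np.concatenate(all_faces),
            np.concatenate(all_sem))
```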