TerrainMesh: Metric-Semantic Terrain Reconstruction from Aerial Images Using Joint 2D-3D Learning
Feng, Qiaojun, Atanasov, Nikolay
Abstract--This paper considers outdoor terrain mapping using RGB images obtained from an aerial vehicle. While feature-based localization and mapping techniques deliver real-time vehicle odometry and sparse keypoint depth reconstruction, a dense model of the environment geometry and semantics (vegetation, buildings, etc.) is usually recovered offline with significant computation and storage. This paper develops a joint 2D-3D learning approach to reconstruct a local metric-semantic mesh at each camera keyframe maintained by a visual odometry algorithm. Given the estimated camera trajectory, the local meshes can be assembled into a global environment model to capture the terrain topology and semantics during online operation. A local mesh is reconstructed using an initialization and refinement stage. In the initialization stage, we estimate the mesh vertex elevation by solving a least squares problem relating the vertex barycentric coordinates to the sparse keypoint depth measurements. In the refinement stage, we associate 2D image and semantic features with the 3D mesh vertices using camera projection and apply graph convolution to refine the mesh vertex spatial coordinates and semantic features based on joint 2D and 3D supervision. Quantitative and qualitative evaluation using real aerial images shows the potential of our method to support environmental monitoring and surveillance applications.
Fig. 1: Example input and mesh reconstruction. The color, elevation, and semantics of the mesh are visualized in the top-right, bottom-left, and bottom-right plots.
However, range sensors and, hence, dense depth information are not available during outdoor flight. This paper considers the problem of building a metric-semantic terrain model, represented as a triangular mesh, of an outdoor environment using a sequence of overhead RGB images obtained onboard a UAV. Figure 1 shows an example input and mesh reconstruction. While specialized sensors and algorithms exist for real-time dense stereo matching, they are restricted to a limited depth range, much smaller than the distances commonly present in aerial images. Moreover, depth estimation is challenging in aerial images, where the depth variation is small compared to the absolute depth values. Recently, there has also been increasing interest in supplementing ...
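The initialization stage described above reduces to a linear least squares problem: each sparse keypoint depth is modeled as a barycentric combination of the depths of the three vertices of the triangle containing it, and stacking these constraints gives a sparse linear system in the unknown vertex depths. The following is a minimal sketch of that idea, not the paper's actual implementation; the function name, data layout, and the Tikhonov regularizer `lam` are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch of least-squares mesh initialization. Each keypoint
# measurement is a triple (vertex_ids, barycentric_coords, depth), encoding
# the constraint  depth ≈ b0*z[i] + b1*z[j] + b2*z[k]  for the triangle
# (i, j, k) that contains the keypoint in the image plane.
def init_vertex_depths(num_vertices, keypoints, lam=1e-3):
    """Solve min_z ||A z - d||^2 + lam ||z||^2 for the vertex depths z."""
    rows, cols, vals, d = [], [], [], []
    for r, (vids, bary, depth) in enumerate(keypoints):
        for v, b in zip(vids, bary):
            rows.append(r)
            cols.append(v)
            vals.append(b)
        d.append(depth)
    A = np.zeros((len(keypoints), num_vertices))
    A[rows, cols] = vals
    # Small Tikhonov term keeps vertices with no nearby keypoints well-posed.
    AtA = A.T @ A + lam * np.eye(num_vertices)
    return np.linalg.solve(AtA, A.T @ np.array(d))
```

In practice the system would be built with sparse matrices, since each row touches only three vertices; the dense version above just keeps the sketch short.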
Constructing Effective Personalized Policies Using Counterfactual Inference from Biased Data Sets with Many Features
Atan, Onur, Zame, William R., Feng, Qiaojun, van der Schaar, Mihaela
This paper proposes a novel approach for constructing effective personalized policies when the observed data lacks counterfactual information, is biased, and possesses many features. The approach is applicable in a wide variety of settings from healthcare to advertising to education to finance. These settings have in common that the decision maker can observe, for each previous instance, an array of features of the instance, the action taken in that instance, and the reward realized -- but not the rewards of actions that were not taken: the counterfactual information. Learning in such settings is made even more difficult because the observed data is typically biased by the existing policy (that generated the data) and because the array of features that might affect the reward in a particular instance -- and hence should be taken into account in deciding on an action in each particular instance -- is often vast. The approach presented here estimates propensity scores for the observed data, infers counterfactuals, identifies a (relatively small) number of features that are (most) relevant for each possible action and instance, and prescribes a policy to be followed. Comparison of the proposed algorithm against the state-of-the-art algorithm on actual datasets demonstrates that the proposed algorithm achieves a significant improvement in performance.
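One standard building block behind the propensity-score step described above is inverse-propensity weighting (IPW): logged rewards are reweighted by the inverse probability that the logging policy took the observed action, which corrects for the bias of the data-generating policy when evaluating a candidate policy. The sketch below is illustrative only, not the paper's algorithm; the function name and data layout are assumptions.

```python
import numpy as np

# Hedged sketch of inverse-propensity-weighted policy evaluation.
# Inputs are parallel arrays over the logged instances:
#   actions      - action taken by the logging policy
#   rewards      - reward observed for that action
#   propensities - probability the logging policy assigned to that action
#   policy_actions - action the candidate policy would take instead
def ipw_policy_value(actions, rewards, propensities, policy_actions):
    """Unbiased estimate of the candidate policy's mean reward."""
    actions = np.asarray(actions)
    rewards = np.asarray(rewards, dtype=float)
    propensities = np.asarray(propensities, dtype=float)
    policy_actions = np.asarray(policy_actions)
    # Weight 1/propensity where the logged action matches the candidate
    # policy's choice; zero weight otherwise.
    w = (actions == policy_actions).astype(float) / propensities
    return float(np.mean(w * rewards))
```

The estimate is unbiased when the propensities are correct, but its variance grows as propensities shrink, which is one reason the paper's combination with feature selection matters in high-dimensional settings.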