Category-Level Articulated Object Pose Estimation

Xiaolong Li, He Wang, Li Yi, Leonidas Guibas, A. Lynn Abbott, Shuran Song

arXiv.org Artificial Intelligence 

This paper addresses the task of category-level pose estimation for articulated objects from a single depth image. We present a novel category-level approach that correctly accommodates object instances not previously seen during training. A key aspect of the work is the new Articulation-Aware Normalized Coordinate Space Hierarchy (A-NCSH), which represents the different articulated objects for a given object category. This approach not only provides the canonical representation of each rigid part, but also normalizes the joint parameters and joint states. We developed a deep network based on PointNet that is capable of predicting an A-NCSH representation for unseen object instances from single depth input. The predicted A-NCSH representation is then used for global pose optimization using kinematic constraints. We demonstrate that constraints associated with joints in the kinematic chain lead to improved performance in estimating pose and relative scale for each part of the object. We also demonstrate that the approach can tolerate cases of severe occlusion in the observed data.
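Once a network predicts canonical (normalized-space) coordinates for the observed points of a rigid part, the part's 7-DoF pose (scale, rotation, translation) can be recovered by aligning the predicted canonical points to the observed depth points. A minimal sketch of that per-part fitting step is below, using the standard Umeyama similarity alignment; note that the paper's A-NCSH method additionally couples the parts through kinematic (joint) constraints in a global optimization, which this simplified, hypothetical sketch omits.

```python
import numpy as np

def fit_similarity(canon, obs):
    """Recover (s, R, t) such that obs ≈ s * R @ canon_i + t for each point.

    canon: (N, 3) predicted canonical coordinates for one rigid part.
    obs:   (N, 3) corresponding observed 3D points from the depth image.
    This is the classic Umeyama closed-form similarity alignment, not the
    paper's joint-constrained global optimization.
    """
    mu_c = canon.mean(axis=0)
    mu_o = obs.mean(axis=0)
    cc = canon - mu_c
    oo = obs - mu_o
    # Cross-covariance between the centered point sets.
    H = cc.T @ oo / len(canon)
    U, S, Vt = np.linalg.svd(H)
    # Reflection correction keeps R a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    var_c = (cc ** 2).sum() / len(canon)
    s = np.trace(np.diag(S) @ D) / var_c   # relative scale of the part
    t = mu_o - s * R @ mu_c
    return s, R, t
```

In a full pipeline, one such fit per rigid part would serve as the initialization, after which joint parameters predicted in the normalized space constrain the parts' relative poses.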
