Ditto in the House: Building Articulation Models of Indoor Scenes through Interactive Perception

Cheng-Chun Hsu, Zhenyu Jiang, Yuke Zhu

arXiv.org Artificial Intelligence 

Abstract-- Virtualizing the physical world into virtual models has been a critical technique for robot navigation and planning in the real world. Prior work primarily focuses on individual objects, whereas scaling to room-sized environments requires the robot to efficiently and effectively explore the large-scale 3D space for meaningful interactions. We introduce an interactive perception approach to this task. The robot discovers and physically interacts with the articulated objects in the environment, collecting visual observations before and after each interaction. Based on these observations, the robot infers the articulation properties of the objects.

Virtualizing the real world into virtual models is a crucial step for robots to operate in everyday environments. Intelligent robots rely on these models to understand their surroundings and plan their actions in unstructured scenes; such models also help mobile robots localize themselves and navigate. Nevertheless, real-world manipulation would require a robot to depart from reconstructing a static scene to unraveling the physical properties of objects.
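The abstract only names the inference step. As one concrete illustration of what "inferring articulation properties from before/after observations" can mean, below is a minimal NumPy sketch that recovers a joint type and axis from corresponded 3D points on a moving part observed before and after an interaction. It assumes point correspondences are given, uses a Kabsch-style rigid-transform fit, and all function names are illustrative; this is not the paper's actual method, which builds on learned perception models.

import numpy as np

def estimate_rigid_transform(P, Q):
    # Kabsch: least-squares rotation R and translation t with Q ~= R P + t,
    # where P, Q are (N, 3) arrays of corresponded points.
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def infer_articulation(pts_before, pts_after, rot_eps_deg=2.0):
    # Classify the joint as revolute or prismatic from a single interaction.
    R, t = estimate_rigid_transform(pts_before, pts_after)
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_theta))
    if angle > rot_eps_deg:
        # Revolute joint: the rotation axis is the eigenvector of R with
        # eigenvalue 1 (recovered up to sign).
        w, v = np.linalg.eig(R)
        axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
        return {"type": "revolute",
                "axis": axis / np.linalg.norm(axis),
                "angle_deg": float(angle)}
    # Prismatic joint: the motion is (nearly) a pure translation.
    return {"type": "prismatic",
            "axis": t / np.linalg.norm(t),
            "travel": float(np.linalg.norm(t))}

# Synthetic check: rotate a door-like part 30 degrees about the z-axis.
theta = np.radians(30.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
before = np.random.default_rng(0).uniform(-1.0, 1.0, size=(100, 3))
after = before @ Rz.T
print(infer_articulation(before, after))  # revolute, axis ~ [0, 0, 1]

A single observed motion only constrains the joint along the directions it actually moved, which is why the approach couples exploration (choosing where and how to interact) with inference: each purposeful interaction produces exactly the before/after evidence this kind of estimator needs.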
