Jiang, Zhenyu
Ditto in the House: Building Articulation Models of Indoor Scenes through Interactive Perception
Hsu, Cheng-Chun, Jiang, Zhenyu, Zhu, Yuke
Abstract-- Virtualizing the physical world into virtual models has been a critical technique for robot navigation and planning in the real world. Prior work primarily focuses on individual objects, whereas scaling to room-sized environments requires the robot to efficiently and effectively explore the large-scale 3D space for meaningful interactions. We introduce an interactive perception approach to this task. The robot discovers and physically interacts with the articulated objects in the environment. Based on the visual observations before and after the interactions, the robot infers the articulation properties of the objects.
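The abstract describes a perceive-interact-perceive loop: the robot observes a candidate articulated object, interacts with it, observes again, and infers the articulation from the pair of observations. The sketch below illustrates that loop in Python. It is not the paper's released code; the robot interface (capture_point_cloud, interact_with) and the ArticulationEstimate fields are hypothetical placeholders for whatever perception stack and articulation model are actually used.

# Minimal sketch (assumed names, not the authors' code) of the interactive-perception
# loop: observe, interact with a candidate articulated object, observe again, and
# infer articulation parameters from the before/after observation pair.

from dataclasses import dataclass

import numpy as np


@dataclass
class ArticulationEstimate:
    """Inferred articulation properties of one interacted object (assumed fields)."""
    joint_type: str           # e.g. "revolute" or "prismatic"
    joint_axis: np.ndarray    # 3D joint axis direction
    joint_state_delta: float  # observed change in joint state (rad or m)


def infer_articulation(obs_before: np.ndarray, obs_after: np.ndarray) -> ArticulationEstimate:
    """Placeholder for a learned articulation-inference model that would compare
    the observations captured before and after the interaction."""
    # Dummy estimate; a real model would regress these from the observation pair.
    return ArticulationEstimate("revolute", np.array([0.0, 0.0, 1.0]), 0.3)


def build_scene_articulation_model(robot, candidate_objects):
    """Interact with each discovered candidate and accumulate articulation estimates."""
    scene_model = {}
    for obj in candidate_objects:
        obs_before = robot.capture_point_cloud()   # assumed robot API
        robot.interact_with(obj)                   # e.g. pull a handle, push a door
        obs_after = robot.capture_point_cloud()
        scene_model[obj] = infer_articulation(obs_before, obs_after)
    return scene_model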
Synergies Between Affordance and Geometry: 6-DoF Grasp Detection via Implicit Representations
Jiang, Zhenyu, Zhu, Yifeng, Svetlik, Maxwell, Fang, Kuan, Zhu, Yuke
Abstract-- Grasp detection in clutter requires the robot to reason about the 3D scene from incomplete and noisy perception. Here we consider the problem of 6-DoF grasp detection in clutter from the 3D point cloud of the robot's on-board depth camera; our goal is to predict a set of candidate grasps on a clutter of objects from a partial point cloud for grasping and decluttering. In this work, we draw insight that 3D reconstruction and grasp learning are two intimately connected tasks, both of which require a fine-grained understanding of local geometry details, and we investigate the synergistic relations between geometry reasoning and grasp learning. Our key intuition is that a learned representation capable of reconstructing the 3D scene encodes relevant geometry information for grasping; supervision from grasps, in turn, produces better 3D reconstruction in graspable regions. We train the model on self-supervised grasp trial data in simulation. Evaluation is conducted on a clutter removal task, where the robot clears cluttered objects by grasping them one at a time.
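As a rough illustration of the multi-task idea in this abstract (a shared scene representation trained jointly for grasp affordance and 3D reconstruction through implicit decoders), here is a minimal PyTorch sketch. It is an assumed simplification, not the authors' released architecture: a single global feature stands in for fine-grained local features, and the grasp head predicts only a per-point quality score.

# Sketch of joint grasp-affordance + occupancy learning on a shared encoder.
# Both losses update the same encoder, which is the "synergy" the abstract refers to.

import torch
import torch.nn as nn


class SharedImplicitGraspModel(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Encoder: maps a voxelized observation (e.g. a TSDF grid) to a scene feature.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )

        # Implicit decoders: (scene feature, 3D query point) -> prediction at that point.
        def head(out_dim):
            return nn.Sequential(
                nn.Linear(feat_dim + 3, 128), nn.ReLU(), nn.Linear(128, out_dim)
            )

        self.grasp_quality_head = head(1)  # grasp affordance at the query point
        self.occupancy_head = head(1)      # occupancy for 3D reconstruction

    def forward(self, tsdf: torch.Tensor, query_points: torch.Tensor):
        # tsdf: (B, 1, D, H, W); query_points: (B, N, 3) in normalized scene coordinates.
        feat = self.encoder(tsdf)                                    # (B, F)
        feat = feat.unsqueeze(1).expand(-1, query_points.shape[1], -1)
        x = torch.cat([feat, query_points], dim=-1)                  # (B, N, F+3)
        return self.grasp_quality_head(x), self.occupancy_head(x)


# Joint training step on toy data: both task losses back-propagate into the shared encoder.
model = SharedImplicitGraspModel()
tsdf = torch.randn(2, 1, 32, 32, 32)
points = torch.rand(2, 256, 3)
grasp_label = torch.rand(2, 256, 1)
occ_label = torch.randint(0, 2, (2, 256, 1)).float()
grasp_logits, occ_logits = model(tsdf, points)
loss = nn.functional.binary_cross_entropy_with_logits(grasp_logits, grasp_label) \
     + nn.functional.binary_cross_entropy_with_logits(occ_logits, occ_label)
loss.backward()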