VisuoSpatial Foresight for Physical Sequential Fabric Manipulation
Hoque, Ryan, Seita, Daniel, Balakrishna, Ashwin, Ganapathi, Aditya, Tanwani, Ajay Kumar, Jamali, Nawid, Yamane, Katsu, Iba, Soshi, Goldberg, Ken
Robotic fabric manipulation has applications in home robotics, textiles, senior care, and surgery. Existing fabric manipulation techniques, however, are designed for specific tasks, making it difficult to generalize across different but related tasks. We build upon the Visual Foresight framework to learn fabric dynamics that can be efficiently reused to accomplish different sequential fabric manipulation tasks with a single goal-conditioned policy. We extend our earlier work on VisuoSpatial Foresight (VSF), which learns visual dynamics on domain-randomized RGB images and depth maps simultaneously and completely in simulation. In this earlier work, we evaluated VSF on multi-step fabric smoothing and folding tasks against 5 baseline methods in simulation and on the da Vinci Research Kit (dVRK) surgical robot without any demonstrations at train or test time. A key finding was that depth sensing significantly improves performance: RGBD data yields an 80% improvement in fabric folding success rate in simulation over pure RGB data. In this work, we vary 4 components of VSF: data generation, the choice of visual dynamics model, cost function, and optimization procedure. Results suggest that training visual dynamics models using longer, corner-based actions can improve the efficiency of fabric folding by 76% and enable, with 90% reliability, a physical sequential fabric folding task that VSF could not previously perform. Code, data, videos, and supplementary material are available at https://sites.google.com/view/fabric-vsf/.
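At its core, this family of methods plans pick-and-pull actions by minimizing a goal-image cost under the learned visual dynamics model, typically with a sampling-based optimizer such as the cross-entropy method (CEM), as in the Visual Foresight line of work. The sketch below illustrates that planning loop under stated assumptions: `dynamics_model.rollout` and the 4-dimensional action parameterization are hypothetical stand-ins, not the paper's exact interfaces.

```python
import numpy as np

def cem_plan(dynamics_model, obs_rgbd, goal_rgbd, horizon=5,
             pop_size=100, n_elite=10, n_iters=3, act_dim=4):
    """Cross-entropy method over action sequences. The cost is the pixel
    distance between the model's predicted final RGBD frame and the goal
    image, so one learned model serves any visual goal."""
    mu = np.zeros((horizon, act_dim))
    sigma = np.ones((horizon, act_dim))
    for _ in range(n_iters):
        # Sample candidate action sequences around the current mean.
        seqs = np.random.normal(mu, sigma, size=(pop_size, horizon, act_dim))
        costs = []
        for seq in seqs:
            preds = dynamics_model.rollout(obs_rgbd, seq)  # predicted RGBD frames
            costs.append(np.linalg.norm(preds[-1] - goal_rgbd))
        # Refit the sampling distribution to the lowest-cost sequences.
        elite = seqs[np.argsort(costs)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu[0]  # execute the first action, observe, and replan (MPC)
```

In practice the robot executes only the first action of the optimized sequence and replans from the new observation, which is what lets the same learned dynamics model be reused across smoothing, folding, and other goal images.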
Deep Imitation Learning of Sequential Fabric Smoothing Policies
Seita, Daniel, Ganapathi, Aditya, Hoque, Ryan, Hwang, Minho, Cen, Edward, Tanwani, Ajay Kumar, Balakrishna, Ashwin, Thananjeyan, Brijen, Ichnowski, Jeffrey, Jamali, Nawid, Yamane, Katsu, Iba, Soshi, Canny, John, Goldberg, Ken
Sequential pulling policies to flatten and smooth fabrics have applications from surgery to manufacturing to home tasks such as bed making and folding clothes. Due to the complexity of fabric states and dynamics, we apply deep imitation learning to learn policies that, given color or depth images of a rectangular fabric sample, estimate pick points and pull vectors to spread the fabric to maximize coverage. To generate data, we develop a fabric simulator and an algorithmic demonstrator that has access to complete state information. We train policies in simulation using domain randomization and dataset aggregation (DAgger) on three tiers of difficulty in the initial randomized configuration. We present results comparing five baseline policies to learned policies and report systematic comparisons of color vs. depth images as inputs. In simulation, learned policies achieve comparable or superior performance to analytic baselines. In 120 physical experiments with the da Vinci Research Kit (dVRK) surgical robot, policies trained in simulation attain 86% and 69% final coverage for color and depth inputs, respectively, suggesting the feasibility of learning fabric smoothing policies from simulation. Supplementary material is available at https://sites.google.com/view/fabric-smoothing.
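A minimal sketch of the DAgger training loop described above, assuming a simulator whose algorithmic demonstrator can label any visited state with a pick point and pull vector; `sim`, `demonstrator`, and the policy interface are hypothetical stand-ins for the paper's components, not its actual code.

```python
def dagger(sim, demonstrator, policy, n_rounds=10, episodes_per_round=20):
    """Dataset aggregation: the learner's policy chooses where to visit,
    while the full-state demonstrator labels each visited observation."""
    dataset = []  # (image, expert_action) pairs; action = pick point + pull vector
    for _ in range(n_rounds):
        for _ in range(episodes_per_round):
            obs = sim.reset()  # domain-randomized RGB or depth image
            done = False
            while not done:
                action = policy.predict(obs)                   # learner acts
                expert_action = demonstrator.label(sim.state)  # expert labels state
                dataset.append((obs, expert_action))
                obs, done = sim.step(action)
        policy.fit(dataset)  # retrain on the aggregated dataset
    return policy
```

The key DAgger idea is that states are visited under the learner's own policy while labels come from the demonstrator, which mitigates the covariate shift that plain behavior cloning suffers when the learner drifts off the expert's state distribution.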
Robot Bed-Making: Deep Transfer Learning Using Depth Sensing of Deformable Fabric
Seita, Daniel, Jamali, Nawid, Laskey, Michael, Berenstein, Ron, Tanwani, Ajay Kumar, Baskaran, Prakash, Iba, Soshi, Canny, John, Goldberg, Ken
Bed-making is a common task well-suited for home robots since it is tolerant to error and not time-critical. Bed-making can also be difficult for senior citizens and those with limited mobility due to the bending and reaching movements required. Autonomous bed-making combines multiple challenges in robotics: perception in unstructured environments, deformable object manipulation, transfer learning, and sequential decision making. We formalize the bed-making problem as one of maximizing surface coverage with a blanket, and explore algorithmic approaches that use deep learning on depth images to be invariant to the color and pattern of the blankets. We train two networks: one to identify a corner of the blanket and another to determine when to transition to the other side of the bed. Using the first network, the robot grasps at its estimate of the blanket corner and then pulls it to the appropriate corner of the bed frame. The second network estimates whether the robot has sufficiently covered one side and can transition to the other, or whether it should attempt another grasp from the same side. We evaluate with two robots, the Toyota HSR and the Fetch, and three blankets. Using 2018 and 654 depth images to train the grasp and transition networks, respectively, experiments with a quarter-scale twin bed achieve an average of 91.7% blanket coverage, nearly matching human supervisors at 95.0% coverage. Data is available at https://sites.google.com/view/bed-make.
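The two-network decomposition above maps naturally onto a simple control loop. The sketch below shows one plausible arrangement; the robot API (`get_depth_image`, `grasp_and_pull`, `move_to_side`) and the network wrappers are hypothetical stand-ins for the HSR/Fetch interfaces, not the authors' implementation.

```python
def make_bed_side(robot, grasp_net, transition_net, max_attempts=4):
    """Grasp the estimated blanket corner and pull it to the bed-frame
    corner, retrying until the transition network judges the current
    side sufficiently covered."""
    for _ in range(max_attempts):
        depth = robot.get_depth_image()
        corner_px = grasp_net.predict(depth)     # (u, v) pixel of blanket corner
        robot.grasp_and_pull(corner_px)          # pull toward the bed-frame corner
        depth = robot.get_depth_image()
        if transition_net.predict(depth) > 0.5:  # side covered; move on
            return True
    return False

def make_bed(robot, grasp_net, transition_net):
    """Run the grasp/transition loop on each side of the bed in turn."""
    for side in ("near", "far"):
        robot.move_to_side(side)
        make_bed_side(robot, grasp_net, transition_net)
```

Because both networks consume only depth images, the same trained pipeline can be deployed on blankets of different colors and patterns, which is the invariance the abstract highlights.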