Flat'n'Fold: A Diverse Multi-Modal Dataset for Garment Perception and Manipulation
Zhuang, Lipeng, Fan, Shiyu, Ru, Yingdong, Audonnet, Florent, Henderson, Paul, Aragon-Camarasa, Gerardo
arXiv.org Artificial Intelligence
Abstract -- We present Flat'n'Fold, a novel large-scale dataset for garment manipulation that addresses critical gaps in existing datasets. We quantify the dataset's diversity and complexity compared to existing benchmarks and show that it features natural and diverse real-world human and robot demonstrations, in terms of both visual and action information. To showcase Flat'n'Fold's utility, we establish new benchmarks for grasping point prediction. This underscores Flat'n'Fold's potential to drive advances in robotic perception and manipulation of deformable objects.

Manipulating garments remains a significant challenge in robotics. Tasks such as flattening and folding require understanding the vast space of configurations that garments can adopt [1], [2], and planning complex sequences of actions.

Human-controlled Robot Demonstrations: an expert human operator controls a robot to execute similar garment manipulation tasks, aiming to replicate natural, human-like approaches within the robot's operational limitations.
Sep-26-2024