Cloth-Splatting: 3D Cloth State Estimation from RGB Supervision
Alberta Longhini, Marcel Büsching, Bardienus P. Duisterhof, Jens Lundell, Jeffrey Ichnowski, Mårten Björkman, Danica Kragic
arXiv.org Artificial Intelligence
Teaching robots to fold, drape, or otherwise manipulate deformable objects such as cloth is fundamental to unlocking a variety of applications ranging from healthcare to domestic and industrial environments [1]. While considerable progress has been made in rigid-object manipulation, manipulating deformables poses unique challenges, including infinite-dimensional state spaces, complex physical dynamics, and state estimation under self-occlusion [2]. The difficulty of state estimation has led existing works on visual manipulation either to rely exclusively on 2D images, overlooking the cloth's 3D structure [3, 4, 5], or to use 3D representations that neglect the valuable information in RGB observations [6, 7, 8]. Prior work on cloth state estimation often relies on 3D particle-based representations derived from depth sensors, including graphs [9, 10] and point clouds [11]. While point clouds effectively capture the object's observable state, they lack comprehensive structural information [6].
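The contrast between point-cloud and graph representations can be sketched concretely. The following is a minimal illustration, not the paper's implementation: it assumes a flat rectangular cloth discretized as a grid of particles, where the positions alone form a point cloud and the added mesh edges supply the structural connectivity that a raw point cloud lacks.

```python
import numpy as np

def cloth_grid_graph(rows, cols, spacing=0.01):
    """Return particle positions (N, 3) and mesh edges (E, 2) for a flat cloth.

    Hypothetical sketch: positions alone are a point cloud; the edges
    encode structural connectivity between neighboring particles.
    """
    # Particle positions on a regular grid in the x-y plane.
    ys, xs = np.mgrid[0:rows, 0:cols]
    positions = np.stack(
        [xs.ravel() * spacing, ys.ravel() * spacing, np.zeros(rows * cols)],
        axis=1,
    )
    # Edges connect horizontally and vertically adjacent particles.
    idx = np.arange(rows * cols).reshape(rows, cols)
    horizontal = np.stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()], axis=1)
    vertical = np.stack([idx[:-1, :].ravel(), idx[1:, :].ravel()], axis=1)
    edges = np.concatenate([horizontal, vertical], axis=0)
    return positions, edges

positions, edges = cloth_grid_graph(4, 5)
print(positions.shape)  # (20, 3): 4x5 particles in 3D
print(edges.shape)      # (31, 2): 16 horizontal + 15 vertical edges
```

Dropping `edges` leaves only the observable particle positions, which is exactly the structural information a depth-derived point cloud misses.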
Jan-3-2025
- Country:
  - Europe (0.28)
- Genre:
  - Research Report > New Finding (0.67)
- Industry:
  - Energy (0.31)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning > Neural Networks (0.46)
    - Representation & Reasoning (1.00)
    - Robots (1.00)
    - Vision (1.00)