Goto

Collaborating Authors

 Sipos, Andrea


GelSlim 4.0: Focusing on Touch and Reproducibility

arXiv.org Artificial Intelligence

Tactile sensing provides robots with rich feedback during manipulation, enabling a host of perception and control capabilities. Here, we present a new open-source, vision-based tactile sensor designed to promote reproducibility and accessibility across research and hobbyist communities. Building upon the GelSlim 3.0 sensor, our design features two key improvements: a simplified, modifiable finger structure and easily manufacturable lenses. To complement the hardware, we provide an open-source perception library that includes depth and shear field estimation algorithms to enable in-hand pose estimation, slip detection, and other manipulation tasks. Our sensor is accompanied by comprehensive manufacturing documentation, ensuring the design can be readily produced by users with varying levels of expertise. We validate the sensor's reproducibility through extensive human usability testing. For documentation, code, and data, please visit the project website: https://www.mmintlab.com/research/gelslim-4-0/
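To make the shear-field idea concrete, here is a minimal sketch of how such a quantity can be recovered from a vision-based tactile sensor. This is not the GelSlim perception library's API (see the project website for that); it only illustrates the general approach of treating tangential gel displacement as dense optical flow between a reference (undeformed) image and the current image, using OpenCV. The function names are placeholders.

```python
# Hedged sketch: shear-field estimation for a vision-based tactile sensor.
# Not the GelSlim 4.0 perception library; a minimal optical-flow illustration.
import cv2
import numpy as np

def estimate_shear_field(reference_bgr: np.ndarray, current_bgr: np.ndarray) -> np.ndarray:
    """Return an (H, W, 2) array of per-pixel gel displacements in pixels."""
    ref_gray = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2GRAY)
    # Dense Farneback optical flow approximates the tangential (shear)
    # displacement of the gel surface between the two frames.
    flow = cv2.calcOpticalFlowFarneback(
        ref_gray, cur_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return flow

def mean_shear(flow: np.ndarray) -> np.ndarray:
    """Aggregate the field into one 2D shear vector; an abrupt change in this
    vector during a grasp is a common cue for slip detection."""
    return flow.reshape(-1, 2).mean(axis=0)
```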


MultiSCOPE: Disambiguating In-Hand Object Poses with Proprioception and Tactile Feedback

arXiv.org Artificial Intelligence

In this paper, we propose a method for estimating in-hand object poses using proprioception and tactile feedback from a bimanual robotic system. Our method addresses the problem of reducing pose uncertainty through a sequence of frictional contact interactions between the grasped objects. As part of our method, we propose 1) a tool segmentation routine that facilitates contact location and object pose estimation, 2) a loss that allows reasoning over solution consistency between interactions, and 3) a loss to promote converging to object poses and contact locations that explain the external force-torque experienced by each arm. We demonstrate the efficacy of our method in a task-based demonstration both in simulation and on a real-world bimanual platform and show significant improvement in object pose estimation over single interactions.
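As a rough illustration of what losses 2) and 3) could look like, the sketch below makes several simplifying assumptions: a rigid grasp (so every interaction should explain the same in-hand pose), a single point contact, and forces expressed in the force-torque sensor frame. It is not the authors' implementation, and all names are hypothetical.

```python
# Hedged sketch of the two losses described in the abstract; not the
# MultiSCOPE implementation.
import numpy as np

def consistency_loss(pose_estimates: np.ndarray) -> float:
    """pose_estimates: (K, 6) per-interaction estimates of one in-hand pose
    (x, y, z, roll, pitch, yaw). Because the object does not move in the hand,
    all interactions should agree, so we penalize spread around the
    consensus (mean) estimate."""
    consensus = pose_estimates.mean(axis=0)
    return float(np.sum((pose_estimates - consensus) ** 2))

def wrench_loss(contact_point: np.ndarray, contact_force: np.ndarray,
                measured_wrench: np.ndarray) -> float:
    """Penalize contact hypotheses that fail to explain the measured
    force-torque. A point force f applied at location p (both in the F/T
    sensor frame) predicts the 6D wrench [f, p x f]."""
    predicted = np.concatenate([contact_force, np.cross(contact_point, contact_force)])
    return float(np.sum((predicted - measured_wrench) ** 2))
```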


Simultaneous Contact Location and Object Pose Estimation Using Proprioception and Tactile Feedback

arXiv.org Artificial Intelligence

Joint estimation of grasped object pose and extrinsic contacts is central to robust and dexterous manipulation. In this paper, we propose a novel state-estimation algorithm that jointly estimates contact location and object pose in 3D using exclusively proprioception and tactile feedback. Our approach leverages two complementary particle filters: one to estimate contact location (CPFGrasp) and another to estimate object poses (SCOPE). We implement and evaluate our approach on real-world single-arm and dual-arm robotic systems. We demonstrate that by bringing two objects into contact, the robots can infer contact location and object poses simultaneously. Our proposed method can be applied to a number of downstream tasks that require accurate pose estimates, such as tool use and assembly. Code and data can be found at https://github.com/MMintLab/scope.
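Both estimators are particle filters at heart. The snippet below sketches the generic machinery such filters share (diffuse hypotheses, reweight by a measurement likelihood, resample); the likelihood function is a stand-in for the paper's proprioceptive/tactile measurement model, and none of this code is taken from the SCOPE repository.

```python
# Hedged sketch of generic particle-filter machinery; each particle is one
# hypothesis (e.g. a contact location or a 6-DoF object pose).
import numpy as np

rng = np.random.default_rng(0)

def resample(particles: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Multinomial resampling: duplicate likely hypotheses, drop unlikely ones."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

def update(particles, weights, measurement, likelihood_fn, noise_std=0.001):
    """One filter step per contact interaction."""
    # 1. Diffuse hypotheses slightly so the filter does not collapse early.
    particles = particles + rng.normal(0.0, noise_std, particles.shape)
    # 2. Reweight each hypothesis by how well it explains the measurement
    #    (assumes at least one hypothesis has nonzero likelihood).
    weights = weights * np.array([likelihood_fn(p, measurement) for p in particles])
    weights = weights / weights.sum()
    # 3. Resample when the effective sample size drops too low.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        particles = resample(particles, weights)
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Example usage with 500 pose hypotheses and a placeholder likelihood:
#   particles = rng.uniform(-0.05, 0.05, (500, 6))
#   weights = np.full(500, 1.0 / 500)
#   particles, weights = update(particles, weights, wrench, my_likelihood)
```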