Touch2Touch: Cross-Modal Tactile Generation for Object Manipulation
Samanta Rodriguez, Yiming Dou, Miquel Oller, Andrew Owens, Nima Fazeli
arXiv.org Artificial Intelligence
Today's touch sensors come in many shapes and sizes. This has made it challenging to develop general-purpose touch processing methods since models are generally tied to one specific sensor design. We address this problem by performing cross-modal prediction between touch sensors: given the tactile signal from one sensor, we use a generative model to estimate how the same physical contact would be perceived by another sensor. This allows us to apply sensor-specific methods to the generated signal. We implement this idea by training a diffusion model to translate between the popular GelSlim and Soft Bubble sensors. As a downstream task, we perform in-hand object pose estimation using GelSlim sensors while using an algorithm that operates only on Soft Bubble signals. The dataset, the code, and additional details can be found at https://www.mmintlab.com/research/touch2touch/.
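The abstract describes training a diffusion model that, conditioned on one sensor's tactile image, generates the corresponding image for another sensor. As a rough illustration of that idea, the sketch below implements a single DDPM-style training step for a conditional denoiser: it predicts the noise added to a (toy) Soft Bubble image given the noisy image, the paired (toy) GelSlim image, and the timestep. The tiny conv network, image sizes, noise schedule, and variable names are all illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: conditional denoising diffusion for cross-sensor
# tactile translation. Everything here (network, sizes, schedule) is
# an assumption for illustration, not the Touch2Touch implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 100  # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)          # linear noise schedule
abar = torch.cumprod(1.0 - betas, dim=0)       # cumulative alpha-bar

class CondDenoiser(nn.Module):
    """Predicts the noise in a noisy target-sensor image, conditioned on
    the source-sensor image and a normalized timestep channel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x_t, cond, t):
        # Broadcast t/T as an extra input channel (a crude conditioning).
        t_map = (t.float() / T).view(-1, 1, 1, 1).expand_as(x_t)
        return self.net(torch.cat([x_t, cond, t_map], dim=1))

def train_step(model, opt, source, target):
    t = torch.randint(0, T, (target.shape[0],))
    eps = torch.randn_like(target)
    a = abar[t].view(-1, 1, 1, 1)
    # Forward diffusion: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
    x_t = a.sqrt() * target + (1 - a).sqrt() * eps
    loss = F.mse_loss(model(x_t, source, t), eps)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy paired data standing in for (GelSlim, Soft Bubble) tactile images.
gelslim = torch.randn(8, 1, 16, 16)
bubble = torch.randn(8, 1, 16, 16)
model = CondDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = train_step(model, opt, gelslim, bubble)
```

At inference time, one would run the usual reverse-diffusion loop from pure noise, conditioning each denoising step on the GelSlim image, to sample the predicted Soft Bubble signal; downstream sensor-specific algorithms then consume that generated signal.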
Sep-12-2024
- Country:
  - Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
  - Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
  - North America > United States > Michigan (0.04)
- Genre:
- Research Report (1.00)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Robots (1.00)
- Vision (1.00)