tactile data
OSMO: Open-Source Tactile Glove for Human-to-Robot Skill Transfer
Yin, Jessica, Qi, Haozhi, Wi, Youngsun, Kundu, Sayantan, Lambeta, Mike, Yang, William, Wang, Changhao, Wu, Tingfan, Malik, Jitendra, Hellebrekers, Tess
Abstract-- Human video demonstrations provide abundant training data for learning robot policies, but video alone cannot capture the rich contact signals critical for mastering manipulation. We introduce OSMO, an open-source wearable tactile glove designed for human-to-robot skill transfer. The glove features 12 three-axis tactile sensors across the fingertips and palm and is designed to be compatible with state-of-the-art hand-tracking methods for in-the-wild data collection. We demonstrate that a robot policy trained exclusively on human demonstrations collected with OSMO, without any real robot data, is capable of executing a challenging contact-rich manipulation task. On a real-world wiping task requiring sustained contact pressure, our tactile-aware policy achieves a 72% success rate, outperforming vision-only baselines by eliminating contact-related failure modes. We release complete hardware designs, firmware, and assembly instructions to support community adoption.

Tactile sensing enables humans to excel at manipulation by providing real-time feedback about contact forces that vision alone cannot capture. Consider trying to dice a carrot from video alone; one cannot observe the nuanced force control that makes the task successful. Many different applied forces can result in nearly identical visual appearances, leaving critical information about force control invisible to vision.
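As a concrete illustration of the kind of per-timestep record such a glove yields, below is a minimal Python sketch pairing the 12 three-axis tactile readings with a tracked hand pose. The field names, the 21-keypoint hand representation, and the contact threshold are illustrative assumptions, not OSMO's actual data format.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class GloveFrame:
    # Hypothetical per-timestep record; not OSMO's actual format.
    timestamp: float
    tactile: np.ndarray    # shape (12, 3): per-sensor 3-axis force reading
    hand_pose: np.ndarray  # shape (21, 3): assumed 21-keypoint hand tracking

def contact_mask(frame: GloveFrame, threshold: float = 0.1) -> np.ndarray:
    """Flag sensors whose force magnitude exceeds a contact threshold."""
    magnitudes = np.linalg.norm(frame.tactile, axis=1)  # (12,)
    return magnitudes > threshold

# Example: a synthetic frame with contact on two fingertip sensors.
frame = GloveFrame(timestamp=0.0,
                   tactile=np.zeros((12, 3)),
                   hand_pose=np.zeros((21, 3)))
frame.tactile[0] = [0.0, 0.1, 0.8]  # normal-dominated press
frame.tactile[1] = [0.4, 0.0, 0.3]  # shear-dominated slide
print(contact_mask(frame))  # True for sensors 0 and 1, False elsewhere
```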
Tactile Data Recording System for Clothing with Motion-Controlled Robotic Sliding
Eguchi, Michikuni, Kitagishi, Takekazu, Hiroi, Yuichi, Hiraki, Takefumi
The tactile sensation of clothing is critical to wearer comfort. To reveal physical properties that make clothing comfortable, systematic collection of tactile data during sliding motion is required. We propose a robotic arm-based system for collecting tactile data from intact garments. The system performs stroking measurements with a simulated fingertip while precisely controlling speed and direction, enabling creation of motion-labeled, multimodal tactile databases. Machine learning evaluation showed that including motion-related parameters improved identification accuracy for audio and acceleration data, demonstrating the efficacy of motion-related labels for characterizing clothing tactile sensation. This system provides a scalable, non-destructive method for capturing tactile data of clothing, contributing to future studies on fabric perception and reproduction.
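The evaluation described, comparing classification with and without motion-related labels, can be illustrated with a small sketch. Below, synthetic signal features stand in for the paper's audio/acceleration data, and commanded speed and direction are appended as extra features; everything here (feature dimensions, class count, classifier choice) is an assumption for illustration, so the printed accuracies are meaningless in themselves.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
signal_feats = rng.normal(size=(n, 8))        # stand-in audio/accel statistics
speed = rng.choice([0.05, 0.1, 0.2], size=n)  # commanded stroke speed (m/s)
direction = rng.choice([0.0, 90.0], size=n)   # stroke direction (degrees)
labels = rng.integers(0, 4, size=n)           # four hypothetical fabric classes

# Compare classification with and without motion-related labels appended.
without_motion = signal_feats
with_motion = np.column_stack([signal_feats, speed, direction])

for name, X in [("signal only", without_motion),
                ("signal + motion", with_motion)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
    print(name, round(acc.mean(), 3))
```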
A Humanoid Visual-Tactile-Action Dataset for Contact-Rich Manipulation
Kwon, Eunju, Oh, Seungwon, Baek, In-Chang, Park, Yucheon, Kim, Gyungbo, Moon, JaeYoung, Choi, Yunho, Kim, Kyung-Joong
Abstract-- Contact-rich manipulation has become increasingly important in robot learning. However, previous studies on robot learning datasets have focused on rigid objects and underrepresented the diversity of pressure conditions for real-world manipulation. To address this gap, we present a humanoid visual-tactile-action dataset designed for manipulating deformable soft objects. The dataset was collected via teleoperation using a humanoid robot equipped with dexterous hands, capturing multi-modal interactions under varying pressure conditions.

Contact-rich interaction represents a critical gateway for enabling robots to perform complex tasks in real-world environments, yet it remains one of the fundamental challenges in robotic manipulation [1].
Representing Data in Robotic Tactile Perception -- A Review
Albini, Alessandro, Kaboli, Mohsen, Cannata, Giorgio, Maiolino, Perla
Robotic tactile perception is a complex process involving several computational steps performed at different levels. Tactile information is shaped by the interplay of robot actions, the mechanical properties of its body, and the software that processes the data. In this respect, high-level computation, required to process and extract information, is commonly performed by adapting existing techniques from other domains, such as computer vision, which expects input data to be properly structured. Therefore, it is necessary to transform tactile sensor data to match a specific data structure. This operation directly affects the tactile information encoded and, as a consequence, the task execution. This survey aims to address this specific aspect of the tactile perception pipeline, namely Data Representation. The paper first clearly defines its contributions to the perception pipeline and then reviews how previous studies have dealt with the problem of representing tactile information, investigating the relationships among hardware, representations, and high-level computation methods. The analysis has led to the identification of six structures commonly used in the literature to represent data. The manuscript provides discussions and guidelines for properly selecting a representation depending on operating conditions, including the available hardware, the tactile information required to be encoded, and the task at hand.
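One of the most common of these structures is an image-like grid, which lets vision techniques consume tactile data directly. The sketch below shows that representation step in its simplest form: scattering an unstructured list of taxel readings onto a 2D grid. The 4x4 layout and the synthetic contact blob are illustrative, not taken from the survey.

```python
import numpy as np

def taxels_to_grid(positions: np.ndarray, values: np.ndarray,
                   shape=(4, 4)) -> np.ndarray:
    """Scatter taxel values into a grid indexed by their (row, col) positions."""
    grid = np.zeros(shape)
    for (r, c), v in zip(positions, values):
        grid[r, c] = v
    return grid

# 16 taxels laid out on a hypothetical 4x4 pad, pressed near the center.
positions = np.array([(r, c) for r in range(4) for c in range(4)])
values = np.exp(-((positions - 1.5) ** 2).sum(axis=1))  # synthetic contact blob
print(taxels_to_grid(positions, values).round(2))
```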
OmniVTLA: Vision-Tactile-Language-Action Model with Semantic-Aligned Tactile Sensing
Cheng, Zhengxue, Zhang, Yiqian, Zhang, Wenkang, Li, Haoyu, Wang, Keyu, Song, Li, Zhang, Hengdi
Recent vision-language-action (VLA) models build upon vision-language foundations, achieving promising results and showing potential for task generalization in robot manipulation. However, due to the heterogeneity of tactile sensors and the difficulty of acquiring tactile data, current VLA models significantly overlook the importance of tactile perception and fail in contact-rich tasks. To address this issue, this paper proposes OmniVTLA, a novel architecture involving tactile sensing. Specifically, our contributions are threefold. First, our OmniVTLA features a dual-path tactile encoder framework. This framework enhances tactile perception across diverse vision-based and force-based tactile sensors by using a pretrained vision transformer (ViT) and a semantically-aligned tactile ViT (SA-ViT). Second, we introduce ObjTac, a comprehensive force-based tactile dataset capturing textual, visual, and tactile information for 56 objects across 10 categories. With 135K tri-modal samples, ObjTac supplements existing visuo-tactile datasets. Third, leveraging this dataset, we train a semantically-aligned tactile encoder to learn a unified tactile representation, serving as a better initialization for OmniVTLA. Real-world experiments demonstrate substantial improvements over state-of-the-art VLA baselines, achieving 96.9% success rates with grippers (21.9% higher than the baseline) and 100% success rates with dexterous hands (6.2% higher than the baseline) in pick-and-place tasks. In addition, OmniVTLA significantly reduces task completion time and generates smoother trajectories through tactile sensing compared to existing VLA models. Our ObjTac dataset can be found at https://readerek.github.io/Objtac.github.io
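The dual-path encoder idea, one path for vision-based (image-like) tactile sensors and one for force-based signals, can be sketched compactly. The modules below are lightweight stand-ins for the pretrained ViT and SA-ViT; dimensions and layer choices are assumptions for illustration, not OmniVTLA's architecture.

```python
import torch
import torch.nn as nn

class DualPathTactileEncoder(nn.Module):
    def __init__(self, out_dim=128):
        super().__init__()
        # Path 1: vision-based tactile sensor (image-like input).
        self.image_path = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, out_dim // 2),
        )
        # Path 2: force-based tactile sensor (low-dimensional vector).
        self.force_path = nn.Sequential(
            nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, out_dim // 2),
        )

    def forward(self, tactile_image, force_vector):
        # Concatenate both paths into a single tactile token.
        return torch.cat([self.image_path(tactile_image),
                          self.force_path(force_vector)], dim=-1)

enc = DualPathTactileEncoder()
img = torch.randn(2, 3, 32, 32)  # e.g., a GelSight-style tactile image
frc = torch.randn(2, 6)          # e.g., a 6-axis force/torque reading
print(enc(img, frc).shape)       # torch.Size([2, 128])
```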
Tactile-Guided Robotic Ultrasound: Mapping Preplanned Scan Paths for Intercostal Imaging
Zhang, Yifan, Huang, Dianye, Navab, Nassir, Jiang, Zhongliang
Medical ultrasound (US) imaging is widely used in clinical examinations due to its portability, real-time capability, and radiation-free nature. To address inter- and intra-operator variability, robotic ultrasound systems have gained increasing attention. However, their application in challenging intercostal imaging remains limited due to the lack of an effective scan path generation method within the constrained acoustic window. To overcome this challenge, we explore the potential of tactile cues for characterizing subcutaneous rib structures as an alternative signal for ultrasound segmentation-free bone surface point cloud extraction. Compared to 2D US images, 1D tactile-related signals offer higher processing efficiency and are less susceptible to acoustic noise and artifacts. By leveraging robotic tracking data, a sparse tactile point cloud is generated through a few scans along the rib, mimicking human palpation. To robustly map the scanning trajectory into the intercostal space, the sparse tactile bone location point cloud is first interpolated to form a denser representation. This refined point cloud is then registered to an image-based dense bone surface point cloud, enabling accurate scan path mapping for individual patients. Additionally, to ensure full coverage of the object of interest, we introduce an automated tilt angle adjustment method to visualize structures beneath the bone. To validate the proposed method, we conducted comprehensive experiments on four distinct phantoms. The final scanning waypoint mapping achieved Mean Nearest Neighbor Distance (MNND) and Hausdorff distance (HD) errors of 3.41 mm and 3.65 mm, respectively, while the reconstructed object beneath the bone had errors of 0.69 mm and 2.2 mm compared to the CT ground truth.
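The two error metrics quoted, Mean Nearest Neighbor Distance and Hausdorff distance, are standard point-cloud distances. A minimal numpy sketch of both follows, assuming point clouds as (N, 3) arrays; taking MNND as the average over both directions is an assumption here, as the paper may define it one-directionally.

```python
import numpy as np

def nn_dists(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Distance from each point in a to its nearest neighbor in b."""
    diffs = a[:, None, :] - b[None, :, :]             # (N, M, 3)
    return np.linalg.norm(diffs, axis=2).min(axis=1)  # (N,)

def mnnd(a, b):
    """Mean Nearest Neighbor Distance, averaged over both directions."""
    return 0.5 * (nn_dists(a, b).mean() + nn_dists(b, a).mean())

def hausdorff(a, b):
    """Symmetric Hausdorff distance: worst-case nearest-neighbor error."""
    return max(nn_dists(a, b).max(), nn_dists(b, a).max())

rng = np.random.default_rng(0)
mapped_path = rng.normal(size=(50, 3))  # synthetic scanning waypoints
ground_truth = mapped_path + rng.normal(scale=0.01, size=(50, 3))
print(mnnd(mapped_path, ground_truth), hausdorff(mapped_path, ground_truth))
```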
ConViTac: Aligning Visual-Tactile Fusion with Contrastive Representations
Wu, Zhiyuan, Zhao, Yongqiang, Luo, Shan
Vision and touch are two fundamental sensory modalities for robots, offering complementary information that enhances perception and manipulation tasks. Previous research has attempted to jointly learn visual-tactile representations to extract more meaningful information. However, these approaches often rely on direct combination, such as feature addition and concatenation, for modality fusion, which tends to result in poor feature integration. In this paper, we propose ConViTac, a visual-tactile representation learning network designed to enhance the alignment of features during fusion using contrastive representations. Our key contribution is a Contrastive Embedding Conditioning (CEC) mechanism that leverages a contrastive encoder pretrained through self-supervised contrastive learning to project visual and tactile inputs into unified latent embeddings. These embeddings are used to couple visual-tactile feature fusion through cross-modal attention, aiming at aligning the unified representations and enhancing performance on downstream tasks. We conduct extensive experiments to demonstrate the superiority of ConViTac over current state-of-the-art methods in real-world settings and the effectiveness of our proposed CEC mechanism, which improves accuracy by up to 12.0% in material classification and grasping prediction tasks.
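The conditioning idea can be sketched in a few lines of PyTorch: a projection standing in for the pretrained contrastive encoder maps both modalities to a shared latent, and that latent biases cross-modal attention during fusion. All module names, dimensions, and the specific way the condition enters the attention are illustrative assumptions, not ConViTac's implementation.

```python
import torch
import torch.nn as nn

class ContrastiveConditionedFusion(nn.Module):
    def __init__(self, feat_dim=64, latent_dim=32, heads=4):
        super().__init__()
        # Stand-in for the pretrained contrastive encoder (frozen in practice).
        self.to_latent = nn.Linear(feat_dim, latent_dim)
        self.condition = nn.Linear(latent_dim, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, heads, batch_first=True)

    def forward(self, vis_feat, tac_feat):
        # Unified latent embedding over both modalities (the conditioning signal).
        z = self.to_latent(torch.cat([vis_feat, tac_feat], dim=1)).mean(dim=1)
        cond = self.condition(z).unsqueeze(1)  # (B, 1, feat_dim)
        # Cross-modal attention: vision queries tactile, biased by the condition.
        fused, _ = self.attn(vis_feat + cond, tac_feat, tac_feat)
        return fused

vis = torch.randn(2, 10, 64)  # batch of 2, 10 visual tokens
tac = torch.randn(2, 10, 64)  # 10 tactile tokens
print(ContrastiveConditionedFusion()(vis, tac).shape)  # torch.Size([2, 10, 64])
```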
Improving Robotic Manipulation: Techniques for Object Pose Estimation, Accommodating Positional Uncertainty, and Disassembly Tasks from Examples
To use robots in more unstructured environments, we must accommodate greater complexity. Robotic systems need more awareness of the environment to adapt to uncertainty and variability. Although cameras have been predominantly used in robotic tasks, their limitations, such as occlusion, visibility, and breadth of information, have diverted some focus to tactile sensing. In this thesis, we explore the use of tactile sensing to determine the pose of an object from temporal features. We then use reinforcement learning with tactile collisions to reduce the number of grasp attempts caused by positional uncertainty in camera estimates. Finally, we feed the information provided by these tactile sensors to a reinforcement learning agent that determines the trajectory for removing an object from a restricted passage, reducing training time by pretraining from human examples.
In-Hand Object Pose Estimation via Visual-Tactile Fusion
Nonnengießer, Felix, Kshirsagar, Alap, Belousov, Boris, Peters, Jan
Abstract-- Accurate in-hand pose estimation is crucial for robotic object manipulation, but visual occlusion remains a major challenge for vision-based approaches. This paper presents an approach to robotic in-hand object pose estimation, combining visual and tactile information to accurately determine the position and orientation of objects grasped by a robotic hand. We address the challenge of visual occlusion by fusing visual information from a wrist-mounted RGB-D camera with tactile information from vision-based tactile sensors mounted on the fingertips of a robotic gripper. Our approach employs a weighting and sensor fusion module to combine point clouds from heterogeneous sensor types and control each modality's contribution to the pose estimation process. We use an augmented Iterative Closest Point (ICP) algorithm adapted for weighted point clouds to estimate the 6D object pose. Our experiments show that incorporating tactile information significantly improves pose estimation accuracy, particularly when occlusion is high. Our method achieves an average pose estimation error of 7.5 mm and 16.7 degrees, outperforming vision-only baselines by up to 20%. We also demonstrate the ability of our method to perform precise object manipulation in a real-world insertion task.

In-hand pose estimation describes the process of determining the position and orientation of an object held within a robotic hand.
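The core step of an ICP variant adapted for weighted point clouds is a weighted rigid (Kabsch) fit, where per-point weights, e.g., higher trust in tactile points under heavy occlusion, shape the alignment. The numpy sketch below shows that single step under those assumptions; the weighting scheme and the surrounding correspondence loop are omitted and are not the paper's exact module.

```python
import numpy as np

def weighted_rigid_fit(src, dst, w):
    """Find R, t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

rng = np.random.default_rng(1)
src = rng.normal(size=(30, 3))
dst = src + np.array([0.1, -0.2, 0.05])  # known rigid offset as ground truth
weights = np.ones(30)  # in practice, e.g., upweight tactile points
R, t = weighted_rigid_fit(src, dst, weights)
print(np.allclose(src @ R.T + t, dst))   # True: transform recovered
```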
ViTaPEs: Visuotactile Position Encodings for Cross-Modal Alignment in Multimodal Transformers
Lygerakis, Fotios, Özdenizci, Ozan, Rückert, Elmar
Tactile sensing provides essential local information that is complementary to visual perception, such as texture, compliance, and force. Despite recent advances in visuotactile representation learning, challenges remain in fusing these modalities and generalizing across tasks and environments without heavy reliance on pre-trained vision-language models. Moreover, existing methods do not study positional encodings, thereby overlooking the multi-scale spatial reasoning needed to capture fine-grained visuotactile correlations. We introduce ViTaPEs, a transformer-based framework that robustly integrates visual and tactile input data to learn task-agnostic representations for visuotactile perception. Our approach exploits a novel multi-scale positional encoding scheme to capture intra-modal structures, while simultaneously modeling cross-modal cues. Unlike prior work, we provide provable guarantees in visuotactile fusion, showing that our encodings are injective, rigid-motion-equivariant, and information-preserving, validating these properties empirically. Experiments on multiple large-scale real-world datasets show that ViTaPEs not only surpasses state-of-the-art baselines across various recognition tasks but also demonstrates zero-shot generalization to unseen, out-of-domain scenarios. We further demonstrate the transfer-learning strength of ViTaPEs in a robotic grasping task, where it outperforms state-of-the-art baselines in predicting grasp success. Project page: https://sites.google.com/view/vitapes
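The general flavor of multi-scale positional encodings can be illustrated by concatenating sinusoidal features of 2D patch coordinates at several spatial frequencies, as in the sketch below. This is a generic construction for illustration only; it is not ViTaPEs' scheme and makes no claim to the injectivity or equivariance properties proven in the paper.

```python
import numpy as np

def multiscale_pos_encoding(h, w, scales=(1, 2, 4), dim_per_scale=8):
    """Sinusoidal features of (row, col) patch coordinates at several scales."""
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys, xs], axis=-1).reshape(-1, 2).astype(float)
    feats = []
    for s in scales:
        freqs = (2.0 ** np.arange(dim_per_scale // 4)) / s   # coarse-to-fine
        angles = coords[:, :, None] * freqs[None, None, :]   # (HW, 2, F)
        feats += [np.sin(angles).reshape(h * w, -1),
                  np.cos(angles).reshape(h * w, -1)]
    return np.concatenate(feats, axis=1)  # (HW, len(scales) * dim_per_scale)

pe = multiscale_pos_encoding(4, 4)  # encodings for a 4x4 patch grid
print(pe.shape)                     # (16, 24)
```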