
Tactile Sensation


Tactile Data Recording System for Clothing with Motion-Controlled Robotic Sliding

Eguchi, Michikuni, Kitagishi, Takekazu, Hiroi, Yuichi, Hiraki, Takefumi

arXiv.org Artificial Intelligence

The tactile sensation of clothing is critical to wearer comfort. To reveal physical properties that make clothing comfortable, systematic collection of tactile data during sliding motion is required. We propose a robotic arm-based system for collecting tactile data from intact garments. The system performs stroking measurements with a simulated fingertip while precisely controlling speed and direction, enabling creation of motion-labeled, multimodal tactile databases. Machine learning evaluation showed that including motion-related parameters improved identification accuracy for audio and acceleration data, demonstrating the efficacy of motion-related labels for characterizing clothing tactile sensation. This system provides a scalable, non-destructive method for capturing tactile data of clothing, contributing to future studies on fabric perception and reproduction.
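To make the idea of a motion-labeled, multimodal record concrete, here is a minimal sketch of what one stroking measurement might look like. The field names and units are illustrative assumptions, not the authors' actual schema.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical record for one robotic stroking measurement; field names
# (speed_mm_s, direction_deg, etc.) are assumptions for illustration.
@dataclass
class TactileSample:
    garment_id: str
    speed_mm_s: float          # commanded sliding speed of the fingertip
    direction_deg: float       # stroke direction on the fabric
    audio: List[float]         # microphone waveform during the stroke
    acceleration: List[float]  # accelerometer trace during the stroke

def motion_features(sample: TactileSample) -> List[float]:
    """Motion-related parameters that can be appended to sensor features."""
    return [sample.speed_mm_s, sample.direction_deg]

sample = TactileSample("denim-01", 40.0, 90.0, [0.01, -0.02], [0.1, 0.0])
print(motion_features(sample))  # [40.0, 90.0]
```

Appending such motion parameters to the audio and acceleration features is the step the abstract reports as improving identification accuracy.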


Curiosity-Driven Co-Development of Action and Language in Robots Through Self-Exploration

Tinker, Theodore Jerome, Doya, Kenji, Tani, Jun

arXiv.org Machine Learning

A central question in both cognitive science and artificial intelligence is how humans and artificial systems can acquire competencies for language and motor command in a co-developmental manner, despite having access to only limited learning experiences. This question is exemplified in human infants, who achieve remarkable generalization with sparse input. This stands in stark contrast to large-scale models, which rely on massive training corpora to reach similar capabilities. It raises the question of what mechanisms enable such efficient developmental learning. From the perspective of developmental psychology, infants acquire language through rich interaction with their embodied environments. Tomasello's "verb-island" hypothesis argues that children initially learn verbs in specific, isolated contexts before generalizing across broader linguistic structures (1). He also emphasized the importance of embodiment in language acquisition, suggesting that grounding linguistic symbols in sensorimotor experiences is fundamental to language learning (2).


Silicone-made Tactile Actuator Integrated with Hot Thermo-fiber Finger Sleeve

Hashem, Mohammad Shadman, Raza, Ahsan, Jeon, Seokhee

arXiv.org Artificial Intelligence

Multi-mode haptic feedback is essential for achieving high realism and immersion in virtual environments. This paper proposes a novel silicone fingertip actuator integrated with a hot thermal fabric finger sleeve to render pressure, vibration, and hot thermal feedback simultaneously. The actuator is pneumatically driven to render a realistic and effective tactile experience in combination with hot thermal sensation. The silicone actuator has two air chambers controlled by pneumatic valves connected to compressed air tanks. Simultaneously, a PWM signal from a microcontroller regulates the temperature of the thermal fabric sleeve, enhancing overall system functionality. The lower chamber of the silicone actuator is responsible for pressure feedback, whereas the upper chamber is devoted to vibrotactile feedback. Conductive yarn or thread is used to distribute the thermal feedback actuation points across the thermal fabric's surface. To demonstrate the actuator's capability, a VR environment consisting of a bowl of liquid and a stove with fire was designed. Depending on the functionality selected, the scenario can simulate the tactile perception of pressure, vibration, and temperature simultaneously or consecutively.


Cognitive Process during Palpation and Basic Concept of Remote Palpation System

Itkonen, Matti, Okajima, Shotaro, Ueda, Sayako, Costa-Garcia, Alvaro, Ningjia, Yang, Kurogi, Tadatoshi, Fujiwara, Takeshi, Kurimoto, Shigeru, Oyama, Shintaro, Saeki, Masaomi, Yamamoto, Michiro, Yoneda, Hidemasa, Hirata, Hitoshi, Shimoda, Shingo

arXiv.org Artificial Intelligence

This paper examines the cognitive processes involved in palpation in order to develop an appropriate remote palpation system. In a conventional remote palpation system, the tactile condition of the patient is conveyed to the doctor using a force feedback system. A clarification of the cognitive process during palpation suggests that the purpose of palpation is to formulate a clear idea about the patient's medical problems, using the tactile sensation as a trigger to combine the results of other assessments, past experience and memory, and patient reactions to the doctor's touch. This contrasts with the objective of acquiring the detailed tactile condition of the affected body part. To demonstrate this purpose, we describe the two significant signal pathways for the perception of tactile sensation, in both doctors and patients. The doctor's perception progresses as the result of active touch to the affected part, implying that simultaneous stimulation of kinaesthetic and tactile sensation is necessary. Conversely, the tactile sensation experienced by patients is the result of passive touch, which evokes a more subjective and emotional response. Patients perceive the stimulation both explicitly and implicitly, and doctors read these perceptions as pain reactions to their touch. This paper proposes the fundamental concept of a remote palpation system, "Palpation Reality beyond Real", to achieve this purpose of palpation. Palpation reality implies a system in which the whole cognitive process progresses at the same level as, or better than, palpation in a standard examination, rather than one that merely presents the real tactile sensation.


Touch100k: A Large-Scale Touch-Language-Vision Dataset for Touch-Centric Multimodal Representation

Cheng, Ning, Guan, Changhao, Gao, Jing, Wang, Weihao, Li, You, Meng, Fandong, Zhou, Jie, Fang, Bin, Xu, Jinan, Han, Wenjuan

arXiv.org Artificial Intelligence

Touch holds a pivotal position in enhancing the perceptual and interactive capabilities of both humans and robots. Despite its significance, current tactile research mainly focuses on the visual and tactile modalities, overlooking the language domain. Motivated by this gap, we construct Touch100k, a paired touch-language-vision dataset at the scale of 100k, featuring tactile sensation descriptions at multiple granularities (i.e., sentence-level natural expressions with rich semantics, including contextual and dynamic relationships, and phrase-level descriptions capturing the key features of tactile sensations). Based on the dataset, we propose a pre-training method, Touch-Language-Vision Representation Learning through Curriculum Linking (TLV-Link for short), inspired by the concept of curriculum learning. TLV-Link aims to learn a tactile representation for the GelSight sensor and capture the relationship between the tactile, language, and visual modalities. We evaluate our representation's performance across two task categories (namely, material property identification and robot grasping prediction), focusing on tactile representation and zero-shot touch understanding. The experimental results showcase the effectiveness of our representation: TLV-Link achieves substantial improvements and establishes a new state of the art in touch-centric multimodal representation learning, demonstrating the value of Touch100k as a research resource. Project page: https://cocacola-lab.github.io/Touch100k/.
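The cross-modal alignment at the heart of methods like TLV-Link is typically a contrastive objective: matched touch/text embedding pairs are pulled together and mismatched pairs pushed apart. The sketch below is a generic InfoNCE-style loss in NumPy, not the paper's actual training code; the toy embeddings are illustrative only.

```python
import numpy as np

# Generic InfoNCE-style contrastive loss over a batch of touch/text
# embeddings; the i-th touch embedding is assumed to match the i-th text.
def info_nce(touch, text, temperature=0.1):
    touch = touch / np.linalg.norm(touch, axis=1, keepdims=True)
    text = text / np.linalg.norm(text, axis=1, keepdims=True)
    logits = touch @ text.T / temperature       # pairwise cosine similarities
    # log-softmax over each row, then take the matched (diagonal) entries
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(touch))
    return -log_probs[idx, idx].mean()

aligned = info_nce(np.eye(3), np.eye(3))                       # matched pairs
misaligned = info_nce(np.eye(3), np.roll(np.eye(3), 1, axis=0))  # shuffled pairs
print(aligned < misaligned)  # True: aligned batches get a lower loss
```

A curriculum, as the abstract describes, would then order the training pairs (e.g. phrase-level before sentence-level descriptions) rather than change this objective.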


Action Conditioned Tactile Prediction: case study on slip prediction

Mandil, Willow, Nazari, Kiyanoush, Ghalamzan-E, Amir

arXiv.org Artificial Intelligence

Tactile predictive models can be useful across several robotic manipulation tasks, e.g. robotic pushing, robotic grasping, slip avoidance, and in-hand manipulation. However, available tactile prediction models have mostly been studied for image-based tactile sensors, and there is no comparison study indicating the best-performing models. In this paper, we present two novel data-driven action-conditioned models for predicting tactile signals during real-world physical robot interaction tasks: (1) an action-conditioned tactile prediction model and (2) an action-conditioned tactile-video prediction model. We use a magnetic-based tactile sensor, which is challenging to analyse, to test state-of-the-art predictive models and the only existing bespoke tactile prediction model, and we compare their performance with that of our proposed models. We perform the comparison study using our novel tactile-enabled dataset containing 51,000 tactile frames of a real-world robotic manipulation task with 11 flat-surfaced household objects. Our experimental results demonstrate the superiority of our proposed tactile prediction models in terms of qualitative, quantitative, and slip prediction scores.
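The core idea of action conditioning can be sketched in a few lines: the predictor sees the current tactile frame concatenated with the planned robot action, rather than the tactile frame alone. The dimensions and the linear map below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# Minimal action-conditioned predictor sketch: next tactile frame from
# [current frame ; planned action]. A linear map stands in for the paper's
# learned model; dimensions are made up for illustration.
rng = np.random.default_rng(0)
TACTILE_DIM, ACTION_DIM = 15, 6   # e.g. taxel readings and a 6-DoF action

W = rng.standard_normal((TACTILE_DIM, TACTILE_DIM + ACTION_DIM)) * 0.1

def predict_next(tactile, action):
    x = np.concatenate([tactile, action])  # action conditioning by concatenation
    return W @ x                           # predicted next tactile frame

t0 = rng.standard_normal(TACTILE_DIM)
a0 = rng.standard_normal(ACTION_DIM)
t1_hat = predict_next(t0, a0)
print(t1_hat.shape)  # (15,)
```

Because the action is an input, the same model can score candidate actions (e.g. for slip avoidance) by comparing their predicted tactile outcomes.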


Telextiles: End-to-end Remote Transmission of Fabric Tactile Sensation

Kitagishi, Takekazu, Hiroi, Yuichi, Watanabe, Yuna, Itoh, Yuta, Rekimoto, Jun

arXiv.org Artificial Intelligence

The tactile sensation of textiles is critical in determining the comfort of clothing. For remote use, such as online shopping, users cannot physically touch the textile of clothes, making it difficult to evaluate its tactile sensation. Tactile sensing and actuation devices are required to transmit the tactile sensation of textiles. The sensing device needs to recognize different garments, even with hand-held sensors. In addition, existing actuation devices can only present a limited number of known patterns and cannot transmit unknown tactile sensations of textiles. To address these issues, we propose Telextiles, an interface that can remotely transmit tactile sensations of textiles by creating a latent space that reflects the proximity of textiles through contrastive self-supervised learning. We confirm through a two-dimensional plot that textiles with similar tactile features are located close to each other in the latent space. We then compress the latent features of the known textile samples into a 1D distance and arrange the 16 textile samples on the roller in order of that distance. If an unknown textile is detected, the roller rotates to select the textile with the closest features.
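The final selection step reduces to a nearest-neighbour lookup along the 1D distance axis: the roller moves to the known sample whose latent distance is closest to the unknown textile's. The distance values below are made up for illustration; the system's actual learned distances would come from the contrastive encoder.

```python
# Sketch of the roller-selection step: 16 known textiles ordered by a 1D
# latent distance; an unknown textile maps to the nearest known sample.
# The evenly spaced distances here are an assumption for illustration.
known_distances = [0.05 * i for i in range(16)]  # one value per roller slot

def select_roller(unknown_distance, distances=known_distances):
    """Return the roller index whose latent distance is closest."""
    return min(range(len(distances)),
               key=lambda i: abs(distances[i] - unknown_distance))

print(select_roller(0.32))  # 6: 0.30 (slot 6) is nearer than 0.35 (slot 7)
```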


A Touch, Vision, and Language Dataset for Multimodal Alignment

Fu, Letian, Datta, Gaurav, Huang, Huang, Panitch, William Chung-Ho, Drake, Jaimyn, Ortiz, Joseph, Mukadam, Mustafa, Lambeta, Mike, Calandra, Roberto, Goldberg, Ken

arXiv.org Artificial Intelligence

Touch is an important sensing modality for humans, but it has not yet been incorporated into a multimodal generative language model. This is partially due to the difficulty of obtaining natural language labels for tactile data and the complexity of aligning tactile readings with both visual observations and language descriptions. As a step towards bridging that gap, this work introduces a new dataset of 44K in-the-wild vision-touch pairs, with English language labels annotated by humans (10%) and textual pseudo-labels from GPT-4V (90%). We use this dataset to train a vision-language-aligned tactile encoder for open-vocabulary classification and a touch-vision-language (TVL) model for text generation using the trained encoder. Results suggest that by incorporating touch, the TVL model improves (+29% classification accuracy) touch-vision-language alignment over existing models trained on any pair of those modalities. Although only a small fraction of the dataset is human-labeled, the TVL model demonstrates improved visual-tactile understanding over GPT-4V (+12%) and open-source vision-language models (+32%) on a new touch-vision understanding benchmark. Code and data: https://tactile-vlm.github.io.


Combining Vision and Tactile Sensation for Video Prediction

Mandil, Willow, Ghalamzan-E, Amir

arXiv.org Artificial Intelligence

In this paper, we investigate the impact of integrating tactile sensation into video prediction models for physical robot interactions. Predicting the impact of robotic actions on the environment is a fundamental challenge in robotics. Current methods leverage visual and robot action data to generate video predictions over a given time period, which can then be used to adjust robot actions. However, humans rely on both visual and tactile feedback to develop and maintain a mental model of their physical surroundings. We propose three multi-modal integration approaches and compare the performance of the resulting tactile-enhanced video prediction models. Additionally, we introduce two new datasets of robot pushing that use a magnetic-based tactile sensor for unsupervised learning. The first dataset contains visually identical objects with different physical properties, while the second mimics existing robot-pushing datasets of household object clusters. Our results demonstrate that incorporating tactile feedback into video prediction models improves scene prediction accuracy and enhances the agent's perception of physical interactions and its understanding of cause-effect relationships.
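The simplest of the possible integration strategies is early fusion: flatten and concatenate the visual and tactile features into one joint vector before prediction. The sketch below illustrates that step only; the feature dimensions are assumptions, and the paper's other two integration approaches would fuse later in the network.

```python
import numpy as np

# Early-fusion sketch: one joint feature vector from visual and tactile
# inputs. Dimensions are illustrative, not those used in the paper.
rng = np.random.default_rng(1)

def fuse(visual_feat, tactile_feat):
    """Concatenate flattened visual and tactile features for the predictor."""
    return np.concatenate([visual_feat.ravel(), tactile_feat.ravel()])

v = rng.standard_normal((8, 8))   # e.g. pooled image features
t = rng.standard_normal(15)       # e.g. a magnetic tactile reading
joint = fuse(v, t)
print(joint.shape)  # (79,)
```

A downstream video predictor would then consume `joint` in place of the visual features alone, which is what lets tactile signals influence the predicted frames.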


More skin-like, electronic skin that can feel

#artificialintelligence

The challenge for electronic skin, being developed for use in artificial skin or humanlike robots such as humanoids, is to make it sense temperature and movement as closely as possible to the way human skin does. So far, there are electronic skins that can detect movement or temperature separately, but none can recognize both simultaneously as human skin does. A joint research team consisting of POSTECH professor Unyong Jeong and Dr. Insang You of the Department of Materials Science and Engineering, together with Professor Zhenan Bao of Stanford University, has developed a multimodal ion-electronic skin that can measure temperature and mechanical stimulation at the same time. The research findings, published in the November 20th edition of Science, are notable for achieving very simple structures by exploiting special properties of ion conductors. Human skin contains various tactile receptors that can detect hot or cold temperatures as well as other tactile sensations such as pinching, twisting, or pushing.