DeepXPalm: Tilt and Position Rendering using Palm-worn Haptic Display and CNN-based Tactile Pattern Recognition
Cabrera, Miguel Altamirano, Sautenkov, Oleg, Tirado, Jonathan, Fedoseev, Aleksey, Kopanev, Pavel, Kajimoto, Hiroyuki, Tsetserukou, Dzmitry
Telemanipulation of deformable objects requires high precision and dexterity from the users, which can be increased by kinesthetic and tactile feedback. However, the object shape can change dynamically, causing ambiguous perception of its alignment and hence errors in robot positioning. Therefore, the tilt angle and position classification problem has to be solved to present a clear tactile pattern to the user. This work presents a telemanipulation system for plastic pipettes consisting of a multi-contact haptic device, LinkGlide, delivering haptic feedback at the user's palm, and two tactile sensor arrays embedded in the 2-finger Robotiq gripper. We propose a novel approach based on Convolutional Neural Networks (CNN) to detect the tilt and position while grasping deformable objects. The CNN generates a mask based on the recognized tilt and position data to render the multi-contact tactile stimuli provided to the user during the telemanipulation. The study has shown that using the CNN algorithm and the preset mask, tilt and position recognition by users increased from 9.67% with the direct sensor data to 82.5%.
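A minimal sketch of the recognition stage described above, written in PyTorch; the 2-channel taxel grid size, the class counts, and the layer widths are illustrative assumptions, not the published architecture:

import torch
import torch.nn as nn

class TiltPositionCNN(nn.Module):
    def __init__(self, n_tilt=3, n_pos=3):
        super().__init__()
        # The two gripper sensor arrays are stacked as input channels.
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.tilt_head = nn.Linear(32, n_tilt)  # e.g. left / level / right
        self.pos_head = nn.Linear(32, n_pos)    # e.g. low / centered / high

    def forward(self, x):                       # x: (batch, 2, H, W) taxel frames
        z = self.features(x).flatten(1)
        return self.tilt_head(z), self.pos_head(z)

# Hypothetical 2 x 5 x 10 taxel frame; the recognized (tilt, position)
# pair would then index one of the preset contact masks for the palm display.
model = TiltPositionCNN()
tilt_logits, pos_logits = model(torch.rand(1, 2, 5, 10))
tilt, pos = tilt_logits.argmax(1).item(), pos_logits.argmax(1).item()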
HapticVLM: VLM-Driven Texture Recognition Aimed at Intelligent Haptic Interaction
Khan, Muhammad Haris, Cabrera, Miguel Altamirano, Iarchuk, Dmitrii, Mahmoud, Yara, Trinitatova, Daria, Tokmurziyev, Issatay, Tsetserukou, Dzmitry
This paper introduces HapticVLM, a novel multimodal system that integrates vision-language reasoning with deep convolutional networks to enable real-time haptic feedback. HapticVLM leverages a ConvNeXt-based material recognition module to generate robust visual embeddings for accurate identification of object materials, while a state-of-the-art Vision-Language Model (Qwen2-VL-2B-Instruct) infers ambient temperature from environmental cues. Experimental evaluations demonstrate an average recognition accuracy of 84.67% across five distinct auditory-tactile patterns and a temperature estimation accuracy of 86.7% based on a tolerance-based evaluation method with an 8 °C margin of error across 15 scenarios. Although promising, the current study is limited by the use of a small set of prominent patterns and a modest participant pool. Future work will focus on expanding the range of tactile patterns and conducting larger user studies to further refine and validate the system's performance. Overall, HapticVLM presents a significant step toward context-aware, multimodal haptic interaction with potential applications in virtual reality and assistive technologies.
I. INTRODUCTION
The ability to perceive and distinguish material properties such as texture, temperature, and stiffness is a fundamental aspect of human interaction with the physical world. Human tactile perception integrates visual, auditory, and haptic cues to form a comprehensive understanding of object surfaces, enabling precise material recognition and interaction [1].
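As a sketch of the material-embedding stage, the following uses torchvision's convnext_tiny as a stand-in backbone; the exact ConvNeXt variant, preprocessing, and prototype-matching step are assumptions rather than the paper's implementation:

import torch
import torch.nn as nn
from torchvision import models

weights = models.ConvNeXt_Tiny_Weights.DEFAULT
backbone = models.convnext_tiny(weights=weights)
backbone.classifier[2] = nn.Identity()  # drop the 1000-way head, keep the 768-d embedding
backbone.eval()

preprocess = weights.transforms()  # the weights' own resize/normalize pipeline

@torch.no_grad()
def material_embedding(image):     # image: a PIL.Image of the object surface
    x = preprocess(image).unsqueeze(0)
    return backbone(x)             # (1, 768) visual embedding

Matching this embedding against per-material prototypes (e.g. by cosine similarity) would then select which auditory-tactile pattern to render.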
Tactile Displays Driven by Projected Light
Linnander, Max, Goetz, Dustin, Reardon, Gregory, Hawkes, Elliot, Visell, Yon
Tactile displays that lend tangible form to digital content could transform computing interactions. However, achieving the resolution, speed, and dynamic range needed for perceptual fidelity remains challenging. We present a tactile display that directly converts projected light into visible tactile patterns via a photomechanical surface populated with millimeter-scale optotactile pixels. The pixels transduce incident light into mechanical displacements through photostimulated thermal gas expansion, yielding millimeter scale displacements with response times of 2 to 100 milliseconds. Employing projected light for power transmission and addressing renders these displays highly scalable. We demonstrate devices with up to 1511 addressable pixels. Perceptual studies confirm that they can reproduce diverse spatiotemporal tactile patterns with high fidelity. This research establishes a foundation for practical, versatile high-resolution tactile displays driven by light.
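A back-of-envelope model of the optotactile transduction principle (isobaric ideal-gas expansion in a sealed millimeter-scale chamber); the chamber dimensions and light-induced temperature rise below are illustrative assumptions, not measured device parameters:

import math

T0 = 293.0   # ambient temperature, K
dT = 30.0    # assumed light-induced temperature rise, K
r = 0.5e-3   # chamber radius, m (millimeter-scale pixel)
h = 1.0e-3   # chamber height, m

A = math.pi * r ** 2   # membrane area
V0 = A * h             # sealed gas volume
dV = V0 * dT / T0      # isobaric ideal gas: dV / V0 = dT / T0
dx = dV / A            # piston-like membrane displacement
print(f"displacement ~ {dx * 1e6:.0f} um")  # ~102 um for these numbers

This first-order estimate shows why modest heating of a small sealed gas volume suffices for perceptible sub-millimeter displacements.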
TiltXter: CNN-based Electro-tactile Rendering of Tilt Angle for Telemanipulation of Pasteur Pipettes
Cabrera, Miguel Altamirano, Tirado, Jonathan, Fedoseev, Aleksey, Sautenkov, Oleg, Poliakov, Vladimir, Kopanev, Pavel, Tsetserukou, Dzmitry
The shape of deformable objects can change drastically during grasping by robotic grippers, causing an ambiguous perception of their alignment and hence resulting in errors in robot positioning and telemanipulation. Rendering clear tactile patterns is fundamental to increasing users' precision and dexterity through tactile haptic feedback during telemanipulation. Therefore, different methods have to be studied to decode the sensors' data into haptic stimuli. This work presents a telemanipulation system for plastic pipettes that consists of a Force Dimension Omega.7 haptic interface endowed with two electro-stimulation arrays and two tactile sensor arrays embedded in the 2-finger Robotiq gripper. We propose a novel approach based on convolutional neural networks (CNN) to detect the tilt of deformable objects. The CNN generates a tactile pattern based on recognized tilt data to render further electro-tactile stimuli provided to the user during the telemanipulation. The study has shown that using the CNN algorithm, tilt recognition by users increased from 23.13% with the downsized data to 57.9%, and the success rate during teleoperation increased from 53.12% using the downsized data to 92.18% using the tactile patterns generated by the CNN.
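To make the rendering step concrete, here is a minimal sketch of mapping a recognized tilt class to an electrode activation mask; the 4x5 electrode grid and the three-class encoding are assumptions for illustration, not the paper's stimulation layout:

import numpy as np

ROWS, COLS = 4, 5  # hypothetical electrode array per fingertip

def tilt_pattern(tilt_class):
    """Binary activation mask: the slope of a bar encodes the tilt class."""
    mask = np.zeros((ROWS, COLS), dtype=int)
    for c in range(COLS):
        if tilt_class == 0:    # tilted left: bar slopes one way
            r = (COLS - 1 - c) * (ROWS - 1) // (COLS - 1)
        elif tilt_class == 1:  # level: horizontal bar in the middle
            r = ROWS // 2
        else:                  # tilted right: mirrored slope
            r = c * (ROWS - 1) // (COLS - 1)
        mask[r, c] = 1
    return mask

print(tilt_pattern(2))  # the mask that would drive the stimulation array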
DroneARchery: Human-Drone Interaction through Augmented Reality with Haptic Feedback and Multi-UAV Collision Avoidance Driven by Deep Reinforcement Learning
Dorzhieva, Ekaterina, Baza, Ahmed, Gupta, Ayush, Fedoseev, Aleksey, Cabrera, Miguel Altamirano, Karmanova, Ekaterina, Tsetserukou, Dzmitry
We propose a novel concept of augmented reality (AR) human-drone interaction driven by RL-based swarm behavior to achieve intuitive and immersive control of a swarm formation of unmanned aerial vehicles. The DroneARchery system developed by us allows the user to quickly deploy a swarm of drones, generating flight paths that simulate archery. The haptic interface LinkGlide delivers a tactile stimulus of the bowstring tension to the forearm to increase the precision of aiming. The swarm of released drones dynamically avoids collisions with each other, with the drone following the user, and with external obstacles, using behavior control based on deep reinforcement learning. The developed concept was tested in a scenario where the user shoots from a virtual bow with a real drone to hit the target. The human operator observes the ballistic trajectory of the drone in AR and achieves a realistic and highly recognizable experience of the bowstring tension through the haptic display. The experimental results revealed that the system improves trajectory prediction accuracy by 63.3% through applying AR technology and conveying haptic feedback of the pulling force. DroneARchery users highlighted the naturalness (4.3 out of 5 on a Likert scale) and increased confidence (4.7 out of 5) when controlling the drone. We have designed the tactile patterns to present four sliding distances (tension) and three applied force levels (stiffness) on the haptic display. Users demonstrated the ability to distinguish tactile patterns produced by the haptic display representing varying bowstring tension (average recognition rate of 72.8%) and stiffness (average recognition rate of 94.2%). The novelty of the research is the development of an AR-based approach for drone control that does not require special skills or training from the operator.
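A minimal sketch of the ballistic trajectory preview implied by the archery metaphor, assuming a simple linear mapping from bowstring tension to launch speed; the mapping and constants are hypothetical, not the DroneARchery implementation:

import math

G = 9.81  # gravitational acceleration, m/s^2

def trajectory(tension, angle_deg, k=8.0, dt=0.05):
    """Sample a projectile path; launch speed = k * tension (assumed)."""
    v = k * tension
    a = math.radians(angle_deg)
    vx, vy = v * math.cos(a), v * math.sin(a)
    pts, t = [], 0.0
    x, y = 0.0, 0.0
    while y >= 0.0:
        pts.append((x, y))
        t += dt
        x, y = vx * t, vy * t - 0.5 * G * t * t
    return pts  # waypoints for the AR overlay / drone path

path = trajectory(tension=0.6, angle_deg=45)
print(f"range ~ {path[-1][0]:.1f} m over {len(path)} samples")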