haptic signal


HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals

Hu, Guimin, Hershcovich, Daniel, Seifi, Hasti

arXiv.org Artificial Intelligence

Haptic signals, from smartphone vibrations to virtual reality touch feedback, can effectively convey information and enhance realism, but designing signals that resonate meaningfully with users is challenging. To facilitate this, we introduce a multimodal dataset and task for matching user descriptions to vibration haptic signals, and highlight two primary challenges: (1) the lack of large haptic vibration datasets annotated with textual descriptions, as collecting haptic descriptions is time-consuming, and (2) the limited capability of existing tasks and models to describe vibration signals in text. To advance this area, we create HapticCap, the first fully human-annotated haptic-captioned dataset, containing 92,070 haptic-text pairs for user descriptions of the sensory, emotional, and associative attributes of vibrations. Based on HapticCap, we propose the haptic-caption retrieval task and present results from a supervised contrastive learning framework that aligns text representations within specific description categories with vibration representations. Overall, the combination of the language model T5 and the audio model AST yields the best performance on the haptic-caption retrieval task, especially when trained separately for each description category.
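
To make the retrieval setup concrete, here is a minimal sketch of a CLIP-style symmetric contrastive objective that pulls matched text and vibration embeddings together within a batch. The encoder pairing (T5 for captions, AST for vibration spectrograms) is taken from the abstract, but the random tensors below merely stand in for their outputs; this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(text_emb, haptic_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched (text, haptic) pairs."""
    text_emb = F.normalize(text_emb, dim=-1)
    haptic_emb = F.normalize(haptic_emb, dim=-1)
    logits = text_emb @ haptic_emb.t() / temperature    # (B, B) similarity matrix
    targets = torch.arange(text_emb.size(0))             # diagonal entries are the positives
    loss_t2h = F.cross_entropy(logits, targets)
    loss_h2t = F.cross_entropy(logits.t(), targets)
    return (loss_t2h + loss_h2t) / 2

# Toy usage: random vectors standing in for T5 caption and AST vibration embeddings.
text_emb, haptic_emb = torch.randn(8, 512), torch.randn(8, 512)
print(contrastive_loss(text_emb, haptic_emb))
```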


Haptic-Based User Authentication for Tele-robotic System

Yu, Rongyu, Chen, Kan, Deng, Zeyu, Wang, Chen, Kizilkaya, Burak, Li, Liying Emma

arXiv.org Artificial Intelligence

Tele-operated robots rely on real-time user behavior mapping for remote tasks, but ensuring secure authentication remains a challenge. Traditional methods, such as passwords and static biometrics, are vulnerable to spoofing and replay attacks, particularly in high-stakes, continuous interactions. This paper presents a novel anti-spoofing and anti-replay authentication approach that leverages distinctive user behavioral features extracted from haptic feedback during human-robot interactions. To evaluate our authentication approach, we collected a time-series force feedback dataset from 15 participants performing seven distinct tasks. We then developed a transformer-based deep learning model to extract temporal features from the haptic signals. By analyzing user-specific force dynamics, our method achieves over 90 percent accuracy in both user identification and task classification, demonstrating its potential for enhancing access control and identity assurance in tele-robotic systems.
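
As a rough illustration of the described pipeline, the following sketch (an assumption, not the paper's architecture) runs a small transformer encoder over a force-feedback time series and attaches two classification heads, one for the 15 users and one for the 7 tasks mentioned above.

```python
import torch
import torch.nn as nn

class HapticAuthNet(nn.Module):
    def __init__(self, in_dim=6, d_model=64, n_users=15, n_tasks=7):
        super().__init__()
        self.proj = nn.Linear(in_dim, d_model)           # per-timestep force/torque features
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.user_head = nn.Linear(d_model, n_users)     # user identification
        self.task_head = nn.Linear(d_model, n_tasks)     # task classification

    def forward(self, x):                                # x: (batch, time, in_dim)
        h = self.encoder(self.proj(x)).mean(dim=1)       # temporal average pooling
        return self.user_head(h), self.task_head(h)

# Toy forward pass: 4 sequences of 200 timesteps of 6-axis force feedback.
user_logits, task_logits = HapticAuthNet()(torch.randn(4, 200, 6))
print(user_logits.shape, task_logits.shape)
```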


A Modular Haptic Display with Reconfigurable Signals for Personalized Information Transfer

Valdivia, Antonio Alvarez, Christie, Benjamin A., Losey, Dylan P., Blumenschein, Laura H.

arXiv.org Artificial Intelligence

We present a customizable soft haptic system that integrates modular hardware with an information-theoretic algorithm to personalize feedback for different users and tasks. Our platform features modular, multi-degree-of-freedom pneumatic displays, where different signal types, such as pressure, frequency, and contact area, can be activated or combined using fluidic logic circuits. These circuits simplify control by reducing reliance on specialized electronics and enabling coordinated actuation of multiple haptic elements through a compact set of inputs. Our approach allows rapid reconfiguration of haptic signal rendering through hardware-level logic switching without rewriting code. Personalization of the haptic interface is achieved through the combination of modular hardware and software-driven signal selection. To determine which display configurations will be most effective, we model haptic communication as a signal transmission problem, where an agent must convey latent information to the user. We formulate the optimization problem to identify the haptic hardware setup that maximizes the information transfer between the intended message and the user's interpretation, accounting for individual differences in sensitivity, preferences, and perceptual salience. We evaluate this framework through user studies where participants interact with reconfigurable displays under different signal combinations. Our findings support the role of modularity and personalization in creating multimodal haptic interfaces and advance the development of reconfigurable systems that adapt with users in dynamic human-machine interaction contexts.
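
A minimal sketch of the selection idea: estimate the mutual information between intended messages and user responses from a confusion matrix measured for each candidate display configuration, then pick the configuration that maximizes it. The configuration names and counts below are hypothetical, and the authors' actual formulation may differ.

```python
import numpy as np

def mutual_information(confusion):
    """I(message; response) in bits, from a joint count matrix."""
    p = confusion / confusion.sum()
    px = p.sum(axis=1, keepdims=True)    # marginal over intended messages
    py = p.sum(axis=0, keepdims=True)    # marginal over user responses
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# Hypothetical confusion matrices for two candidate signal configurations.
configs = {
    "pressure_only": np.array([[8, 2], [3, 7]]),
    "pressure_plus_frequency": np.array([[9, 1], [1, 9]]),
}
scores = {name: mutual_information(m) for name, m in configs.items()}
print(max(scores, key=scores.get), scores)
```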


Grounding Emotional Descriptions to Electrovibration Haptic Signals

Hu, Guimin, Zhao, Zirui, Heilmann, Lukas, Vardar, Yasemin, Seifi, Hasti

arXiv.org Artificial Intelligence

Designing and displaying haptic signals with sensory and emotional attributes can improve the user experience in various applications. Free-form user language provides rich sensory and emotional information for haptic design (e.g., ``This signal feels smooth and exciting''), but little work exists on linking user descriptions to haptic signals (i.e., language grounding). To address this gap, we conducted a study where 12 users described the feel of 32 signals perceived on a surface haptics (i.e., electrovibration) display. We developed a computational pipeline using natural language processing (NLP) techniques, such as GPT-3.5 Turbo and word embedding methods, to extract sensory and emotional keywords and group them into semantic clusters (i.e., concepts). We linked the keyword clusters to haptic signal features (e.g., pulse count) using correlation analysis. The proposed pipeline demonstrates the viability of a computational approach to analyzing haptic experiences. We discuss our future plans for creating a predictive model of haptic experience.
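
The later pipeline stages lend themselves to a short sketch: clustering keyword embeddings into semantic concepts and then correlating how often a concept is used for each signal with a signal feature such as pulse count. All vectors and numbers below are made-up stand-ins, not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import spearmanr

# Step 1: group keyword embeddings into semantic clusters (concepts).
# The vectors are random stand-ins for embeddings of extracted keywords
# such as "smooth", "buzzing", or "exciting".
rng = np.random.default_rng(0)
keyword_vectors = rng.normal(size=(50, 300))
concepts = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(keyword_vectors)

# Step 2: correlate how often one concept is mentioned for each signal
# with a signal feature such as pulse count (all values hypothetical).
pulse_count   = np.array([1, 2, 4, 8, 16, 32])
concept_usage = np.array([0.10, 0.15, 0.30, 0.45, 0.70, 0.85])
rho, p = spearmanr(pulse_count, concept_usage)
print(f"cluster sizes: {np.bincount(concepts)}, Spearman rho={rho:.2f} (p={p:.3f})")
```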


Kitchen Artist: Precise Control of Liquid Dispensing for Gourmet Plating

Huang, Hung-Jui, Xiang, Jingyi, Yuan, Wenzhen

arXiv.org Artificial Intelligence

Manipulating liquids is required in many tasks, especially cooking. A common approach is extruding viscous liquid from a squeeze bottle. In this work, our goal is to create a sauce-plating robot, which requires precise control of the thickness of squeezed liquids on a surface. Different liquids demand different manipulation policies. We command the robot to tilt the container and monitor the liquid's response with a force sensor to identify its properties. Based on these properties, we predict the liquid's behavior under fixed squeezing motions in a data-driven way and calculate the drawing speed required for the desired stroke size. This open-loop system works effectively without sensor feedback during drawing. Our experiments demonstrate accurate stroke-size control across different liquids and fill levels. We show that understanding liquid properties can facilitate effective liquid manipulation. Moreover, our dish-garnishing robot has a wide range of applications and holds significant commercialization potential.
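
One way to read the speed calculation is as a volume-conservation argument: the extruded volumetric flow rate must equal stroke width times thickness times drawing speed. The sketch below uses this simplified model purely for illustration; the paper itself predicts the liquid's behavior data-drivenly from force measurements rather than from this formula.

```python
def drawing_speed(flow_rate_ml_s, stroke_width_mm, stroke_thickness_mm):
    """End-effector speed (mm/s) that yields the desired stroke cross-section."""
    flow_rate_mm3_s = flow_rate_ml_s * 1000.0            # 1 ml = 1000 mm^3
    return flow_rate_mm3_s / (stroke_width_mm * stroke_thickness_mm)

# Example: 0.5 ml/s squeezed into a 6 mm wide, 2 mm thick sauce stroke.
print(f"{drawing_speed(0.5, 6.0, 2.0):.1f} mm/s")
```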


Learning an Efficient Terrain Representation for Haptic Localization of a Legged Robot

Sójka, Damian, Nowicki, Michał R., Skrzypczyński, Piotr

arXiv.org Artificial Intelligence

Although haptic sensing has recently been used for legged robot localization in extreme environments where a camera or LiDAR might fail, the problem of efficiently representing the haptic signatures in a learned prior map is still open. This paper introduces an approach to terrain representation for haptic localization inspired by recent trends in machine learning. It combines this approach with the proven Monte Carlo algorithm to obtain an accurate, computation-efficient, and practical method for localizing legged robots under adversarial environmental conditions. We apply the triplet loss concept to learn highly descriptive embeddings in a transformer-based neural network. As the training haptic data are not labeled, the positive and negative examples are discriminated by their geometric locations discovered during training. We demonstrate experimentally that the proposed approach outperforms previous solutions to haptic localization of legged robots by a large margin in terms of accuracy, inference time, and the amount of data stored in the map. To the best of our knowledge, this is the first approach that completely removes the need for a dense terrain map for accurate haptic localization, paving the way to practical applications.
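
The geometric discrimination of triplets can be sketched as follows (an assumption about the form, not the authors' code): haptic signatures recorded near the anchor location are treated as positives, distant ones as negatives, and a standard triplet margin loss is applied to the learned descriptors.

```python
import torch
import torch.nn.functional as F

def location_triplet_loss(embeddings, positions, pos_radius=0.1, margin=0.2):
    """embeddings: (N, D) haptic descriptors; positions: (N, 2) footstep locations."""
    geo_dist = torch.cdist(positions, positions)           # pairwise geometric distances
    total, count = 0.0, 0
    for a in range(len(embeddings)):
        not_self = torch.arange(len(embeddings)) != a
        pos_mask = (geo_dist[a] < pos_radius) & not_self    # nearby footsteps -> positives
        neg_mask = geo_dist[a] >= pos_radius                # distant footsteps -> negatives
        if pos_mask.any() and neg_mask.any():
            p = embeddings[pos_mask][0:1]                   # take one positive and one negative
            n = embeddings[neg_mask][0:1]
            total = total + F.triplet_margin_loss(embeddings[a:a + 1], p, n, margin=margin)
            count += 1
    return total / max(count, 1)

# Toy usage with random descriptors and random footstep positions.
print(location_triplet_loss(torch.randn(16, 32), torch.rand(16, 2)))
```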


Brain-computer interface restores sense of touch with haptic signals

#artificialintelligence

In 2014, Burkhart underwent brain surgery at the OSU Wexner Medical Center to implant the chip. About the size of a pea, this chip, made by Blackrock Microsystems, Inc., sits in his motor cortex, an area of the brain responsible for generating voluntary movements. "It has small wires that act like microphones; each one listens to a handful of brain cells," says Ganzer. With the chip in place, the research team was ready to work on the second phase with a more complex interface. Using MATLAB, the team developed machine learning algorithms that could decode Burkhart's thoughts as the chip recorded his brain activity.


Boosted Semantic Embedding based Discriminative Feature Generation for Texture Analysis

Kumari, Priyadarshini, Chaudhuri, Subhasis

arXiv.org Machine Learning

Learning discriminative features is crucial for various robotic applications such as object detection and classification. In this paper, we present a general framework for the analysis of the discriminative properties of haptic signals. Our focus is on two crucial components of a robotic perception system: discriminative feature extraction and metric-based feature transformation to enhance the separability of haptic signals in the projected space. We propose a set of hand-crafted haptic features (generated only from acceleration data), which enable discrimination of real-world textures. Since the Euclidean space does not reflect the underlying pattern in the data, we propose to learn an appropriate transformation function to project the features onto the new space and apply different pattern recognition algorithms for texture classification and discrimination tasks. Unlike other existing methods, we use a triplet-based method for improved discrimination in the embedded space. We further demonstrate how to build a haptic vocabulary by selecting a compact set of the most distinct and representative signals in the embedded space. The experimental results show that the proposed features, augmented with the learned embedding, improve the performance of semantic discrimination tasks such as classification and clustering and outperform the related state of the art.
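
As an illustration of hand-crafted features computed from acceleration data, the sketch below derives a small descriptor (RMS energy, spectral centroid, zero-crossing rate) from one acceleration trace; this feature set is assumed for illustration and is not the paper's exact list.

```python
import numpy as np

def haptic_texture_features(accel, fs=1000.0):
    """Small descriptor for one acceleration trace recorded while stroking a texture."""
    accel = accel - accel.mean()
    rms = np.sqrt(np.mean(accel ** 2))                             # vibration energy
    spectrum = np.abs(np.fft.rfft(accel))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    centroid = (freqs * spectrum).sum() / (spectrum.sum() + 1e-9)  # dominant frequency region
    zcr = np.mean(np.abs(np.diff(np.sign(accel))) > 0)             # zero-crossing rate, a roughness proxy
    return np.array([rms, centroid, zcr])

# Toy usage on a synthetic 0.5 s recording sampled at 1 kHz.
t = np.arange(0, 0.5, 1 / 1000.0)
trace = 0.3 * np.sin(2 * np.pi * 120 * t) + 0.05 * np.random.randn(len(t))
print(haptic_texture_features(trace))
```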


PerceptNet: Learning Perceptual Similarity of Haptic Textures in Presence of Unorderable Triplets

K, Priyadarshini, Chaudhuri, Siddhartha, Chaudhuri, Subhasis

arXiv.org Machine Learning

In order to design haptic icons or build a haptic vocabulary, we require a set of easily distinguishable haptic signals to avoid perceptual ambiguity, which in turn requires a way to accurately estimate the perceptual (dis)similarity of such signals. In this work, we present a novel method to learn such a perceptual metric based on data from human studies. Our method is based on a deep neural network that projects signals to an embedding space where the natural Euclidean distance accurately models the degree of dissimilarity between two signals. The network is trained only on non-numerical comparisons of triplets of signals, using a novel triplet loss that considers both types of triplets that are easy to order (inequality constraints), as well as those that are unorderable/ambiguous (equality constraints). Unlike prior MDS-based non-parametric approaches, our method can be trained on a partial set of comparisons and can embed new haptic signals without retraining the model from scratch. Extensive experimental evaluations show that our method is significantly more effective at modeling perceptual dissimilarity than alternatives.
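
The two constraint types map naturally onto a combined loss: orderable triplets receive a standard margin hinge, while unorderable ones are penalized for any gap between the two embedding distances. The sketch below is an assumed form of such a loss, not the exact published formulation.

```python
import torch
import torch.nn.functional as F

def perceptual_triplet_loss(anchor, x1, x2, orderable, margin=0.5):
    """orderable[i] is True when x1[i] was judged closer to anchor[i] than x2[i]."""
    d1 = F.pairwise_distance(anchor, x1)
    d2 = F.pairwise_distance(anchor, x2)
    ineq = torch.clamp(d1 - d2 + margin, min=0.0)   # inequality constraint: d1 + margin <= d2
    eq = (d1 - d2).abs()                            # equality constraint: keep ambiguous pairs equidistant
    return torch.where(orderable, ineq, eq).mean()

# Toy usage on random embeddings of 6 haptic-signal triplets.
a, x1, x2 = (torch.randn(6, 16) for _ in range(3))
orderable = torch.tensor([True, True, False, True, False, True])
print(perceptual_triplet_loss(a, x1, x2, orderable))
```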