flex sensor


A Bimanual Gesture Interface for ROS-Based Mobile Manipulators Using TinyML and Sensor Fusion

Bhuiyan, Najeeb Ahmed, Huq, M. Nasimul, Chowdhury, Sakib H., Mangharam, Rahul

arXiv.org Artificial Intelligence

Gesture-based control for mobile manipulators faces persistent challenges in reliability, efficiency, and intuitiveness. This paper presents a dual-hand gesture interface that integrates TinyML, spectral analysis, and sensor fusion within a ROS framework to address these limitations. The system uses left-hand tilt and finger flexion, captured using accelerometer and flex sensors, for mobile base navigation, while right-hand IMU signals are processed through spectral analysis and classified by a lightweight neural network. This pipeline enables TinyML-based gesture recognition to control a 7-DOF Kinova Gen3 manipulator. By supporting simultaneous navigation and manipulation, the framework improves efficiency and coordination compared to sequential methods. Key contributions include a bimanual control architecture, real-time low-power gesture recognition, robust multimodal sensor fusion, and a scalable ROS-based implementation. The proposed approach advances Human-Robot Interaction (HRI) for industrial automation, assistive robotics, and hazardous environments, offering a cost-effective, open-source solution with strong potential for real-world deployment and further optimization.
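The abstract describes the right-hand pipeline as spectral analysis of IMU signals followed by a lightweight neural-network classifier. The Python below is a minimal illustrative sketch (not the authors' code): it computes FFT-based spectral features from a window of accelerometer/gyroscope samples and runs them through a tiny dense network with placeholder weights. The sampling rate, window length, gesture labels, and weights are all assumptions; in a real TinyML deployment the trained network would be exported (e.g., to TFLite Micro) and executed on the microcontroller.

```python
# Hypothetical sketch of the right-hand pipeline: spectral features from a
# window of IMU samples, classified by a small dense network.
# Window length, feature layout, labels, and weights are illustrative placeholders.
import numpy as np

FS = 100            # assumed IMU sampling rate (Hz)
WINDOW = 128        # assumed window length (samples)
GESTURES = ["open", "close", "rotate_cw", "rotate_ccw"]  # placeholder gesture labels

def spectral_features(imu_window: np.ndarray) -> np.ndarray:
    """imu_window: (WINDOW, 6) array of accel + gyro samples.
    Returns low-frequency FFT magnitudes per axis as a flat feature vector."""
    spectrum = np.abs(np.fft.rfft(imu_window, axis=0))   # (WINDOW//2 + 1, 6)
    return spectrum[:8].flatten()                          # keep 8 bins per axis -> 48 features

# Tiny two-layer network; in practice the trained weights would be deployed
# to the microcontroller rather than generated randomly as done here.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((48, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.standard_normal((16, len(GESTURES))) * 0.1, np.zeros(len(GESTURES))

def classify(features: np.ndarray) -> str:
    h = np.maximum(features @ W1 + b1, 0.0)               # ReLU hidden layer
    logits = h @ W2 + b2
    return GESTURES[int(np.argmax(logits))]

if __name__ == "__main__":
    fake_window = rng.standard_normal((WINDOW, 6))         # stand-in for real IMU data
    print(classify(spectral_features(fake_window)))        # gesture label mapped to an arm command
```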


Understanding Grasp Synergies during Reach-to-grasp using an Instrumented Data Glove

Pratap, Subhash, Hatta, Yoshiyuki, Ito, Kazuaki, Hazarika, Shyamanta M.

arXiv.org Artificial Intelligence

Data gloves play a crucial role in the study of human grasping and could provide insights into grasp synergies. Grasp synergies reveal underlying patterns that can inform control strategies for hand exoskeletons. This paper presents the design and implementation of a data glove that has been enhanced with instrumentation and fabricated using 3D printing technology. The glove uses flexible sensors along the fingers and force sensors integrated into the glove at the fingertips to accurately capture grasp postures and forces. The kinematics and dynamics of human grasp, including the reach-to-grasp phase, are examined. A comprehensive study involving 10 healthy subjects was conducted. Grasp synergy analysis is carried out to identify underlying patterns for robotic grasping. The t-SNE visualization showed clusters of grasp postures and forces, revealing similarities and patterns among different grasp types (GTs). These findings could serve as a comprehensive guide for the design and control of tendon-driven soft hand exoskeletons for rehabilitation applications, enabling the replication of natural hand movements and grasp forces.
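Synergy extraction of this kind is commonly done by stacking per-trial posture/force vectors and reducing their dimensionality. The sketch below (not the authors' code) uses PCA for candidate synergies and t-SNE for a 2-D cluster view; the sensor counts, trial counts, and data are synthetic placeholders.

```python
# Illustrative sketch of synergy extraction and t-SNE embedding from glove recordings.
# Array shapes and sensor counts are assumptions, and the data is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
# Assume 10 subjects x 20 trials, each summarized by 5 flex readings + 5 fingertip forces.
X = rng.standard_normal((200, 10))

# Principal components act as candidate grasp synergies: a few components
# typically explain most of the posture/force variance.
pca = PCA(n_components=3)
scores = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)

# 2-D embedding for visual inspection of clusters across grasp types.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(embedding.shape)   # (200, 2) points to scatter-plot, colored by grasp type
```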


A Systematic Review on Custom Data Gloves

Belcamino, Valerio, Carfì, Alessandro, Mastrogiovanni, Fulvio

arXiv.org Artificial Intelligence

Abstract--Hands are a fundamental tool humans use to interact with the environment and objects. Through hand motions, we can obtain information about the shape and materials of the surfaces we touch, modify our surroundings by interacting with objects, manipulate objects and tools, or communicate with other people by leveraging the power of gestures. For these reasons, sensorized gloves, which can collect information about hand motions and interactions, have been of interest since the 1980s in various fields, such as Human-Machine Interaction (HMI) and the analysis and control of human motions. Over the last 40 years, research in this field has explored different technological approaches and contributed to the popularity of wearable custom and commercial products targeting hand sensorization. Despite a positive research trend, these instruments are not yet widespread outside research environments, and devices aimed at research are often ad hoc solutions with little chance of being reused. This paper aims to provide a systematic literature review of custom gloves to analyze their main characteristics and critical issues, from the type and number of sensors to the limitations due to device encumbrance. The collection of this information lays the foundation for a standardization process necessary for future breakthroughs in this research field.

Figure 1: Hands are of the utmost importance for a variety of …

I. Human hands are peculiar body parts where two senses, namely proprioception and touch, are closely affected by each other. In general, proprioception relates to estimating one's motion and posture. Touch, instead, is connected to traits of human behaviour, such as those related to motor control and the associated cognitive processes. For these reasons, hands are the preferred physical medium enabling human-machine interaction, e.g., to use interfaces such as touchscreens or virtual environments. Studies in hand motion analysis can be categorized into two classes based on the adopted sensing modality, i.e., image-based and wearable-based. Approaches belonging to the first class rely on suitably located cameras to collect visual data, while approaches from the second class usually leverage sensors worn on the hand. We will refer to the two classes, respectively, with the more technology-oriented terms vision- and wearable-based.


Sign Language Conversation Interpretation Using Wearable Sensors and Machine Learning

Kalandar, Basma, Dworakowski, Ziemowit

arXiv.org Artificial Intelligence

The number of people with some degree of hearing loss reached 1.57 billion in 2019. Many of them face barriers on both personal and professional levels and need to be fully included in the rest of society. This paper presents a proof of concept of an automatic sign language recognition system based on data obtained using a wearable device with three flex sensors. The system is designed to interpret a selected set of American Sign Language (ASL) dynamic words by collecting data in sequences of the performed signs and using machine learning methods. The trained models achieved high accuracy: Random Forest with 99%, Support Vector Machine (SVM) with 99%, and two K-Nearest Neighbor (KNN) models with 98%. This indicates many possible paths toward the development of a full-scale system.
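A minimal sketch of the kind of classifier comparison described above, assuming fixed-length sequences from the three flex sensors are flattened into feature vectors; the data, sequence length, and vocabulary size below are synthetic placeholders, not the paper's dataset.

```python
# Compare Random Forest, SVM, and KNN on flattened flex-sensor sequences.
# All data here is synthetic stand-in data for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
SEQ_LEN, N_SENSORS, N_SAMPLES = 50, 3, 300                  # assumed recording parameters
X = rng.standard_normal((N_SAMPLES, SEQ_LEN * N_SENSORS))   # flattened sign sequences
y = rng.integers(0, 5, size=N_SAMPLES)                       # 5 placeholder ASL word labels

for name, clf in [("RandomForest", RandomForestClassifier(n_estimators=100)),
                  ("SVM", SVC(kernel="rbf", C=1.0)),
                  ("KNN", KNeighborsClassifier(n_neighbors=3))]:
    scores = cross_val_score(clf, X, y, cv=5)                # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```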


Multi-tap Resistive Sensing and FEM Modeling enables Shape and Force Estimation in Soft Robots

Tian, Sizhe, Cangan, Barnabas Gavin, Navarro, Stefan Escaida, Beger, Artem, Duriez, Christian, Katzschmann, Robert K.

arXiv.org Artificial Intelligence

We address the challenge of reliable and accurate proprioception in soft robots, specifically those with tight packaging constraints and relying only on internally embedded sensors. While various sensing approaches with single sensors have been tried, often with a constant curvature assumption, we look into sensing local deformations at multiple locations of the sensor. In our approach, we multi-tap an off-the-shelf resistive sensor by creating multiple electrical connections onto the resistive layer of the sensor and we insert the sensor into a soft body. This modification allows us to measure changes in resistance at multiple segments throughout the length of the sensor, providing improved resolution of local deformations in the soft body. These measurements inform a model based on a finite element method (FEM) that estimates the shape of the soft body and the magnitude of an external force acting at a known arbitrary location. Our model-based approach estimates soft body deformation with approximately 3% average relative error while taking into account internal fluidic actuation. Our estimate of external force disturbance has an 11% relative error within a range of 0 to 5 N. The combined sensing and modeling approach can be integrated, for instance, into soft manipulation platforms to enable features such as identifying the shape and material properties of an object being grasped. Such manipulators can benefit from the inherent softness and compliance while being fully proprioceptive, relying only on embedded sensing and not on external systems such as motion capture. Such proprioception is essential for the deployment of soft robots in real-world scenarios.
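One way to picture the multi-tap idea: with several taps along the resistive layer, the voltage drop between consecutive taps yields a per-segment resistance, whose change tracks local deformation. The sketch below is only an illustration under an assumed constant-current readout and a toy linear resistance-to-curvature gain; in the paper, shape and force are instead estimated by the FEM-based model.

```python
# Hedged sketch of turning multi-tap readings into per-segment resistance changes.
# The constant-current readout and the linear bend gain are assumptions,
# not the authors' measurement circuit or model.
import numpy as np

I_DRIVE = 1e-3                                      # assumed constant drive current (A)
V_taps = np.array([2.10, 1.55, 0.98, 0.47, 0.0])    # example tap voltages (V), last tap grounded
R_rest = np.array([500.0, 510.0, 495.0, 505.0])     # per-segment resistance at rest (ohm)

# Resistance of each segment between consecutive taps: (V_k - V_{k+1}) / I.
R_seg = np.diff(-V_taps) / I_DRIVE
delta_R = R_seg - R_rest

# Toy linear map from resistance change to local curvature; in the paper this
# role is played by the FEM-based model rather than a per-segment gain.
K_GAIN = 1e-3                                       # assumed curvature per ohm (1/m per ohm)
curvature = K_GAIN * delta_R
print(R_seg, curvature)
```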


IoT - A Support Vector Machine Implementation for Sign Language Recognition on Intel Edison.

#artificialintelligence

Currently, more than 30 million people in the world have speech impairments and therefore communicate using sign language, which creates a language barrier between sign language users and non-users. This project explores the development of a sign-language-to-speech translation glove that implements a Support Vector Machine (SVM) on the Intel Edison to recognize various letters signed by sign language users. The prediction for the signed gesture is then transmitted to an Android application, where it is vocalized. The sign language glove has five flex sensors, one mounted on each finger, to quantify how much each finger is bent. Flex sensors are sensors whose resistance changes with the amount of bend applied to them.
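For illustration, a flex sensor in a voltage divider can be converted from an ADC reading to a resistance, and the five per-finger values fed to an SVM. The divider values, ADC range, and training data below are placeholders, not details from the project.

```python
# Illustrative sketch only: convert a flex sensor's voltage-divider reading to
# resistance, then classify five-finger readings with an SVM.
# Resistor values, ADC range, and the trained model are assumed placeholders.
import numpy as np
from sklearn.svm import SVC

V_SUPPLY, R_FIXED, ADC_MAX = 3.3, 10_000.0, 1023     # assumed divider and 10-bit ADC

def adc_to_resistance(adc_value: int) -> float:
    """Flex sensor on top of a fixed resistor to ground; the ADC reads the midpoint."""
    v_out = V_SUPPLY * adc_value / ADC_MAX
    return R_FIXED * (V_SUPPLY - v_out) / v_out

# Train on synthetic stand-in data: 5 resistance values per letter sample.
rng = np.random.default_rng(3)
X_train = rng.uniform(10_000, 40_000, size=(260, 5))
y_train = rng.integers(0, 26, size=260)               # placeholder A-Z labels
clf = SVC(kernel="rbf").fit(X_train, y_train)

reading = np.array([[adc_to_resistance(a) for a in (512, 600, 430, 700, 550)]])
print(chr(ord("A") + int(clf.predict(reading)[0])))   # predicted letter sent to the app
```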