hand movement
Science confirms hand gestures make you seem more persuasive
A recently published study suggests something many Italians already knew: certain hand gestures really do make people seem more competent and persuasive. "One of the key takeaways for marketers is that you can use the same content, but if you pay more attention to how that content is delivered, it could have a big impact on persuasiveness," Mi Zhou, study co-author and digital marketing researcher at the University of British Columbia, said in a statement. Zhou and her colleagues analyzed 2,184 TED Talks using AI-driven automated video analysis. They compared hand-movement features extracted from hundreds of thousands of video clips against audience engagement metrics, and asked study participants to rate the speakers and products in sales-pitch videos featuring different hand movements.
- North America > Canada > British Columbia (0.25)
- North America > United States (0.16)
A Real-Time BCI for Stroke Hand Rehabilitation Using Latent EEG Features from Healthy Subjects
Omar, F. M., Omar, A. M., Eyada, K. H., Rabie, M., Kamel, M. A., Azab, A. M.
This study presents a real-time, portable brain-computer interface (BCI) system designed to support hand rehabilitation for stroke patients. The system combines a low-cost 3D-printed robotic exoskeleton with an embedded controller that converts brain signals into physical hand movements. EEG signals are recorded using a 14-channel Emotiv EPOC+ headset and processed through a supervised convolutional autoencoder (CAE) to extract meaningful latent features from single-trial data. The model is trained on publicly available EEG data from healthy individuals (the WAY-EEG-GAL dataset), with electrode mapping adapted to match the Emotiv headset layout. Among several tested classifiers, AdaBoost achieved the highest accuracy (89.3%) and F1-score (0.89) in offline evaluations. The system was also tested in real time on five healthy subjects, achieving classification accuracies between 60% and 86%. The complete pipeline, covering EEG acquisition, signal processing, classification, and robotic control, is deployed on an NVIDIA Jetson Nano platform with a real-time graphical interface. These results demonstrate the system's potential as a low-cost, standalone solution for home-based neurorehabilitation.
- Africa > Middle East > Egypt > Cairo Governorate > Cairo (0.05)
- North America > Canada > Ontario (0.04)
- Africa > Middle East > Egypt > Ismailia Governorate > Ismailia (0.04)
- Research Report > New Finding (0.48)
- Research Report > Experimental Study (0.34)
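As a rough illustration of the pipeline in the entry above (supervised CAE latent features feeding a classical classifier), here is a minimal sketch assuming a 14-channel montage and a 128-sample window; the layer sizes, latent dimension, and training details are illustrative guesses, not the authors' implementation.

```python
# Minimal sketch: CAE feature extraction + AdaBoost classification.
# Channel count (14, as on an Emotiv EPOC+) and window length (128) are assumptions.
import torch
import torch.nn as nn
from sklearn.ensemble import AdaBoostClassifier

class CAE(nn.Module):
    """1-D convolutional autoencoder over (channels, time) EEG windows."""
    def __init__(self, n_channels=14, latent_dim=32, window=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, latent_dim, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                 # -> (batch, latent_dim, 1)
        )
        self.decoder = nn.Linear(latent_dim, n_channels * window)

    def forward(self, x):                            # x: (batch, 14, 128)
        z = self.encoder(x).squeeze(-1)              # latent features (batch, latent_dim)
        recon = self.decoder(z).view(x.shape[0], x.shape[1], -1)
        return z, recon                              # recon used for training loss

def extract_features(model, windows):
    """Freeze the trained CAE and return latent features for a classical classifier."""
    model.eval()
    with torch.no_grad():
        z, _ = model(torch.as_tensor(windows, dtype=torch.float32))
    return z.numpy()

# After training the CAE (reconstruction loss, optionally plus a supervised head):
# clf = AdaBoostClassifier(n_estimators=200).fit(extract_features(cae, X_train), y_train)
```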
EEG-based AI-BCI Wheelchair Advancement: Hybrid Deep Learning with Motor Imagery for Brain Computer Interface
Thapa, Bipul, Paneru, Biplov, Paneru, Bishwash, Poudyal, Khem Narayan
This paper presents a novel Artificial Intelligence (AI) integrated approach to Brain-Computer Interface (BCI) based wheelchair development, utilizing a motor imagery right/left hand movement mechanism for control. The system is designed to simulate wheelchair navigation based on motor imagery of right- and left-hand movements using electroencephalogram (EEG) data. A pre-filtered dataset, obtained from an open-source EEG repository, was segmented into arrays of 19x200 to capture the onset of hand movements. The data was acquired at a sampling frequency of 200 Hz. The system integrates a Tkinter-based interface for simulating wheelchair movements, offering users a functional and intuitive control system. We propose a BiLSTM-BiGRU model that shows a superior test accuracy of 92.26% compared with various machine learning baseline models, including XGBoost, EEGNet, and a transformer-based model. The attention-based BiLSTM-BiGRU model achieved a mean accuracy of 90.13% through cross-validation, showcasing the potential of attention mechanisms in BCI applications.

Keywords: Brain Computer Interface (BCI), BiLSTM-BiGRU, Raspberry Pi, Electroencephalogram (EEG), Hybrid Deep Learning

1. Introduction

Brain-Computer Interfaces (BCIs) are advanced systems that establish direct communication between the human brain and external devices. In recent years, BCIs have been widely investigated for their potential to assist individuals with mobility impairments, offering novel pathways for restoring autonomy. This paper proposes a BCI-based wheelchair control system driven by electroencephalography (EEG) signals associated with motor imagery. The proposed framework incorporates a variety of machine learning models with tailored hyperparameter optimization techniques, culminating in the deployment of a BiLSTM-BiGRU hybrid deep learning model for effective EEG signal classification.
- Europe > Switzerland (0.04)
- Asia > Nepal > Gandaki Province > Kaski District > Pokhara (0.04)
- Asia > Nepal > Bagmati Province > Kathmandu District > Kathmandu (0.04)
- Asia > China > Tianjin Province > Tianjin (0.04)
- Overview (0.66)
- Research Report > Promising Solution (0.34)
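A minimal sketch of a BiLSTM-BiGRU attention classifier for the 19x200 EEG windows described in the entry above (19 channels, 200 samples at 200 Hz); the hidden widths and the additive-attention scheme are assumptions, not the paper's exact architecture.

```python
# Hedged sketch: BiLSTM -> BiGRU -> attention pooling -> left/right logits.
import torch
import torch.nn as nn

class BiLSTMBiGRU(nn.Module):
    def __init__(self, n_channels=19, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True, bidirectional=True)
        self.gru = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # additive attention over time steps
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                           # x: (batch, 200, 19)
        h, _ = self.lstm(x)                         # (batch, 200, 2*hidden)
        h, _ = self.gru(h)
        w = torch.softmax(self.attn(h), dim=1)      # attention weights (batch, 200, 1)
        ctx = (w * h).sum(dim=1)                    # attention-pooled summary
        return self.head(ctx)                       # left- vs right-hand logits

# model = BiLSTMBiGRU()
# logits = model(torch.randn(8, 200, 19))  # 8 trials of 1 s EEG at 200 Hz
```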
Modular Soft Wearable Glove for Real-Time Gesture Recognition and Dynamic 3D Shape Reconstruction
Dong, Huazhi, Wang, Chunpeng, Jiang, Mingyuan, Giorgio-Serchi, Francesco, Yang, Yunjie
With the increasing demand for human-computer interaction (HCI), flexible wearable gloves have emerged as a promising solution in virtual reality, medical rehabilitation, and industrial automation. However, current technology still suffers from problems such as insufficient sensitivity and limited durability, which hinder wide application. This paper presents a highly sensitive, modular, and flexible capacitive sensor based on line-shaped electrodes and liquid metal (EGaIn), integrated into a sensor module tailored to the anatomy of the human hand. The proposed system independently captures bending information from each finger joint, while additional measurements between adjacent fingers enable the recording of subtle variations in inter-finger spacing. This design enables accurate gesture recognition and dynamic morphological reconstruction of the hand during complex movements using point clouds. Experimental results demonstrate that our classifier, based on a Convolutional Neural Network (CNN) and a Multilayer Perceptron (MLP), achieves an accuracy of 99.15% across 30 gestures. Meanwhile, a transformer-based Deep Neural Network (DNN) accurately reconstructs dynamic hand shapes with an Average Distance (AD) of 2.076 ± 3.231 mm, with the reconstruction accuracy at individual key points surpassing state-of-the-art (SOTA) benchmarks by 9.7% to 64.9%. The proposed glove shows excellent accuracy, robustness, and scalability in gesture recognition and hand reconstruction, making it a promising solution for next-generation HCI systems.
- Materials > Chemicals (0.47)
- Health & Medicine (0.46)
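A minimal sketch of the CNN + MLP gesture classifier named in the entry above, operating on windows of capacitive glove readings; the sensor count (here 14 assumed joint and inter-finger channels), window length, and layer sizes are illustrative, not the paper's design.

```python
# Hedged sketch: temporal CNN feature extractor + MLP head over 30 gestures.
import torch
import torch.nn as nn

class GloveGestureNet(nn.Module):
    def __init__(self, n_sensors=14, n_gestures=30):
        super().__init__()
        self.cnn = nn.Sequential(                    # temporal feature extractor
            nn.Conv1d(n_sensors, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                 # -> (batch, 64, 1)
        )
        self.mlp = nn.Sequential(                    # classification head
            nn.Flatten(), nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, n_gestures),
        )

    def forward(self, x):                            # x: (batch, n_sensors, time)
        return self.mlp(self.cnn(x))

# logits = GloveGestureNet()(torch.randn(4, 14, 100))  # 4 windows of 100 samples
```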
ALVI Interface: Towards Full Hand Motion Decoding for Amputees Using sEMG
Kovalev, Aleksandr, Makarova, Anna, Chizhov, Petr, Antonov, Matvey, Duplin, Gleb, Lomtev, Vladislav, Gostevskii, Viacheslav, Bessonov, Vladimir, Tsurkan, Andrey, Korobok, Mikhail, Timčenko, Aleksejs
We present a system for decoding hand movements using surface EMG signals. The interface provides real-time (25 Hz) reconstruction of finger joint angles across 20 degrees of freedom, designed for upper-limb amputees. Our offline analysis shows a 0.8 correlation between predicted and actual hand movements. The system functions as an integrated pipeline with three key components: (1) a VR-based data collection platform, (2) a transformer-based model for EMG-to-motion transformation, and (3) a real-time calibration and feedback module called ALVI Interface. Using eight sEMG sensors and a VR training environment, users can control their virtual hand down to finger-joint precision, as demonstrated in our video: youtube link.
- Education (0.68)
- Health & Medicine (0.48)
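A minimal sketch of the transformer-based EMG-to-motion component described above: regressing 20 finger-joint angles from a window of 8-channel sEMG. The window size, model width, and output head are assumptions; ALVI's actual architecture may differ.

```python
# Hedged sketch: transformer encoder mapping an sEMG window to joint angles.
import torch
import torch.nn as nn

class EMGToMotion(nn.Module):
    def __init__(self, n_emg=8, d_model=64, n_joints=20):
        super().__init__()
        self.proj = nn.Linear(n_emg, d_model)        # embed each sEMG frame
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=3)
        self.head = nn.Linear(d_model, n_joints)     # 20 degrees of freedom

    def forward(self, x):                            # x: (batch, time, 8)
        h = self.encoder(self.proj(x))
        return self.head(h[:, -1])                   # angles for the latest frame

# At a 25 Hz output rate, each call would consume the most recent window, e.g.:
# angles = EMGToMotion()(torch.randn(1, 40, 8))  # -> (1, 20) joint angles
```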
Neural Lab's AirTouch brings gesture control to Windows and Android devices with just a webcam
Some of the best tech we see at CES feels pulled straight from sci-fi. Yesterday at CES 2025, I tested out Neural Lab's AirTouch technology, which lets you interact with a display using hand gestures alone, exactly what movies like Minority Report and Iron Man promised. Of course, plenty of companies have delivered varying forms of gesture control. Microsoft's Kinect is an early example, while the Apple Watch's double-tap feature and the Vision Pro's pinch gestures are just two of many current iterations. But I was impressed with how well AirTouch delivered: unlike most gesture technology out there, it requires no special equipment, just a standard webcam, and works with a wide range of devices.
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Vision > Gesture Recognition (0.93)
Modeling the minutia of motor manipulation with AI
In neuroscience and biomedical engineering, accurately modeling the complex movements of the human hand has long been a significant challenge. Current models often struggle to capture the intricate interplay between the brain's motor commands and the physical actions of muscles and tendons. This gap not only hinders scientific progress but also limits the development of effective neuroprosthetics aimed at restoring hand function for those with limb loss or paralysis. EPFL professor Alexander Mathis and his team have developed an AI-driven approach that advances our understanding of these complex motor functions. The team used a creative machine learning strategy that combined curriculum-based reinforcement learning with detailed biomechanical simulations.
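The entry above names curriculum-based reinforcement learning as the core training strategy: the agent masters progressively harder variants of a manipulation task before facing the full one. A minimal sketch follows; the environment factory, agent API, difficulty knob, and thresholds are hypothetical placeholders, not EPFL's setup.

```python
# Hedged sketch of a curriculum RL loop over a biomechanical hand simulation.
def train_with_curriculum(make_env, agent, difficulties=(0.2, 0.5, 1.0),
                          episodes_per_stage=500, success_reward=100.0):
    for d in difficulties:                       # e.g., target precision or object mass
        env = make_env(difficulty=d)             # Gymnasium-style env API assumed
        streak = 0
        for _ in range(episodes_per_stage):
            obs, _ = env.reset()
            done, total = False, 0.0
            while not done:
                action = agent.act(obs)
                obs, reward, terminated, truncated, _ = env.step(action)
                agent.observe(obs, reward, terminated)  # placeholder update hook
                done = terminated or truncated
                total += reward
            streak = streak + 1 if total >= success_reward else 0
            if streak >= 10:                     # stage mastered: raise the difficulty
                break
```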
Allo-AVA: A Large-Scale Multimodal Conversational AI Dataset for Allocentric Avatar Gesture Animation
The scarcity of high-quality, multimodal training data severely hinders the creation of lifelike avatar animations for conversational AI in virtual environments. Existing datasets often lack the intricate synchronization between speech, facial expressions, and body movements that characterize natural human communication. To address this critical gap, we introduce Allo-AVA, a large-scale dataset specifically designed for text and audio-driven avatar gesture animation in an allocentric (third person point-of-view) context. Allo-AVA consists of ~1,250 hours of diverse video content, complete with audio, transcripts, and extracted keypoints. Allo-AVA uniquely maps these keypoints to precise timestamps, enabling accurate replication of human movements (body and facial gestures) in synchronization with speech. This comprehensive resource enables the development and evaluation of more natural, context-aware avatar animation models, potentially transforming applications ranging from virtual reality to digital assistants.
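To make the timestamp-to-keypoint mapping described above concrete, here is a sketch of how one aligned sample might be represented; every field name is hypothetical and is not the released Allo-AVA schema.

```python
# Hedged sketch: a timestamp-aligned clip record and a nearest-frame lookup.
from dataclasses import dataclass

@dataclass
class AvatarClip:
    audio_path: str                                 # path to the audio track
    transcript: list[tuple[float, str]]             # (timestamp_s, word)
    keypoints: list[tuple[float, list[float]]]      # (timestamp_s, flattened keypoints)

def keypoints_at(clip: AvatarClip, t: float) -> list[float]:
    """Return the body/face keypoint frame nearest to time t (seconds)."""
    return min(clip.keypoints, key=lambda kv: abs(kv[0] - t))[1]
```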
Capturing complex hand movements and object interactions using machine learning-powered stretchable smart textile gloves
Tashakori, Arvin, Jiang, Zenan, Servati, Amir, Soltanian, Saeid, Narayana, Harishkumar, Le, Katherine, Nakayama, Caroline, Yang, Chieh-ling, Wang, Z. Jane, Eng, Janice J., Servati, Peyman
Accurate real-time tracking of dexterous hand movements and interactions has numerous applications in human-computer interaction, the metaverse, robotics, and tele-health. Capturing realistic hand movements is challenging because of the large number of articulations and degrees of freedom. Here, we report accurate and dynamic tracking of articulated hand and finger movements using stretchable, washable smart gloves with embedded helical sensor yarns and inertial measurement units. The sensor yarns have a high dynamic range, responding to strains from as low as 0.005% to as high as 155%, and show stability during extensive use and washing cycles. We use multi-stage machine learning to achieve average joint-angle estimation root mean square errors of 1.21 and 1.45 degrees for intra- and inter-subject cross-validation, respectively, matching the accuracy of costly motion-capture cameras without occlusion or field-of-view limitations. We report a data augmentation technique that enhances robustness to sensor noise and variability. We demonstrate accurate tracking of dexterous hand movements during object interactions, opening new avenues for applications including accurate typing on a mock paper keyboard, recognition of complex dynamic and static gestures adapted from American Sign Language, and object identification.
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > Switzerland (0.04)
- Asia > Taiwan (0.04)
- North America > United States > Massachusetts (0.04)
- Materials > Chemicals (0.68)
- Energy (0.68)
- Health & Medicine > Health Care Providers & Services (0.46)
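In the spirit of the noise-robustness augmentation mentioned in the entry above, here is a minimal sketch for glove strain sequences; the noise model (Gaussian jitter plus per-channel gain drift) is an assumption, not the paper's technique.

```python
# Hedged sketch: sensor-noise augmentation for (n_sensors, time) strain windows.
import numpy as np

def augment(window: np.ndarray, rng: np.random.Generator,
            jitter_std: float = 0.01, gain_std: float = 0.05) -> np.ndarray:
    """window: (n_sensors, time) normalized strain signals."""
    gains = 1.0 + gain_std * rng.standard_normal((window.shape[0], 1))  # gain drift
    noise = jitter_std * rng.standard_normal(window.shape)              # sample jitter
    return gains * window + noise

# rng = np.random.default_rng(0)
# noisy = augment(np.zeros((12, 250)), rng)  # 12 sensor yarns, 250-sample window
```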
AI-Powered Camera and Sensors for the Rehabilitation Hand Exoskeleton
Sarker, Md Abdul Baset, Sola-thomas, Juan Pablo, Imtiaz, Masudul H.
Due to motor neurone diseases, a large population worldwide remains disabled, with negative impacts on independence and quality of life. These conditions typically involve weakness in the hand and forearm muscles, making it difficult to perform fine motor tasks such as writing, buttoning a shirt, or gripping objects. This project presents a vision-enabled rehabilitation hand exoskeleton to assist disabled persons with their hand movements. The design goal was an accessible tool with a simple interface that requires no training. The prototype is built on a commercially available glove, with an integrated camera and embedded processor that help open and close the hand using air pressure, allowing the wearer to grasp an object. An accelerometer detects a characteristic hand gesture to release the object when desired. This passive vision-based control differs from active EMG-based designs in that it does not require individualized training. Continued research will reduce the cost, weight, and power consumption of the device to facilitate mass implementation.
- South America > Uruguay > Maldonado > Maldonado (0.04)
- North America > United States (0.04)
- Europe > Germany > Brandenburg > Potsdam (0.04)
- Asia > Bangladesh (0.04)
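A minimal sketch of the accelerometer-triggered release described in the exoskeleton entry above: watch for a sharp acceleration spike (e.g., a wrist flick) and open the hand. The threshold, window, and actuator call are illustrative placeholders, not the project's implementation.

```python
# Hedged sketch: detect a release gesture from a short accelerometer window.
import numpy as np

def detect_release(accel_window: np.ndarray, threshold_g: float = 1.8) -> bool:
    """accel_window: (n_samples, 3) accelerometer readings in g."""
    magnitude = np.linalg.norm(accel_window, axis=1)   # per-sample magnitude
    return bool((magnitude > threshold_g).any())       # spike -> release gesture

# if detect_release(latest_samples):
#     pneumatic_valve.open()  # hypothetical actuator call releasing air pressure
```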