A Comparative Study of EMG- and IMU-based Gesture Recognition at the Wrist and Forearm
Baghernezhad, Soroush, Mohammadreza, Elaheh, da Fonseca, Vinicius Prado, Zou, Ting, Jiang, Xianta
Gestures are an integral part of our daily interactions with the environment. Hand gesture recognition (HGR) is the process of interpreting human intent through various input modalities, such as visual data (images and videos) and bio-signals. Bio-signals are widely used in HGR due to their ability to be captured non-invasively via sensors placed on the arm. Among these, surface electromyography (sEMG), which measures the electrical activity of muscles, is the most extensively studied modality. However, less-explored alternatives such as inertial measurement units (IMUs) can provide complementary information on subtle muscle movements, which makes them valuable for gesture recognition. In this study, we investigate the potential of using IMU signals from different muscle groups to capture user intent. Our results demonstrate that IMU signals contain sufficient information to serve as the sole input sensor for static gesture recognition. Moreover, we compare different muscle groups and check the quality of pattern recognition on individual muscle groups. We further found that tendon-induced micro-movement captured by IMUs is a major contributor to static gesture recognition. We believe that leveraging muscle micro-movement information can enhance the usability of prosthetic arms for amputees. This approach also offers new possibilities for hand gesture recognition in fields such as robotics, teleoperation, sign language interpretation, and beyond.
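The abstract describes classifying static gestures from IMU signals alone. As a minimal illustration of how raw IMU streams are typically turned into classifier inputs, the sketch below extracts simple per-window time-domain statistics (mean, standard deviation, RMS, peak-to-peak); the window/step sizes and feature set are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def imu_window_features(imu, win=200, step=50):
    """Extract simple time-domain features from a (T, C) IMU stream.

    imu: array of shape (T, C), e.g. C = 6 for tri-axial accel + gyro.
    Returns an array of shape (n_windows, 4 * C) with per-channel
    mean, standard deviation, RMS, and peak-to-peak amplitude.
    """
    feats = []
    for start in range(0, len(imu) - win + 1, step):
        w = imu[start:start + win]
        feats.append(np.concatenate([
            w.mean(axis=0),
            w.std(axis=0),
            np.sqrt((w ** 2).mean(axis=0)),   # RMS
            w.max(axis=0) - w.min(axis=0),    # peak-to-peak
        ]))
    return np.asarray(feats)

# Synthetic 6-channel IMU stream, e.g. 1 second at 1 kHz
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 6))
F = imu_window_features(x)
print(F.shape)  # (17, 24)
```

Feature vectors of this kind would then feed any standard classifier; the paper's point is that the information content of the IMU channels alone suffices for static gestures.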
CKANIO: Learnable Chebyshev Polynomials for Inertial Odometry
Zhang, Shanshan, Wang, Siyue, Wen, Tianshui, Wu, Liqin, Zhang, Qi, Zhou, Ziheng, Peng, Ao, Hong, Xuemin, Zheng, Lingxiang, Yang, Yu
ABSTRACT Inertial odometry (IO) relies exclusively on signals from an inertial measurement unit (IMU) for localization and offers a promising avenue for consumer-grade positioning. However, accurate modeling of the nonlinear motion patterns present in IMU signals remains the principal limitation on IO accuracy. To address this challenge, we propose CKANIO, an IO framework that integrates Chebyshev-based Kolmogorov-Arnold Networks (Chebyshev KAN). To the best of our knowledge, this work represents the first application of an interpretable KAN model to IO. Experimental results on five publicly available datasets demonstrate the effectiveness of CKANIO. Index Terms: Chebyshev KAN, Inertial Odometry, Inertial Measurement Unit signals. 1. INTRODUCTION: Inertial odometry (IO) estimates the position and orientation of an IMU-equipped platform using acceleration and angular velocity signals provided by the inertial measurement unit (IMU) [1].
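To make the Chebyshev-KAN idea concrete: a KAN replaces each scalar edge weight with a learnable univariate function, and a Chebyshev KAN parameterizes that function by coefficients over the Chebyshev basis T_0, T_1, ..., computed via the standard recurrence. The layer below is a minimal NumPy sketch under that general formulation; the dimensions, tanh input squashing, and initialization are assumptions, not CKANIO's actual architecture.

```python
import numpy as np

def chebyshev_basis(x, degree):
    """Evaluate Chebyshev polynomials T_0..T_degree at x via the
    recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), for x in [-1, 1]."""
    T = [np.ones_like(x), x]
    for _ in range(2, degree + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[:degree + 1], axis=-1)  # (..., degree+1)

class ChebyshevKANLayer:
    """Minimal Chebyshev-KAN layer: each input-output edge carries a
    learnable polynomial instead of a single scalar weight."""
    def __init__(self, d_in, d_out, degree=4, seed=0):
        rng = np.random.default_rng(seed)
        # One coefficient per (input, output, polynomial order)
        self.coef = rng.normal(scale=1.0 / (d_in * (degree + 1)),
                               size=(d_in, d_out, degree + 1))
        self.degree = degree

    def __call__(self, x):
        x = np.tanh(x)                       # squash inputs into [-1, 1]
        B = chebyshev_basis(x, self.degree)  # (batch, d_in, degree+1)
        return np.einsum('bik,iok->bo', B, self.coef)

layer = ChebyshevKANLayer(d_in=6, d_out=3)   # e.g. 6 IMU channels in
y = layer(np.random.default_rng(1).normal(size=(8, 6)))
print(y.shape)  # (8, 3)
```

The learned coefficients are directly inspectable per edge and per polynomial order, which is the interpretability angle the abstract highlights.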
FTIN: Frequency-Time Integration Network for Inertial Odometry
Zhang, Shanshan, Zhang, Qi, Wang, Siyue, Wu, Liqin, Wen, Tianshui, Zhou, Ziheng, Peng, Ao, Hong, Xuemin, Zheng, Lingxiang, Yang, Yu
However, high IMU sampling rates introduce substantial redundancy that impedes IO's ability to attend to salient components, thereby creating an information bottleneck. To address this challenge, we propose a cross-domain IO framework that fuses information from the frequency and time domains. Specifically, we exploit the global context and energy-compaction properties of frequency-domain representations to capture holistic motion patterns and alleviate the bottleneck. To the best of our knowledge, this is among the first attempts to incorporate frequency-domain feature processing into IO. Experimental results on multiple public datasets demonstrate the effectiveness of the proposed frequency-time-domain fusion strategy. Index Terms: Frequency-Domain Learning, Inertial Odometry, Inertial Measurement Unit signals. 1. INTRODUCTION: Inertial odometry (IO) aims to reconstruct motion trajectories from high-frequency inertial measurement unit (IMU) signals--comprising tri-axial accelerometer and gyroscope data--in order to enable low-cost and robust localization [1, 2].
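The "energy compaction" claim can be demonstrated in a few lines: for a quasi-periodic motion signal (e.g. walking), almost all of the energy concentrates in a handful of frequency bins, so a frequency-domain view discards the redundancy a high sampling rate creates. The sketch below uses a synthetic 2 Hz accelerometer trace as an assumed stand-in for real IMU data; it illustrates the property FTIN exploits, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1024) / 200.0                    # 1024 samples at 200 Hz
# A walking-like accelerometer trace: ~2 Hz stride plus sensor noise
acc = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.normal(size=t.size)

spec = np.fft.rfft(acc)
energy = np.abs(spec) ** 2
order = np.argsort(energy)[::-1]

# Keep only the 16 strongest frequency bins (energy compaction)
compact = np.zeros_like(spec)
compact[order[:16]] = spec[order[:16]]
recon = np.fft.irfft(compact, n=acc.size)

kept = energy[order[:16]].sum() / energy.sum()
print(f"16 of {spec.size} bins retain {kept:.1%} of the energy")
```

A time-domain model must attend over all 1024 samples to see the same structure; the frequency view hands it to the model in a few coefficients, which is the bottleneck-alleviation argument in the abstract.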
StarIO: A Lightweight Inertial Odometry for Nonlinear Motion
Zhang, Shanshan, Wang, Siyue, Zhang, Qi, Wu, Liqin, Wen, Tianshui, Zhou, Ziheng, Hong, Xuemin, Zheng, Lingxiang, Yang, Yu
Inertial odometry (IO) directly estimates the position of a carrier from inertial sensor measurements and serves as a core technology for the widespread deployment of consumer-grade localization systems. While existing IO methods can accurately reconstruct simple and near-linear motion trajectories, they often fail to account for drift errors caused by complex motion patterns such as turning. This limitation significantly degrades localization accuracy and restricts the applicability of IO systems in real-world scenarios. To address these challenges, we propose a lightweight IO framework. Specifically, inertial data is projected into a high-dimensional implicit nonlinear feature space using the Star Operation method, enabling the extraction of complex motion features that are typically overlooked. We further introduce a collaborative attention mechanism that jointly models global motion dynamics across both channel and temporal dimensions. In addition, we design Multi-Scale Gated Convolution Units to capture fine-grained dynamic variations throughout the motion process, thereby enhancing the model's ability to learn rich and expressive motion representations. Extensive experiments demonstrate that our proposed method consistently outperforms SOTA baselines across six widely used inertial datasets. Compared to baseline models on the RoNIN dataset, it achieves reductions in ATE ranging from 2.26% to 65.78%, thereby establishing a new benchmark in the field.
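The "star operation" the abstract refers to is, in its published general form, the element-wise product of two linear projections of the same input: (W1 x) * (W2 x). Expanding the product shows it implicitly spans pairwise feature interactions, i.e. a high-dimensional nonlinear space, at the cost of two ordinary matrix multiplies. The block below is a hedged NumPy sketch of that operation; the dimensions and initialization are illustrative, not StarIO's actual configuration.

```python
import numpy as np

class StarBlock:
    """Sketch of the 'star operation': the element-wise product of two
    linear projections, (W1 x) * (W2 x), implicitly spans pairwise
    feature products (a high-dimensional nonlinear space) while only
    computing two ordinary matrix multiplies."""
    def __init__(self, d_in, d_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=d_in ** -0.5, size=(d_in, d_hidden))
        self.W2 = rng.normal(scale=d_in ** -0.5, size=(d_in, d_hidden))

    def __call__(self, x):
        return (x @ self.W1) * (x @ self.W2)  # element-wise product

block = StarBlock(d_in=6, d_hidden=32)
imu = np.random.default_rng(1).normal(size=(16, 6))  # 16 IMU frames
out = block(imu)
print(out.shape)  # (16, 32)
```

This is why the abstract can claim nonlinear expressiveness from a "lightweight" design: the nonlinearity comes from the product structure rather than from deeper or wider layers.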
IMU-Enhanced EEG Motion Artifact Removal with Fine-Tuned Large Brain Models
Zhang, Yuhong, Zhu, Xusheng, Xu, Yuchen, Lu, ChiaEn, Shih, Hsinyu, Cauwenberghs, Gert, Jung, Tzyy-Ping
Electroencephalography (EEG) is a non-invasive method for measuring brain activity with high temporal resolution; however, EEG signals often exhibit low signal-to-noise ratios because of contamination from physiological and environmental artifacts. One of the major challenges hindering the real-world deployment of brain-computer interfaces (BCIs) involves the frequent occurrence of motion-related EEG artifacts. Most prior studies on EEG motion artifact removal rely on single-modality approaches, such as Artifact Subspace Reconstruction (ASR) and Independent Component Analysis (ICA), without incorporating simultaneously recorded modalities like inertial measurement units (IMUs), which directly capture the extent and dynamics of motion. This work proposes a fine-tuned large brain model (LaBraM)-based correlation attention mapping method that leverages spatial channel relationships in IMU data to identify motion-related artifacts in EEG signals. The fine-tuned model contains approximately 9.2 million parameters and uses 5.9 hours of EEG and IMU recordings for training, just 0.2346% of the 2500 hours used to train the base model. We compare our results against the established ASR-ICA benchmark across varying time scales and motion activities, showing that incorporating IMU reference signals significantly improves robustness under diverse motion scenarios.
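The core intuition, that IMU channels serve as a reference for which EEG activity is motion-related, can be shown with plain Pearson correlation. The sketch below flags EEG channels that correlate strongly with any IMU axis; it is a crude, assumed stand-in for the paper's learned LaBraM correlation-attention mapping, and the threshold and synthetic data are illustrative.

```python
import numpy as np

def motion_correlated_channels(eeg, imu, thresh=0.4):
    """Flag EEG channels whose absolute Pearson correlation with any
    IMU channel exceeds `thresh` - a crude stand-in for a learned
    correlation-attention mapping.

    eeg: (T, n_eeg), imu: (T, n_imu); returns a boolean mask (n_eeg,).
    """
    E = (eeg - eeg.mean(0)) / eeg.std(0)   # z-score each channel
    M = (imu - imu.mean(0)) / imu.std(0)
    corr = E.T @ M / len(eeg)              # (n_eeg, n_imu) correlations
    return np.abs(corr).max(axis=1) > thresh

rng = np.random.default_rng(0)
T = 2000
imu = rng.normal(size=(T, 3))
eeg = rng.normal(size=(T, 4))
eeg[:, 0] += 2.0 * imu[:, 0]               # inject a motion artifact
print(motion_correlated_channels(eeg, imu))  # [ True False False False]
```

A real pipeline would then clean only the flagged activity (e.g. by regression or subspace reconstruction) rather than discarding whole channels; the paper's contribution is learning this mapping instead of thresholding it.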
SSPINNpose: A Self-Supervised PINN for Inertial Pose and Dynamics Estimation
Gambietz, Markus, Dorschky, Eva, Akat, Altan, Schöckel, Marcel, Miehling, Jörg, Koelewijn, Anne D.
Accurate real-time estimation of human movement dynamics, including internal joint moments and muscle forces, is essential for applications in clinical diagnostics and sports performance monitoring. Inertial measurement units (IMUs) provide a minimally intrusive solution for capturing motion data, particularly when used in sparse sensor configurations. However, current real-time methods rely on supervised learning, where a ground truth dataset needs to be measured with laboratory measurement systems, such as optical motion capture. These systems are known to introduce measurement and processing errors and often fail to generalize to real-world or previously unseen movements, necessitating new data collection efforts that are time-consuming and impractical. To overcome these limitations, we propose SSPINNpose, a self-supervised, physics-informed neural network that estimates joint kinematics and kinetics directly from IMU data, without requiring ground truth labels for training. We run the network output through a physics model of the human body to optimize physical plausibility and generate virtual measurement data. The network is then trained by matching this virtual sensor data against the measured sensor data, rather than against a ground truth. When compared to optical motion capture, SSPINNpose is able to accurately estimate joint angles and joint moments at an RMSD of 8.7 deg and 4.9 BWBH%, respectively, for walking and running at speeds up to 4.9 m/s at a latency of 3.5 ms. Furthermore, the framework demonstrates robustness across sparse sensor configurations and can infer the anatomical locations of the sensors. These results underscore the potential of SSPINNpose as a scalable and adaptable solution for real-time biomechanical analysis in both laboratory and field environments.
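The self-supervision loop described above, predict kinematics, simulate what the sensors would have measured, and penalize the mismatch, can be sketched on a toy single-link model. Everything here (the one-link geometry, sensor placement radius, 2-axis accelerometer model) is an assumed simplification for illustration; SSPINNpose uses a full-body physics model.

```python
import numpy as np

def virtual_accel(theta, dt, r=0.3, g=9.81):
    """Virtual accelerometer for a sensor at radius r on a single
    rotating link: centripetal + tangential motion terms plus gravity,
    expressed in the rotating sensor frame. theta: joint angle (rad)."""
    omega = np.gradient(theta, dt)          # angular velocity
    alpha = np.gradient(omega, dt)          # angular acceleration
    a_cen = -r * omega ** 2                 # centripetal component
    a_tan = r * alpha                       # tangential component
    gx = g * np.sin(theta)                  # gravity in sensor frame
    gy = g * np.cos(theta)
    return np.stack([a_cen + gx, a_tan + gy], axis=1)

def self_supervised_loss(theta_pred, acc_measured, dt):
    """Match virtual to measured sensor data - no ground-truth
    joint angles are needed anywhere in this loss."""
    return np.mean((virtual_accel(theta_pred, dt) - acc_measured) ** 2)

dt = 0.01
t = np.arange(0, 2, dt)
theta_true = 0.5 * np.sin(2 * np.pi * t)     # the (unknown) true motion
measured = virtual_accel(theta_true, dt)     # what the IMU records
print(self_supervised_loss(theta_true, measured, dt))          # 0.0
print(self_supervised_loss(theta_true * 0.9, measured, dt) > 0)  # True
```

A network predicting `theta_pred` from IMU windows would be trained by gradient descent on this loss; the physics model acts as the supervisor.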
Mojito: LLM-Aided Motion Instructor with Jitter-Reduced Inertial Tokens
Shan, Ziwei, He, Yaoyu, Zhao, Chengfeng, Du, Jiashen, Zhang, Jingyan, Zhang, Qixuan, Yu, Jingyi, Xu, Lan
Human bodily movements convey critical insights into action intentions and cognitive processes, yet existing multimodal systems primarily focus on understanding human motion via language, vision, and audio, which struggle to capture the dynamic forces and torques inherent in 3D motion. Inertial measurement units (IMUs) present a promising alternative, offering lightweight, wearable, and privacy-conscious motion sensing. However, processing of streaming IMU data faces challenges such as wireless transmission instability, sensor noise, and drift, limiting its utility for long-term real-time motion capture (MoCap) and, more importantly, online motion analysis. To address these challenges, we introduce Mojito, an intelligent motion agent that integrates inertial sensing with large language models (LLMs) for interactive motion capture and behavioral analysis.
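As a minimal illustration of the jitter problem the title alludes to, the sketch below applies an exponential moving average to a noisy streaming IMU signal. This is only an assumed, simplest-possible stand-in for whatever jitter reduction Mojito actually performs before tokenizing; the filter constant and synthetic data are illustrative.

```python
import numpy as np

def ema_denoise(stream, alpha=0.2):
    """Exponential moving average over a streaming IMU signal.
    stream: (T, C); smaller alpha means heavier smoothing. Runs
    causally, so it suits online processing."""
    out = np.empty_like(stream)
    out[0] = stream[0]
    for t in range(1, len(stream)):
        out[t] = alpha * stream[t] + (1 - alpha) * out[t - 1]
    return out

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 400)
clean = np.sin(2 * np.pi * t)[:, None]        # slow underlying motion
noisy = clean + 0.3 * rng.normal(size=clean.shape)
smoothed = ema_denoise(noisy)
err_raw = np.abs(noisy - clean).mean()
err_smooth = np.abs(smoothed - clean).mean()
print(err_smooth < err_raw)  # True
```

Discretizing the smoothed stream into tokens (rather than feeding raw samples) is what makes the signal consumable by an LLM-based agent.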
Deep Learning for Motion Classification in Ankle Exoskeletons Using Surface EMG and IMU Signals
Estévez, Silas Ruhrberg, Mallah, Josée, Kazieczko, Dominika, Tang, Chenyu, Occhipinti, Luigi G.
Ankle exoskeletons have garnered considerable interest for their potential to enhance mobility and reduce fall risks, particularly among the aging population. The efficacy of these devices relies on accurate real-time prediction of the user's intended movements through sensor-based inputs. This paper presents a novel motion prediction framework that integrates three Inertial Measurement Units (IMUs) and eight surface Electromyography (sEMG) sensors to capture both kinematic and muscular activity data. A comprehensive set of activities, representative of everyday movements in barrier-free environments, was recorded for this purpose. Our findings reveal that Convolutional Neural Networks (CNNs) outperform Long Short-Term Memory (LSTM) networks on a dataset of five motion tasks, achieving classification accuracies of $96.5 \pm 0.8 \%$ and $87.5 \pm 2.9 \%$, respectively. Furthermore, we demonstrate the system's proficiency in transfer learning, enabling accurate motion classification for new subjects using just ten samples per class for fine-tuning. The robustness of the model is demonstrated by its resilience to sensor failures resulting in absent signals, maintaining reliable performance in real-world scenarios. These results underscore the potential of deep learning algorithms to enhance the functionality and safety of ankle exoskeletons, ultimately improving their usability in daily life.
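To ground the CNN-vs-LSTM comparison, here is the forward pass of a tiny 1-D CNN over a multichannel sensor window: a valid convolution along time, ReLU, global average pooling, and a linear classification head. The channel count (assumed: 3 IMUs × 6 axes + 8 sEMG = 26), kernel size, and widths are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid 1-D convolution along time.
    x: (T, C_in); kernels: (K, C_in, C_out); bias: (C_out,)."""
    K, _, C_out = kernels.shape
    T_out = x.shape[0] - K + 1
    out = np.empty((T_out, C_out))
    for t in range(T_out):
        out[t] = np.einsum('kc,kco->o', x[t:t + K], kernels) + bias
    return out

def cnn_forward(x, params):
    """Tiny 1-D CNN: conv -> ReLU -> global average pool -> linear."""
    h = np.maximum(conv1d(x, params['k'], params['b']), 0.0)
    pooled = h.mean(axis=0)                      # pool over time
    return pooled @ params['W'] + params['c']    # class logits

rng = np.random.default_rng(0)
n_classes, C_in = 5, 26   # 5 motion tasks; 3 IMUs x 6 axes + 8 sEMG
params = {
    'k': rng.normal(scale=0.1, size=(9, C_in, 16)),
    'b': np.zeros(16),
    'W': rng.normal(scale=0.1, size=(16, n_classes)),
    'c': np.zeros(n_classes),
}
window = rng.normal(size=(200, C_in))   # one 200-sample sensor window
logits = cnn_forward(window, params)
print(logits.shape)  # (5,)
```

The global pooling step is also what makes such a model naturally tolerant of a zeroed-out (failed) channel, the robustness property the abstract reports.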
A Lightweight Human Pose Estimation Approach for Edge Computing-Enabled Metaverse with Compressive Sensing
Hieu, Nguyen Quang, Hoang, Dinh Thai, Nguyen, Diep N.
The ability to estimate 3D movements of users over edge computing-enabled networks, such as 5G/6G networks, is a key enabler for the new era of extended reality (XR) and Metaverse applications. Recent advancements in deep learning have shown advantages over optimization techniques for estimating 3D human poses given sparse measurements from sensor signals, i.e., inertial measurement unit (IMU) sensors attached to the XR devices. However, the existing works lack applicability to wireless systems, where transmitting the IMU signals over noisy wireless networks poses significant challenges. Furthermore, the potential redundancy of the IMU signals has not been considered, resulting in highly redundant transmissions. In this work, we propose a novel approach for redundancy removal and lightweight transmission of IMU signals over noisy wireless environments. Our approach utilizes a random Gaussian matrix to transform the original signal into a lower-dimensional space. By leveraging the compressive sensing theory, we have proved that the designed Gaussian matrix can project the signal into a lower-dimensional space and preserve the Set-Restricted Eigenvalue condition, subject to a power transmission constraint. Furthermore, we develop a deep generative model at the receiver to recover the original IMU signals from noisy compressed data, thus enabling the creation of 3D human body movements at the receiver for XR and Metaverse applications. Simulation results on a real-world IMU dataset show that our framework can achieve highly accurate 3D human poses of the user using only $82\%$ of the measurements from the original signals. This is comparable to an optimization-based approach, i.e., Lasso, but is an order of magnitude faster.
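The transmitter side of this scheme is a single matrix multiply: project the signal through a random Gaussian matrix scaled so that norms are preserved in expectation, the concentration property underlying restricted-eigenvalue conditions. The sketch below shows that projection on an assumed sparse "IMU-like" vector; the dimensions (keeping roughly 82% of the coordinates, echoing the abstract's figure) and the sparsity level are illustrative, and the paper's generative-model receiver is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 120, 98            # keep ~82% of the original dimension
# A sparse "IMU-like" signal: a few active components
x = np.zeros(n)
x[rng.choice(n, size=5, replace=False)] = rng.normal(size=5)

# Random Gaussian measurement matrix, scaled by 1/sqrt(m) so that
# E[||A x||^2] = ||x||^2 - norms are preserved in expectation, the
# concentration behavior behind restricted-eigenvalue conditions
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x                 # compressed measurement to transmit

ratio = np.linalg.norm(y) / np.linalg.norm(x)
print(f"{m}/{n} measurements, norm ratio {ratio:.2f}")
```

The receiver then solves the underdetermined inverse problem, classically with Lasso, or, as in this paper, with a learned generative prior that is an order of magnitude faster.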
IMU2CLIP: Multimodal Contrastive Learning for IMU Motion Sensors from Egocentric Videos and Text
Moon, Seungwhan, Madotto, Andrea, Lin, Zhaojiang, Dirafzoon, Alireza, Saraf, Aparajita, Bearman, Amy, Damavandi, Babak
ABSTRACT We present IMU2CLIP, a novel pre-training approach to align Inertial Measurement Unit (IMU) motion sensor recordings with video and text, by projecting them into the joint representation space of Contrastive Language-Image Pre-training (CLIP). The proposed approach allows IMU2CLIP to translate human motions (as measured by IMU sensors) into their corresponding textual descriptions and videos - while preserving the transitivity across these modalities. We explore several new IMU-based applications that IMU2CLIP enables, such as motion-based media retrieval and natural language reasoning tasks with motion data. In addition, we show that IMU2CLIP can significantly improve the downstream performance when fine-tuned for each application. Our code and trained IMU2CLIP models will be made publicly available. (Figure 1: Illustration of IMU2CLIP (I2C), showing the model aligning IMU with video and text, and IMU2CLIP used as a retriever.)
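Aligning IMU embeddings with CLIP's space uses a contrastive objective: for a batch of paired (IMU, video) embeddings, matching pairs on the diagonal of the similarity matrix are pulled together while mismatched pairs are pushed apart. The sketch below implements the standard symmetric InfoNCE loss on synthetic embeddings; the temperature and dimensions are generic assumptions, not IMU2CLIP's training settings.

```python
import numpy as np

def info_nce(imu_emb, vid_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired IMU/video
    embeddings - the CLIP-style objective that aligns modalities."""
    imu = imu_emb / np.linalg.norm(imu_emb, axis=1, keepdims=True)
    vid = vid_emb / np.linalg.norm(vid_emb, axis=1, keepdims=True)
    logits = imu @ vid.T / temperature        # (B, B) cosine sims
    labels = np.arange(len(logits))           # diagonal pairs match

    def xent(l):
        # cross-entropy of the softmax rows against the diagonal
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
B, d = 8, 32
vid = rng.normal(size=(B, d))
aligned = info_nce(vid.copy(), vid)       # perfectly aligned pairs
mismatched = info_nce(rng.normal(size=(B, d)), vid)
print(aligned < mismatched)  # True
```

Because text and video already share CLIP's space, training the IMU encoder against either modality transfers to the other, which is the transitivity property the abstract emphasizes.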