wheelchair
Feasibility of Embodied Dynamics Based Bayesian Learning for Continuous Pursuit Motion Control of Assistive Mobile Robots in the Built Environment
Zhou, Xiaoshan, Menassa, Carol C., Kamat, Vineet R.
Non-invasive electroencephalography (EEG)-based brain-computer interfaces (BCIs) offer an intuitive means for individuals with severe motor impairments to independently operate assistive robotic wheelchairs and navigate built environments. Despite considerable progress in BCI research, most current motion control systems are limited to discrete commands rather than supporting continuous pursuit, where users can freely adjust speed and direction in real time. Such natural mobility control is, however, essential for wheelchair users to navigate complex public spaces such as transit stations, airports, hospitals, and indoor corridors, to interact socially and with agility among dynamic crowds, and to move flexibly and comfortably at will as autonomous driving technology matures. In this study, we address the gap in continuous pursuit motion control for BCIs by proposing and validating a brain-inspired Bayesian inference framework that decodes embodied dynamics from acceleration-based motor representations. This approach contrasts with conventional kinematics-level decoding and deep learning-based methods. Using a public dataset with sixteen hours of EEG from four subjects performing motor imagery-based target following, we demonstrate that our method, which combines Automatic Relevance Determination for feature selection with continual online learning, reduces the normalized mean squared error between predicted and true velocities by 72% compared to autoregressive and EEGNet-based methods in a session-accumulative transfer learning setting. Theoretically, these findings empirically support embodied cognition theory and reveal the embodied, predictive nature of the brain's intrinsic motor control dynamics. Practically, grounding EEG decoding in the same dynamical principles that govern biological motion offers a promising path toward more stable and intuitive BCI control.
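The headline result is a 72% reduction in normalized mean squared error (NMSE) between predicted and true velocities. As a minimal sketch of what that metric means, the snippet below implements one common NMSE convention (MSE divided by the variance of the true signal; the paper's exact normalization may differ) on a synthetic velocity trace with two hypothetical decoders. All signal names here are illustrative, not from the paper.

```python
import numpy as np

def nmse(y_true, y_pred):
    """Mean squared error normalized by the variance of the true signal.
    One common NMSE convention; the paper's exact normalization may differ."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

# A 72% reduction means the proposed decoder's NMSE is 0.28x the baseline's.
t = np.linspace(0.0, 2.0 * np.pi, 500)
v_true = np.sin(t)                     # stand-in "true velocity" trace
v_good = v_true + 0.1 * np.cos(3 * t)  # hypothetical low-error decoder
v_poor = v_true + 0.5 * np.cos(3 * t)  # hypothetical high-error baseline
reduction = 1.0 - nmse(v_true, v_good) / nmse(v_true, v_poor)
```

Because both synthetic decoders share the same error shape scaled by 0.1 vs 0.5, their NMSE ratio is (0.1/0.5)^2 = 0.04, i.e., a 96% reduction in this toy case.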
OpenRoboCare: A Multimodal Multi-Task Expert Demonstration Dataset for Robot Caregiving
Liang, Xiaoyu, Liu, Ziang, Lin, Kelvin, Gu, Edward, Ye, Ruolin, Nguyen, Tam, Hsu, Cynthia, Wu, Zhanxin, Yang, Xiaoman, Cheung, Christy Sum Yu, Soh, Harold, Dimitropoulou, Katherine, Bhattacharjee, Tapomayukh
We present OpenRoboCare, a multimodal dataset for robot caregiving, capturing expert occupational therapist demonstrations of Activities of Daily Living (ADLs). Caregiving tasks involve complex physical human-robot interactions, requiring precise perception under occlusions, safe physical contact, and long-horizon planning. While recent advances in robot learning from demonstrations have shown promise, there is a lack of a large-scale, diverse, and expert-driven dataset that captures real-world caregiving routines. To address this gap, we collect data from 21 occupational therapists performing 15 ADL tasks on two manikins. The dataset spans five modalities: RGB-D video, pose tracking, eye-gaze tracking, task and action annotations, and tactile sensing, providing rich multimodal insights into caregiver movement, attention, force application, and task execution strategies. We further analyze expert caregiving principles and strategies, offering insights to improve robot efficiency and task feasibility. Additionally, our evaluations demonstrate that OpenRoboCare presents challenges for state-of-the-art robot perception and human activity recognition methods, both critical for developing safe and adaptive assistive robots, highlighting the value of our contribution. See our website for additional visualizations: https://emprise.cs.cornell.edu/robo-care/.
Follow-Me in Micro-Mobility with End-to-End Imitation Learning
Salimpour, Sahar, Catalano, Iacopo, Westerlund, Tomi, Falahi, Mohsen, Queralta, Jorge Peña
Autonomous micro-mobility platforms face challenges from their typical deployment environments: large indoor spaces or urban areas that are potentially crowded and highly dynamic. While social navigation algorithms have progressed significantly, optimizing user comfort and overall user experience over other typical robotics metrics (e.g., time or distance traveled) remains understudied, even though these metrics are critical in commercial applications. In this paper, we show how imitation learning delivers smoother and overall better controllers than previously used manually tuned controllers. We demonstrate how DAAV's autonomous wheelchair achieves state-of-the-art comfort in follow-me mode, in which it follows a human operator assisting persons with reduced mobility (PRM). This paper analyzes different neural network architectures for end-to-end control and demonstrates their usability in real-world production-level deployments.
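The core idea of learning a controller by imitation can be reduced to its simplest form: fit a policy to expert (state, action) pairs. The sketch below is a hypothetical behavior-cloning toy, not the paper's method (which uses neural networks and real demonstrations): it fits a linear follow-me policy by least squares to synthetic expert data, where the state is a made-up (range error, bearing error) pair and the action is (linear velocity, angular velocity). The expert gains are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert "follow-me" demonstrations: state = (range error,
# bearing error) to the followed person; action = (linear vel, angular vel).
K_expert = np.array([[0.8, 0.0],    # v = 0.8 * range_error
                     [0.0, 1.5]])   # w = 1.5 * bearing_error
states = rng.uniform(-1.0, 1.0, size=(200, 2))
actions = states @ K_expert.T + 0.01 * rng.normal(size=(200, 2))

# Behavior cloning at its simplest: least-squares fit of a linear policy
# to the expert's state-action pairs.
K_fit, *_ = np.linalg.lstsq(states, actions, rcond=None)
K_fit = K_fit.T  # so that action ~ K_fit @ state
```

With low demonstration noise, the fit recovers the expert gains almost exactly; the paper's contribution is doing the analogous thing end-to-end with richer sensor inputs and nonlinear policies.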
EEG-based AI-BCI Wheelchair Advancement: Hybrid Deep Learning with Motor Imagery for Brain Computer Interface
Thapa, Bipul, Paneru, Biplov, Paneru, Bishwash, Poudyal, Khem Narayan
This paper presents an Artificial Intelligence (AI)-integrated novel approach to Brain-Computer Interface (BCI)-based wheelchair development, utilizing a motor imagery right-left-hand movement mechanism for control. The system is designed to simulate wheelchair navigation based on motor imagery right- and left-hand movements using electroencephalogram (EEG) data. A pre-filtered dataset, obtained from an open-source EEG repository, was segmented into arrays of 19x200 to capture the onset of hand movements. The data was acquired at a sampling frequency of 200 Hz. The system integrates a Tkinter-based interface for simulating wheelchair movements, offering users a functional and intuitive control system. We propose a BiLSTM-BiGRU model that shows a superior test accuracy of 92.26% compared with various machine learning baseline models, including XGBoost, EEGNet, and a transformer-based model. The BiLSTM-BiGRU attention-based model achieved a mean accuracy of 90.13% through cross-validation, showcasing the potential of attention mechanisms in BCI applications. Keywords: Brain Computer Interface (BCI), BiLSTM-BiGRU, Raspberry Pi, Electroencephalogram (EEG), Hybrid Deep Learning. 1. Introduction: Brain-Computer Interfaces (BCIs) are advanced systems that establish direct communication between the human brain and external devices. In recent years, BCIs have been widely investigated for their potential to assist individuals with mobility impairments, offering novel pathways for restoring autonomy. This paper proposes a BCI-based wheelchair control system driven by electroencephalography (EEG) signals associated with motor imagery. The proposed framework incorporates a variety of machine learning models with tailored hyperparameter optimization techniques, culminating in the deployment of a BiLSTM-BiGRU hybrid deep learning model for effective EEG signal classification.
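The preprocessing step the abstract describes, slicing a 200 Hz, 19-channel EEG recording into 19x200 arrays (i.e., one-second windows), can be sketched as below. The function name and the synthetic recording are illustrative; only the shapes and sampling rate come from the abstract.

```python
import numpy as np

FS = 200          # sampling rate reported in the abstract (Hz)
N_CHANNELS = 19   # EEG channels, matching the 19x200 segment shape
WIN = 200         # 200 samples at 200 Hz = 1-second windows

def segment_eeg(eeg, win=WIN, step=WIN):
    """Slice a (channels, samples) recording into (n, channels, win) windows.
    Non-overlapping by default; pass step < win for overlapping windows."""
    n_ch, n_samp = eeg.shape
    starts = range(0, n_samp - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])

recording = np.random.randn(N_CHANNELS, 10 * FS)  # 10 s of synthetic "EEG"
segments = segment_eeg(recording)                 # shape (10, 19, 200)
```

Each resulting (19, 200) segment is the unit the paper's BiLSTM-BiGRU classifier would consume as one training example.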
A Standing Support Mobility Robot for Enhancing Independence in Elderly Daily Living
Manríquez-Cisterna, Ricardo J., Ravankar, Ankit A., Luces, Jose V. Salazar, Hatsukari, Takuro, Hirata, Yasuhisa
This paper presents a standing support mobility robot, "Moby", developed to enhance independence and safety for elderly individuals during daily activities such as toilet transfers. Unlike conventional seated mobility aids, the robot maintains users in an upright posture, reducing physical strain, supporting natural social interaction at eye level, and fostering a greater sense of self-efficacy. Moby offers a novel alternative by functioning both passively and with mobility support, enabling users to perform daily tasks more independently. Its main advantages include ease of use, lightweight design, comfort, versatility, and effective sit-to-stand assistance. A custom control system enables safe and intuitive interaction, while integration with Nav2 and LiDAR allows for robust navigation capabilities. This paper reviews existing mobility solutions and compares them to Moby, details the robot's design, and presents objective and subjective experimental results using the NASA-TLX method and time comparisons to other methods to validate our design criteria and demonstrate the advantages of our contribution. I. Introduction: As global life expectancy continues to rise, societies around the world are confronting the challenge of supporting an aging population with limited caregiving resources [1]. This issue is particularly pronounced in countries like Japan, where nearly one in three individuals will be aged 65 or older by the year 2036 [2], [3].
Shared Control of Holonomic Wheelchairs through Reinforcement Learning
Bähler, Jannis, Paez-Granados, Diego, Peña-Queralta, Jorge
Smart electric wheelchairs can improve user experience by supporting the driver with shared control. State-of-the-art work showed the potential of shared control in improving safety in navigation for non-holonomic robots. However, for holonomic systems, current approaches often lead to unintuitive behavior for the user and fail to utilize the full potential of omnidirectional driving. Therefore, we propose a reinforcement learning-based method, which takes a 2D user input and outputs a 3D motion while ensuring user comfort and reducing cognitive load on the driver. Our approach is trained in Isaac Gym and tested in simulation in Gazebo. We compare different RL agent architectures and reward functions based on metrics considering cognitive load and user comfort. We show that our method ensures collision-free navigation while smartly orienting the wheelchair and showing better or competitive smoothness compared to a previous non-learning-based method. We further perform a sim-to-real transfer and demonstrate, to the best of our knowledge, the first real-world implementation of RL-based shared control for an omnidirectional mobility platform.
CoNav Chair: Development and Evaluation of a Shared Control based Wheelchair for the Built Environment
Xu, Yifan, Wang, Qianwei, Lillie, Jordan, Kamat, Vineet, Menassa, Carol, D'Souza, Clive
As the global population of people with disabilities (PWD) continues to grow, so will the need for mobility solutions that promote independent living and social integration. Wheelchairs are vital for the mobility of PWD in both indoor and outdoor environments. The current SOTA in powered wheelchairs is based on either manually controlled or fully autonomous modes of operation, offering limited flexibility and often proving difficult to navigate in spatially constrained environments. Moreover, research on robotic wheelchairs has focused predominantly on complete autonomy or improved manual control, approaches that can compromise efficiency and user trust. To overcome these challenges, this paper introduces the CoNav Chair, a smart wheelchair based on the Robot Operating System (ROS) and featuring shared control navigation and obstacle avoidance capabilities that are intended to enhance navigational efficiency, safety, and ease of use for the user. The paper outlines the CoNav Chair's design and presents a preliminary usability evaluation comparing three distinct navigation modes, namely manual, shared, and fully autonomous, conducted with 21 healthy, unimpaired participants traversing an indoor building environment. Study findings indicated that the shared control navigation framework had significantly fewer collisions and performed comparably to, if not better than, the autonomous and manual modes on task completion time, trajectory length, and smoothness, and was perceived as safer and more efficient based on user-reported subjective assessments of usability. Overall, the CoNav system demonstrated acceptable safety and performance, laying the foundation for subsequent usability testing with end users, namely PWDs who rely on a powered wheelchair for mobility.
PulseRide: A Robotic Wheelchair for Personalized Exertion Control with Human-in-the-Loop Reinforcement Learning
Zahid, Azizul, Poudel, Bibek, Scott, Danny, Scott, Jason, Crouter, Scott, Li, Weizi, Swaminathan, Sai
Maintaining an active lifestyle is vital for quality of life, yet challenging for wheelchair users. For instance, powered wheelchair users face increasing risks of obesity and deconditioning due to inactivity. Conversely, manual wheelchair users, who propel themselves by pushing the wheelchair's handrims, often face upper-extremity injuries from repetitive motions. These challenges underscore the need for a mobility system that promotes activity while minimizing injury risk. Maintaining optimal exertion during wheelchair use enhances health benefits and engagement, yet variations in individual physiological responses complicate exertion optimization. To address this, we introduce PulseRide, a novel wheelchair system that provides personalized assistance based on each user's physiological responses, helping them maintain their physical exertion goals. Unlike conventional assistive systems focused on obstacle avoidance and navigation, PulseRide integrates real-time physiological data, such as heart rate and ECG, with wheelchair speed to deliver adaptive assistance. Using a human-in-the-loop reinforcement learning approach with the Deep Q-Network (DQN) algorithm, the system adjusts push assistance to keep users within a moderate activity range without under- or over-exertion. We conducted preliminary tests with 10 users on various terrains, including carpet and slate, to assess PulseRide's effectiveness. Our findings show that, for individual users, PulseRide maintains heart rates within the moderate activity zone as much as 71.7 percent longer than manual wheelchairs. Among all users, we observed an average reduction in muscle contractions of 41.86 percent, delaying fatigue onset and enhancing overall comfort and engagement. These results indicate that PulseRide offers a healthier, adaptive mobility solution, bridging the gap between passive and physically taxing mobility options.
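The control principle, learning an assistance policy that keeps the user's exertion in a moderate zone, can be illustrated with a tabular Q-learning toy. This is a sketch under heavy assumptions, not PulseRide's DQN: states are collapsed to three hypothetical heart-rate zones, actions to three assistance levels, and the dynamics (assistance lowers exertion, less assistance raises it) are invented for the example.

```python
import random

random.seed(0)

# Toy analogue of PulseRide's control loop. The paper uses a DQN over
# continuous physiological signals; this tabular sketch shows the principle.
STATES = ["below", "moderate", "above"]          # HR relative to target zone
ACTIONS = ["less_assist", "hold", "more_assist"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Hypothetical dynamics: more assistance lowers exertion, less raises
    it. Reward +1 only while the user stays in the moderate zone."""
    i = STATES.index(state)
    if action == "more_assist":
        i = max(0, i - 1)
    elif action == "less_assist":
        i = min(2, i + 1)
    nxt = STATES[i]
    return nxt, (1.0 if nxt == "moderate" else 0.0)

alpha, gamma, eps = 0.2, 0.9, 0.1
state = "below"
for _ in range(5000):
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(state, x)])
    nxt, r = step(state, a)
    best_next = max(Q[(nxt, x)] for x in ACTIONS)
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    state = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

The learned policy reduces assistance when exertion is too low, holds in the moderate zone, and increases assistance when exertion is too high, which is the qualitative behavior the paper reports for its DQN controller.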
Our Best Friend Is Dying. This Controversial Tool Helped Us Laugh.
Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. Two winters ago, more than a year after my old college roommate and dear friend Paul was diagnosed with ALS, he started making pictures. By then, he was gradually losing the ability to do almost everything else. He could still walk at that point, often through the leafy corner of his Boston neighborhood, Jamaica Plain, where the old tree limbs cradled the houses and the streets were barely wide enough for a car, but only with the help of a cane. A condition of the disease called bulbar palsy slowed his tongue to the point his words wobbled enough that he sounded as if he were drunk. He could eat solid foods, albeit with some trouble, and could drink the Relyvrio medication powder he swirled with a spoon into a glass of water twice daily--a prescription for ALS that last year clinical trials suggested was ineffective, and a cocktail so bitter it made him physically wince--but he began coughing more and more as he labored to swallow anything at all.
CART-MPC: Coordinating Assistive Devices for Robot-Assisted Transferring with Multi-Agent Model Predictive Control
Ye, Ruolin, Chen, Shuaixing, Yan, Yunting, Yang, Joyce, Ge, Christina, Barreiros, Jose, Tsui, Kate, Silver, Tom, Bhattacharjee, Tapomayukh
Bed-to-wheelchair transferring is a ubiquitous activity of daily living (ADL), but especially challenging for caregiving robots with limited payloads. We develop a novel algorithm that leverages the presence of other assistive devices: a Hoyer sling and a wheelchair for coarse manipulation of heavy loads, alongside a robot arm for fine-grained manipulation of deformable objects (Hoyer sling straps). We instrument the Hoyer sling and wheelchair with actuators and sensors so that they can become intelligent agents in the algorithm. We then focus on one subtask of the transferring ADL -- tying Hoyer sling straps to the sling bar -- that exemplifies the challenges of transfer: multi-agent planning, deformable object manipulation, and generalization to varying hook shapes, sling materials, and care recipient bodies. To address these challenges, we propose CART-MPC, a novel algorithm based on turn-taking multi-agent model predictive control that uses a learned neural dynamics model for a keypoint-based representation of the deformable Hoyer sling strap, and a novel cost function that leverages linking numbers from knot theory and neural amortization to accelerate inference. We validate it in both RCareWorld simulation and real-world environments. In simulation, CART-MPC successfully generalizes across diverse hook designs, sling materials, and care recipient body shapes. In the real world, we show zero-shot sim-to-real generalization capabilities to tie deformable Hoyer sling straps on a sling bar towards transferring a manikin from a hospital bed to a wheelchair. See our website for supplementary materials: https://emprise.cs.cornell.edu/cart-mpc/.
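The cost function's use of linking numbers from knot theory can be made concrete with the Gauss linking integral, which measures how many times one closed curve winds around another. The sketch below is a generic numerical implementation of that integral (not the paper's neural-amortized version), checked on a Hopf link, two circles threaded through each other, whose linking number is 1 in absolute value.

```python
import numpy as np

def linking_number(c1, c2):
    """Discrete Gauss linking integral for two closed polylines, each an
    (N, 3) array of vertices. Approaches the integer linking number as the
    discretization is refined (the curves must not touch)."""
    d1 = np.roll(c1, -1, axis=0) - c1            # edge vectors of curve 1
    d2 = np.roll(c2, -1, axis=0) - c2            # edge vectors of curve 2
    m1 = c1 + 0.5 * d1                           # edge midpoints
    m2 = c2 + 0.5 * d2
    r = m1[:, None, :] - m2[None, :, :]          # pairwise midpoint offsets
    triple = np.einsum("ijk,ijk->ij",
                       np.cross(d1[:, None, :], d2[None, :, :]), r)
    return np.sum(triple / np.linalg.norm(r, axis=-1) ** 3) / (4.0 * np.pi)

# Hopf link: unit circle in the xy-plane, and a unit circle in the xz-plane
# shifted so it threads through the first.
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
hopf_1 = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
hopf_2 = np.stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
lk = linking_number(hopf_1, hopf_2)              # ~ +/-1 for a Hopf link
```

In the transfer task, a strap correctly tied around the sling bar and an untied strap differ in exactly this kind of topological invariant, which is why it makes a useful cost term.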