OpenRoboCare: A Multimodal Multi-Task Expert Demonstration Dataset for Robot Caregiving
Liang, Xiaoyu, Liu, Ziang, Lin, Kelvin, Gu, Edward, Ye, Ruolin, Nguyen, Tam, Hsu, Cynthia, Wu, Zhanxin, Yang, Xiaoman, Cheung, Christy Sum Yu, Soh, Harold, Dimitropoulou, Katherine, Bhattacharjee, Tapomayukh
We present OpenRoboCare, a multimodal dataset for robot caregiving, capturing expert occupational therapist demonstrations of Activities of Daily Living (ADLs). Caregiving tasks involve complex physical human-robot interactions, requiring precise perception under occlusions, safe physical contact, and long-horizon planning. While recent advances in robot learning from demonstrations have shown promise, there is a lack of a large-scale, diverse, and expert-driven dataset that captures real-world caregiving routines. To address this gap, we collect data from 21 occupational therapists performing 15 ADL tasks on two manikins. The dataset spans five modalities: RGB-D video, pose tracking, eye-gaze tracking, task and action annotations, and tactile sensing, providing rich multimodal insights into caregiver movement, attention, force application, and task execution strategies. We further analyze expert caregiving principles and strategies, offering insights to improve robot efficiency and task feasibility. Additionally, our evaluations demonstrate that OpenRoboCare presents challenges for state-of-the-art robot perception and human activity recognition methods, both critical for developing safe and adaptive assistive robots, highlighting the value of our contribution. See our website for additional visualizations: https://emprise.cs.cornell.edu/robo-care/.
Analyzing Gait Adaptation with Hemiplegia Simulation Suits and Digital Twins
Chen, Jialin, Clos, Jeremie, Price, Dominic, Caleb-Solly, Praminda
To advance the development of assistive and rehabilitation robots, it is essential to conduct experiments early in the design cycle. However, testing early prototypes directly with users can pose safety risks. To address this, we explore the use of condition-specific simulation suits worn by healthy participants in controlled environments as a means to study gait changes associated with various impairments and support rapid prototyping. This paper presents a study analyzing the impact of a hemiplegia simulation suit on gait. We collected biomechanical data using a Vicon motion capture system and Delsys Trigno EMG and IMU sensors under four walking conditions: with and without a rollator, and with and without the simulation suit. The gait data was integrated into a digital twin model, enabling machine learning analyses to detect the use of the simulation suit and rollator, identify turning behavior, and evaluate how the suit affects gait over time. Our findings show that the simulation suit significantly alters movement and muscle activation patterns, prompting users to compensate with more abrupt motions. We also identify key features and sensor modalities that are most informative for accurately capturing gait dynamics and modeling human-rollator interaction within the digital twin framework.
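One analysis the abstract describes, detecting whether the simulation suit is worn from "more abrupt" motion, could be sketched with a simple jerk-based rule. This is purely illustrative: the feature and threshold are assumptions, not the paper's actual machine-learning pipeline over Vicon, EMG, and IMU data.

```python
def jerk_score(accel, dt):
    """Mean absolute derivative of an acceleration trace -- a crude
    proxy for how abrupt the movement is."""
    n = len(accel) - 1
    return sum(abs(accel[i + 1] - accel[i]) for i in range(n)) / (n * dt)

def wearing_suit(accel, dt, threshold=5.0):
    """Flag a trial as suit-on when motion is more abrupt than a
    (hypothetical) calibrated threshold."""
    return jerk_score(accel, dt) > threshold
```

A full digital-twin analysis would replace this single feature with the multi-sensor feature set the paper identifies as most informative.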
Unraveling the Connection: How Cognitive Workload Shapes Intent Recognition in Robot-Assisted Surgery
Sharma, Mansi, Kruger, Antonio
Robot-assisted surgery has revolutionized the healthcare industry by providing surgeons with greater precision, reducing invasiveness, and improving patient outcomes. However, the success of these surgeries depends heavily on the robotic system's ability to accurately interpret the intentions of surgical trainees and surgeons. One critical factor impacting intent recognition is the cognitive workload experienced during the procedure. In our recent research project, we are building an intelligent adaptive system to monitor cognitive workload and improve learning outcomes in robot-assisted surgery. The project focuses on achieving a semantic understanding of surgeon intents and monitoring their mental state through an intelligent multi-modal assistive framework. This system will utilize brain activity, heart rate, muscle activity, and eye tracking to enhance intent recognition, even in mentally demanding situations. By improving the robotic system's ability to interpret the surgeon's intentions, we can further enhance the benefits of robot-assisted surgery and improve surgical outcomes.
A Framework for Adaptive Load Redistribution in Human-Exoskeleton-Cobot Systems
Mobedi, Emir, Solak, Gokhan, Ajoudani, Arash
Wearable devices like exoskeletons are designed to reduce excessive loads on specific joints of the body. Specifically, single- or two-degrees-of-freedom (DOF) upper-body industrial exoskeletons typically focus on compensating for the strain on the elbow and shoulder joints. However, during daily activities, there is no assurance that external loads are correctly aligned with the supported joints. Optimizing work processes so that external loads are directed primarily onto the supported joints (to the extent that the exoskeleton can compensate for them) can significantly enhance the overall usability of these devices and the ergonomics of their users. Collaborative robots (cobots) can play a role in this optimization, complementing the collaborative aspects of human work. In this study, we propose an adaptive and coordinated control system for the human-cobot-exoskeleton interaction. This system adjusts the task coordinates to maximize the utilization of the supported joints. When the torque limits of the exoskeleton are exceeded, the framework continuously adapts the task frame, redistributing excessive loads to non-supported body joints to prevent overloading the supported ones. We validated our approach in an equivalent industrial painting task involving a single-DOF elbow exoskeleton, a cobot, and four subjects, each tested in four different initial arm configurations with five distinct optimization weight matrices and two different payloads. Manual operations such as packaging [1], assembly [2], and painting [3] are essential in many industries, though they can place a significant strain on the physical health of human workers.
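The core idea of adapting the task frame when the exoskeleton's torque limit is exceeded can be illustrated with a one-joint toy rule. This is a minimal sketch under the assumption that elbow torque scales with payload times horizontal reach; the paper's actual controller optimizes full task coordinates with weight matrices.

```python
def adapt_task_frame(reach_m, payload_n, tau_max_nm):
    """If the elbow torque (payload * reach) exceeds the exoskeleton
    limit, shorten the horizontal reach so the supported joint stays
    within its limit; the excess load then shifts to other joints."""
    tau = payload_n * reach_m
    if tau <= tau_max_nm:
        return reach_m          # within limits: leave the task frame alone
    return tau_max_nm / payload_n  # reach at which torque equals the limit
```

In the full system the cobot repositions the shared workpiece to realize this adapted frame, rather than asking the human to change posture.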
Exo-muscle: A semi-rigid assistive device for the knee
Zhang, Yifang, Ajoudani, Arash, Tsagarakis, Nikos G
In this work, we introduce the principle, design and mechatronics of Exo-Muscle, a novel assistive device for the knee joint. Different from existing systems based on rigid exoskeleton structures or soft tendon-driven approaches, the proposed device leverages a new semi-rigid principle that explores the benefits of both rigid and soft systems. The use of a novel semi-rigid chain mechanism around the knee joint eliminates misalignment between the device and the knee joint's center of rotation, while at the same time forming a well-defined route for the tendon. This results in more deterministic load compensation compared to fully soft systems. The proposed device can provide up to 38 Nm of assistive torque to the knee joint. In the experimental section, the device was successfully validated through a series of experiments demonstrating its capacity to provide the target assistive functionality at the knee joint.
VocalEyes: Enhancing Environmental Perception for the Visually Impaired through Vision-Language Models and Distance-Aware Object Detection
Chavan, Kunal, Balaji, Keertan, Barigidad, Spoorti, Chiluveru, Samba Raju
With an increasing demand for assistive technologies that promote the independence and mobility of visually impaired people, this study suggests an innovative real-time system that gives audio descriptions of a user's surroundings to improve situational awareness. The system acquires live video input and processes it with a quantized and fine-tuned Florence-2 large model, reduced to 4-bit precision for efficient operation on low-power edge devices such as the NVIDIA Jetson Orin Nano. By transforming the video signal into frames with a 5-frame latency, the model provides rapid and contextually pertinent descriptions of objects, pedestrians, and barriers, together with their estimated distances. The system employs Parler TTS Mini, a lightweight and adaptable Text-to-Speech (TTS) solution, for efficient audio feedback. It accommodates 34 distinct speaker types and enables customization of speech tone, pace, and style to suit user requirements. This study examines the quantization and fine-tuning techniques utilized to adapt the Florence-2 model for this application, illustrating how the integration of a compact model architecture with a versatile TTS component improves real-time performance and user experience. The proposed system is assessed based on its accuracy, efficiency, and usefulness, providing a viable option to aid vision-impaired users in navigating their surroundings securely and successfully.
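The frame-buffered pipeline the abstract describes could be structured as below. This is an illustrative sketch of one reading of the stated 5-frame latency (a description for frame t is emitted once frame t+5 arrives); the captioner callable stands in for the quantized Florence-2 model, and the class name is hypothetical.

```python
from collections import deque

class DescriptionPipeline:
    """Buffers incoming frames and emits a caption for the oldest frame
    once the buffer depth exceeds the latency budget."""
    def __init__(self, captioner, latency=5):
        self.buf = deque()
        self.captioner = captioner  # stand-in for the VLM inference call
        self.latency = latency

    def push(self, frame):
        """Add a frame; return a description when one becomes due."""
        self.buf.append(frame)
        if len(self.buf) > self.latency:
            return self.captioner(self.buf.popleft())
        return None
```

In the deployed system the returned text would be handed to the TTS engine rather than returned to the caller.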
CART-MPC: Coordinating Assistive Devices for Robot-Assisted Transferring with Multi-Agent Model Predictive Control
Ye, Ruolin, Chen, Shuaixing, Yan, Yunting, Yang, Joyce, Ge, Christina, Barreiros, Jose, Tsui, Kate, Silver, Tom, Bhattacharjee, Tapomayukh
Bed-to-wheelchair transferring is a ubiquitous activity of daily living (ADL), but especially challenging for caregiving robots with limited payloads. We develop a novel algorithm that leverages the presence of other assistive devices: a Hoyer sling and a wheelchair for coarse manipulation of heavy loads, alongside a robot arm for fine-grained manipulation of deformable objects (Hoyer sling straps). We instrument the Hoyer sling and wheelchair with actuators and sensors so that they can become intelligent agents in the algorithm. We then focus on one subtask of the transferring ADL -- tying Hoyer sling straps to the sling bar -- that exemplifies the challenges of transfer: multi-agent planning, deformable object manipulation, and generalization to varying hook shapes, sling materials, and care recipient bodies. To address these challenges, we propose CART-MPC, a novel algorithm based on turn-taking multi-agent model predictive control that uses a learned neural dynamics model for a keypoint-based representation of the deformable Hoyer sling strap, and a novel cost function that leverages linking numbers from knot theory and neural amortization to accelerate inference. We validate it in both RCareWorld simulation and real-world environments. In simulation, CART-MPC successfully generalizes across diverse hook designs, sling materials, and care recipient body shapes. In the real world, we show zero-shot sim-to-real generalization capabilities to tie deformable Hoyer sling straps on a sling bar towards transferring a manikin from a hospital bed to a wheelchair. See our website for supplementary materials: https://emprise.cs.cornell.edu/cart-mpc/.
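The turn-taking structure of CART-MPC can be sketched with a toy coordinate-descent loop: agents optimize one at a time while the others' latest plans are held fixed. Everything below is a simplification under stated assumptions: random-shooting optimization replaces the paper's planner, and the cost callable stands in for the learned keypoint dynamics plus the linking-number cost.

```python
import random

def turn_taking_mpc(agents, state, cost, horizon=3, samples=32, rounds=2):
    """Each agent, in turn, samples candidate action sequences and keeps
    the one that lowers the joint cost given the other agents' current
    plans. Cost is non-increasing across turns by construction."""
    plans = {a: [0.0] * horizon for a in agents}  # zero-action init
    for _ in range(rounds):
        for a in agents:
            best, best_c = plans[a], cost(state, plans)
            for _ in range(samples):  # random shooting over this agent only
                cand = [random.uniform(-1, 1) for _ in range(horizon)]
                c = cost(state, dict(plans, **{a: cand}))
                if c < best_c:
                    best, best_c = cand, c
            plans[a] = best
    return plans
```

In the real system the "agents" are the robot arm, the actuated Hoyer sling, and the wheelchair, and the rollout uses the learned neural dynamics model rather than an analytic cost.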
A User Study Method on Healthy Participants for Assessing an Assistive Wearable Robot Utilising EMG Sensing
Suulker, Cem, Greenway, Alexander, Skach, Sophie, Farkhatdinov, Ildar, Miller, Stuart Charles, Althoefer, Kaspar
Hand-wearable robots, specifically exoskeletons, are designed to aid hands in daily activities, playing a crucial role in post-stroke rehabilitation and in assisting the elderly. Our contribution to this field is a textile robotic glove with integrated actuators. These actuators, powered by pneumatic pressure, guide the user's hand to a desired position. Crafted from textile materials, our soft robotic glove prioritizes safety, lightweight construction, and user comfort. Fabricated using the ruffles technique, the integrated actuators deliver high blocking force and bending effectiveness. Here, we present a participant study confirming the effectiveness of our robotic device on a healthy participant group, utilising EMG sensing.
Embracing Large Language and Multimodal Models for Prosthetic Technologies
Dey, Sharmita, Schilling, Arndt F.
This article presents a vision for the future of prosthetic devices, leveraging the advancements in large language models (LLMs) and Large Multimodal Models (LMMs) to revolutionize the interaction between humans and assistive technologies. Unlike traditional prostheses, which rely on limited and predefined commands, this approach aims to develop intelligent prostheses that understand and respond to users' needs through natural language and multimodal inputs. The realization of this vision involves developing a control system capable of understanding and translating a wide array of natural language and multimodal inputs into actionable commands for prosthetic devices. This includes the creation of models that can extract and interpret features from both textual and multimodal data, ensuring devices not only follow user commands but also respond intelligently to the environment and user intent, thus marking a significant leap forward in prosthetic technology.
Grasp Force Assistance via Throttle-based Wrist Angle Control on a Robotic Hand Orthosis for C6-C7 Spinal Cord Injury
Palacios, Joaquin, Deli-Ivanov, Alexandra, Chen, Ava, Winterbottom, Lauren, Nilsen, Dawn M., Stein, Joel, Ciocarlie, Matei
Individuals with hand paralysis resulting from C6-C7 spinal cord injuries frequently rely on tenodesis for grasping. However, tenodesis generates limited grasping force and demands constant exertion to maintain a grasp, leading to fatigue and sometimes pain. We introduce the MyHand-SCI, a wearable robot that provides grasping assistance through motorized exotendons. Our user-driven device enables independent, ipsilateral operation via a novel Throttle-based Wrist Angle control method, which allows users to maintain grasps without continued wrist extension. A pilot case study with a person with C6 spinal cord injury shows an improvement in functional grasping and grasping force, as well as a preserved ability to modulate grasping force while using our device, thus improving their ability to manipulate everyday objects. This research is a step towards developing effective and intuitive wearable assistive devices for individuals with spinal cord injury.
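The throttle idea, wrist extension commanding grasp tightening rate rather than grasp position, can be illustrated with a simple update rule. All parameters here (thresholds, gain, time step) are hypothetical placeholders, not the MyHand-SCI controller's values.

```python
def throttle_update(grip, wrist_deg, close_thresh=15.0, open_thresh=-10.0,
                    gain=0.02, dt=0.05):
    """Wrist extension beyond close_thresh tightens the grasp at a rate
    proportional to the excess angle; flexion past open_thresh releases
    it; in the dead band between them the grasp simply holds, so no
    sustained extension is needed to maintain a grip."""
    if wrist_deg > close_thresh:
        grip += gain * (wrist_deg - close_thresh) * dt
    elif wrist_deg < open_thresh:
        grip += gain * (wrist_deg - open_thresh) * dt
    return min(max(grip, 0.0), 1.0)  # clamp to [fully open, fully closed]
```

The dead band is what distinguishes this from position-mapped tenodesis control: returning the wrist to neutral leaves the exotendons where they are, avoiding the constant exertion the abstract identifies as fatiguing.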