Majidi, Carmel
Towards Wearable Interfaces for Robotic Caregiving
Padmanabha, Akhil, Majidi, Carmel, Erickson, Zackory
Physically assistive robots in home environments can enhance the autonomy of individuals with impairments, allowing them to regain the ability to perform self-care and household tasks. Individuals with physical limitations may find existing interfaces challenging to use, highlighting the need for novel interfaces that can effectively support them. In this work, we present insights on the design and evaluation of an active control wearable interface named HAT (Head-Worn Assistive Teleoperation). To reduce user workload when operating such interfaces, we propose and evaluate a shared control algorithm named Driver Assistance. Finally, we introduce the concept of passive control, in which wearable interfaces detect implicit human signals to inform and guide robotic actions during caregiving tasks, with the aim of reducing user workload while potentially preserving the feeling of control.
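The abstract does not detail the Driver Assistance algorithm itself. One common shared-control pattern it evokes is a convex blend of the user's teleoperation command with an autonomy-generated assistance command; a minimal sketch, with all names and the blending scheme assumed for illustration rather than taken from the paper:

```python
import numpy as np

def blend_commands(user_cmd, assist_cmd, alpha=0.5):
    """Blend a teleoperator's velocity command with an autonomous assistance
    command. alpha=0 gives full user control; alpha=1 gives full autonomy.
    This is an illustrative shared-control skeleton, not the paper's
    Driver Assistance implementation."""
    user_cmd = np.asarray(user_cmd, dtype=float)
    assist_cmd = np.asarray(assist_cmd, dtype=float)
    return (1.0 - alpha) * user_cmd + alpha * assist_cmd
```

In practice the blending weight is often scheduled, e.g., increasing assistance as the end effector nears a detected grasp target, so the user retains authority during free-space motion.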
Q-learning-based Model-free Safety Filter
Sue, Guo Ning, Choudhary, Yogita, Desatnik, Richard, Majidi, Carmel, Dolan, John, Shi, Guanya
Ensuring safety via safety filters in real-world robotics presents significant challenges, particularly when the system dynamics are complex or unavailable. To address this issue, learning-based safety filters, which can be classified as model-based or model-free, have recently gained popularity. Existing model-based approaches require various assumptions on the system model (e.g., control-affine dynamics), which limits their application to complex systems, while existing model-free approaches require substantial modifications to standard RL algorithms and lack versatility. This paper proposes a simple, plug-and-play, and effective model-free safety filter learning framework. We introduce a novel reward formulation and use Q-learning to learn Q-value functions that safeguard arbitrary task-specific nominal policies by filtering out their potentially unsafe actions. The threshold used in the filtering process is supported by our theoretical analysis. Due to its model-free nature and simplicity, our framework can be seamlessly integrated with various RL algorithms. We validate the proposed approach through simulations on double-integrator and Dubins car systems and demonstrate its effectiveness in real-world experiments with a soft robotic limb.
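As a rough illustration of the filtering step described above: a learned safety critic scores the nominal policy's proposed action, and actions scoring below a threshold are replaced by a safer alternative. The reward formulation and the theoretically justified threshold are the paper's contributions and are not reproduced here; this sketch shows only the plug-and-play filtering skeleton, with hypothetical function names:

```python
import numpy as np

def safety_filter(state, nominal_action, safety_q, candidate_actions, threshold=0.0):
    """Model-free safety filter sketch. `safety_q(state, action)` is a learned
    safety critic; if the nominal action scores at or above `threshold`, it
    passes through unchanged. Otherwise we fall back to the candidate action
    the critic scores highest. Names and threshold are illustrative."""
    if safety_q(state, nominal_action) >= threshold:
        return nominal_action  # nominal action deemed safe; pass through
    # Replace the unsafe action with the safest available candidate.
    q_values = [safety_q(state, a) for a in candidate_actions]
    return candidate_actions[int(np.argmax(q_values))]
```

Because the filter only wraps the nominal policy's output, the task policy can come from any RL algorithm, which is what makes the approach plug-and-play.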
Towards an LLM-Based Speech Interface for Robot-Assisted Feeding
Yuan, Jessie, Gupta, Janavi, Padmanabha, Akhil, Karachiwalla, Zulekha, Majidi, Carmel, Admoni, Henny, Erickson, Zackory
Physically assistive robots present an opportunity to significantly increase the well-being and independence of individuals with motor impairments or other forms of disability who are unable to complete activities of daily living (ADLs). Speech interfaces, especially ones that utilize Large Language Models (LLMs), can enable individuals to effectively and naturally communicate high-level commands and nuanced preferences to robots. In this work, we demonstrate an LLM-based speech interface for a commercially available assistive feeding robot. Our system builds on the iteratively designed framework from the paper "VoicePilot: Harnessing LLMs as Speech Interfaces for Physically Assistive Robots," which incorporates human-centric elements for integrating LLMs as interfaces for robots. It has been evaluated through a user study with 11 older adults at an independent living facility. Videos are located on our project website: https://sites.google.com/andrew.cmu.edu/voicepilot/.
VoicePilot: Harnessing LLMs as Speech Interfaces for Physically Assistive Robots
Padmanabha, Akhil, Yuan, Jessie, Gupta, Janavi, Karachiwalla, Zulekha, Majidi, Carmel, Admoni, Henny, Erickson, Zackory
Physically assistive robots present an opportunity to significantly increase the well-being and independence of individuals with motor impairments or other forms of disability who are unable to complete activities of daily living. Speech interfaces, especially ones that utilize Large Language Models (LLMs), can enable individuals to effectively and naturally communicate high-level commands and nuanced preferences to robots. Frameworks for integrating LLMs as interfaces to robots for high-level task planning and code generation have been proposed, but they fail to incorporate the human-centric considerations that are essential when developing assistive interfaces. In this work, we present a framework for incorporating LLMs as speech interfaces for physically assistive robots, constructed iteratively through three stages of testing involving a feeding robot and culminating in an evaluation with 11 older adults at an independent living facility. We use both quantitative and qualitative data from the final study to validate our framework and additionally provide design guidelines for using LLMs as speech interfaces for assistive robots. Videos and supporting files are located on our project website: https://sites.google.com/andrew.cmu.edu/voicepilot/
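A hedged sketch of the kind of speech-to-robot pipeline such a framework implies: transcribe the utterance, prompt the LLM with the robot's available primitives, and execute the returned calls. The primitive set and every function name below are illustrative stand-ins, not the paper's actual implementation:

```python
# Hypothetical LLM speech-interface loop; transcribe(), llm_complete(), and
# execute() are assumed callables, and the primitive set is invented for
# illustration only.

ROBOT_PRIMITIVES = """
You control an assistive feeding robot. Respond ONLY with calls to:
  scoop(), move_to_mouth(), set_speed(level), stop()
"""

def handle_utterance(audio, transcribe, llm_complete, execute):
    command_text = transcribe(audio)              # speech-to-text
    prompt = ROBOT_PRIMITIVES + "\nUser: " + command_text
    plan = llm_complete(prompt)                   # LLM maps speech to primitives
    for line in plan.splitlines():                # execute each primitive call
        if line.strip():
            execute(line.strip())
```

Human-centric elements of the kind the framework emphasizes, such as confirmation prompts, interruptibility, and latency feedback, would wrap around this core loop.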
Hierarchical State Space Models for Continuous Sequence-to-Sequence Modeling
Bhirangi, Raunaq, Wang, Chenyu, Pattabiraman, Venkatesh, Majidi, Carmel, Gupta, Abhinav, Hellebrekers, Tess, Pinto, Lerrel
Reasoning from sequences of raw sensory data is a ubiquitous problem across fields ranging from medical devices to robotics. These problems often involve using long sequences of raw sensor data (e.g., magnetometers, piezoresistors) to predict sequences of desirable physical quantities (e.g., force, inertial measurements). While classical approaches are powerful for locally linear prediction problems, they often fall short when using real-world sensors, which are typically non-linear, are affected by extraneous variables (e.g., vibration), and exhibit data-dependent drift. For many problems, the prediction task is exacerbated by small labeled datasets, since obtaining ground-truth labels requires expensive equipment. In this work, we present Hierarchical State-Space Models (HiSS), a conceptually simple new technique for continuous sequential prediction. HiSS stacks structured state-space models on top of each other to create a temporal hierarchy. Across six real-world sensor datasets, from tactile-based state prediction to accelerometer-based inertial measurement, HiSS outperforms state-of-the-art sequence models such as causal Transformers, LSTMs, S4, and Mamba by at least 23% on MSE. Our experiments further indicate that HiSS scales efficiently to smaller datasets and is compatible with existing data-filtering techniques. Code, datasets, and videos can be found at https://hiss-csp.github.io.
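To make the stacking idea concrete, here is a toy PyTorch sketch: a low-level SSM consumes raw samples, its features are chunked and downsampled, and a high-level SSM predicts at the coarser rate. The SimpleSSM layer is a deliberately naive diagonal linear state-space model standing in for structured models like S4 or Mamba; all dimensions and the chunking scheme are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class SimpleSSM(nn.Module):
    """Toy diagonal linear state-space layer: x_t = A x_{t-1} + B u_t,
    y_t = C x_t. A stand-in for the structured SSMs used in HiSS."""
    def __init__(self, d_in, d_state, d_out):
        super().__init__()
        self.A = nn.Parameter(torch.rand(d_state) * 0.9)  # stable diagonal dynamics
        self.B = nn.Linear(d_in, d_state, bias=False)
        self.C = nn.Linear(d_state, d_out)

    def forward(self, u):                 # u: (batch, time, d_in)
        x = torch.zeros(u.size(0), self.A.numel(), device=u.device)
        ys = []
        for t in range(u.size(1)):
            x = self.A * x + self.B(u[:, t])
            ys.append(self.C(x))
        return torch.stack(ys, dim=1)     # (batch, time, d_out)

class HiSSSketch(nn.Module):
    """Temporal hierarchy: a low-level SSM runs over raw samples, its outputs
    are downsampled by chunking, and a high-level SSM predicts at the coarser
    rate. Dimensions are illustrative."""
    def __init__(self, d_sensor, d_out, chunk=8):
        super().__init__()
        self.chunk = chunk
        self.low = SimpleSSM(d_sensor, 32, 32)
        self.high = SimpleSSM(32, 32, d_out)

    def forward(self, u):                 # u: (batch, time, d_sensor)
        z = self.low(u)                   # per-sample features
        z = z[:, self.chunk - 1 :: self.chunk]  # keep the last feature of each chunk
        return self.high(z)               # predictions at the chunked rate
```

The hierarchy lets the lower layer absorb high-rate sensor noise while the upper layer models slower task dynamics, which is one plausible reading of why stacking helps on drift-prone sensors.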
Independence in the Home: A Wearable Interface for a Person with Quadriplegia to Teleoperate a Mobile Manipulator
Padmanabha, Akhil, Gupta, Janavi, Chen, Chen, Yang, Jehan, Nguyen, Vy, Weber, Douglas J., Majidi, Carmel, Erickson, Zackory
Teleoperation of mobile manipulators within a home environment can significantly enhance the independence of individuals with severe motor impairments, allowing them to regain the ability to perform self-care and household tasks. There is a critical need for novel teleoperation interfaces to offer effective alternatives for individuals with impairments who may encounter challenges in using existing interfaces due to physical limitations. In this work, we iterate on one such interface, HAT (Head-Worn Assistive Teleoperation), an inertial-based wearable integrated into any head-worn garment. We evaluate HAT through a 7-day in-home study with Henry Evans, a non-speaking individual with quadriplegia who has participated extensively in assistive robotics studies. We additionally evaluate HAT with a proposed shared control method for mobile manipulators termed Driver Assistance and demonstrate how the interface generalizes to other physical devices and contexts. Our results show that HAT is a strong teleoperation interface across key metrics including efficiency, errors, learning curve, and workload. Code and videos are located on our project website.
A Multimodal Sensing Ring for Quantification of Scratch Intensity
Padmanabha, Akhil, Choudhary, Sonal, Majidi, Carmel, Erickson, Zackory
An objective measurement of chronic itch is necessary for improvements in patient care for numerous medical conditions. While wearables have shown promise for scratch detection, they are currently unable to estimate scratch intensity, preventing a comprehensive understanding of the effect of itch on an individual. In this work, we present a framework for the estimation of scratch intensity in addition to the detection of scratch. This is accomplished with a multimodal ring device, consisting of an accelerometer and a contact microphone, a pressure-sensitive tablet for capturing ground-truth intensity values, and machine learning algorithms for regression of scratch intensity on a 0-600 milliwatt (mW) power scale that can be mapped to a 0-10 continuous scale. We evaluate the performance of our algorithms on 20 individuals using leave-one-subject-out cross-validation, and, using data from 14 additional participants, we show that our algorithms achieve clinically relevant discrimination of scratch intensity levels. By doing so, our device enables the quantification of the substantial variations in the interpretation of the 0-10 scale frequently utilized in patient self-reported clinical assessments. This work demonstrates that a finger-worn device can provide multidimensional, objective, real-time measures for the action of scratching.
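A brief sketch of the leave-one-subject-out protocol mentioned above, using scikit-learn's LeaveOneGroupOut splitter. The regressor and feature matrix are placeholders; the paper's actual pipeline over accelerometer and contact-microphone features is not reproduced:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

def loso_evaluate(X, y, subject_ids):
    """Leave-one-subject-out evaluation: train on all subjects but one, test
    on the held-out subject, and repeat. Model choice is illustrative."""
    errors = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subject_ids):
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        pred = np.clip(model.predict(X[test_idx]), 0, 600)  # 0-600 mW power scale
        errors.append(mean_absolute_error(y[test_idx], pred))
    return float(np.mean(errors))
```

Grouping splits by subject rather than by sample is what prevents a model from memorizing person-specific motion signatures, which is the failure mode LOSO evaluation is designed to expose.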
HAT: Head-Worn Assistive Teleoperation of Mobile Manipulators
Padmanabha, Akhil, Wang, Qin, Han, Daphne, Diyora, Jashkumar, Kacker, Kriti, Khalid, Hamza, Chen, Liang-Jung, Majidi, Carmel, Erickson, Zackory
Mobile manipulators in the home can provide increased autonomy to individuals with severe motor impairments, who often cannot complete activities of daily living (ADLs) without the help of a caregiver. Teleoperation of an assistive mobile manipulator could enable an individual with motor impairments to independently perform self-care and household tasks, yet limited motor function can impede one's ability to interface with a robot. In this work, we present a unique inertial-based wearable assistive interface, embedded in a familiar head-worn garment, that allows individuals with severe motor impairments to teleoperate and perform physical tasks with a mobile manipulator. We evaluate this wearable interface with both able-bodied participants (N = 16) and individuals with motor impairments (N = 2) performing ADLs and everyday household tasks. Our results show that the wearable interface enabled participants to complete physical tasks with low error rates, high perceived ease of use, and low workload measures. Overall, this inertial-based wearable serves as a new assistive interface option for controlling mobile manipulators in the home.
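One plausible reading of how an inertial head-worn interface maps to robot commands: head pitch and roll drive planar velocities, with a deadzone to reject small involuntary motions. The axis conventions, deadzone, and gain below are illustrative assumptions, not HAT's actual mapping or mode-switching scheme:

```python
import numpy as np

def head_to_velocity(pitch, roll, deadzone=0.1, gain=0.5):
    """Map head orientation (radians) from a hat-mounted IMU to planar
    velocity commands. Angles inside the deadzone produce zero velocity so
    resting head posture does not drive the robot. All constants are
    hypothetical, chosen only to illustrate the shaping."""
    def shape(angle):
        if abs(angle) < deadzone:
            return 0.0
        return gain * (angle - np.sign(angle) * deadzone)
    return shape(pitch), shape(roll)   # e.g., forward/backward, left/right
```

Subtracting the deadzone edge before applying the gain keeps the command continuous at the deadzone boundary, avoiding velocity jumps from small head movements.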
All the Feels: A dexterous hand with large area sensing
Bhirangi, Raunaq, DeFranco, Abigail, Adkins, Jacob, Majidi, Carmel, Gupta, Abhinav, Hellebrekers, Tess, Kumar, Vikash
High cost and lack of reliability have precluded the widespread adoption of dexterous hands in robotics. Furthermore, the lack of a viable tactile sensor capable of sensing over the entire area of the hand impedes the rich, low-level feedback that would improve learning of dexterous manipulation skills. This paper introduces an inexpensive, modular, robust, and scalable platform -- the DManus -- aimed at resolving these challenges while satisfying the large-scale data collection capabilities demanded by deep robot learning paradigms. Studies on human manipulation point to the criticality of low-level tactile feedback in performing everyday dexterous tasks. The DManus comes with ReSkin sensing on the entire surface of the palm as well as the fingertips. We demonstrate the effectiveness of the fully integrated system in a tactile-aware task -- bin picking and sorting. Code, documentation, design files, detailed assembly instructions, trained models, task videos, and all supplementary materials required to recreate the setup can be found at http://roboticsbenchmarks.org/platforms/dmanus
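As a rough sketch of how whole-hand tactile feedback might be consumed downstream, the snippet below flags per-taxel contact on a ReSkin-style magnetometer array by deviation from a no-contact baseline; the threshold and calibration scheme are assumptions, not the DManus pipeline:

```python
import numpy as np

def detect_contact(readings, baseline, threshold=3.0):
    """Flag contact on a magnetometer-based tactile array. `readings` and
    `baseline` are (N, 3) per-taxel magnetic field vectors; a taxel is
    marked in contact when its field deviates from the no-contact baseline
    by more than a (hypothetical) calibrated threshold."""
    deviation = np.linalg.norm(readings - baseline, axis=1)  # per-taxel change
    return deviation > threshold   # boolean contact mask over the array
```

A binary contact mask like this is one of the simplest tactile signals a bin-picking policy could condition on, ahead of richer force or slip estimates.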
Design and Characterization of Viscoelastic McKibben Actuators with Tunable Force-Velocity Curves
Bennington, Michael J., Wang, Tuo, Yin, Jiaguo, Bergbreiter, Sarah, Majidi, Carmel, Webster-Wood, Victoria A.
The McKibben pneumatic artificial muscle is a commonly studied soft robotic actuator, and its quasistatic force-length properties have been well characterized and modeled. However, its damping and force-velocity properties are less well studied. Understanding these properties will allow for more robust dynamic modeling of soft robotic systems. The force-velocity response of these actuators is of particular interest because these actuators are often used as hardware models of skeletal muscles for bioinspired robots, and this force-velocity relationship is fundamental to muscle physiology. In this work, we investigated the force-velocity response of McKibben actuators and the ability to tune this response through the use of viscoelastic polymer sheaths. These viscoelastic McKibben actuators (VMAs) were characterized using iso-velocity experiments inspired by skeletal muscle physiology tests. A simplified 1D model of the actuators was developed to connect the shape of the force-velocity curve to the material parameters of the actuator and sheaths. Using these viscoelastic materials, we were able to modulate the shape and magnitude of the actuators' force-velocity curves, and using the developed model, these changes were connected back to the material properties of the sheaths.
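For intuition about the force-velocity tuning described above, a minimal 1D sketch: treating the sheath as a damper acting in parallel with the actuator, the force measured during an iso-velocity contraction drops with shortening speed. The parameter values are illustrative; the paper's model additionally connects them to measured material properties of the sheaths:

```python
import numpy as np

def force_velocity(velocities, f_active=10.0, b_sheath=2.0):
    """Minimal 1D sketch of an iso-velocity test: output force equals the
    actuator's quasistatic force minus a viscous term contributed by the
    sheath. f_active (N) and b_sheath (N*s/m) are hypothetical values."""
    velocities = np.asarray(velocities, dtype=float)
    return f_active - b_sheath * velocities  # force falls as shortening speeds up

# Faster shortening yields lower output force, echoing skeletal muscle:
print(force_velocity([0.0, 1.0, 2.0]))   # [10.  8.  6.]
```

Changing the sheath material in this picture changes the damping coefficient, which is the mechanism by which the VMAs modulate the shape and magnitude of their force-velocity curves.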