
Collaborating Authors: Nemlekar, Heramb


Kiri-Spoon: A Kirigami Utensil for Robot-Assisted Feeding

arXiv.org Artificial Intelligence

For millions of adults with mobility limitations, eating meals is a daily challenge. A variety of robotic systems have been developed to address this societal need. Unfortunately, end-user adoption of robot-assisted feeding is limited, in part because existing devices are unable to seamlessly grasp, manipulate, and feed diverse foods. Recent works seek to address this issue by creating new algorithms for food acquisition and bite transfer. In parallel to these algorithmic developments, however, we hypothesize that mechanical intelligence will make it fundamentally easier for robot arms to feed humans. We therefore propose Kiri-Spoon, a soft utensil specifically designed for robot-assisted feeding. Kiri-Spoon consists of a spoon-shaped kirigami structure: when actuated, the kirigami sheet deforms into a bowl of increasing curvature. Robot arms equipped with Kiri-Spoon can leverage the kirigami structure to wrap around morsels during acquisition, contain those items as the robot moves, and then compliantly release the food into the user's mouth. Overall, Kiri-Spoon combines the familiar and comfortable shape of a standard spoon with the increased capabilities of soft robotic grippers. In what follows, we first apply a stakeholder-driven design process to ensure that Kiri-Spoon meets the needs of caregivers and users with physical disabilities. We next characterize the dynamics of Kiri-Spoon, and derive a mechanics model to relate actuation force to the spoon's shape. The paper concludes with three separate experiments that evaluate (a) the mechanical advantage provided by Kiri-Spoon, (b) the ways users with disabilities perceive our system, and (c) how the mechanical intelligence of Kiri-Spoon complements state-of-the-art algorithms. Our results suggest that Kiri-Spoon advances robot-assisted feeding across diverse foods, multiple robotic platforms, and different manipulation algorithms.
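The abstract mentions a mechanics model relating actuation force to the spoon's shape. As a rough illustration of how such a model might be packaged for a controller, the Python sketch below assumes a saturating force-to-curvature map kappa(F) = kappa_max * F / (F + F_half); the functional form, constants, and function names are placeholders and not the paper's derived model.

KAPPA_MAX = 60.0   # assumed maximum bowl curvature (1/m), placeholder value
F_HALF = 5.0       # assumed actuation force at half of max curvature (N)

def curvature_from_force(force_n: float) -> float:
    """Predicted bowl curvature (1/m) for a given actuation force (N)."""
    return KAPPA_MAX * force_n / (force_n + F_HALF)

def force_for_curvature(kappa: float) -> float:
    """Inverse model: actuation force (N) needed to reach curvature kappa."""
    assert 0.0 <= kappa < KAPPA_MAX, "curvature must stay below saturation"
    return F_HALF * kappa / (KAPPA_MAX - kappa)

With a calibrated model of this kind, a controller could call force_for_curvature to wrap the spoon to a target bowl shape during acquisition and relax it back toward zero for release.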


RECON: Reducing Causal Confusion with Human-Placed Markers

arXiv.org Artificial Intelligence

Imitation learning enables robots to learn new tasks from human examples. One fundamental limitation when learning from humans is causal confusion. Causal confusion occurs when the robot's observations include both task-relevant and extraneous information: for instance, a robot's camera might see not only the intended goal, but also clutter and changes in lighting within its environment. Because the robot does not know which aspects of its observations are important a priori, it often misinterprets the human's examples and fails to learn the desired task. To address this issue, we highlight that -- while the robot learner may not know what to focus on -- the human teacher does. In this paper we propose that the human proactively marks key parts of their task with small, lightweight beacons. Under our framework the human attaches these beacons to task-relevant objects before providing demonstrations: as the human shows examples of the task, beacons track the position of marked objects. We then harness this offline beacon data to train a task-relevant state embedding. Specifically, we embed the robot's observations to a latent state that is correlated with the measured beacon readings: in practice, this causes the robot to autonomously filter out extraneous observations and make decisions based on features learned from the beacon data. Our simulations and a real robot experiment suggest that this framework for human-placed beacons mitigates causal confusion and enables robots to learn the desired task from fewer demonstrations. See videos here: https://youtu.be/oy85xJvtLSU
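A minimal sketch of the core training idea, assuming PyTorch (the dimensions, architectures, and loss weighting here are illustrative assumptions, not the paper's implementation): the robot's observations are embedded into a latent state that must predict the recorded beacon readings, and the policy is behavior-cloned on that latent alone.

import torch
import torch.nn as nn

OBS_DIM, BEACON_DIM, LATENT_DIM = 64, 6, 8  # assumed dimensions

encoder = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(),
                        nn.Linear(128, LATENT_DIM))
beacon_head = nn.Linear(LATENT_DIM, BEACON_DIM)   # latent -> predicted beacons
policy = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                       nn.Linear(128, 7))          # latent -> action (7-DoF arm)

opt = torch.optim.Adam([*encoder.parameters(),
                        *beacon_head.parameters(),
                        *policy.parameters()], lr=1e-3)

def training_step(obs, beacons, expert_actions):
    z = encoder(obs)
    # Embedding loss: the latent must explain the offline beacon readings,
    # which pushes extraneous observation features out of the latent state.
    embed_loss = nn.functional.mse_loss(beacon_head(z), beacons)
    # Standard behavior cloning, but on the task-relevant latent only.
    bc_loss = nn.functional.mse_loss(policy(z), expert_actions)
    loss = bc_loss + embed_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

Because the beacons are only needed during demonstration collection, the learned encoder and policy can run at test time from raw observations without any markers in the scene.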


Accelerating Interface Adaptation with User-Friendly Priors

arXiv.org Artificial Intelligence

Robots often need to convey information to human users. For example, robots can leverage visual, auditory, and haptic interfaces to display their intent or express their internal state. In some scenarios there are socially agreed upon conventions for what these signals mean: e.g., a red light indicates an autonomous car is slowing down. But as robots develop new capabilities and seek to convey more complex data, the meaning behind their signals is not always mutually understood: one user might think a flashing light indicates the autonomous car is an aggressive driver, while another user might think the same signal means the autonomous car is defensive. In this paper we enable robots to adapt their interfaces to the current user so that the human's personalized interpretation is aligned with the robot's meaning. We start with an information theoretic end-to-end approach, which automatically tunes the interface policy to optimize the correlation between human and robot. But to ensure that this learned policy is intuitive -- and to accelerate how quickly the interface adapts to the human -- we recognize that humans have priors over how interfaces should function. For instance, humans expect interface signals to be proportional and convex. Our approach biases the robot's interface towards these priors, resulting in signals that are adapted to the current user while still following social expectations. Our simulations and user study results across 15 participants suggest that these priors improve robot-to-human communication. See videos here: https://youtu.be/Re3OLg57hp8
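To make the priors concrete, the sketch below (assuming PyTorch; the network, state grid, and penalty form are illustrative, not the paper's exact objective) biases a one-dimensional interface map from internal state to signal toward proportional and convex shapes by penalizing negative first and second differences. In the full method this prior term would be added to the information-theoretic loss that aligns the human's interpretation with the robot's meaning.

import torch
import torch.nn as nn

interface = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(interface.parameters(), lr=1e-2)
xs = torch.linspace(0.0, 1.0, 50).unsqueeze(1)  # grid of internal states

def prior_penalty() -> torch.Tensor:
    s = interface(xs)
    # Proportional prior: the signal should grow with the underlying state.
    mono = torch.relu(-(s[1:] - s[:-1])).mean()
    # Convex prior: discrete second differences should be non-negative.
    d2 = s[2:] - 2.0 * s[1:-1] + s[:-2]
    convex = torch.relu(-d2).mean()
    return mono + convex

# Only the prior term is optimized here; the task loss from user
# interactions would be added to this objective in practice.
for _ in range(200):
    loss = prior_penalty()
    opt.zero_grad(); loss.backward(); opt.step()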


Kiri-Spoon: A Soft Shape-Changing Utensil for Robot-Assisted Feeding

arXiv.org Artificial Intelligence

Assistive robot arms have the potential to help disabled or elderly adults eat everyday meals without relying on a caregiver. To provide meaningful assistance, these robots must reach for food items, pick them up, and then carry them to the human's mouth. Current work equips robot arms with standard utensils (e.g., forks and spoons). But -- although these utensils are intuitive for humans -- they are not easy for robots to control. If the robot arm does not carefully and precisely orchestrate its motion, food items may fall out of a spoon or slide off the fork. Accordingly, in this paper we design, model, and test Kiri-Spoon, a novel utensil specifically intended for robot-assisted feeding. Kiri-Spoon combines the familiar shape of traditional utensils with the capabilities of soft grippers. By actuating a kirigami structure the robot can rapidly adjust the curvature of Kiri-Spoon: at one extreme the utensil wraps around food items to make them easier for the robot to pick up and carry, and at the other extreme the utensil returns to a typical spoon shape so that human users can easily take a bite of food. Our studies with able-bodied human operators suggest that robot arms equipped with Kiri-Spoon carry foods more robustly than when leveraging traditional utensils. See videos here: https://youtu.be/nddAniZLFPk
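One way to read the bite cycle described above is as a small set of curvature setpoints the robot commands at each feeding phase. The Python sketch below is an assumption-level illustration (the phase names, setpoints, and actuator interface are placeholders, not the paper's controller).

CURVATURE_SETPOINTS = {
    "acquire": 0.9,  # wrap the kirigami bowl around the food item
    "carry": 0.6,    # keep the item contained while the arm moves
    "release": 0.0,  # flatten back to a familiar spoon shape at the mouth
}

def command_utensil(phase: str, send_actuation) -> None:
    """Send the curvature setpoint for the current feeding phase."""
    send_actuation(CURVATURE_SETPOINTS[phase])

for phase in ("acquire", "carry", "release"):
    command_utensil(phase, send_actuation=print)  # stub actuator for testing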


Surrogate Assisted Generation of Human-Robot Interaction Scenarios

arXiv.org Artificial Intelligence

As human-robot interaction (HRI) systems advance, so does the difficulty of evaluating and understanding the strengths and limitations of these systems in different environments and with different users. To this end, previous methods have algorithmically generated diverse scenarios that reveal system failures in a shared control teleoperation task. However, these methods require directly evaluating generated scenarios by simulating robot policies and human actions. The computational cost of these evaluations limits their applicability in more complex domains. Thus, we propose augmenting scenario generation systems with surrogate models that predict both human and robot behaviors. In the shared control teleoperation domain and a more complex shared workspace collaboration task, we show that surrogate assisted scenario generation efficiently synthesizes diverse datasets of challenging scenarios. We demonstrate that these failures are reproducible in real-world interactions.