Collaborating Authors: Srinivasa


Collaborative Decision Making Using Action Suggestions

Neural Information Processing Systems

[Abstract not recovered: this entry's text is extraction residue from the paper's equations and reward-comparison figures (Normal, Perfect, Naive, Scaled, and Noisy policies under varying message reception rates and chances of random suggestions).]


30de9ece7cf3790c8c39ccff1a044209-Paper.pdf

Neural Information Processing Systems

One difficulty in using artificial agents for human-assistive applications lies in the challenge of accurately assisting with a person's goal(s). Existing methods tend to rely on inferring the human's goal, which is challenging when there are many potential goals or when the set of candidate goals is difficult to identify. We propose a new paradigm for assistance by instead increasing the human's ability to control their environment, and formalize this approach by augmenting reinforcement learning with human empowerment.
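
The empowerment-augmented objective described above can be sketched in a toy setting. Empowerment is commonly defined as the channel capacity between the agent's actions and future states; for deterministic dynamics this reduces to the log of the number of reachable states. The gridworld, horizon, and `shaped_reward` weighting below are illustrative assumptions, not the paper's actual formulation:

```python
from itertools import product
import math

# Toy deterministic gridworld. For deterministic dynamics, empowerment
# (max mutual information between action sequences and resulting states)
# reduces to log2 of the size of the k-step reachable set.

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(state, action, walls):
    nxt = (state[0] + action[0], state[1] + action[1])
    return state if nxt in walls else nxt

def empowerment(state, walls, horizon=2):
    """log2 |reachable states| after `horizon` steps (deterministic case)."""
    outcomes = set()
    for seq in product(ACTIONS, repeat=horizon):
        s = state
        for a in seq:
            s = step(s, a, walls)
        outcomes.add(s)
    return math.log2(len(outcomes))

def shaped_reward(env_reward, state, walls, weight=0.1):
    # Augmented objective: task reward plus a human-empowerment bonus,
    # rewarding the assistant for keeping the human's options open.
    return env_reward + weight * empowerment(state, walls)
```

A state hemmed in by walls has lower empowerment than one in open space, so the shaped reward steers the assistant away from boxing the human in.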


REPeat: A Real2Sim2Real Approach for Pre-acquisition of Soft Food Items in Robot-assisted Feeding

Ha, Nayoung, Ye, Ruolin, Liu, Ziang, Sinha, Shubhangi, Bhattacharjee, Tapomayukh

arXiv.org Artificial Intelligence

The paper presents REPeat, a Real2Sim2Real framework designed to enhance bite acquisition in robot-assisted feeding for soft foods. It uses 'pre-acquisition actions' such as pushing, cutting, and flipping to improve the success rate of bite acquisition actions such as skewering, scooping, and twirling. If the data-driven model predicts low success for direct bite acquisition, the system initiates a Real2Sim phase, reconstructing the food's geometry in a simulation. The robot explores various pre-acquisition actions in the simulation, then a Sim2Real step renders a photorealistic image to reassess success rates. If the predicted success improves, the robot applies the action in reality. We evaluate the system on 15 diverse plates with 10 types of food items for a soft food diet, showing an improvement in bite acquisition success rates of 27% on average across all plates. See our project website at https://emprise.cs.cornell.edu/repeat.
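
The gating logic of the pipeline above can be sketched as a small decision procedure. All function names, the action set, and the threshold are hypothetical stand-ins, not the paper's actual API:

```python
# Hypothetical sketch of the REPeat gating logic: attempt direct
# acquisition only when predicted success is high; otherwise search
# simulated pre-acquisition actions and keep one only if it raises
# the predicted success on the re-rendered plate image.

def plan_acquisition(predict_success, simulate, plate_image,
                     pre_actions=("push", "cut", "flip"), threshold=0.5):
    """Return (action sequence, predicted success) for the robot."""
    base = predict_success(plate_image)
    if base >= threshold:
        return ["acquire"], base          # direct bite acquisition

    # Real2Sim: evaluate each pre-acquisition action in simulation;
    # Sim2Real: re-score the rendered post-action image.
    best_action, best_score = None, base
    for action in pre_actions:
        rendered = simulate(plate_image, action)
        score = predict_success(rendered)
        if score > best_score:
            best_action, best_score = action, score

    if best_action is None:
        return ["acquire"], base          # no pre-acquisition helps
    return [best_action, "acquire"], best_score
```

Here `predict_success` stands in for the data-driven success model and `simulate` for the Real2Sim reconstruction plus Sim2Real rendering step.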


Near-Optimal Edge Evaluation in Explicit Generalized Binomial Graphs

Sanjiban Choudhury, Shervin Javdani, Siddhartha Srinivasa, Sebastian Scherer

Neural Information Processing Systems

Robotic motion-planning problems, such as a UAV flying fast in a partially-known environment or a robot arm moving around cluttered objects, require finding collision-free paths quickly. Typically, this is solved by constructing a graph, where vertices represent robot configurations and edges represent potentially valid movements of the robot between these configurations. The main computational bottleneck is the expensive edge evaluations needed to check for collisions. State-of-the-art planning methods do not reason about the optimal sequence of edges to evaluate in order to find a collision-free path quickly. In this paper, we do so by drawing a novel equivalence between motion planning and the Bayesian active learning paradigm of decision region determination (DRD). Unfortunately, a straightforward application of existing methods requires computation exponential in the number of edges in a graph.
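
The edge-evaluation problem described above can be illustrated with a much simpler greedy scheme (not the paper's near-optimal algorithm): given candidate paths and independent Bernoulli priors on edge validity, evaluate the edge most likely to eliminate the most paths, until some path is verified collision-free. The scoring rule here is an illustrative assumption:

```python
# Simplified illustration of lazy edge evaluation over candidate paths
# with independent Bernoulli edge priors. Greedily evaluate the edge with
# the highest (prior collision probability x number of surviving paths
# containing it), pruning paths as collisions are found.

def find_valid_path(paths, prior_collision, is_colliding):
    """paths: list of edge-sets; prior_collision: edge -> prob in collision;
    is_colliding: expensive oracle, edge -> bool. Returns (path, #checks)."""
    surviving = [set(p) for p in paths]
    known_free, checks = set(), 0
    while surviving:
        # A path whose edges are all verified free is a certificate.
        for p in surviving:
            if p <= known_free:
                return p, checks
        # Score unevaluated edges: collision prior x path coverage.
        candidates = set().union(*surviving) - known_free
        edge = max(candidates,
                   key=lambda e: prior_collision[e]
                   * sum(e in p for p in surviving))
        checks += 1
        if is_colliding(edge):
            surviving = [p for p in surviving if edge not in p]
        else:
            known_free.add(edge)
    return None, checks
```

Evaluating the shared high-collision-probability edge first can eliminate many candidate paths with a single expensive check, which is the intuition the DRD formulation makes precise.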


Control-Theoretic Analysis of Shared Control Systems

Aronson, Reuben M., Short, Elaine Schaertl

arXiv.org Artificial Intelligence

Users of shared control systems change their behavior in the presence of assistance, which conflicts with the assumptions about user behavior that some assistance methods make. In this paper, we propose an analysis technique for evaluating the user's experience with assistive systems that bypasses these assumptions: we model the assistance as a dynamical system that can be analyzed using control-theory techniques. We analyze the shared autonomy assistance algorithm and make several observations: we identify a problem with runaway goal confidence and propose a system adjustment to mitigate it, we demonstrate that the system inherently limits the actions available to the user, and we show that in a simplified setting, the effect of the assistance is to drive the system to the convex hull of the goals and, once there, to add a layer of indirection between the user's control and the system's behavior. We conclude by discussing possible uses of this analysis for the field.
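
The runaway-confidence problem and one possible mitigation can be sketched as a simple dynamical system. The multiplicative update rule, gain, and cap below are illustrative assumptions, not the paper's actual analysis:

```python
# Illustrative dynamical-system view of shared autonomy: the output is a
# convex blend of the user command and an autonomous goal-directed command,
# weighted by goal confidence. An unclamped multiplicative confidence
# update "runs away" to 1 and locks out the user; capping the confidence
# (one possible adjustment) preserves a minimum share of user authority.

def confidence_update(conf, agreement, gain=0.3, cap=1.0):
    """One step of a multiplicative confidence update, clipped to [0, cap].
    agreement in [-1, 1]: how well the user command matches the goal."""
    conf = conf * (1.0 + gain * agreement)
    return max(0.0, min(cap, conf))

def blend(user_cmd, robot_cmd, conf):
    """Shared control output: convex combination weighted by confidence."""
    return tuple((1 - conf) * u + conf * r
                 for u, r in zip(user_cmd, robot_cmd))

def run(steps, agreement, cap=1.0, conf=0.5):
    for _ in range(steps):
        conf = confidence_update(conf, agreement, cap=cap)
    return conf
```

With `cap=1.0` and persistent agreement, confidence saturates at 1 and the user's command contributes nothing; a cap below 1 keeps the user in the loop, illustrating how the assistance limits the actions available to the user.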


An Adaptable, Safe, and Portable Robot-Assisted Feeding System

Gordon, Ethan Kroll, Jenamani, Rajat Kumar, Nanavati, Amal, Liu, Ziang, Bolotski, Haya, Karim, Raida, Stabile, Daniel, Kashyap, Atharva, Zhu, Bernie Hao, Dai, Xilai, Schrenk, Tyler, Ko, Jonathan, Faulkner, Taylor Kessler, Bhattacharjee, Tapomayukh, Srinivasa, Siddhartha

arXiv.org Artificial Intelligence

We demonstrate a robot-assisted feeding system that enables people with mobility impairments to feed themselves. Our system design embodies Safety, Portability, and User Control, with comprehensive full-stack safety checks, the ability to be mounted on and powered by any powered wheelchair, and a custom web-app allowing care-recipients to leverage their own assistive devices for robot control. For bite acquisition, we leverage multi-modal online learning to tractably adapt to unseen food types. For bite transfer, we leverage real-time mouth perception and interaction-aware control. Co-designed with community researchers, our system has been validated through multiple end-user studies.
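
One simple form of the multi-modal online learning mentioned above is a per-food-type bandit over acquisition primitives, updated from each attempt's outcome. The class, action set, and epsilon-greedy rule below are illustrative assumptions, not the deployed system's learner:

```python
import random

# Hedged sketch: epsilon-greedy bandit over bite-acquisition primitives,
# keyed by food type and updated online from success/failure feedback.

class BiteAcquisitionBandit:
    def __init__(self, actions=("skewer", "scoop", "twirl"), epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.counts = {}   # (food, action) -> number of attempts
        self.values = {}   # (food, action) -> running success rate

    def select(self, food, rng=random):
        # Explore with probability epsilon; otherwise exploit the best
        # known action (unseen actions default to an optimistic 0.5).
        if rng.random() < self.epsilon:
            return rng.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.values.get((food, a), 0.5))

    def update(self, food, action, success):
        # Incremental mean of observed successes for this (food, action).
        key = (food, action)
        n = self.counts.get(key, 0) + 1
        v = self.values.get(key, 0.5)
        self.counts[key] = n
        self.values[key] = v + (float(success) - v) / n
```

After a few attempts the bandit concentrates on the primitive that works for each food type while still occasionally exploring, which is one way to adapt tractably to unseen foods online.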


How Amazon Robotics researchers are solving a "beautiful problem" - Amazon Science

#artificialintelligence

The rate of innovation in machine learning is simply off the chart -- what is possible today was barely on the drawing board even a handful of years ago. At Amazon, this has manifested in a robotic system that can not only identify potential space in a cluttered storage bin, but also sensitively manipulate that bin's contents to create that space before successfully placing additional items inside -- a result that, until recently, was impossible. This journey starts when a product arrives at an Amazon fulfillment center (FC). The first order of business is to make it available to customers by adding it to the FC's available inventory. In practice, this means picking it up and stowing it in a storage pod.


Learning Visuo-Haptic Skewering Strategies for Robot-Assisted Feeding

Sundaresan, Priya, Belkhale, Suneel, Sadigh, Dorsa

arXiv.org Artificial Intelligence

Acquiring food items with a fork poses an immense challenge to a robot-assisted feeding system, due to the wide range of material properties and visual appearances present across food groups. Deformable foods necessitate different skewering strategies than firm ones, but inferring such characteristics for several previously unseen items on a plate remains nontrivial. Our key insight is to leverage visual and haptic observations during interaction with an item to rapidly and reactively plan skewering motions. We learn a generalizable, multimodal representation for a food item from raw sensory inputs which informs the optimal skewering strategy. Given this representation, we propose a zero-shot framework to sense visuo-haptic properties of a previously unseen item and reactively skewer it, all within a single interaction. Real-robot experiments with foods of varying levels of visual and textural diversity demonstrate that our multimodal policy outperforms baselines which do not exploit both visual and haptic cues or do not reactively plan. Across 6 plates of different food items, our proposed framework achieves 71% success over 69 skewering attempts total. Supplementary material, datasets, code, and videos are available on our website: https://sites.google.com/view/hapticvisualnet-corl22/home
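
The visuo-haptic fusion idea above can be sketched with a minimal late-fusion rule: commit to a skewering strategy only after blending a visual prior with a haptic probe reading from the same interaction. The strategy table, score blending, and stiffness cue are illustrative assumptions, not the paper's learned representation:

```python
# Minimal sketch of reactive visuo-haptic strategy selection: blend
# visual class scores with a haptic stiffness cue, then pick the
# skewering strategy with the highest fused score.

STRATEGIES = {
    "firm":     {"angle_deg": 90, "force": "high"},   # vertical skewer
    "soft":     {"angle_deg": 45, "force": "low"},    # angled skewer
    "slippery": {"angle_deg": 70, "force": "medium"},
}

def fuse(visual_scores, probe_stiffness, weight=0.5):
    """Blend visual class scores with a haptic stiffness cue in [0, 1]."""
    haptic_scores = {
        "firm": probe_stiffness,
        "soft": 1.0 - probe_stiffness,
        "slippery": 0.5,   # stiffness alone cannot signal slipperiness
    }
    return {k: (1 - weight) * visual_scores[k] + weight * haptic_scores[k]
            for k in STRATEGIES}

def skewer_plan(visual_scores, probe_stiffness):
    scores = fuse(visual_scores, probe_stiffness)
    label = max(scores, key=scores.get)
    return label, STRATEGIES[label]
```

A haptic probe that contradicts the visual guess (e.g. an item that looks soft but feels firm) flips the chosen strategy within the same interaction, which is the reactive behavior the abstract describes.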


You can now be fed by a ROBOT as engineers combine a dexterous machine with facial recognition

Daily Mail - Science & tech

A dexterous robot arm that can automatically feed people forkfuls of food has been developed by researchers in the US. Experts studied how real people use forks to feed each other in order to teach the robot the best way to go about its task. The arm automatically adjusts both the force it uses and the angle at which it spears items to best pick up and deliver mouthfuls of food - regardless of size or texture. 'Being dependent on a caregiver to feed every bite, every day, takes away a person's sense of independence,' said roboticist Siddhartha Srinivasa. 'Our goal with this project is to give people a bit more control over their lives.'