
Collaborating Authors

 Madan, Rishabh


CushSense: Soft, Stretchable, and Comfortable Tactile-Sensing Skin for Physical Human-Robot Interaction

arXiv.org Artificial Intelligence

Whole-arm tactile feedback is crucial for robots to ensure safe physical interaction with their surroundings. This paper introduces CushSense, a fabric-based, soft, and stretchable tactile-sensing skin designed for physical human-robot interaction (pHRI) tasks such as robotic caregiving. Using stretchable fabric and a hyper-elastic polymer, CushSense identifies contacts by monitoring capacitive changes due to skin deformation. CushSense is cost-effective (~US$7 per taxel) and easy to fabricate. We detail the sensor design and fabrication process and perform a characterization, highlighting its high sensing accuracy (relative error of 0.58%) and durability (0.054% accuracy drop after 1000 interactions). We also present a user study underscoring its perceived safety and comfort for the assistive task of limb manipulation. We open-source all sensor-related resources at https://emprise.cs.cornell.edu/cushsense.
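
The abstract describes detecting contact by monitoring capacitive changes caused by skin deformation. As a rough sketch only, assuming a per-taxel capacitance readout, a baseline calibration pass, and a hand-picked threshold (all of which are hypothetical, not details from the paper), thresholding the relative capacitance change of each taxel might look like:

```python
import numpy as np

def detect_contacts(cap_readings, baseline, rel_threshold=0.02):
    """Flag taxels whose capacitance deviates from baseline by more than
    rel_threshold (relative change), indicating skin deformation."""
    cap_readings = np.asarray(cap_readings, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    rel_change = np.abs(cap_readings - baseline) / baseline
    return rel_change > rel_threshold

# Example: 4-taxel array, taxel 2 pressed (capacitance shifts with deformation).
# Baseline values would come from a calibration pass with no contact.
baseline = [100.0, 98.5, 101.2, 99.7]
reading  = [100.1, 98.4, 112.9, 99.8]
print(detect_contacts(reading, baseline))  # [False False  True False]
```

In practice the threshold would be tuned against the sensor's noise floor and the response of the hyper-elastic layer.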


RABBIT: A Robot-Assisted Bed Bathing System with Multimodal Perception and Integrated Compliance

arXiv.org Artificial Intelligence

This paper introduces RABBIT, a novel robot-assisted bed bathing system designed to address the growing need for assistive technologies in personal hygiene tasks. It combines multimodal perception with dual (software and hardware) compliance to enable safe and comfortable physical human-robot interaction. Using RGB and thermal imaging to accurately segment dry, soapy, and wet skin regions, RABBIT can effectively execute washing, rinsing, and drying tasks in line with expert caregiving practices. Our system includes custom-designed motion primitives inspired by human caregiving techniques, and a novel compliant end-effector called Scrubby, optimized for gentle and effective interaction. We conducted a user study with 12 participants, including one participant with severe mobility limitations, demonstrating the system's effectiveness and perceived comfort. Supplementary material and videos can be found on our website: https://emprise.cs.cornell.edu/rabbit.
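
The perception side of the abstract hinges on fusing RGB and thermal cues to label skin as dry, soapy, or wet. The sketch below is purely illustrative: the cues used (bright, desaturated pixels for soap foam; evaporative cooling for wet skin) and all thresholds are hypothetical stand-ins, not RABBIT's actual segmentation pipeline, which the abstract does not detail.

```python
import numpy as np

DRY, SOAPY, WET = 0, 1, 2

def segment_skin(rgb, thermal_c, skin_temp_c=33.0):
    """rgb: HxWx3 floats in [0, 1]; thermal_c: HxW temperatures in Celsius.
    Returns an HxW label map using toy per-pixel heuristics."""
    brightness = rgb.mean(axis=-1)
    saturation = rgb.max(axis=-1) - rgb.min(axis=-1)
    labels = np.full(thermal_c.shape, DRY, dtype=np.int8)
    # Soap foam heuristic: bright and desaturated in RGB.
    labels[(brightness > 0.8) & (saturation < 0.1)] = SOAPY
    # Wet-skin heuristic: evaporative cooling lowers apparent temperature.
    labels[thermal_c < skin_temp_c - 1.5] = WET
    return labels

rgb = np.random.rand(4, 4, 3)
thermal = np.full((4, 4), 33.0)
thermal[0, 0] = 30.0  # a cooled, wet patch
print(segment_skin(rgb, thermal))
```

A learned segmentation model would replace these heuristics; the point of the sketch is only that the two modalities carry complementary cues.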


Multimodal Trajectory Prediction via Topological Invariance for Navigation at Uncontrolled Intersections

arXiv.org Artificial Intelligence

The widespread interest in autonomous driving technology in recent years [2] has motivated extensive research in multiagent navigation in driving domains. One of the most challenging driving domains [3] is the uncontrolled intersection, i.e., a street intersection with no traffic signs or signals. Within this domain, we focus on scenarios in which agents do not communicate explicitly or implicitly through, e.g., turn signals. This setup gives rise to challenging multi-vehicle encounters that mimic real-world situations (arising from human distraction, violation of traffic rules, or emergencies) that result in fatal accidents [3]. The frequency and severity of such situations have motivated strong research interest in uncontrolled intersections [4, 5, 6]. In the absence of explicit traffic signs, signals, rules, or explicit communication among agents, avoiding collisions at intersections relies on the agents' ability to predict the dynamics of their interaction. One prevalent way to model multiagent dynamics is via trajectory prediction. However, multi-step multiagent trajectory prediction is NP-hard [7], and the sample complexity of existing learning algorithms effectively prohibits the extraction of practical models. Our key insight is that the geometric structure of the intersection and the incentive of agents to move efficiently and avoid collisions with each other (rationality) compress the space of possible multiagent trajectories, effectively simplifying inference.
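
To make the key insight concrete, the toy sketch below (not the paper's algorithm) treats each order in which vehicles clear the intersection as one topological mode, so prediction reduces to ranking a small discrete set of modes rather than searching a continuous space of joint trajectories. The delay model, clearance time, and time-to-intersection values are made up for illustration.

```python
from itertools import permutations

def total_delay(order, tti, clear_time=2.0):
    """Total delay if vehicles enter in this order; each vehicle must wait
    until the previous one has occupied the intersection for clear_time s."""
    t_free, delay = 0.0, 0.0
    for a in order:
        enter = max(tti[a], t_free)   # wait for the intersection to be free
        delay += enter - tti[a]
        t_free = enter + clear_time
    return delay

tti = {"A": 2.0, "B": 3.5, "C": 1.0}   # hypothetical times-to-intersection (s)
modes = list(permutations(tti))        # 3! = 6 topological modes in total
best = min(modes, key=lambda m: total_delay(m, tti))
print(best)                            # ('C', 'A', 'B') under this toy model
```

Rationality (agents prefer efficient, collision-free outcomes) is what makes ranking these few modes informative; the full approach in the paper builds on this idea rather than on the toy cost above.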


ExTra: Transfer-guided Exploration

arXiv.org Machine Learning

In this work we present a novel approach for transfer-guided exploration in reinforcement learning, inspired by the human tendency to leverage experience from similar past encounters while navigating a new task. Given an optimal policy in a related task-environment, we show that its bisimulation distance from the current task-environment gives a lower bound on the optimal advantage of state-action pairs in the current task-environment. Transfer-guided Exploration (ExTra) samples actions from a Softmax distribution over these lower bounds. In this way, actions with potentially higher optimal advantage are sampled more frequently. In our experiments on gridworld environments, we demonstrate that, given access to an optimal policy in a related task-environment, ExTra can outperform popular domain-specific exploration strategies, viz. epsilon-greedy, Model-Based Interval Estimation with Exploration Bonus (MBIE-EB), Pursuit, and Boltzmann exploration, in terms of sample complexity and rate of convergence. We further show that ExTra is robust to the choice of source task and degrades gracefully as the dissimilarity of the source task increases. We also demonstrate that ExTra, when used alongside traditional exploration algorithms, improves their rate of convergence, and can thus complement their efficacy.
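
A minimal sketch of the sampling rule described above, with made-up inputs: given per-action lower bounds on the optimal advantage in the current task (which the abstract says are derived from the bisimulation distance to a solved source task), ExTra-style exploration samples actions from a Softmax over those bounds. The bound values and temperature below are hypothetical.

```python
import numpy as np

def extra_sample(advantage_lower_bounds, temperature=1.0, rng=None):
    """Sample an action index from a Softmax over advantage lower bounds."""
    rng = rng or np.random.default_rng()
    z = np.asarray(advantage_lower_bounds, dtype=float) / temperature
    z -= z.max()                              # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(probs), p=probs), probs

# Hypothetical bounds for 4 actions in some state: actions with higher bounds
# (potentially higher optimal advantage) are sampled more frequently.
bounds = [-0.4, 0.1, 0.9, -1.2]
action, probs = extra_sample(bounds, temperature=0.5)
print(action, probs.round(3))
```

The temperature plays the same role as in Boltzmann exploration; the difference is that the logits here are transfer-derived lower bounds rather than current value estimates.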