Sigal, Adam
Improving Generalization in Reinforcement Learning Training Regimes for Social Robot Navigation
Sigal, Adam, Lin, Hsiu-Chin, Moon, AJung
In order for autonomous mobile robots to navigate in human spaces, they must abide by our social norms. Reinforcement learning (RL) has emerged as an effective method to train sequential decision-making policies that are able to respect these norms. However, a large portion of existing work in the field conducts both RL training and testing in simplistic environments. This limits the generalization potential of these models to unseen environments, and the meaningfulness of their reported results. We propose a method to improve the generalization performance of RL social navigation methods using curriculum learning. By employing multiple environment types and by modeling pedestrians using multiple dynamics models, we are able to progressively diversify and escalate difficulty in training. Our results show that curriculum learning achieves better generalization performance than previous training methods. We also show that many existing state-of-the-art RL social navigation works do not evaluate their methods outside of their training environments, so their reported results do not reveal their policies' failure to adequately generalize to out-of-distribution scenarios. In response, we validate our training approach on larger and more crowded testing environments than those used in training, allowing for more meaningful measurements of model performance.
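A minimal sketch of the curriculum idea described above; the environment names, pedestrian dynamics models, crowd sizes, and promotion threshold are illustrative assumptions, not the paper's actual configuration:

```python
import random

# Hypothetical curriculum: later stages diversify environment types and
# pedestrian dynamics models and escalate crowd size (values are illustrative).
CURRICULUM = [
    {"envs": ["corridor"], "ped_models": ["orca"], "num_peds": 4},
    {"envs": ["corridor", "open_space"], "ped_models": ["orca", "social_force"], "num_peds": 8},
    {"envs": ["corridor", "open_space", "crossing"], "ped_models": ["orca", "social_force"], "num_peds": 14},
]

PROMOTION_SUCCESS_RATE = 0.8  # assumed threshold for advancing to the next stage

def sample_training_env(stage_idx):
    """Sample an environment configuration from the current curriculum stage."""
    stage = CURRICULUM[stage_idx]
    return {
        "env": random.choice(stage["envs"]),
        "ped_model": random.choice(stage["ped_models"]),
        "num_peds": stage["num_peds"],
    }

def maybe_advance(stage_idx, recent_success_rate):
    """Escalate difficulty once the policy is reliable on the current stage."""
    if recent_success_rate >= PROMOTION_SUCCESS_RATE and stage_idx < len(CURRICULUM) - 1:
        return stage_idx + 1
    return stage_idx
```

In this kind of scheme, the RL policy keeps training on environments drawn from the current stage, and the stage index only advances once recent rollouts clear the success threshold.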
SAGE: Smart home Agent with Grounded Execution
Rivkin, Dmitriy, Hogan, Francois, Feriani, Amal, Konar, Abhisek, Sigal, Adam, Liu, Steve, Dudek, Greg
The common sense reasoning abilities and vast general knowledge of Large Language Models (LLMs) make them a natural fit for interpreting user requests in a Smart Home assistant context. However, LLMs' lack of specific knowledge about the user and their home limits their potential impact. SAGE (Smart Home Agent with Grounded Execution) overcomes these and other limitations by using a scheme in which a user request triggers an LLM-controlled sequence of discrete actions. These actions can be used to retrieve information, interact with the user, or manipulate device states. SAGE controls this process through a dynamically constructed tree of LLM prompts, which help it decide which action to take next, whether an action was successful, and when to terminate the process. The SAGE action set augments an LLM's capabilities to support some of the most critical requirements for a Smart Home assistant. These include: flexible and scalable user preference management ("is my team playing tonight?"), access to any smart device's full functionality without device-specific code via API reading ("turn down the screen brightness on my dryer"), persistent device state monitoring ("remind me to throw out the milk when I open the fridge"), natural device references using only a photo of the room ("turn on the light on the dresser"), and more. We introduce a benchmark of 50 new and challenging smart home tasks where SAGE achieves a 75% success rate, significantly outperforming existing LLM-enabled baselines (30% success rate).
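A rough, self-contained sketch of the kind of LLM-controlled action loop described above; the action names, history format, and `query_llm` callable are illustrative assumptions, not SAGE's actual interface:

```python
# Toy smart-home state standing in for real device APIs.
DEVICE_STATE = {"living_room_light": {"power": "off"}}

def retrieve_info(args):
    """Look up the current state of a device (stands in for API/state retrieval)."""
    return DEVICE_STATE.get(args["device"], "unknown device")

def set_device_state(args):
    """Manipulate a device state."""
    DEVICE_STATE.setdefault(args["device"], {}).update(args["update"])
    return DEVICE_STATE[args["device"]]

ACTIONS = {"retrieve_info": retrieve_info, "set_device_state": set_device_state}

def handle_request(user_request, query_llm, max_steps=10):
    """Let an LLM choose a sequence of discrete actions until it declares success."""
    history = [f"User request: {user_request}"]
    for _ in range(max_steps):
        decision = query_llm(history)  # expected to return {"action": ..., "args": ...}
        if decision["action"] == "done":
            return decision["args"].get("response", "done")
        result = ACTIONS[decision["action"]](decision["args"])
        history.append(f"{decision['action']}({decision['args']}) -> {result}")
    return "Gave up after max_steps."

# Example run with a scripted stand-in for the LLM:
if __name__ == "__main__":
    script = iter([
        {"action": "set_device_state",
         "args": {"device": "living_room_light", "update": {"power": "on"}}},
        {"action": "done", "args": {"response": "Light turned on."}},
    ])
    print(handle_request("turn on the living room light", lambda hist: next(script)))
```

The point of the sketch is the control flow: each step the model sees the request plus the results of prior actions, picks one discrete action, and the loop ends when it decides the request is satisfied.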
Multimodal and Force-Matched Imitation Learning with a See-Through Visuotactile Sensor
Ablett, Trevor, Limoyo, Oliver, Sigal, Adam, Jilani, Affan, Kelly, Jonathan, Siddiqi, Kaleem, Hogan, Francois, Dudek, Gregory
Kinesthetic Teaching is a popular approach to collecting expert robotic demonstrations of contact-rich tasks for imitation learning (IL), but it typically only measures motion, ignoring the force placed on the environment by the robot. Furthermore, contact-rich tasks require accurate sensing of both reaching and touching, which can be difficult to provide with conventional sensing modalities. We address these challenges with a See-Through-your-Skin (STS) visuotactile sensor, using the sensor both (i) as a measurement tool to improve kinesthetic teaching, and (ii) as a policy input in contact-rich door manipulation tasks. An STS sensor can be switched between visual and tactile modes by leveraging a semi-transparent surface and controllable lighting, allowing for both pre-contact visual sensing and during-contact tactile sensing with a single sensor. First, we propose tactile force matching, a methodology that enables a robot to match forces read during kinesthetic teaching using tactile signals. Second, we develop a policy that controls STS mode switching, allowing a policy to learn the appropriate moment to switch an STS from its visual to its tactile mode. Finally, we study multiple observation configurations to compare and contrast the value of visual and tactile data from an STS with visual data from a wrist-mounted eye-in-hand camera. Across episodes from real-world manipulation experiments, we find that the inclusion of force matching raises average policy success rates by 62.5%, STS mode switching by 30.3%, and STS data as a policy input by 42.5%. Our results highlight the utility of see-through tactile sensing for IL, both for data collection, enabling force matching, and as a policy input.
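The force-matching and mode-switching ideas can be sketched roughly as follows; the linear force constant, the proportional gain, and the contact flag standing in for the learned switching decision are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

# Assumed linear tactile model: force is proportional to STS dot displacement.
FORCE_PER_MM_DISPLACEMENT = 2.0  # N per mm of mean dot shift (illustrative)
KP = 0.0005                      # metres of offset per newton of force error (illustrative)

def estimated_force(dot_displacement_mm: np.ndarray) -> float:
    """Scalar force estimate from mean STS dot displacement (assumed linear)."""
    return FORCE_PER_MM_DISPLACEMENT * float(np.mean(np.linalg.norm(dot_displacement_mm, axis=-1)))

def force_matched_offset(target_force: float, dot_displacement_mm: np.ndarray, prev_offset: float) -> float:
    """Nudge the commanded end-effector offset so the sensed force tracks the demonstrated force."""
    error = target_force - estimated_force(dot_displacement_mm)
    return prev_offset + KP * error

def sts_mode(in_contact: bool) -> str:
    """Use visual sensing before contact and tactile sensing during contact.

    In the paper the switching moment is chosen by the learned policy; here a
    simple contact flag stands in for that decision.
    """
    return "tactile" if in_contact else "visual"
```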