
Collaborating Authors: Szafir, Daniel


ReBot: Scaling Robot Learning with Real-to-Sim-to-Real Robotic Video Synthesis

arXiv.org Artificial Intelligence

Vision-language-action (VLA) models present a promising paradigm by training policies directly on real robot datasets like Open X-Embodiment. However, the high cost of real-world data collection hinders further data scaling, thereby restricting the generalizability of VLAs. In this paper, we introduce ReBot, a novel real-to-sim-to-real approach for scaling real robot datasets and adapting VLA models to target domains, addressing the last-mile deployment challenge in robot manipulation. Specifically, ReBot replays real-world robot trajectories in simulation to diversify manipulated objects (real-to-sim), and integrates the simulated movements with inpainted real-world backgrounds to synthesize physically realistic and temporally consistent robot videos (sim-to-real). Our approach has several advantages: 1) it enjoys the benefit of real data to minimize the sim-to-real gap; 2) it leverages the scalability of simulation; and 3) it can generalize a pretrained VLA to a target domain with fully automated data pipelines. Extensive experiments in both simulation and real-world environments show that ReBot significantly enhances the performance and robustness of VLAs. For example, in SimplerEnv with the WidowX robot, ReBot improved the in-domain performance of Octo by 7.2% and of OpenVLA by 21.8%, and their out-of-domain generalization by 19.9% and 9.4%, respectively. In real-world evaluation with a Franka robot, ReBot increased the success rates of Octo by 17% and OpenVLA by 20%. More information can be found at: https://yuffish.github.io/rebot/
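
As a rough illustration of the sim-to-real half of such a pipeline, the sketch below composites a rendered simulation frame over an inpainted real-world background using the robot's segmentation mask. The function name, array shapes, and toy data are assumptions for illustration only, not ReBot's actual implementation.

# Illustrative sketch (not the authors' code) of the sim-to-real compositing idea:
# a simulated robot render is pasted over an inpainted real-world background
# using the robot's segmentation mask, frame by frame.
import numpy as np

def composite_frame(sim_render: np.ndarray,
                    robot_mask: np.ndarray,
                    inpainted_background: np.ndarray) -> np.ndarray:
    """Blend a simulated robot frame onto a real (inpainted) background.

    sim_render:           (H, W, 3) rendered simulation frame
    robot_mask:           (H, W) boolean mask of robot/object pixels in the render
    inpainted_background: (H, W, 3) real camera frame with the robot removed
    """
    mask = robot_mask[..., None].astype(sim_render.dtype)
    return mask * sim_render + (1.0 - mask) * inpainted_background

# Toy usage with random arrays standing in for real frames.
H, W = 256, 256
sim = np.random.rand(H, W, 3)
mask = np.zeros((H, W), dtype=bool)
mask[100:180, 90:170] = True
background = np.random.rand(H, W, 3)
frame = composite_frame(sim, mask, background)
print(frame.shape)  # (256, 256, 3)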


BOSS: Benchmark for Observation Space Shift in Long-Horizon Task

arXiv.org Artificial Intelligence

Robotics has long sought to develop visual-servoing robots capable of completing previously unseen long-horizon tasks. Hierarchical approaches offer a pathway toward this goal by executing skill combinations arranged by a task planner, with each visuomotor skill pre-trained using a specific imitation learning (IL) algorithm. However, even in simple long-horizon tasks like skill chaining, hierarchical approaches often struggle due to a problem we identify as Observation Space Shift (OSS), where the sequential execution of preceding skills causes shifts in the observation space, disrupting the performance of subsequent, individually trained skill policies. To validate OSS and evaluate its impact on long-horizon tasks, we introduce BOSS (a Benchmark for Observation Space Shift). BOSS comprises three distinct challenges: "Single Predicate Shift", "Accumulated Predicate Shift", and "Skill Chaining", each designed to assess a different aspect of OSS's negative effect. We evaluated several recent popular IL algorithms on BOSS, including three Behavioral Cloning methods and the vision-language-action model OpenVLA. Even on the simplest challenge, we observed average performance drops of 67%, 35%, 34%, and 54%, respectively, when comparing skill performance with and without OSS. We also investigate a potential mitigation that scales up the training data for each skill with a larger and more visually diverse set of demonstrations, but our results show that this is not sufficient to resolve OSS. The project page is: https://boss-benchmark.github.io/
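
A minimal sketch of how an OSS-induced drop could be quantified is given below: a skill's success rate from its training-time observation distribution is compared against its success rate from observations produced by executing preceding skills. The helper names and rollout outcomes are made-up placeholders, not the benchmark's code or results.

# Sketch (assumptions, not the benchmark's code) of quantifying an OSS-induced
# performance drop from two sets of rollout outcomes.

def success_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def oss_performance_drop(success_without_shift, success_with_shift):
    """Relative drop in success rate attributable to observation space shift."""
    base = success_rate(success_without_shift)
    shifted = success_rate(success_with_shift)
    return (base - shifted) / base if base > 0 else 0.0

# Hypothetical rollout outcomes (True = task success).
clean = [True] * 18 + [False] * 2       # 90% success without OSS
shifted = [True] * 8 + [False] * 12     # 40% success with OSS
print(f"relative drop: {oss_performance_drop(clean, shifted):.0%}")  # 56%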


ARCADE: Scalable Demonstration Collection and Generation via Augmented Reality for Imitation Learning

arXiv.org Artificial Intelligence

Robot Imitation Learning (IL) is a crucial technique in robot learning, where agents learn by mimicking human demonstrations. However, IL encounters scalability challenges stemming from both non-user-friendly demonstration collection methods and the extensive time required to amass a sufficient number of demonstrations for effective training. In response, we introduce the Augmented Reality for Collection and generAtion of DEmonstrations (ARCADE) framework, designed to scale up demonstration collection for robot manipulation tasks. Our framework combines two key capabilities: 1) it leverages AR to make demonstration collection as simple as users performing daily tasks using their hands, and 2) it enables the automatic generation of additional synthetic demonstrations from a single human-derived demonstration, significantly reducing user effort and time. We assess ARCADE's performance on a real Fetch robot across three robotics tasks: 3-Waypoints-Reach, Push, and Pick-And-Place. Using our framework, we were able to rapidly train a policy using vanilla Behavioral Cloning (BC), a classic IL algorithm, which excelled across these three tasks. We also deploy ARCADE on a real household task, Pouring-Water, achieving an 80% success rate.
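
The sketch below illustrates one simple way additional synthetic demonstrations could be derived from a single recorded one: rigidly retargeting the demonstrated end-effector waypoints to perturbed object poses. The function, parameters, and toy demo are illustrative assumptions, not the ARCADE implementation.

# Illustrative sketch of generating extra synthetic demonstrations from one
# human demonstration by retargeting recorded waypoints to perturbed object poses.
import numpy as np

def generate_synthetic_demos(waypoints: np.ndarray,
                             object_pos: np.ndarray,
                             n_demos: int = 10,
                             noise_m: float = 0.03,
                             seed: int = 0) -> list:
    """waypoints: (T, 3) end-effector positions from one recorded demo.
    object_pos: (3,) object position in that demo.
    Returns n_demos translated copies, each targeting a perturbed object pose."""
    rng = np.random.default_rng(seed)
    demos = []
    for _ in range(n_demos):
        new_object = object_pos + rng.uniform(-noise_m, noise_m, size=3)
        demos.append(waypoints + (new_object - object_pos))  # rigid translation
    return demos

demo = np.linspace([0.3, 0.0, 0.4], [0.5, 0.1, 0.1], num=20)  # toy reach demo
synthetic = generate_synthetic_demos(demo, object_pos=np.array([0.5, 0.1, 0.1]))
print(len(synthetic), synthetic[0].shape)  # 10 (20, 3)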


Augmented Reality Demonstrations for Scalable Robot Imitation Learning

arXiv.org Artificial Intelligence

Robot Imitation Learning (IL) is a widely used method for training robots to perform manipulation tasks that involve mimicking human demonstrations to acquire skills. However, its practicality has been limited due to its requirement that users be trained in operating real robot arms to provide demonstrations. This paper presents an innovative solution: an Augmented Reality (AR)-assisted framework for demonstration collection, empowering non-roboticist users to produce demonstrations for robot IL using devices like the HoloLens 2. Our framework facilitates scalable and diverse demonstration collection for real-world tasks. We validate our approach with experiments on three classical robotics tasks: reach, push, and pick-and-place. The real robot performs each task successfully while replaying demonstrations collected via AR.
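
For a concrete sense of the bookkeeping behind AR-based collection, the sketch below maps hand positions tracked in the headset frame into the robot's base frame with a fixed calibration transform, yielding end-effector waypoints for imitation learning. The transform, names, and toy data are assumptions for illustration, not the paper's pipeline.

# Sketch (an assumption for illustration) of converting AR-tracked hand positions
# into robot base-frame waypoints via a homogeneous calibration transform.
import numpy as np

def to_robot_frame(hand_positions_headset: np.ndarray,
                   T_robot_from_headset: np.ndarray) -> np.ndarray:
    """hand_positions_headset: (T, 3) positions tracked by the AR headset.
    T_robot_from_headset: (4, 4) homogeneous calibration transform.
    Returns (T, 3) positions expressed in the robot base frame."""
    homogeneous = np.hstack([hand_positions_headset,
                             np.ones((len(hand_positions_headset), 1))])
    return (T_robot_from_headset @ homogeneous.T).T[:, :3]

# Toy calibration: headset origin 1 m in front of the robot, axes aligned.
T = np.eye(4)
T[:3, 3] = [1.0, 0.0, 0.0]
hand_track = np.array([[0.0, 0.0, 0.20], [0.1, 0.0, 0.15], [0.2, 0.0, 0.10]])
print(to_robot_frame(hand_track, T))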


Paper index: Designing an introductory HRI course (workshop at HRI 2024)

arXiv.org Artificial Intelligence

Human-robot interaction is now an established discipline. Dozens of HRI courses exist at universities worldwide, and some institutions even offer degrees in HRI. However, although many students are being taught HRI, there is no agreed-upon curriculum for an introductory HRI course. In this workshop, we aimed to reach community consensus on what should be covered in such a course. Through interactive activities like panels, breakout discussions, and syllabus design, workshop participants explored the many topics and pedagogical approaches for teaching HRI. This collection of articles submitted to the workshop provides examples of HRI courses being offered worldwide.


Generating Executable Action Plans with Environmentally-Aware Language Models

arXiv.org Artificial Intelligence

Large Language Models (LLMs) trained on massive text datasets have recently shown promise in generating action plans for robotic agents from high-level text queries. However, these models typically do not consider the robot's environment, so the generated plans may not actually be executable due to ambiguities in the planned actions or environmental constraints. In this paper, we propose an approach to generate environmentally-aware action plans that agents are better able to execute. Our approach integrates environmental objects and object relations as additional inputs into LLM action plan generation to provide the system with an awareness of its surroundings, resulting in plans where each generated action is mapped to objects present in the scene. We also design a novel scoring function that, along with generating the action steps and associating them with objects, helps the system disambiguate among object instances and take their states into account. We evaluated our approach using the VirtualHome simulator and the ActivityPrograms knowledge base and found that action plans generated by our system had a 310% improvement in executability and a 147% improvement in correctness over prior work. The complete code and a demo of our method are publicly available at https://github.com/hri-ironlab/scene_aware_language_planner.
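
The sketch below conveys the flavor of scoring candidate grounded actions against the scene: a language-model preference term is combined with a grounding term that checks whether the target object exists and is in the required state. The weights, scene representation, and toy example are assumptions for illustration, not the paper's actual scoring function.

# Hedged sketch of scene-grounded action scoring; names and weights are assumed.

def score_grounded_action(llm_logprob: float,
                          target_object: str,
                          required_state: str,
                          scene: dict,
                          w_lm: float = 1.0,
                          w_scene: float = 1.0) -> float:
    """Combine the language model's preference for an action step with a
    scene-grounding term checking that the target object exists and is in the
    required state (e.g. 'closed' before 'open fridge')."""
    objects = scene["objects"]  # {name: {"state": ...}}
    in_scene = 1.0 if target_object in objects else 0.0
    state_ok = 1.0 if in_scene and objects[target_object]["state"] == required_state else 0.0
    return w_lm * llm_logprob + w_scene * (in_scene + state_ok)

scene = {"objects": {"fridge": {"state": "closed"}, "cup": {"state": "on_table"}}}
print(score_grounded_action(-1.2, "fridge", "closed", scene))     # grounded, valid state
print(score_grounded_action(-0.8, "microwave", "closed", scene))  # not in scene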


Dynamic Competency Self-Assessment for Autonomous Agents

arXiv.org Artificial Intelligence

As autonomous robots are deployed in increasingly complex environments, platform degradation, environmental uncertainties, and deviations from validated operation conditions can make it difficult for human partners to understand robot capabilities and limitations. The ability for a robot to self-assess its competency in dynamic and uncertain environments will be a crucial next step in successful human-robot teaming. This work presents and evaluates an Event-Triggered Generalized Outcome Assessment (ET-GOA) algorithm for autonomous agents to dynamically assess task confidence during execution. The algorithm uses a fast online statistical test of the agent's observations and its model predictions to decide when competency assessment is needed. We provide experimental results using ET-GOA to generate competency reports during a simulated delivery task and suggest future research directions for self-assessing agents.
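
The sketch below illustrates the event-triggering idea in isolation: a cheap online check of how surprising the latest observation is under the model's predictive samples, which would trigger a full outcome assessment only when the surprise crosses a threshold. The test, threshold, and toy data are assumptions for illustration, not the exact ET-GOA procedure.

# Minimal sketch of event-triggered assessment: reassess competency only when an
# observation is unlikely under the model's predictions (assumed test/threshold).
import numpy as np

def should_reassess(predicted_samples: np.ndarray,
                    observation: float,
                    threshold: float = 0.05) -> bool:
    """predicted_samples: 1-D array of model-predicted values for a state variable.
    Trigger reassessment if the observation falls in the outer tails of the
    predictive distribution (two-sided empirical tail probability < threshold)."""
    tail = min(np.mean(predicted_samples <= observation),
               np.mean(predicted_samples >= observation))
    return 2.0 * tail < threshold

rng = np.random.default_rng(1)
preds = rng.normal(loc=10.0, scale=1.0, size=500)  # model's predicted battery level
print(should_reassess(preds, observation=10.3))    # False: consistent with model
print(should_reassess(preds, observation=6.0))     # True: surprising, reassess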


Virtual-to-Real-World Transfer Learning for Robots on Wilderness Trails

arXiv.org Machine Learning

Robots hold promise in many scenarios involving outdoor use, such as search-and-rescue, wildlife management, and collecting data to improve environment, climate, and weather forecasting. However, autonomous navigation of outdoor trails remains a challenging problem. Recent work has sought to address this issue using deep learning. Although this approach has achieved state-of-the-art results, the deep learning paradigm may be limited due to a reliance on large amounts of annotated training data. Collecting and curating training datasets may not be feasible or practical in many situations, especially as trail conditions may change due to seasonal weather variations, storms, and natural erosion. In this paper, we explore an approach to address this issue through virtual-to-real-world transfer learning using a variety of deep learning models trained to classify the direction of a trail in an image. Our approach utilizes synthetic data gathered from virtual environments for model training, bypassing the need to collect a large amount of real images of the outdoors. We validate our approach in three main ways. First, we demonstrate that our models achieve classification accuracies upwards of 95% on our synthetic data set. Next, we utilize our classification models in the control system of a simulated robot to demonstrate feasibility. Finally, we evaluate our models on real-world trail data and demonstrate the potential of virtual-to-real-world transfer learning.
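
As a rough sketch of the classification setup, the snippet below defines a small CNN that labels an image with a trail direction (left, center, or right) and runs it on a random batch standing in for synthetic frames; training on synthetic data and then evaluating on real trail images would follow the usual supervised loop. The architecture and three-way labeling are illustrative assumptions, not the paper's models.

# Hedged sketch of a trail-direction classifier; layer sizes are assumptions.
import torch
import torch.nn as nn

class TrailDirectionNet(nn.Module):
    def __init__(self, n_classes: int = 3):  # left, center, right
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = TrailDirectionNet()
synthetic_batch = torch.rand(8, 3, 128, 128)  # stands in for rendered frames
logits = model(synthetic_batch)
print(logits.shape)  # torch.Size([8, 3])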


The 1st International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction

AI Magazine

The 1st International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) was held in 2018 in conjunction with the 13th International Conference on Human-Robot Interaction, and brought together researchers from the fields of Human-Robot Interaction (HRI), Robotics, Artificial Intelligence, and Virtual, Augmented, and Mixed Reality in order to identify challenges in mixed reality interactions between humans and robots. This inaugural workshop featured a keynote talk from Blair MacIntyre (Mozilla, Georgia Tech), a panel discussion, and twenty-nine papers presented as lightning talks and/or posters. In this report, we briefly survey the papers presented at the workshop and outline some potential directions for the community.