Absolute Policy Optimization
Zhao, Weiye, Li, Feihan, Sun, Yifan, Chen, Rui, Wei, Tianhao, Liu, Changliu
In recent years, trust-region on-policy reinforcement learning has achieved impressive results in addressing complex control tasks and gaming scenarios. However, contemporary state-of-the-art algorithms in this category primarily emphasize improvement in expected performance and lack the ability to control worst-case performance outcomes. To address this limitation, we introduce a novel objective function; optimizing it guarantees monotonic improvement in the lower bound of near-total performance samples (absolute performance). Building on this theoretical advancement, we refine the theoretically grounded algorithm through a series of approximations, resulting in a practical solution called Absolute Policy Optimization (APO). Our experiments demonstrate the effectiveness of our approach across challenging continuous control benchmark tasks and extend its applicability to mastering Atari games. Our findings reveal that APO significantly outperforms state-of-the-art policy gradient algorithms, yielding substantial improvements in both expected performance and worst-case performance.
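As a rough illustration of the core idea, the sketch below estimates an "absolute performance" proxy as the mean of sampled episode returns minus a multiple of their standard deviation, and accepts a policy update only if both the mean and this lower-bound proxy improve. The exact objective and bound derived in the paper differ; all names here (returns, kappa) are illustrative.

    import numpy as np

    def absolute_performance(returns, kappa=2.0):
        """Proxy for the lower bound of near-total performance samples:
        mean minus kappa standard deviations of per-episode returns.
        (Illustrative only; the paper derives its own bound.)"""
        returns = np.asarray(returns, dtype=float)
        return returns.mean() - kappa * returns.std(ddof=1)

    def accept_update(old_returns, new_returns, kappa=2.0):
        """Accept a candidate policy only if both expected performance and the
        absolute-performance proxy improve, mimicking monotonic improvement
        in the lower bound rather than in the mean alone."""
        return (np.mean(new_returns) >= np.mean(old_returns)
                and absolute_performance(new_returns, kappa)
                    >= absolute_performance(old_returns, kappa))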
Multi-Agent Strategy Explanations for Human-Robot Collaboration
Pandya, Ravi, Zhao, Michelle, Liu, Changliu, Simmons, Reid, Admoni, Henny
As robots are deployed in human spaces, it is important that they can coordinate their actions with the people around them. Part of such coordination involves ensuring that people have a good understanding of how a robot will act in the environment. This can be achieved through explanations of the robot's policy. Much prior work in explainable AI and RL focuses on generating explanations for single-agent policies, but little work has explored explanations for collaborative policies. In this work, we investigate how to generate multi-agent strategy explanations for human-robot collaboration. We formulate the problem using a generic multi-agent planner, show how to generate visual explanations through strategy-conditioned landmark states, and generate textual explanations by giving the landmarks to an LLM. Through a user study, we find that when presented with explanations from our proposed framework, users are able to better explore the full space of strategies and collaborate more efficiently with new robot partners.
Multimodal Safe Control for Human-Robot Interaction
Pandya, Ravi, Wei, Tianhao, Liu, Changliu
Generating safe behaviors for autonomous systems is important as they continue to be deployed in the real world, especially around people. In this work, we focus on developing a novel safe controller for systems where there are multiple sources of uncertainty. We formulate a novel multimodal safe control method, called the Multimodal Safe Set Algorithm (MMSSA), for the case where the agent has uncertainty over which discrete mode the system is in, and each mode itself contains additional uncertainty. To our knowledge, this is the first energy-function-based safe control method applied to systems with multimodal uncertainty. We apply our controller to a simulated human-robot interaction where the robot is uncertain of the human's true intention and each potential intention has its own additional uncertainty associated with it, since the human is not a perfectly rational actor. We compare our proposed safe controller to existing safe control methods and find that it does not impede system performance (i.e., efficiency) while also improving the safety of the system.
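A minimal sketch of an energy-function-based safe control step under multiple uncertainty modes is given below, written as a quadratic program that projects a reference control onto the set of controls that decrease a safety index phi in every hypothesized mode. This is a deliberately conservative simplification, not the MMSSA itself; all inputs (grad_phi, f_modes, g_modes, margins, u_max) are placeholders.

    import numpy as np
    import cvxpy as cp

    def multimodal_safe_control(u_ref, grad_phi, f_modes, g_modes, margins, u_max):
        """Project a reference control u_ref onto the controls that decrease the
        safety index phi under every hypothesized mode k:
            grad_phi @ (f_k + g_k @ u) <= -margin_k,   |u| <= u_max.
        Deliberately conservative; all inputs are illustrative placeholders."""
        u = cp.Variable(len(u_ref))
        constraints = [cp.abs(u) <= u_max]
        for f_k, g_k, eta_k in zip(f_modes, g_modes, margins):
            constraints.append(grad_phi @ (f_k + g_k @ u) <= -eta_k)
        problem = cp.Problem(cp.Minimize(cp.sum_squares(u - u_ref)), constraints)
        problem.solve()
        return u.value  # None when no safe control exists within the limits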
Towards Proactive Safe Human-Robot Collaborations via Data-Efficient Conditional Behavior Prediction
Pandya, Ravi, Wang, Zhuoyuan, Nakahira, Yorie, Liu, Changliu
We focus on the problem of enabling a robot to collaborate seamlessly with a human partner, specifically in scenarios like collaborative manufacturing where preexisting data is sparse. Much prior work in human-robot collaboration uses observational models of humans (i.e., models that treat the robot purely as an observer) to choose the robot's behavior, but such models do not account for the influence the robot has on the human's actions, which may lead to inefficient interactions. We instead formulate the problem of optimally choosing a collaborative robot's behavior based on a conditional model of the human that depends on the robot's future behavior. First, we propose a novel model-based formulation of conditional behavior prediction that allows the robot to infer the human's intentions based on its future plan in data-sparse environments. We then show how to utilize a conditional model for proactive goal selection and path generation around human collaborators. Finally, we use our proposed proactive controller in a collaborative task with real users to show that it can improve users' interactions with a robot collaborator quantitatively and qualitatively.
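The following sketch illustrates one common way to condition goal inference on a candidate robot plan, using a Boltzmann-rational human model and Bayes' rule; it is an assumption-laden stand-in rather than the paper's formulation, and all names (human_cost, robot_cost, beta) are hypothetical.

    import numpy as np

    def goal_posterior(human_cost, prior, beta=1.0):
        """Posterior over the human's goals given a candidate robot plan.
        human_cost[g]: cost the (model-based) human incurs for goal g if the robot
        executes that plan; lower cost -> higher likelihood under a
        Boltzmann-rational human model. All names are hypothetical."""
        logits = -beta * np.asarray(human_cost, dtype=float)
        likelihood = np.exp(logits - logits.max())        # numerically stable softmax
        posterior = likelihood * np.asarray(prior, dtype=float)
        return posterior / posterior.sum()

    def choose_plan(plans, prior, robot_cost, human_cost, beta=1.0):
        """Proactive plan selection: prefer the robot plan whose induced interaction
        (robot cost plus expected human cost under the conditional model) is cheapest."""
        scores = []
        for p in plans:
            post = goal_posterior(human_cost[p], prior, beta)
            scores.append(robot_cost[p] + post @ np.asarray(human_cost[p], dtype=float))
        return plans[int(np.argmin(scores))]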
Learning Predictive Safety Filter via Decomposition of Robust Invariant Set
Li, Zeyang, Hu, Chuxiong, Zhao, Weiye, Liu, Changliu
Ensuring safety of nonlinear systems under model uncertainty and external disturbances is crucial, especially for real-world control tasks. Predictive methods such as robust model predictive control (RMPC) require solving nonconvex optimization problems online, which leads to a high computational burden and poor scalability. Reinforcement learning (RL) works well with complex systems, but at the price of losing rigorous safety guarantees. This paper presents a theoretical framework that bridges the advantages of both RMPC and RL to synthesize safety filters for nonlinear systems with state- and action-dependent uncertainty. We decompose the robust invariant set (RIS) into two parts: a target set that aligns with the terminal region design of RMPC, and a reach-avoid set that accounts for the rest of the RIS. We propose a policy iteration approach for robust reach-avoid problems and establish its monotone convergence. This method sets the stage for an adversarial actor-critic deep RL algorithm, which simultaneously synthesizes a reach-avoid policy network, a disturbance policy network, and a reach-avoid value network. The learned reach-avoid policy network is utilized to generate nominal trajectories for online verification, which filters potentially unsafe actions that may drive the system into unsafe regions when worst-case disturbances are applied. We formulate a second-order cone programming (SOCP) approach for online verification using system level synthesis, which optimizes the worst-case reach-avoid value over all possible trajectories. The proposed safety filter requires much lower computational complexity than RMPC and still enjoys persistent robust safety guarantees. The effectiveness of our method is illustrated through a numerical example.
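A minimal sketch of the filtering step, assuming a learned reach-avoid value function and backup policy are available: the candidate task action is kept only if a sampled worst-case rollout keeps the reach-avoid value nonnegative, and is otherwise replaced by the backup action. The paper performs this verification with an SOCP via system level synthesis; the sampled check and all function handles below are illustrative placeholders.

    import numpy as np

    def safety_filter(x, u_task, reach_avoid_value, reach_avoid_policy,
                      step, disturbance_samples, horizon=10):
        """Keep the task policy's action u_task only if, under the candidate first
        action and the backup reach-avoid policy thereafter, the learned reach-avoid
        value stays nonnegative for every sampled disturbance sequence; otherwise
        fall back to the backup action. All function handles are placeholders, and
        this sampled check stands in for the paper's SOCP verification."""
        worst = np.inf
        for d_seq in disturbance_samples:                 # each d_seq has `horizon` entries
            x_k, u_k = x, u_task
            for k in range(horizon):
                x_k = step(x_k, u_k, d_seq[k])            # nominal dynamics + disturbance
                worst = min(worst, reach_avoid_value(x_k))
                u_k = reach_avoid_policy(x_k)             # switch to the backup policy
        return u_task if worst >= 0.0 else reach_avoid_policy(x)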
Probabilistic Safeguard for Reinforcement Learning Using Safety Index Guided Gaussian Process Models
Zhao, Weiye, He, Tairan, Liu, Changliu
Safety is one of the biggest concerns in applying reinforcement learning (RL) to the physical world. At its core, the challenge is to ensure that RL agents persistently satisfy a hard state constraint without white-box or black-box dynamics models. This paper presents an integrated model learning and safe control framework to safeguard any agent whose dynamics are learned as Gaussian processes. The proposed theory provides (i) a novel method to construct an offline dataset for model learning that best achieves safety requirements; (ii) a parameterization rule for the safety index to ensure the existence of safe control; (iii) a safety guarantee in terms of probabilistic forward invariance when the model is learned using the aforementioned dataset. Simulation results show that our framework guarantees almost zero safety violations on various continuous control tasks.
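As a toy illustration of combining a learned Gaussian process model with a safety-index condition, the sketch below fits a GP to synthetic dynamics data with scikit-learn and enforces the decrease condition with a confidence margin taken from the predictive standard deviation. The paper's dataset-construction and parameterization rules are not reproduced here; all constants are placeholders.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Toy dataset of (state, control) features and one dynamics component to learn.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(50, 3))
    y_train = X_train @ np.array([0.5, -0.2, 1.0]) + 0.01 * rng.normal(size=50)

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X_train, y_train)

    def decrease_condition_holds(xu, grad_phi_component, eta=0.1, k_sigma=2.0):
        """Chance-constrained safety-index decrease condition: require the predicted
        contribution to phi-dot, inflated by k_sigma predictive standard deviations,
        to stay below -eta. grad_phi_component, eta, and k_sigma are placeholders."""
        mean, std = gp.predict(np.asarray(xu).reshape(1, -1), return_std=True)
        phi_dot = grad_phi_component * mean[0]
        return phi_dot + k_sigma * abs(grad_phi_component) * std[0] <= -eta

    print(decrease_condition_holds(np.array([0.2, -0.1, 0.3]), grad_phi_component=-1.0))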
Safety Index Synthesis via Sum-of-Squares Programming
Zhao, Weiye, He, Tairan, Wei, Tianhao, Liu, Simin, Liu, Changliu
Control systems often need to satisfy strict safety requirements. A safety index provides a handy way to evaluate the safety level of the system and derive the resulting safe control policies. However, designing safety index functions under control limits is difficult and requires a great amount of expert knowledge. This paper proposes a framework for synthesizing the safety index for general control systems using sum-of-squares programming. Our approach is to show that ensuring the non-emptiness of safe control on the safe set boundary is equivalent to a local manifold positiveness problem. We then prove that this problem is equivalent to sum-of-squares programming via the Positivstellensatz of algebraic geometry. We validate the proposed method on robot arms with different degrees of freedom and ground vehicles. The results show that the synthesized safety index guarantees safety and our method is effective even in high-dimensional robot systems.
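A toy illustration of the certificate machinery that sum-of-squares programming relies on (not the paper's Positivstellensatz construction for the safety index): a polynomial is certified nonnegative by exhibiting a positive semidefinite Gram matrix Q with p(x) = z(x)^T Q z(x) over a monomial basis z(x).

    import numpy as np

    # p(x) = x^4 + 2 x^2 + 1 is certified nonnegative by a positive semidefinite
    # Gram matrix Q with p(x) = z(x)^T Q z(x), monomial basis z(x) = [1, x, x^2].
    Q = np.array([[1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 1.0]])
    assert np.all(np.linalg.eigvalsh(Q) >= 0)      # Q is PSD, so p is a sum of squares

    def p_via_gram(x):
        z = np.array([1.0, x, x ** 2])
        return z @ Q @ z

    x = 0.7
    assert np.isclose(p_via_gram(x), x ** 4 + 2 * x ** 2 + 1)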
An Optical Control Environment for Benchmarking Reinforcement Learning Algorithms
Abuduweili, Abulikemu, Liu, Changliu
Deep reinforcement learning has the potential to address various scientific problems. In this paper, we implement an optics simulation environment for reinforcement-learning-based controllers. The environment captures the essence of nonconvexity, nonlinearity, and time-dependent noise inherent in optical systems, offering a more realistic setting. We then provide benchmark results for several reinforcement learning algorithms on the proposed simulation environment. The experimental findings demonstrate the superiority of off-policy reinforcement learning approaches over traditional control algorithms in navigating the intricacies of complex optical control environments. The code of the paper is available at https://github.com/Walleclipse/Reinforcement-Learning-Pulse-Stacking.
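A minimal benchmarking loop of the kind described above, using an off-policy algorithm (SAC from stable-baselines3) on a gym-style environment; Pendulum-v1 is used as a stand-in since the optical environment's actual class and registration name are defined in the linked repository.

    import gymnasium as gym
    from stable_baselines3 import SAC

    # Pendulum-v1 stands in for the optical control environment; swap in the
    # environment provided by the linked repository to reproduce the benchmark.
    env = gym.make("Pendulum-v1")
    model = SAC("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)

    obs, _ = env.reset()
    for _ in range(200):
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = env.step(action)
        if terminated or truncated:
            obs, _ = env.reset()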
Robust Safe Control with Multi-Modal Uncertainty
Wei, Tianhao, Ma, Liqian, Pandya, Ravi, Liu, Changliu
Safety in dynamic systems with prevalent uncertainties is crucial. Current robust safe controllers, designed primarily for uni-modal uncertainties, may be either overly conservative or unsafe when handling multi-modal uncertainties. To address the problem, we introduce a novel framework for robust safe control, tailored to accommodate multi-modal Gaussian dynamics uncertainties and control limits. We first present an innovative method for deriving the least conservative robust safe control under additive multi-modal uncertainties. Next, we propose a strategy to identify a locally least-conservative robust safe control under multiplicative uncertainties. Following these, we introduce a unique safety index synthesis method. This provides the foundation for a robust safe controller that ensures a high probability of realizability under control limits and multi-modal uncertainties. Experiments on a simulated Segway validate our approach, showing consistent realizability and less conservatism than controllers designed using uni-modal uncertainty methods. The framework offers significant potential for enhancing safety and performance in robotic applications.
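Complementing the MMSSA sketch above, the snippet below illustrates mode-wise constraint tightening for a single-input system under additive Gaussian uncertainty: each mode's safety-index decrease constraint is tightened by a multiple of that mode's noise standard deviation and the resulting control bounds are intersected. This is a conservative illustration only; the paper derives the least-conservative robust safe control and also handles multiplicative uncertainty.

    import numpy as np

    def robust_control_upper_bound(modes, grad_phi, eta=0.1, k_sigma=2.0):
        """For a single-input system, intersect the mode-wise safe control sets
        obtained by tightening each mode's safety-index decrease constraint
            grad_phi * (f_mean + g * u) + k_sigma * |grad_phi| * f_std <= -eta
        by k_sigma standard deviations of that mode's additive noise.
        Assumes grad_phi * g > 0 for every mode; all inputs are placeholders."""
        u_upper = np.inf
        for f_mean, f_std, g in modes:
            bound = (-eta - grad_phi * f_mean - k_sigma * abs(grad_phi) * f_std) / (grad_phi * g)
            u_upper = min(u_upper, bound)
        return u_upper  # any control below this value satisfies all tightened constraints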
Simulation-aided Learning from Demonstration for Robotic LEGO Construction
Liu, Ruixuan, Chen, Alan, Luo, Xusheng, Liu, Changliu
Recent advancements in manufacturing have created a growing demand for fast, automatic prototyping (i.e., assembly and disassembly) capabilities to meet users' needs. This paper studies automatic rapid LEGO prototyping, which aims to construct target LEGO objects that satisfy individual customization needs and allow users to freely construct their novel designs. A construction plan is needed in order to automatically construct the user-specified LEGO design. However, a freely designed LEGO object might not have an existing construction plan, and generating one requires non-trivial effort since it must account for numerous constraints (e.g., object shape, colors, stability, etc.). In addition, programming the prototyping skill for the robot requires expert programming skills, which puts the task beyond the reach of the general public. To address these challenges, this paper presents a simulation-aided learning from demonstration (SaLfD) framework for easily deploying LEGO prototyping capability to robots. In particular, the user demonstrates constructing the customized novel LEGO object. The robot extracts the task information by observing the human operation and generates the construction plan. A simulation is developed to verify the correctness of the learned construction plan and the resulting LEGO prototype. The proposed system is deployed to a FANUC LR-mate 200id/7L robot. Experiments demonstrate that the proposed SaLfD framework can effectively correct and learn the prototyping (i.e., assembly and disassembly) tasks from human demonstrations, and the learned tasks are successfully executed by the FANUC robot.