
Collaborating Authors: Chen, Weiqin


PIANIST: Learning Partially Observable World Models with LLMs for Multi-Agent Decision Making

arXiv.org Artificial Intelligence

Effective extraction of the world knowledge in LLMs for complex decision-making tasks remains a challenge. We propose PIANIST, a framework for decomposing the world model into seven intuitive components conducive to zero-shot LLM generation. Given only a natural language description of the game and the format of input observations, our method generates a working world model for fast and efficient MCTS simulation. We show that our method works well on two different games that challenge the agent's planning and decision-making skills, covering both language-based and non-language-based action taking, without any domain-specific training data or an explicitly defined world model.
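The abstract does not spell out the seven components, so the sketch below is only an illustration of what an MCTS-ready, partially observable world-model interface generated by an LLM might expose; the method names and signatures are assumptions, not PIANIST's actual decomposition.

```python
# Illustrative sketch (assumptions, not PIANIST's exact decomposition) of a
# partially observable world-model interface that an MCTS planner can call.
# Each method is one piece an LLM could be prompted to generate.
import random
from typing import Any, List


class WorldModel:
    def players(self) -> List[int]:                          # who acts in the game
        raise NotImplementedError

    def initial_state(self) -> Any:                          # starting state
        raise NotImplementedError

    def legal_actions(self, state: Any, player: int) -> List[Any]:
        raise NotImplementedError

    def transition(self, state: Any, action: Any) -> Any:    # next state after an action
        raise NotImplementedError

    def observation(self, state: Any, player: int) -> Any:   # partial view available to a player
        raise NotImplementedError

    def reward(self, state: Any, player: int) -> float:      # per-step or terminal reward
        raise NotImplementedError

    def is_terminal(self, state: Any) -> bool:               # end-of-game test
        raise NotImplementedError


def random_rollout(model: WorldModel, state: Any, player: int, horizon: int = 50) -> float:
    """Monte Carlo rollout of the kind used inside MCTS simulation."""
    total = 0.0
    for _ in range(horizon):
        if model.is_terminal(state):
            break
        action = random.choice(model.legal_actions(state, player))
        state = model.transition(state, action)
        total += model.reward(state, player)
    return total
```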


SAD: State-Action Distillation for In-Context Reinforcement Learning under Random Policies

arXiv.org Artificial Intelligence

Pretrained foundation models (FMs) have exhibited extraordinary in-context learning performance, allowing zero-shot (or few-shot) generalization to new environments/tasks not encountered during pretraining. In the case of reinforcement learning (RL), in-context RL (ICRL) emerges when pretraining FMs on decision-making problems in an autoregressive supervised manner. Nevertheless, the current state-of-the-art ICRL algorithms, such as Algorithm Distillation, Decision Pretrained Transformer, and Decision Importance Transformer, impose stringent requirements on the pretraining dataset concerning the behavior (source) policies, context information, and action labels. Notably, these algorithms either demand optimal policies or require varying degrees of well-trained behavior policies for all pretraining environments. This significantly hinders the application of ICRL to real-world scenarios, where acquiring optimal or well-trained policies for a substantial volume of real-world training environments can be prohibitively expensive or even intractable. To overcome this challenge, we introduce a novel approach, termed State-Action Distillation (SAD), that generates an effective pretraining dataset guided solely by random policies. In particular, SAD selects query states and corresponding action labels by distilling outstanding state-action pairs from the entire state and action spaces using random policies within a trust horizon, and then inherits the classical autoregressive supervised mechanism during pretraining. To the best of our knowledge, this is the first work that enables effective ICRL under (e.g., uniform) random policies and random contexts. We also establish a quantitative analysis of the trustworthiness as well as the performance guarantees of our SAD approach. Moreover, our empirical results across multiple popular ICRL benchmark environments demonstrate that, on average, SAD outperforms the best baseline by 236.3% in the offline evaluation and by 135.2% in the online evaluation.
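Reading the abstract literally, SAD labels each query state with the action that looks best under random-policy rollouts truncated at a trust horizon. The sketch below is a hedged reconstruction of that idea for a generic discrete-action environment; the scoring rule, function names, and `env_step` interface are assumptions, not the authors' implementation.

```python
# Hedged sketch of the idea behind State-Action Distillation (SAD): label each
# query state with the action whose random-policy return over a short trust
# horizon is highest. Sampling and scoring details are illustrative assumptions.
import random
from collections import defaultdict


def rollout_return(env_step, state, action, horizon, n_actions, gamma=0.99):
    """Estimate the return of taking `action` in `state`, then acting randomly
    for up to `horizon` steps. `env_step(state, action) -> (next_state, reward, done)`."""
    total, discount = 0.0, 1.0
    s, a = state, action
    for _ in range(horizon):
        s, r, done = env_step(s, a)
        total += discount * r
        discount *= gamma
        if done:
            break
        a = random.randrange(n_actions)            # random behavior policy
    return total


def distill_dataset(env_step, query_states, n_actions, horizon=10, n_rollouts=8):
    """Build (query state, action label) pairs using only random rollouts."""
    dataset = []
    for s in query_states:
        scores = defaultdict(list)
        for a in range(n_actions):
            for _ in range(n_rollouts):
                scores[a].append(rollout_return(env_step, s, a, horizon, n_actions))
        best = max(scores, key=lambda a: sum(scores[a]) / len(scores[a]))
        dataset.append((s, best))                  # action label distilled under random policies
    return dataset
```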


A General Control-Theoretic Approach for Reinforcement Learning: Theory and Algorithms

arXiv.org Artificial Intelligence

For many years now, reinforcement learning (RL) has succeeded in solving a wide variety of decision-making and control problems in robotics [1, 2, 3, 4, 5]. Generally speaking, model-free methods [6, 7] often suffer from high sample complexity, requiring an inordinate number of samples that makes them unsuitable for robotic applications where collecting large amounts of data is time-consuming, costly, and potentially dangerous for the system and its surroundings [8, 9, 10, 11, 12]. On the other hand, model-based RL methods have demonstrated significantly reduced sample complexity and have outperformed model-free approaches on various problems of decision making under uncertainty (see, e.g., [13, 14]). However, such model-based approaches can suffer from the difficulty of learning an appropriate model and from worse asymptotic performance than model-free approaches, due to model bias from inherently assuming that the learned system dynamics model accurately represents the true system environment (see, e.g., [15, 16, 17]). In this paper we propose a novel form of RL that seeks to directly learn an optimal control policy for a general underlying (unknown) dynamical system and to directly apply the corresponding learned optimal control policy within the dynamical system. This general approach stands in strong contrast to many traditional model-based RL methods that, after learning the system dynamics model, which is often of high complexity and dimensionality, then use this model to compute an approximate solution of a corresponding (stochastic) dynamic programming problem, often applying model predictive control (see, e.g., [18]). Our control-based RL (CBRL) approach instead directly learns the unknown parameters that derive, through control-theoretic means, an optimal control policy function from a family of control policy functions, often of much lower complexity and dimensionality, from which the optimal control policy is directly obtained. The theoretical foundation and analysis of our CBRL approach are presented within the context of a general Markov decision process (MDP) framework. This framework extends the family of policies associated with the classical Bellman operator to a family of control-policy functions, each mapping a vector of (unknown) parameters from a corresponding parameter set to a control policy that is optimal under those parameters, and it extends the domain of these control policies from a single state to span all (or a large subset of) states, with the (unknown) parameter vector encoding the global and local information that needs to be learned. Within the context of this MDP framework and our general CBRL approach, we establish theoretical results on convergence and optimality with respect to (w.r.t.) a CBRL contraction operator, analogous to the Bellman operator.
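For orientation, the contrast between the classical Bellman operator and an operator defined over a parameterized family of control policies can be written roughly as below; the notation is an illustrative assumption, not the paper's exact definition of the CBRL contraction operator.

```latex
% Illustrative notation only (not the paper's exact operator).
% Classical Bellman optimality operator, maximizing over actions state-by-state:
(T V)(s) \;=\; \max_{a \in \mathcal{A}} \Big[ r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, V(s') \Big].

% A control-based analogue: maximize over a parameter vector \theta \in \Theta
% that selects an entire control policy \pi_\theta spanning all states:
(T_{\mathrm{c}} V)(s) \;=\; \max_{\theta \in \Theta} \Big[ r\big(s, \pi_\theta(s)\big)
  + \gamma \sum_{s'} P\big(s' \mid s, \pi_\theta(s)\big)\, V(s') \Big].
```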


Adaptive Primal-Dual Method for Safe Reinforcement Learning

arXiv.org Artificial Intelligence

Primal-dual methods have a natural application in Safe Reinforcement Learning (SRL), posed as a constrained policy optimization problem. In practice, however, applying primal-dual methods to SRL is challenging, due to the interdependency of the learning rate (LR) and the Lagrangian multipliers (dual variables) each time an embedded unconstrained RL problem is solved. In this paper, we propose, analyze, and evaluate adaptive primal-dual (APD) methods for SRL, where two adaptive LRs are adjusted to the Lagrangian multipliers so as to optimize the policy in each iteration. We theoretically establish the convergence, optimality, and feasibility of the APD algorithm. Finally, we conduct a numerical evaluation of the practical APD algorithm on four well-known environments in Bullet-Safety-Gym, employing two state-of-the-art SRL algorithms: PPO-Lagrangian and DDPG-Lagrangian. All experiments show that the practical APD algorithm outperforms (or achieves performance comparable to) the constant-LR cases and attains more stable training. Additionally, we substantiate the robustness of selecting the two adaptive LRs with empirical evidence.
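As a hedged illustration of the mechanism described above, a Lagrangian primal-dual loop in which both learning rates are rescaled as functions of the current multiplier might look like the sketch below; the specific adaptation rule, function names, and interfaces are assumptions for illustration, not the APD schedule from the paper.

```python
# Hedged sketch of an adaptive primal-dual loop for safe RL. The adaptation
# rule (shrinking both LRs as the multiplier grows) is an illustrative
# assumption, not the paper's APD update.
import numpy as np


def adaptive_primal_dual(grad_reward, grad_cost, cost_value, theta0,
                         cost_limit, iters=1000, eta0=1e-2, alpha0=1e-2):
    """grad_reward/grad_cost: callables returning policy gradients at theta;
    cost_value: callable returning the expected constraint cost at theta."""
    theta = np.asarray(theta0, dtype=float)
    lam = 0.0                                        # Lagrange multiplier (dual variable)
    for _ in range(iters):
        # Adapt both learning rates to the current multiplier (illustrative rule).
        eta = eta0 / (1.0 + lam)                     # primal LR
        alpha = alpha0 / (1.0 + lam)                 # dual LR
        # Primal ascent on the Lagrangian: reward minus lam * cost.
        theta = theta + eta * (grad_reward(theta) - lam * grad_cost(theta))
        # Dual ascent on the constraint violation, projected to stay nonnegative.
        lam = max(0.0, lam + alpha * (cost_value(theta) - cost_limit))
    return theta, lam
```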


Probabilistic Constraint for Safety-Critical Reinforcement Learning

arXiv.org Artificial Intelligence

In this paper, we consider the problem of learning safe policies for probabilistic-constrained reinforcement learning (RL). Specifically, a safe policy or controller is one that, with high probability, maintains the trajectory of the agent within a given safe set. We establish a connection between this probabilistic-constrained setting and the cumulative-constrained formulation that is frequently explored in the existing literature, and we provide theoretical bounds showing that the probabilistic-constrained setting offers a better trade-off between optimality and safety (constraint satisfaction). The challenge in dealing with probabilistic constraints, as explored in this work, arises from the absence of explicit expressions for their gradients. Our prior work provides such an explicit gradient expression for probabilistic constraints, which we term Safe Policy Gradient-REINFORCE (SPG-REINFORCE). In this work, we provide an improved gradient estimator, SPG-Actor-Critic, that attains lower variance than SPG-REINFORCE, as substantiated by our theoretical results. A noteworthy aspect of both SPGs is their algorithm independence, rendering them versatile for application across a range of policy-based algorithms. Furthermore, we propose a Safe Primal-Dual algorithm that can leverage both SPGs to learn safe policies, followed by theoretical analyses that encompass the convergence of the algorithm as well as its near-optimality and feasibility on average. In addition, we test the proposed approaches in a series of empirical experiments. These experiments examine and analyze the inherent trade-offs between optimality and safety, and serve to substantiate the efficacy of the two SPGs as well as our theoretical contributions.
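For context, the two constraint formulations contrasted above can be written roughly as follows; the notation is standard and illustrative, not copied from the paper.

```latex
% Cumulative constraint: bound the expected (discounted) constraint cost,
\mathbb{E}_{\tau \sim \pi}\Big[\sum_{t=0}^{T} \gamma^{t}\, c(s_t, a_t)\Big] \;\le\; d .

% Probabilistic constraint: keep the whole trajectory inside the safe set
% \mathcal{S} with high probability,
\mathbb{P}_{\tau \sim \pi}\big( s_t \in \mathcal{S} \;\; \forall\, t \in \{0,\dots,T\} \big) \;\ge\; 1 - \delta .
```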


Policy Gradients for Probabilistic Constrained Reinforcement Learning

arXiv.org Artificial Intelligence

This paper considers the problem of learning safe policies in the context of reinforcement learning (RL). In particular, we consider the notion of probabilistic safety: we aim to design policies that maintain the state of the system in a safe set with high probability. This notion differs from the cumulative constraints often considered in the literature. The challenge of working with probabilistic safety constraints is the lack of expressions for their gradients; indeed, policy optimization algorithms rely on gradients of the objective function and the constraints. To the best of our knowledge, this work is the first to provide such explicit gradient expressions for probabilistic constraints. It is worth noting that the gradient of this family of constraints can be applied to various policy-based algorithms. We demonstrate empirically that it is possible to handle probabilistic constraints in a continuous navigation problem.
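To make the gradient question concrete, below is a generic score-function (REINFORCE-style) Monte Carlo estimator of the gradient of the trajectory-safety probability; this is a textbook-style illustration under stated assumptions and is not claimed to be the exact SPG-REINFORCE expression derived in the paper.

```python
# Generic score-function estimate of grad_theta P(trajectory stays in safe set),
# via E[ 1{safe} * grad log pi_theta(tau) ]. Hedged illustration only; not the
# paper's SPG-REINFORCE formula. Assumes `policy` is a torch nn.Module whose
# forward(s) returns a torch.distributions.Distribution, and that log_prob
# returns a scalar per step; `safe_set(s)` returns a bool.
import torch


def prob_safety_gradient(policy, trajectories, safe_set):
    policy.zero_grad()
    surrogate = 0.0
    for traj in trajectories:                                  # traj = [(state, action), ...]
        logp = sum(policy(s).log_prob(a) for s, a in traj)     # log-likelihood of the actions taken
        safe = float(all(safe_set(s) for s, _ in traj))        # indicator: trajectory stayed safe
        surrogate = surrogate + safe * logp
    (surrogate / len(trajectories)).backward()                 # gradients accumulate in policy.*.grad
    return [p.grad for p in policy.parameters()]
```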


Open Problems and Modern Solutions for Deep Reinforcement Learning

arXiv.org Artificial Intelligence

Deep Reinforcement Learning (DRL) has achieved great success in solving complicated decision-making problems. Despite these successes, DRL is frequently criticized for several reasons, e.g., data inefficiency, inflexibility, and intractable reward design. In this paper, we review two publications that investigate these issues of DRL and propose effective solutions. One designs the reward for human-robot collaboration by combining a manually designed extrinsic reward with a parameterized intrinsic reward function via the deterministic policy gradient, which improves task performance and guarantees stronger obstacle avoidance. The other applies selective attention and particle filters to rapidly and flexibly attend to and select crucial pre-learned features for DRL using approximate inference instead of backpropagation, thereby improving the efficiency and flexibility of DRL. Potential avenues for future work in both domains are discussed in this paper.
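A minimal sketch of the reward-combination idea mentioned for the first publication is given below; the network architecture, weighting, and names are illustrative assumptions, not the reviewed method's design.

```python
# Minimal sketch (assumptions only): shape the learning signal by adding a
# parameterized intrinsic reward to a hand-designed extrinsic reward.
import torch
import torch.nn as nn


class IntrinsicReward(nn.Module):
    """Small network producing a learnable intrinsic reward r_int(s, a)."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)


def shaped_reward(r_extrinsic, state, action, intrinsic: IntrinsicReward, beta=0.1):
    """Total reward used for policy learning: extrinsic plus weighted intrinsic term."""
    return r_extrinsic + beta * intrinsic(state, action)
```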