

A Hyperparameter Settings of RD

Neural Information Processing Systems

In this section, we describe the hyperparameter settings of RD. For SAC-N-Unc and TD3-N-Unc, M is set to 1/10 of the total number of training steps. To ensure fairness, the algorithms employing RD are implemented using the CORL repository [54]. These backbone algorithms are derived by modifying the original SAC/TD3 algorithms to use a critic ensemble of size N and to incorporate an uncertainty regularization term in the policy update. Additionally, RD with fewer Q ensembles can achieve similar or even better results than the backbone methods using more Q ensembles, indicating its potential for reducing computing resource consumption.
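The ensemble-plus-uncertainty-regularization idea behind these backbones can be sketched as follows. The source does not specify the exact regularizer, so this assumes the common choice of penalizing the ensemble standard deviation; the function name `pessimistic_q` and the coefficient `beta` are illustrative, not from the paper:

```python
import numpy as np

def pessimistic_q(q_values, beta=1.0):
    """Combine an ensemble of critic estimates into a pessimistic value:
    ensemble mean minus beta times the ensemble standard deviation."""
    q = np.asarray(q_values, dtype=float)
    return q.mean(axis=0) - beta * q.std(axis=0)

# Ensemble of N=4 critics evaluating a batch of 3 (state, action) pairs.
q_ensemble = np.array([
    [1.0, 2.0, 3.0],
    [1.2, 1.8, 3.4],
    [0.8, 2.2, 2.6],
    [1.0, 2.0, 3.0],
])
pessimistic = pessimistic_q(q_ensemble, beta=0.5)
```

The policy would then be updated to maximize this pessimistic value, so that larger critic disagreement (uncertainty) discourages the corresponding actions.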


SAD-Flower: Flow Matching for Safe, Admissible, and Dynamically Consistent Planning

Huang, Tzu-Yuan, Lederer, Armin, Wu, Dai-Jie, Dai, Xiaobing, Zhang, Sihua, Sosnowski, Stefan, Sun, Shao-Hua, Hirche, Sandra

arXiv.org Artificial Intelligence

Flow matching (FM) has shown promising results in data-driven planning. However, it inherently lacks formal guarantees for ensuring state and action constraints, whose satisfaction is a fundamental and crucial requirement for the safety and admissibility of planned trajectories on various systems. Moreover, existing FM planners do not ensure the dynamical consistency, which potentially renders trajectories inexecutable. We address these shortcomings by proposing SAD-Flower, a novel framework for generating Safe, Admissible, and Dynamically consistent trajectories. Our approach relies on an augmentation of the flow with a virtual control input. Thereby, principled guidance can be derived using techniques from nonlinear control theory, providing formal guarantees for state constraints, action constraints, and dynamic consistency. Crucially, SAD-Flower operates without retraining, enabling test-time satisfaction of unseen constraints. Through extensive experiments across several tasks, we demonstrate that SAD-Flower outperforms various generative-model-based baselines in ensuring constraint satisfaction.
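As a loose illustration of test-time constraint guidance in trajectory generation (not the paper's control-theoretic method, which derives principled guidance via a virtual control input), the toy sketch below Euler-integrates a flow-matching velocity field and projects each iterate back into a state-constraint set; all names are illustrative:

```python
import numpy as np

def integrate_flow(x0, velocity, project, steps=100):
    """Euler-integrate a learned velocity field from t=0 to t=1,
    applying a constraint projection after every step (toy guidance)."""
    x, dt = np.asarray(x0, dtype=float), 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity(x, t)
        x = project(x)  # keep the iterate inside the constraint set
    return x

# Toy example: flow pulls toward (2, 2), but the state is constrained
# to the unit box, so the guided sample ends up on the box boundary.
velocity = lambda x, t: np.array([2.0, 2.0]) - x
project = lambda x: np.clip(x, -1.0, 1.0)
x1 = integrate_flow(np.zeros(2), velocity, project)
```

Simple clipping like this gives no formal guarantees; the point of SAD-Flower is precisely to replace such ad-hoc projection with guidance that provably enforces state constraints, action constraints, and dynamic consistency.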




Appendix Table of Contents

Neural Information Processing Systems

The number of layers is 12 for GPT2 and the randomly initialized model, and 24 for iGPT. Note that these notations are sometimes used interchangeably as long as the distinction is not significant. The activations to be analyzed are the outputs from all layers. The CKA computation is shown in Figure 1. The design of the diagram is based on a previous study [35]. Figure 11: Activations we consider to compute CKA.
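For concreteness, linear CKA between two activation matrices can be computed as in the following minimal numpy sketch (one common variant; the study may use a different estimator, e.g. kernel CKA or a minibatch approximation):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA similarity between two activation matrices of shape
    (n_samples, feature_dim); 1.0 means identical up to rotation/scale."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, 'fro') ** 2
    norm_x = np.linalg.norm(X.T @ X, 'fro')
    norm_y = np.linalg.norm(Y.T @ Y, 'fro')
    return hsic / (norm_x * norm_y)

rng = np.random.default_rng(0)
acts_a = rng.normal(size=(32, 8))   # activations from one layer
acts_b = rng.normal(size=(32, 16))  # activations from another layer
score = linear_cka(acts_a, acts_b)
```

Because CKA is invariant to orthogonal transformations and isotropic scaling of the features, it is a natural choice for comparing layer outputs across models with different widths.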



Supplementary Material for BAIL: Best-Action Imitation Learning for Batch Deep Reinforcement Learning A Proofs of Theorems

Neural Information Processing Systems

BAIL includes a regularization scheme to prevent over-fitting when generating the upper envelope. We refer to it as an "early stopping scheme" because the key idea is to return to the parameter values that gave the lowest validation error (see Section 7.8 of Goodfellow et al.). Details are provided in Table 1.

Table 1: BAIL hyper-parameters

  Parameter                          Value
  discount rate γ                    0.99
  horizon T                          1000
  training set size                  0.8 |B|
  validation set size                0.2 |B|
  optimizer                          Adam [4]
  percentage p%                      30% for BAIL, 25% for Progressive BAIL
  upper envelope network structure   128, 128 hidden units, ReLU activation
  learning rate                      3 10

We use five MuJoCo environments, including Humanoid, which is the most challenging of the MuJoCo environments and is not attempted in most other papers on batch DRL. The BCQ paper [2] also uses the same hyper-parameters for all experiments.
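The "return to the best checkpoint" early-stopping scheme can be sketched as follows; `train_step` and `val_loss` are placeholders standing in for BAIL's actual upper-envelope training and validation loss, not code from the paper:

```python
import copy

def train_with_early_stopping(params, train_step, val_loss, epochs):
    """Keep training for a fixed budget, but remember (and finally
    return) the parameters that achieved the lowest validation loss."""
    best_params, best_loss = copy.deepcopy(params), val_loss(params)
    for _ in range(epochs):
        params = train_step(params)
        loss = val_loss(params)
        if loss < best_loss:
            best_params, best_loss = copy.deepcopy(params), loss
    return best_params

# Toy run: the parameter drifts past the optimum at 3.0;
# early stopping recovers the best value seen along the way.
best = train_with_early_stopping(
    0.0,
    train_step=lambda p: p + 0.5,
    val_loss=lambda p: (p - 3.0) ** 2,
    epochs=20,
)
```

Unlike stopping training the moment validation loss rises, this variant tolerates non-monotone validation curves: it always returns the global best over the run.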


Supplementary Material A Details on experimental setups A.1 Environments

Neural Information Processing Systems

One can observe that the transition dynamics follow multi-modal distributions; we visualize the transitions in Figure 8a. The objective of Pendulum is to swing the pole up and keep it upright within 200 time steps. The objective of Hopper is to move forward as fast as possible while minimizing the action cost within 500 time steps. We visualize the transitions in Figure 8d.