Computationally and Sample Efficient Safe Reinforcement Learning Using Adaptive Conformal Prediction

Hao Zhou, Yanze Zhang, and Wenhao Luo

arXiv.org Artificial Intelligence 

Abstract -- Safety is a critical concern in learning-enabled autonomous systems, especially when deploying these systems in real-world scenarios. An important challenge is accurately quantifying the uncertainty of unknown models so as to generate provably safe control policies that facilitate the gathering of informative data, thereby achieving policies that are both safe and optimal. In addition, the choice of data-driven model can significantly affect both real-time implementation and the uncertainty quantification process. In this paper, we propose a provably sample-efficient episodic safe learning framework that remains robust across model choices, with quantified uncertainty, for online control tasks. Specifically, we first employ Quadrature Fourier Features (QFF) for kernel approximation of Gaussian Processes (GPs) to enable efficient approximation of the unknown dynamics. Adaptive Conformal Prediction (ACP) is then used to quantify uncertainty from online observations and is combined with Control Barrier Functions (CBFs) to characterize uncertainty-aware safe control constraints under the learned dynamics. Finally, an optimism-based exploration strategy is integrated with the ACP-based CBFs for safe exploration and near-optimal safe nonlinear control. Theoretical proofs and simulation results demonstrate the effectiveness and efficiency of the proposed framework.

INTRODUCTION

Model-based Reinforcement Learning (MBRL) [1]-[5] has shown promising results when applied to various nonlinear systems owing to its strong generalization capabilities.
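As a rough illustration of the ACP-plus-CBF combination described in the abstract, the Python sketch below shows (i) the standard online ACP update of the miscoverage level and the resulting prediction radius from past model residuals, and (ii) a CBF constraint under a learned control-affine model that is tightened by that radius. This is a minimal sketch under stated assumptions, not the paper's algorithm: the names AdaptiveConformal and safe_control, the step size gamma, the ||grad h(x)|| * q tightening, and the 2D single-integrator example are illustrative choices, not taken from the source.

import numpy as np

class AdaptiveConformal:
    """Online adaptive conformal prediction over scalar nonconformity scores
    (e.g., the norm of the learned model's one-step prediction residual)."""
    def __init__(self, alpha=0.1, gamma=0.05):
        self.alpha_target = alpha   # desired miscoverage level
        self.alpha_t = alpha        # online-adapted miscoverage level
        self.gamma = gamma          # ACP step size (illustrative value)
        self.scores = []            # history of nonconformity scores

    def radius(self):
        # Empirical (1 - alpha_t)-quantile of past scores = prediction radius.
        if not self.scores:
            return np.inf
        q = min(max(1.0 - self.alpha_t, 0.0), 1.0)
        return float(np.quantile(self.scores, q))

    def update(self, score):
        # Observe a new residual; adapt alpha_t from the coverage error
        # (alpha_{t+1} = alpha_t + gamma * (alpha - err_t)).
        err = 0.0 if score <= self.radius() else 1.0
        self.alpha_t += self.gamma * (self.alpha_target - err)
        self.scores.append(score)

def safe_control(x, u_ref, f_hat, g_hat, grad_h, h, q, kappa=1.0):
    """Enforce a CBF condition grad_h(x)^T (f_hat(x) + g_hat(x) u) + kappa*h(x) >= 0
    under the learned model, tightened by ||grad_h(x)|| * q to hedge against a
    prediction error bounded by the ACP radius q. A single linear constraint is
    handled by closed-form projection of u_ref instead of a full QP, for brevity."""
    a = grad_h(x) @ g_hat(x)                              # constraint normal in u
    b = grad_h(x) @ f_hat(x) + kappa * h(x) \
        - np.linalg.norm(grad_h(x)) * q                   # tightened offset
    if a @ u_ref + b >= 0 or np.allclose(a, 0):
        return u_ref                                      # reference already safe
    return u_ref - (a @ u_ref + b) / (a @ a) * a          # project onto boundary

# Illustrative use: keep a 2D single integrator inside a disk of radius 2.
f_hat = lambda x: np.zeros(2)
g_hat = lambda x: np.eye(2)
h      = lambda x: 4.0 - x @ x                            # h >= 0 inside the disk
grad_h = lambda x: -2.0 * x

acp = AdaptiveConformal(alpha=0.1, gamma=0.05)
for s in [0.05, 0.08, 0.12, 0.07]:                        # made-up online residuals
    acp.update(s)
x, u_ref = np.array([1.5, 0.0]), np.array([1.0, 0.0])
u_safe = safe_control(x, u_ref, f_hat, g_hat, grad_h, h, q=acp.radius())

In this sketch, a larger ACP radius (i.e., larger observed model error) shrinks the set of admissible controls, so the controller behaves more conservatively exactly when the learned dynamics are least trustworthy; the paper's actual formulation, including the QFF-based GP model and the optimism-based exploration, is developed in the sections that follow.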