Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations
Lars Lindemann, Alexander Robey, Lejun Jiang, Satyajeet Das, Stephen Tu, Nikolai Matni
–arXiv.org Artificial Intelligence
We assume that a model of the system dynamics and a state estimator are available along with corresponding error bounds, e.g., estimated from data in practice. We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety, as defined through controlled forward invariance of a safe set. We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior, e.g., data collected from a human operator or an expert controller. When the parametrization of the ROCBF is linear, we show that, under mild assumptions, the optimization problem is convex. Along with the optimization problem, we provide verifiable conditions, in terms of the density of the data, the smoothness of the system model and state estimator, and the size of the error bounds, that guarantee validity of the obtained ROCBF. Toward a practical control algorithm, we propose an algorithmic implementation of our theoretical framework that accounts, in practice, for the assumptions made in the framework. We empirically validate our algorithm in the autonomous driving simulator CARLA and demonstrate how to learn safe control laws from RGB camera images.
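The convexity claim can be illustrated with a small sketch. This is not the paper's exact program: the toy 1-D dynamics, the feature map, the margins, and the discrete-time decrease condition below are all assumptions for illustration. The point is that with a linear parametrization h_theta(x) = theta^T phi(x), the constraints induced by safe demonstrations are linear in theta, so learning a candidate barrier reduces to a linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 1-D example: the safe set is roughly {x <= 1}.
# We learn h_theta(x) = theta^T phi(x) from (assumed) expert data.

def phi(x):
    """Polynomial feature map for the linear parametrization."""
    return np.array([1.0, x, x * x])

alpha, dt = 1.0, 0.1                      # class-K gain, sampling period (assumed)
x_safe = np.linspace(-1.0, 0.8, 10)       # states visited by the safe expert
x_unsafe = np.linspace(1.2, 2.0, 5)       # sampled states outside the safe set
demos = [(x, (1.0 - dt) * x) for x in x_safe]  # expert transitions x -> x'

A_ub, b_ub = [], []
# h_theta(x) >= s on safe states:     -phi(x)^T theta + s <= 0
for x in x_safe:
    A_ub.append(np.append(-phi(x), 1.0)); b_ub.append(0.0)
# h_theta(x) <= -s on unsafe states:   phi(x)^T theta + s <= 0
for x in x_unsafe:
    A_ub.append(np.append(phi(x), 1.0)); b_ub.append(0.0)
# Discrete-time decrease along demos: h(x') >= (1 - alpha*dt) * h(x),
# a surrogate for the barrier's derivative condition, linear in theta.
for x, x_next in demos:
    A_ub.append(np.append(-(phi(x_next) - (1.0 - alpha * dt) * phi(x)), 0.0))
    b_ub.append(0.0)

# Maximize the classification margin s (linprog minimizes, so use -s).
res = linprog(c=[0.0, 0.0, 0.0, -1.0], A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(-2.0, 2.0)] * 3 + [(0.0, None)])
theta, margin = res.x[:3], res.x[3]
```

The learned h_theta is positive (by at least the margin) on the demonstrated safe states and negative on the sampled unsafe states. The paper's full formulation additionally robustifies these constraints against the model and state-estimation error bounds and supplies data-density conditions under which the learned ROCBF is provably valid; those terms are omitted in this sketch.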
Dec-5-2023