Collaborating Authors

 Batkovic, Ivo


Experimental Validation of Safe MPC for Autonomous Driving in Uncertain Environments

arXiv.org Artificial Intelligence

The full deployment of autonomous driving systems on a worldwide scale requires that the self-driving vehicle be operated in a provably safe manner, i.e., the vehicle must be able to avoid collisions in any possible traffic situation. In this paper, we propose a framework based on Model Predictive Control (MPC) that endows the self-driving vehicle with the necessary safety guarantees. In particular, our framework ensures constraint satisfaction at all times, while tracking the reference trajectory as closely as obstacles allow, resulting in safe and comfortable driving behavior. To discuss the performance and real-time capability of our framework, we first provide an illustrative simulation example and then demonstrate the effectiveness of our framework in experiments with a real test vehicle.
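
The core idea of the abstract lends itself to a compact sketch. The following is a minimal, hedged illustration, not the paper's implementation: a linear MPC that tracks a reference speed for a double-integrator longitudinal model while a position constraint keeps the vehicle behind an obstacle at every step of the horizon. The model, horizon length, cost weights, and the `obstacle_pos` value are all illustrative assumptions.

```python
# Minimal MPC sketch (illustrative assumptions, not the paper's code).
# Double-integrator longitudinal model: state x = [position, speed],
# input u = acceleration. A position constraint keeps the vehicle
# behind an assumed stopped obstacle at every step of the horizon.
import cvxpy as cp
import numpy as np

dt, N = 0.1, 30                       # sampling time [s], horizon length
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

x0 = np.array([0.0, 10.0])            # initial position [m] and speed [m/s]
obstacle_pos = 40.0                   # assumed stopped obstacle ahead [m]
v_ref = 10.0                          # reference speed to track [m/s]

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    # Track the reference speed; penalize harsh inputs for comfort.
    cost += cp.square(x[1, k] - v_ref) + 0.1 * cp.square(u[0, k])
    constraints += [
        x[:, k + 1] == A @ x[:, k] + B @ u[:, k],   # vehicle dynamics
        x[0, k + 1] <= obstacle_pos - 5.0,          # stay 5 m behind obstacle
        x[1, k + 1] >= 0.0,                         # no driving backwards
        cp.abs(u[0, k]) <= 3.0,                     # actuator limit [m/s^2]
    ]

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("first acceleration command:", float(u.value[0, 0]))
```

In a receding-horizon loop, only the first input would be applied before re-solving at the next sample; the paper's framework additionally provides recursive feasibility arguments that this toy quadratic program does not attempt to capture.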


Learning When to Drive in Intersections by Combining Reinforcement Learning and Model Predictive Control

arXiv.org Artificial Intelligence

In this paper, we propose a decision-making algorithm intended for automated vehicles that negotiate with other, possibly non-automated, vehicles in intersections. The decision algorithm is separated into two parts: a high-level decision module based on reinforcement learning, and a low-level planning module based on model predictive control. Traffic is simulated with numerous predefined driver behaviors and intentions, and the performance of the proposed decision algorithm was evaluated against another controller. The results show that the proposed decision algorithm yields shorter training episodes and an increased success rate compared to the other controller. Interactions between road users in intersections are a complex problem, making them difficult to address with conventional rule-based systems. Many advancements aim to solve this problem by trying to imitate human drivers [1] or by predicting what other drivers in traffic are planning to do [2]. In [3], the authors show that by modeling the decision process as a partially observable Markov decision process, the model can account for uncertainty in sensing the environment, and [4] showed some probabilistic guarantees when solving the problem using reinforcement learning (RL).
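
The two-layer architecture described here can be outlined in a short sketch. This is a hedged illustration of the general structure, not the authors' implementation: a high-level policy (here a stand-in tabular Q-learner over a discretized intersection state) chooses a discrete maneuver such as "go" or "wait", and a low-level controller, standing in for the MPC planner, converts that decision into an acceleration command. The names `HighLevelPolicy` and `low_level_tracker`, the action set, and all gains are assumptions made for illustration.

```python
# Hierarchical decision-making sketch (illustrative assumptions throughout):
# an RL policy picks a discrete maneuver; a low-level controller standing in
# for the MPC planner turns it into a continuous command.
import numpy as np

ACTIONS = ("go", "wait")

class HighLevelPolicy:
    """Tabular Q-learning over a coarsely discretized intersection state."""
    def __init__(self, n_states, lr=0.1, gamma=0.95, eps=0.1):
        self.q = np.zeros((n_states, len(ACTIONS)))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, s):
        if np.random.rand() < self.eps:        # epsilon-greedy exploration
            return np.random.randint(len(ACTIONS))
        return int(np.argmax(self.q[s]))

    def update(self, s, a, r, s_next):
        # Standard one-step Q-learning update toward the bootstrapped target.
        target = r + self.gamma * np.max(self.q[s_next])
        self.q[s, a] += self.lr * (target - self.q[s, a])

def low_level_tracker(decision, speed, v_ref=10.0, k_p=0.5, a_max=3.0):
    """Stand-in for the MPC layer: track v_ref when 'go', brake when 'wait'."""
    target = v_ref if decision == "go" else 0.0
    return float(np.clip(k_p * (target - speed), -a_max, a_max))

# One illustrative control step: the RL layer decides, the tracker executes.
policy = HighLevelPolicy(n_states=100)
state = 42                                     # some discretized state index
a_idx = policy.act(state)
accel = low_level_tracker(ACTIONS[a_idx], speed=8.0)
print(ACTIONS[a_idx], accel)
```

The appeal of this split, as the abstract argues, is that the RL layer only has to learn *when* to drive, while the low-level planner guarantees that whatever maneuver is chosen is executed smoothly and within the vehicle's constraints.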