
Collaborating Authors

 Borrelli, Francesco


Scalable Multi-modal Model Predictive Control via Duality-based Interaction Predictions

arXiv.org Artificial Intelligence

We propose a hierarchical architecture designed for scalable real-time Model Predictive Control (MPC) in complex, multi-modal traffic scenarios. This architecture comprises two key components: 1) RAID-Net, a novel attention-based recurrent neural network that predicts relevant interactions along the MPC prediction horizon between the autonomous vehicle and the surrounding vehicles using Lagrangian duality, and 2) a reduced Stochastic MPC problem that eliminates irrelevant collision avoidance constraints, enhancing computational efficiency. Our approach is demonstrated in a simulated traffic intersection with interactive surrounding vehicles, showcasing a 12x speed-up in solving the motion planning problem. In urban driving scenarios, motion planning for autonomous vehicles in the presence of uncertain, multi-modal, heterogeneous traffic agents (human-driven and autonomous vehicles) poses a significant challenge. While this approach showcases robust navigation capabilities in multi-modal traffic scenarios, it focuses on vehicles navigating and making their own decisions; the game-theoretic approaches in (ii) are generally computationally intractable for traffic scenarios with many vehicles/agents, which is further exacerbated when the games are multi-modal/mixed.
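
A minimal sketch of the duality-based constraint screening idea, assuming a generic QP stands in for the MPC and that cvxpy is available. The dual variables of a previously solved instance play the role of RAID-Net's learned interaction predictions here; all problem data, names, and thresholds are illustrative.

```python
# Hypothetical sketch: duality-based screening of collision-avoidance constraints.
# The paper predicts relevant constraints with a learned network (RAID-Net);
# here the duals of a previously solved instance stand in for that predictor.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 200                      # decision dimension, number of inequality constraints
A = rng.standard_normal((m, n))
b = rng.uniform(1.0, 2.0, m)        # A x <= b (all satisfied at x = 0)
Q = np.eye(n)
q = rng.standard_normal(n)

x = cp.Variable(n)
cons = [A @ x <= b]
full = cp.Problem(cp.Minimize(0.5 * cp.quad_form(x, Q) + q @ x), cons)
full.solve()

duals = cons[0].dual_value          # one multiplier per constraint
relevant = duals > 1e-6             # screened set (stand-in for the network's prediction)

x_r = cp.Variable(n)
reduced = cp.Problem(cp.Minimize(0.5 * cp.quad_form(x_r, Q) + q @ x_r),
                     [A[relevant] @ x_r <= b[relevant]])
reduced.solve()

print(f"kept {relevant.sum()} of {m} constraints, "
      f"objective gap {abs(full.value - reduced.value):.2e}")
```

In the paper, the relevant constraints are predicted before solving, so the full problem never has to be solved online; the full solve above is only to verify that the reduced problem recovers the same optimum.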


Energy-Efficient Lane Changes Planning and Control for Connected Autonomous Vehicles on Urban Roads

arXiv.org Artificial Intelligence

This paper presents a novel energy-efficient motion planning algorithm for Connected Autonomous Vehicles (CAVs) on urban roads. The approach consists of two components: a decision-making algorithm and an optimization-based trajectory planner. The decision-making algorithm leverages Signal Phase and Timing (SPaT) information from connected traffic lights to select a lane with the aim of reducing energy consumption. The algorithm is based on a heuristic rule learned from human driving data. The optimization-based trajectory planner generates a safe, smooth, and energy-efficient trajectory toward the selected lane. The proposed strategy is experimentally evaluated in a Vehicle-in-the-Loop (VIL) setting, where a real test vehicle receives SPaT information from both actual and virtual traffic lights and autonomously drives on a testing site, while the surrounding vehicles are simulated. The results demonstrate that the use of SPaT information in autonomous driving leads to improved energy efficiency, with the proposed strategy reducing energy consumption by 37.1% compared to a lane-keeping algorithm.
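
A hedged sketch of how SPaT information could drive the lane choice, with a hand-written arrival-time rule standing in for the heuristic learned from human driving data; the field names, thresholds, and the naive arrival-time model are assumptions.

```python
# Hypothetical sketch of a SPaT-informed lane choice: prefer a lane whose
# signal is expected to be green when the vehicle arrives, to avoid a stop.
from dataclasses import dataclass

@dataclass
class LaneSPaT:
    lane_id: int
    distance_to_stop_bar: float   # m
    time_to_green: float          # s until the next green window starts
    green_duration: float         # s the green window lasts

def pick_lane(lanes: list[LaneSPaT], speed: float) -> int:
    """Return the lane id minimizing the expected wait at the signal."""
    def stop_penalty(lane: LaneSPaT) -> float:
        eta = lane.distance_to_stop_bar / max(speed, 0.1)   # naive arrival-time estimate
        green_start = lane.time_to_green
        green_end = lane.time_to_green + lane.green_duration
        if green_start <= eta <= green_end:
            return 0.0                                      # arrive on green, no stop
        return abs(eta - green_start)                       # penalize the expected wait
    return min(lanes, key=stop_penalty).lane_id

lanes = [LaneSPaT(0, 120.0, 15.0, 20.0), LaneSPaT(1, 120.0, 2.0, 25.0)]
print(pick_lane(lanes, speed=12.0))   # lane 1: arrives ~10 s in, inside its green window
```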


Predictive Control for Autonomous Driving with Uncertain, Multi-modal Predictions

arXiv.org Artificial Intelligence

We propose a Stochastic MPC (SMPC) formulation for path planning with autonomous vehicles in scenarios involving multiple agents with multi-modal predictions. The multi-modal predictions capture the uncertainty of urban driving in distinct modes/maneuvers (e.g., yield, keep speed) and driving trajectories (e.g., speed, turning radius), which are incorporated as multi-modal collision avoidance chance constraints in the path-planning problem. In the presence of multi-modal uncertainties, it is challenging to reliably compute feasible path-planning solutions at real-time frequencies ($\geq$ 10 Hz). Our main technological contribution is a convex SMPC formulation that simultaneously (1) optimizes over parameterized feedback policies and (2) allocates risk levels for each mode of the prediction. The use of feedback policies and risk allocation enhances the feasibility and performance of the SMPC formulation against multi-modal predictions with large uncertainty. We evaluate our approach via simulations and road experiments with a full-scale vehicle interacting in closed-loop with virtual vehicles. We consider distinct, multi-modal driving scenarios: 1) Negotiating a traffic light and a fast, tailgating agent, 2) Executing an unprotected left turn at a traffic intersection, and 3) Changing lanes in the presence of multiple agents. For all of these scenarios, our approach reliably computes multi-modal solutions to the path-planning problem at real-time frequencies.
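
A worked sketch of the multi-modal chance-constraint tightening, assuming Gaussian position uncertainty per predicted mode and a hand-picked risk allocation; the paper optimizes the allocation and the feedback policies jointly inside a convex SMPC, which is omitted here, and all numbers are illustrative.

```python
# Hedged sketch: deterministic tightening of a multi-modal collision-avoidance
# chance constraint under a Gaussian assumption for each prediction mode.
# Per mode k, P(a^T e_k >= b) <= eps_k becomes
#   a^T mu_k + Phi^{-1}(1 - eps_k) * sqrt(a^T Sigma_k a) <= b.
import numpy as np
from scipy.stats import norm

a = np.array([1.0, 0.0])           # half-space normal of the avoidance constraint
b = 10.0                           # safe offset along a (illustrative units: m)
modes = [                          # (probability, mean, covariance) per predicted mode
    (0.7, np.array([5.0, 0.0]), np.diag([0.5, 0.2])),   # e.g. "yield"
    (0.3, np.array([6.5, 0.0]), np.diag([1.5, 0.4])),   # e.g. "keep speed"
]
eps_total = 0.05
eps = [0.04, 0.01]                 # hand-picked risk allocation, sum <= eps_total

for (p, mu, Sigma), e in zip(modes, eps):
    tightening = norm.ppf(1 - e) * np.sqrt(a @ Sigma @ a)
    lhs = a @ mu + tightening
    print(f"mode p={p}: {lhs:.2f} <= {b}  ->  {'satisfied' if lhs <= b else 'violated'}")
```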


Learning Model Predictive Control with Error Dynamics Regression for Autonomous Racing

arXiv.org Artificial Intelligence

This work presents a novel Learning Model Predictive Control (LMPC) strategy for autonomous racing at the handling limit that can iteratively explore and learn unknown dynamics in high-speed operational domains. We start from existing LMPC formulations and modify the system dynamics learning method. In particular, our approach uses a nominal, global, nonlinear, physics-based model with a local, linear, data-driven learning of the error dynamics. We conduct experiments in simulation and on 1/10th-scale hardware, and deploy the proposed LMPC on a full-scale autonomous race car used in the Indy Autonomous Challenge (IAC), with closed-loop experiments at the Putnam Park Road Course in Indiana, USA. The results show that the proposed control policy exhibits improved robustness to parameter tuning and data scarcity. Incremental, safety-aware exploration toward the limit of handling and iterative learning of the vehicle dynamics in high-speed domains are observed in both simulations and experiments.
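
A minimal sketch of the error-dynamics regression step, assuming a toy nominal model and synthetic data; the paper uses a nonlinear vehicle model and real racing data, and the neighborhood size and features here are illustrative.

```python
# Hedged sketch: keep a nominal physics model and fit a local, linear,
# data-driven correction to its one-step prediction error.
import numpy as np

def f_nominal(x, u):
    # toy nominal model: double integrator with dt = 0.1
    # (stand-in for the nonlinear vehicle model)
    dt = 0.1
    return np.array([x[0] + dt * x[1], x[1] + dt * u])

def local_error_model(query, X, U, X_next, k=20):
    """Least-squares fit of the model error around `query`, using the k nearest samples."""
    err = X_next - np.array([f_nominal(x, u) for x, u in zip(X, U)])
    idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    Z = np.hstack([X[idx], U[idx, None], np.ones((k, 1))])   # affine regressors [x, u, 1]
    coeffs, *_ = np.linalg.lstsq(Z, err[idx], rcond=None)
    return coeffs                                            # maps [x, u, 1] -> predicted error

# synthetic "true" system = nominal model + unmodeled drag on velocity
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (500, 2))
U = rng.uniform(-1, 1, 500)
X_next = np.array([f_nominal(x, u) - np.array([0.0, 0.02 * x[1] ** 2])
                   for x, u in zip(X, U)])

x_q, u_q = np.array([0.2, 0.5]), 0.1
C = local_error_model(x_q, X, U, X_next)
corrected = f_nominal(x_q, u_q) + np.hstack([x_q, u_q, 1.0]) @ C
print(corrected)   # nominal prediction plus the locally learned correction
```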


Euclidean and non-Euclidean Trajectory Optimization Approaches for Quadrotor Racing

arXiv.org Artificial Intelligence

We present two approaches to compute raceline trajectories for quadrotors by solving an optimal control problem. The approaches involve expressing the quadrotor pose in either a Euclidean or non-Euclidean frame of reference and are both based on collocation. Both approaches compute over 100x faster than published methods. Additionally, both compute trajectories with faster lap times and show improved numerical convergence. In the last part of the paper we devise a novel method to compute racelines in dense obstacle fields using the non-Euclidean approach.
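
A minimal collocation sketch, assuming CasADi with IPOPT is available and using a 1-D double integrator in place of the quadrotor model; the free-final-time, trapezoidal-collocation structure is the part being illustrated, not the raceline formulation itself.

```python
# Hedged sketch of time-optimal trajectory optimization via direct collocation.
import casadi as ca

N = 50                      # collocation intervals
opti = ca.Opti()
p = opti.variable(N + 1)    # position
v = opti.variable(N + 1)    # velocity
a = opti.variable(N + 1)    # acceleration (control)
T = opti.variable()         # free final time

dt = T / N
for k in range(N):          # trapezoidal collocation of p' = v, v' = a
    opti.subject_to(p[k + 1] == p[k] + 0.5 * dt * (v[k] + v[k + 1]))
    opti.subject_to(v[k + 1] == v[k] + 0.5 * dt * (a[k] + a[k + 1]))

opti.subject_to([p[0] == 0, v[0] == 0, p[N] == 10, v[N] == 0])
opti.subject_to(opti.bounded(-2, a, 2))
opti.subject_to(T >= 0.1)
opti.minimize(T)
opti.set_initial(T, 5.0)

opti.solver("ipopt")
sol = opti.solve()
print(f"minimum time: {sol.value(T):.3f} s")   # analytic bang-bang optimum ~ 4.47 s
```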


Data-Driven Optimization for Deposition with Degradable Tools

arXiv.org Artificial Intelligence

We present a data-driven optimization approach for robotic controlled deposition with a degradable tool. Existing methods assume that the tool tip is not changing or is replaced frequently. Errors can accumulate over time as the tool wears away, leading to poor performance when tool degradation is unaccounted for during deposition. In the proposed approach, we utilize visual and force feedback to update the unknown model parameters of our tool tip. Subsequently, we solve a constrained finite-time optimal control problem for tracking a reference deposition profile, where our robot plans with the learned tool degradation dynamics. We focus on a robotic drawing problem as an illustrative example. Using real-world experiments, we show that the error between target and actual deposition decreases when learned degradation models are used in the control design.
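
A hedged sketch of the parameter-update step, assuming a linear tool-wear model identified by recursive least squares from measured deposition width; the constrained finite-time optimal control problem from the paper is not reproduced here, and the model structure is an assumption.

```python
# Hypothetical sketch: estimate a tool-wear parameter online from measured
# deposition width, so the planner can use the learned degradation model.
import numpy as np

class WearEstimator:
    """Recursive least squares for width = w0 - k_wear * cumulative_travel."""
    def __init__(self):
        self.theta = np.array([1.0, 0.0])       # [w0, k_wear]
        self.P = np.eye(2) * 100.0              # parameter covariance

    def update(self, travel: float, width_meas: float):
        phi = np.array([1.0, -travel])
        K = self.P @ phi / (1.0 + phi @ self.P @ phi)
        self.theta = self.theta + K * (width_meas - phi @ self.theta)
        self.P = self.P - np.outer(K, phi @ self.P)

    def predicted_width(self, travel: float) -> float:
        return float(np.array([1.0, -travel]) @ self.theta)

est = WearEstimator()
true_w0, true_k = 1.2, 0.004
rng = np.random.default_rng(0)
for travel in np.linspace(0, 200, 40):          # simulated noisy width measurements
    est.update(travel, true_w0 - true_k * travel + rng.normal(0, 0.01))

print(est.theta)   # approaches [1.2, 0.004], i.e. the true wear parameters
```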


Facilitating Cooperative and Distributed Multi-Vehicle Lane Change Maneuvers

arXiv.org Artificial Intelligence

A distributed coordination method for solving multi-vehicle lane changes for connected autonomous vehicles (CAVs) is presented. Existing approaches to multi-vehicle lane changes are passive and opportunistic, as they are executed only when the environment allows it. The novel approach of this paper relies on the role of a facilitator assigned to a CAV. The facilitator interacts with and modifies the environment to enable lane changes of other CAVs. Distributed MPC path planners and a distributed coordination algorithm are used to control the facilitator and other CAVs in a proactive and cooperative way. We demonstrate the effectiveness of the proposed approach through numerical simulations. In particular, we show enhanced feasibility of a multi-CAV lane change compared to a simultaneous multi-CAV lane change approach, in various traffic conditions generated using a dataset of real traffic scenarios.
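
A hedged sketch of the facilitator's role, with a simple gap-based speed-reference rule standing in for the distributed MPC planners and coordination algorithm; the state fields, thresholds, and adjustment rule are all assumptions.

```python
# Hypothetical sketch: a facilitator CAV in the target lane checks whether the
# requesting vehicle has a safe gap and, if not, lowers its own speed reference
# to proactively open one.
from dataclasses import dataclass

@dataclass
class VehicleState:
    s: float      # longitudinal position along the lane (m)
    v: float      # speed (m/s)

def facilitator_speed_ref(facilitator: VehicleState, requester: VehicleState,
                          min_gap: float = 15.0, slow_down: float = 2.0) -> float:
    """Speed reference for the facilitator, which trails the merging vehicle."""
    gap = requester.s - facilitator.s
    if gap >= min_gap:
        return facilitator.v                     # gap already sufficient: keep speed
    return max(facilitator.v - slow_down, 0.0)   # open the gap for the lane change

# requester at 30 m, facilitator trailing at 22 m: gap of 8 m -> slow down to 12 m/s
print(facilitator_speed_ref(VehicleState(s=22.0, v=14.0), VehicleState(s=30.0, v=13.0)))
```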


A Gaussian Process Model for Opponent Prediction in Autonomous Racing

arXiv.org Artificial Intelligence

In head-to-head racing, an accurate model of the interactive behavior of the opposing target vehicle (TV) is required to perform tightly constrained, but highly rewarding, maneuvers such as overtaking. However, such information is typically not available in competitive scenarios; we therefore propose to construct a prediction and uncertainty model from data of the TV collected in previous races. In particular, a one-step Gaussian process (GP) model is trained on closed-loop interaction data to learn the behavior of a TV driven by an unknown policy. Predictions of the nominal trajectory and associated uncertainty are rolled out via a sampling-based approach and are used in a model predictive control (MPC) policy for the ego vehicle in order to intelligently trade off between safety and performance when attempting overtaking maneuvers against a TV. We demonstrate the GP-based predictor in closed loop with the MPC policy in simulation races and compare its performance against several predictors from the literature. In a Monte Carlo study, we observe that the GP-based predictor achieves similar win rates while maintaining safety in up to 3x more races. We finally demonstrate the prediction and control framework in real time in an experimental study on a 1/10th-scale racecar platform operating at speeds of around 2.8 m/s, and show a significant level of improvement when using the GP-based predictor over a baseline MPC predictor. Videos of the hardware experiments can be found at https://youtu.be/KMSs4ofDfIs.
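
A minimal sketch of the one-step GP predictor with sampled rollouts, assuming scikit-learn and a 1-D synthetic opponent state; the paper learns from closed-loop interaction data of a full vehicle model, and the "unknown policy" below is an illustrative stand-in.

```python
# Hedged sketch: learn the target vehicle's next speed from its current track
# position and speed with a one-step GP, then roll predictions out by sampling.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
dt = 0.1

def opponent_policy(s, v):                     # unknown policy: slows down in "corners"
    return v + dt * (2.0 - 3.0 * (np.sin(s) > 0.5) - 0.05 * v) + rng.normal(0, 0.02)

# collect one-step transition data (s_t, v_t) -> v_{t+1}
s, v, X, y = 0.0, 5.0, [], []
for _ in range(400):
    v_next = opponent_policy(s, v)
    X.append([s, v]); y.append(v_next)
    s, v = s + dt * v, v_next

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                              normalize_y=True).fit(np.array(X), np.array(y))

def rollout(s0, v0, horizon=20, n_samples=30):
    """Sampled multi-step rollouts the ego MPC could constrain against."""
    trajs = np.zeros((n_samples, horizon))
    for i in range(n_samples):
        s_i, v_i = s0, v0
        for t in range(horizon):
            mu, std = gp.predict(np.array([[s_i, v_i]]), return_std=True)
            v_i = rng.normal(mu[0], std[0])    # sample the one-step GP prediction
            s_i += dt * v_i
            trajs[i, t] = s_i
    return trajs

print(rollout(s0=1.0, v0=5.0).mean(axis=0)[:5])   # mean predicted positions
```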


Reinforcement Learning and Distributed Model Predictive Control for Conflict Resolution in Highly Constrained Spaces

arXiv.org Artificial Intelligence

This work presents a distributed algorithm for resolving cooperative multi-vehicle conflicts in highly constrained spaces. By formulating the conflict resolution problem as a Multi-Agent Reinforcement Learning (RL) problem, we can train a policy offline to drive the vehicles towards their destinations safely and efficiently in a simplified discrete environment. During the online execution, each vehicle first simulates the interaction among vehicles with the trained policy to obtain its strategy, which is used to guide the computation of a reference trajectory. A distributed Model Predictive Controller (MPC) is then proposed to track the reference while avoiding collisions. The preliminary results show that the combination of RL and distributed MPC has the potential to guide vehicles to resolve conflicts safely and smoothly while being less computationally demanding than the centralized approach.
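
A hedged sketch of the online pipeline only, with stubs for the trained RL policy and the distributed MPC tracker; it shows the strategy-to-reference-to-tracking flow, not the actual learned policy or controller, and every function below is an illustrative placeholder.

```python
# Hypothetical sketch of the online execution: (i) roll out the trained policy
# in a simplified discrete world to get a strategy, (ii) convert it to a
# reference trajectory, (iii) track the reference (MPC stubbed as a one-step
# proportional tracker).
import numpy as np

def trained_policy(grid_state):
    """Stub for the offline-trained RL policy: returns a discrete move per vehicle."""
    moves = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0), "stay": (0, 0)}
    return {vid: moves["E"] for vid in grid_state}          # placeholder strategy

def strategy_to_reference(cell, moves, cell_size=2.0):
    """Turn a sequence of discrete moves into waypoints for the continuous planner."""
    ref, pos = [], np.array(cell, dtype=float) * cell_size
    for m in moves:
        pos = pos + np.array(m) * cell_size
        ref.append(pos.copy())
    return np.array(ref)

def track_step(x, ref_point, gain=0.3):
    """Stand-in for the distributed MPC: move a fraction of the way to the reference."""
    return x + gain * (ref_point - x)

grid_state = {"veh1": (0, 0), "veh2": (3, 1)}
moves = [trained_policy(grid_state)["veh1"] for _ in range(5)]
reference = strategy_to_reference(grid_state["veh1"], moves)

x = np.array([0.0, 0.0])
for ref_point in reference:
    x = track_step(x, ref_point)
print(x)   # vehicle 1's position after tracking the policy-derived reference
```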


Learning to Satisfy Unknown Constraints in Iterative MPC

arXiv.org Machine Learning

We propose a control design method for linear time-invariant systems that iteratively learns to satisfy unknown polyhedral state constraints. At each iteration of a repetitive task, the method constructs an estimate of the unknown environment constraints using collected closed-loop trajectory data. This estimated constraint set is improved iteratively upon collection of additional data. An MPC controller is then designed to robustly satisfy the estimated constraint set. This paper presents the details of the proposed approach, and provides robust and probabilistic guarantees of constraint satisfaction as a function of the number of executed task iterations. We demonstrate the safety of the proposed framework and explore the safety vs. performance trade-off in a detailed numerical example.
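
A minimal sketch of the constraint-estimation step, assuming the unknown safe region is inner-approximated by the convex hull of safely visited states; the robust and probabilistic tightening from the paper is omitted, and the "unknown true constraint" is synthetic.

```python
# Hedged sketch: estimate the unknown polyhedral constraint set from safely
# visited states and express it as half-spaces an MPC could then enforce.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

# states visited safely over several task iterations
# (unknown true constraint: unit box intersected with x0 + x1 <= 1.2)
candidates = rng.uniform(-1, 1, (400, 2))
safe_states = candidates[candidates.sum(axis=1) <= 1.2]

hull = ConvexHull(safe_states)
A, b = hull.equations[:, :-1], -hull.equations[:, -1]   # half-spaces: A x <= b

def in_estimated_safe_set(x, margin=0.0):
    """Membership test the MPC could use as a state constraint (optionally tightened)."""
    return bool(np.all(A @ x <= b - margin))

print(in_estimated_safe_set(np.array([0.0, 0.0])))      # well inside -> True
print(in_estimated_safe_set(np.array([0.9, 0.9])))      # violates x0 + x1 <= 1.2 -> False
```

As more task iterations add data, the hull grows toward the true constraint set, which mirrors the iterative improvement of the estimated constraints described in the abstract.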