Smooth Model Predictive Control with Applications to Statistical Learning

Kwangjun Ahn, Daniel Pfrommer, Jack Umenberger, Tobia Marcucci, Zak Mhammedi, Ali Jadbabaie

arXiv.org Artificial Intelligence 

Approximating complex state-feedback controllers with parametric deep neural network models is a straightforward technique for reducing the computational overhead of complex control policies, particularly in the context of Model Predictive Control (MPC). Learning a feedback controller to imitate an MPC policy over a given state distribution can overcome the limitations of both the implicit (online) and explicit (offline) variants of MPC. Implicit MPC uses an iterative numerical solver to obtain the optimal solution, which can be intractable in real time for high-dimensional systems with complex dynamics. Conversely, explicit MPC finds an offline formulation of the MPC controller via multi-parametric programming that can be quickly queried, but whose explicit representation grows poorly with the problem dimensions. Imitation learning (i.e., finding a feedback controller that approximates and performs similarly to the MPC policy) can transcend these limitations: the computationally expensive iterative numerical solver is used offline to learn a cheaply queryable, approximate policy solely over the state distribution relevant to the control problem, thereby bypassing the need to store the exact policy representation over the entire state domain. For continuous control problems, where approximately optimal control inputs suffice to solve the task, imitation learning is a direct path toward computationally inexpensive controllers that solve difficult, high-dimensional control problems in real time.
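The offline imitation pipeline described above can be sketched in a minimal toy example. Here a finite-horizon Riccati recursion on an assumed double-integrator system plays the role of the "expensive" implicit-MPC expert, and ordinary linear least squares stands in for the deep network fit over sampled states; all dynamics, costs, and function names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Assumed toy double-integrator dynamics and quadratic costs (illustrative only)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

def mpc_expert(x, horizon=20):
    """'Expensive' expert: backward Riccati recursion (the unconstrained-MPC
    solution), returning the first optimal input for state x."""
    P = Q.copy()
    K = None
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -(K @ x)

# Offline phase: sample states from the distribution relevant to the task,
# query the expert on each, and fit a cheap parametric policy.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                       # sampled states
U = np.array([mpc_expert(x).ravel() for x in X])     # expert labels
W, *_ = np.linalg.lstsq(X, U, rcond=None)            # cheap policy parameters

def learned_policy(x):
    """Cheaply queryable approximate policy: a single matrix multiply."""
    return x @ W

# Online phase: query the learned policy on a fresh state and compare
x_test = np.array([1.0, -0.5])
err = float(np.max(np.abs(learned_policy(x_test) - mpc_expert(x_test).ravel())))
```

Because the unconstrained finite-horizon expert here is exactly linear in the state, the least-squares fit recovers it nearly perfectly; with input constraints or nonlinear dynamics, the expert becomes a genuine iterative solver and the linear fit would be replaced by a richer function class such as a neural network.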
