Infinite-Horizon Value Function Approximation for Model Predictive Control
Armand Jordana, Sébastien Kleff, Arthur Haffemayer, Joaquim Ortiz-Haro, Justin Carpentier, Nicolas Mansard, Ludovic Righetti
arXiv.org Artificial Intelligence
Model Predictive Control (MPC) has emerged as a popular tool for generating complex robot motions. However, its real-time requirement limits the use of hard constraints and long preview horizons, both of which are needed to ensure safety and stability. In practice, one must carefully hand-design cost functions that imitate an infinite-horizon formulation, which is tedious and often leads to local minima. In this work, we study how to approximate the infinite-horizon value function of constrained optimal control problems with neural networks, using value iteration and trajectory optimization. Furthermore, we demonstrate how using this value function approximation as a terminal cost provides global stability to the model predictive controller. The approach is validated on two toy problems and on a real-world scenario with online obstacle avoidance on an industrial manipulator, where the value function is conditioned on the goal and the obstacle.
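The core idea described in the abstract (learning a value function by iterated Bellman backups and using it as an MPC terminal cost) can be sketched in a minimal form. The example below is our own illustrative stand-in, not the paper's method: it uses fitted value iteration with a quadratic feature model (instead of a neural network) on a 1D double integrator, and all names and parameters are assumptions.

```python
import numpy as np

# Minimal sketch of fitted value iteration producing a terminal cost
# for MPC. Toy 1D double integrator; dynamics, costs, and features are
# illustrative assumptions, not the paper's setup.

dt, gamma = 0.1, 0.99

def step(x, u):
    # x = [position, velocity]; explicit Euler integration
    return np.array([x[0] + dt * x[1], x[1] + dt * u])

def stage_cost(x, u):
    return x[0]**2 + 0.1 * x[1]**2 + 0.01 * u**2

def features(X):
    # Quadratic features: a simple stand-in for a neural network
    p, v = X[:, 0], X[:, 1]
    return np.stack([p**2, v**2, p * v, np.ones_like(p)], axis=1)

rng = np.random.default_rng(0)
w = np.zeros(4)                    # value-function weights
U = np.linspace(-5.0, 5.0, 21)     # discretized control candidates

for _ in range(30):
    X = rng.uniform(-2, 2, size=(128, 2))   # sampled states
    targets = np.empty(len(X))
    for i, x in enumerate(X):
        # Bellman backup: V(x) <- min_u [ l(x, u) + gamma * V(f(x, u)) ]
        q = [stage_cost(x, u) + gamma * features(step(x, u)[None])[0] @ w
             for u in U]
        targets[i] = min(q)
    # Regression step of fitted value iteration
    w, *_ = np.linalg.lstsq(features(X), targets, rcond=None)

def terminal_cost(x):
    # Learned approximation of the infinite-horizon value, to be
    # appended as the terminal cost of a short-horizon MPC problem
    return features(np.asarray(x, dtype=float)[None])[0] @ w
```

In the paper's setting, the minimization over `u` is replaced by constrained trajectory optimization and the feature model by a goal- and obstacle-conditioned neural network; the sketch only conveys the backup-and-fit structure.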
Feb-10-2025