Adviser-Actor-Critic: Eliminating Steady-State Error in Reinforcement Learning Control
Chen, Donghe, Peng, Yubin, Zheng, Tengjie, Wang, Han, Qu, Chaoran, Cheng, Lin
–arXiv.org Artificial Intelligence
High-precision control tasks present substantial challenges for reinforcement learning (RL) algorithms, frequently resulting in suboptimal performance attributed to network approximation inaccuracies and inadequate sample quality. These issues are exacerbated when the task requires the agent to achieve a precise goal state, as is common in robotics and other real-world applications. We introduce Adviser-Actor-Critic (AAC), designed ...

Dynamic modeling is crucial for understanding robot behavior and designing control strategies. However, real-world systems often display nonlinear behavior, making it difficult to create accurate models. Additionally, the high-dimensional state space of robots can lead to complex interactions between components, further complicating control (Buşoniu et al., 2018; Zhao et al., 2020a;b; Cao et al., 2023). To highlight these challenges, we discuss the attributes and limitations of existing control algorithms.
Feb-4-2025
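The abstract above is truncated and does not specify how the adviser is constructed, so the following is only a minimal illustrative sketch, not the paper's method: a toy 1-D plant whose imperfect goal-conditioned policy leaves a steady-state offset, plus a hypothetical integral-style adviser that feeds the unchanged actor a corrected virtual goal. The plant dynamics, gains, and the Adviser class are all assumptions made for illustration.

import numpy as np

# Hypothetical 1-D plant: first-order dynamics with a constant disturbance
# that produces a steady-state offset under an imperfect tracking policy.
class Plant:
    def __init__(self, disturbance=0.3):
        self.x = 0.0
        self.disturbance = disturbance

    def step(self, u, dt=0.05):
        # dx/dt = -x + u + d  (toy dynamics, not from the paper)
        self.x += dt * (-self.x + u + self.disturbance)
        return self.x

# Stand-in for a pretrained goal-conditioned RL actor. A real actor-critic
# policy would be a neural network; a proportional law with a gain error
# mimics the approximation inaccuracies mentioned in the abstract.
def actor(state, goal, gain=0.9):
    return gain * (goal - state)

# Hypothetical adviser: integrates the remaining goal error and hands the
# actor a corrected ("virtual") goal, steering the frozen actor toward zero
# steady-state error. This PI-style scheme is an assumption for illustration,
# not necessarily the adviser defined in the AAC paper.
class Adviser:
    def __init__(self, ki=0.5):
        self.ki = ki
        self.integral = 0.0

    def advise(self, state, goal, dt=0.05):
        self.integral += (goal - state) * dt
        return goal + self.ki * self.integral

goal = 1.0
plant, adviser = Plant(), Adviser()
for t in range(400):
    virtual_goal = adviser.advise(plant.x, goal)
    u = actor(plant.x, virtual_goal)
    plant.step(u)

print(f"final state: {plant.x:.4f} (target {goal})")

Running the loop without the adviser (feeding the raw goal to the actor) leaves the plant short of the target; with the integral correction the residual error is driven toward zero, which is the kind of steady-state behavior the title refers to.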