Review for NeurIPS paper: On the Stability and Convergence of Robust Adversarial Reinforcement Learning: A Case Study on Linear Quadratic Systems
–Neural Information Processing Systems
Additional Feedback: Overall, I have a somewhat negative opinion of the paper. My main concerns are:
1. The related work is not well discussed.
2. The authors define a robust stability condition that essentially makes a critical intermediate term in the analysis easy to deal with. A more reasonable assumption should instead be imposed directly on A, B, and C.

Post rebuttal: Some potential references on RARL that should be included are:
- Shioya et al., 2018, "Extending Robust Adversarial Reinforcement Learning Considering Adaptation and Diversity"
- Kihira et al., 2020, "Adversarial Reinforcement Learning-Based Robust Access Point Coordination Against Uncoordinated Interference"
- Li et al., 2019, "Robust Multi-Agent Reinforcement Learning via Minimax Deep Deterministic Policy Gradient"
- Mazumdar et al., 2019, "Policy-Gradient Algorithms Have No Guarantees of Convergence in Linear Quadratic Games"
- Gravell et al., 2020, "Policy Iteration for Linear Quadratic Games with Stochastic Parameters"
- Pan et al., 2019, "Risk Averse Robust Adversarial Reinforcement Learning"
- Havens et al., 2018, "Online Robust Policy Learning in the Presence of Unknown Adversaries"