Review for NeurIPS paper: On the Stability and Convergence of Robust Adversarial Reinforcement Learning: A Case Study on Linear Quadratic Systems
Additional Feedback: Overall, I have a somewhat negative opinion of the paper. My main concerns are: (1) the related work is not well discussed; (2) the authors define a robust stability condition that essentially makes a critical intermediate term in the analysis easy to handle. A more reasonable assumption should be imposed directly on A, B, C.

Post rebuttal: Some potential references on RARL that should be included are:
- Shioya et al. (2018), Extending Robust Adversarial Reinforcement Learning Considering Adaptation and Diversity
- Kihira et al. (2020), Adversarial Reinforcement Learning-based Robust Access Point Coordination Against Uncoordinated Interference
- Li et al. (2019), Robust Multi-Agent Reinforcement Learning via Minimax Deep Deterministic Policy Gradient
- Mazumdar et al. (2019), Policy-Gradient Algorithms Have No Guarantees of Convergence in Linear Quadratic Games
- Gravell et al. (2020), Policy Iteration for Linear Quadratic Games with Stochastic Parameters
- Pan et al. (2019), Risk Averse Robust Adversarial Reinforcement Learning
- Havens et al. (2018), Online Robust Policy Learning in the Presence of Unknown Adversaries
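For context on the assumption being criticized, a minimal sketch of the standard zero-sum linear-quadratic game underlying RARL case studies (the paper's exact robust stability condition may differ; the matrices and weights below are the conventional ones, not taken from the paper):

```latex
% Zero-sum LQ game: protagonist input u_t, adversarial disturbance w_t
x_{t+1} = A x_t + B u_t + C w_t,
\qquad
\min_{u}\,\max_{w}\;
\sum_{t=0}^{\infty}
\left( x_t^\top Q x_t + u_t^\top R^{u} u_t - w_t^\top R^{w} w_t \right).
% The review's point: rather than assuming stability of an intermediate
% closed-loop quantity arising in the analysis, a more primitive condition
% would be stated directly on (A, B, C), e.g. stabilizability of (A, B)
% together with a bound on the disturbance gain through C.
```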