Review for NeurIPS paper: Security Analysis of Safe and Seldonian Reinforcement Learning Algorithms

Neural Information Processing Systems 

All the reviewers support acceptance for the contributions, notably improvements to the robustness of RL algorithms against adversarial attacks and a clear exposition of how these methods can be applied to real-world problems. Please revise the paper to address the concerns raised in the reviews and rebuttal, in particular to better delineate the scope of the work. Separately, it may be useful to extend the broader impact statement to inform a casual reader that a mathematical safety guarantee on an algorithm is not a replacement for domain-specific safety requirements (for example, the diabetes treatment application would still need oversight for medical safety).