Review for NeurIPS paper: Improved Algorithms for Online Submodular Maximization via First-order Regret Bounds

Neural Information Processing Systems 

Weaknesses: This paper has a few weaknesses, which are as follows:

- Although the authors cite [9] and [10] as prior work on online submodular maximization, they do not compare their regret bounds with those of [9] and [10]. These two papers consider the online continuous submodular maximization problem (as opposed to the submodular set-function maximization studied in this work) and may seem different at first glance; however, since the multilinear extension of a submodular set function is continuous submodular, a combination of the continuous algorithms in [9] and [10] with a lossless rounding technique such as pipage rounding could be used for the discrete problem. In particular, the algorithms of [9] and [10] are almost identical to Algorithm 1 of this paper, and the only novelty of Algorithm 1 is the use of g_{f-l} (as opposed to f) as the utility function, which leads to \alpha-regret bounds with a curvature-dependent approximation ratio \alpha.

- I noticed that this issue is addressed in more detail in the appendix; however, I think the computational complexity of the discretized version of Algorithm 1 should also be discussed in the main paper (see the sketch below). In particular, implementing this algorithm on a real-world problem and comparing its performance across different discretization levels and various numbers of samples could have made the paper even better.
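To make the computational-complexity concern concrete, below is a minimal, illustrative sketch (not the authors' Algorithm 1, and not the exact procedure of [9] or [10]) of the per-round work a discretized continuous approach entails: a Monte Carlo estimate of the gradient of the multilinear extension followed by a Frank-Wolfe style update, assuming a simple cardinality constraint and a toy coverage objective. The function names (grad_multilinear, discretized_frank_wolfe) and parameters (num_steps, num_samples) are hypothetical; the point is that the value-oracle cost scales as num_steps * num_samples * n per round, which is exactly the dependence on the discretization level and the number of samples mentioned above.

import numpy as np

def grad_multilinear(f, x, num_samples=50, rng=None):
    """Monte Carlo estimate of the gradient of the multilinear extension
    F(x) = E_{S~x}[f(S)], where element i is included independently with
    probability x[i]; the i-th partial derivative is E[f(S+i) - f(S-i)]."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    grad = np.zeros(n)
    for _ in range(num_samples):
        s = rng.random(n) < x                      # sample S from the product distribution given by x
        for i in range(n):
            s_with, s_without = s.copy(), s.copy()
            s_with[i], s_without[i] = True, False
            grad[i] += f(s_with) - f(s_without)    # marginal effect of element i
    return grad / num_samples                      # costs 2 * n * num_samples value-oracle calls

def discretized_frank_wolfe(f, n, k, num_steps=20, num_samples=50, rng=None):
    """One pass of a discretized continuous-greedy / Frank-Wolfe update under a
    cardinality constraint |S| <= k: at each of num_steps iterations, move
    1/num_steps mass toward the feasible vertex best aligned with a sampled
    gradient. A lossless rounding step (e.g. pipage rounding) would then
    convert the fractional point x into a set; it is omitted here."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(n)
    for _ in range(num_steps):
        g = grad_multilinear(f, x, num_samples, rng)
        v = np.zeros(n)
        v[np.argsort(-g)[:k]] = 1.0                # linear maximization over the k-cardinality polytope
        x += v / num_steps
    return np.clip(x, 0.0, 1.0)

# Toy coverage objective: f(S) = number of universe elements covered by the chosen sets.
sets = [{0, 1}, {1, 2}, {2, 3}, {3, 4}, {0, 4}]

def f(s):
    covered = set()
    for i, chosen in enumerate(s):
        if chosen:
            covered |= sets[i]
    return len(covered)

x = discretized_frank_wolfe(f, n=len(sets), k=2, num_steps=10, num_samples=20)
print(x)  # fractional solution; total oracle cost grows as num_steps * num_samples * n

Even such a rough accounting of the oracle complexity, together with a small experiment varying num_steps and num_samples, would let readers judge the practical cost of the discretized approach relative to the set-function algorithms proposed in the paper.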