Reviews: Submodular Maximization via Gradient Ascent: The Case of Deep Submodular Functions

Neural Information Processing Systems 

The paper proves a very interesting result: for maximizing Deep Submodular Functions (DSFs) subject to matroid constraints, one can give efficient algorithms that, under mild assumptions on the singleton marginals, achieve an approximation factor better than 1 - 1/e (and potentially approaching 1 as the rank of the matroid grows). This is stated in Theorem 1, which I consider the main result of the paper. The basic idea behind the algorithm is that DSFs admit a natural concave extension, equations (4) and (5), which can be maximized by projected gradient ascent (a result proved in [3]). The authors show in Proposition 1 that this concave extension is close to the multilinear extension, and in Section 4 they show that the projected gradient ascent algorithm can be implemented efficiently (e.g., the subgradient of the concave extension can be computed efficiently thanks to the deep structure). The paper is well written and the results are novel.
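
To make the algorithmic idea concrete, the following is a minimal sketch (not the authors' code) of projected gradient ascent on the natural concave extension of a toy two-layer DSF under a uniform-matroid (cardinality) constraint. The particular extension f_hat(x) = sum_j a_j * sqrt(w_j . x), the bisection-based projection, and the top-k rounding at the end are all illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 20, 5, 4                      # ground-set size, hidden units, matroid rank
W = rng.random((m, n))                  # nonnegative weights (preserve concavity)
a = rng.random(m)                       # nonnegative mixture weights

def f_hat(x):
    """Concave extension: concave nondecreasing sqrt over nonnegative linear maps."""
    return a @ np.sqrt(W @ x + 1e-12)

def grad_f_hat(x):
    """Gradient via the chain rule; the 1e-12 avoids division by zero at x = 0."""
    return W.T @ (a * 0.5 / np.sqrt(W @ x + 1e-12))

def project(y, k):
    """Euclidean projection onto {x in [0,1]^n : sum(x) <= k}, bisecting on the shift tau."""
    x = np.clip(y, 0.0, 1.0)
    if x.sum() <= k:                    # sum constraint inactive: box projection suffices
        return x
    lo, hi = 0.0, y.max()
    for _ in range(60):                 # bisection on the monotone map tau -> sum(clip(y - tau))
        tau = 0.5 * (lo + hi)
        s = np.clip(y - tau, 0.0, 1.0).sum()
        lo, hi = (lo, tau) if s < k else (tau, hi)
    return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)

x = np.full(n, k / n)                   # feasible fractional starting point
for t in range(1, 201):
    x = project(x + grad_f_hat(x) / np.sqrt(t), k)   # diminishing step size 1/sqrt(t)

# Simple top-k rounding as an illustrative stand-in for a proper rounding scheme
S = np.argsort(-x)[:k]
print(f"f_hat(x) = {f_hat(x):.4f}, rounded set = {sorted(S.tolist())}")
```

The sketch illustrates why the deep structure matters: the (sub)gradient of the extension is just a chain-rule pass through the layers, so each ascent step costs a forward and backward pass rather than an expensive oracle computation.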