Stochastic Continuous Greedy ++: When Upper and Lower Bounds Match

Neural Information Processing Systems 

In this paper, we develop Stochastic Continuous Greedy++ (SCG++), the first efficient variant of a conditional gradient method for maximizing a continuous submodular function subject to a convex constraint. Concretely, for a monotone and continuous DR-submodular function, SCG++ achieves a tight $[(1-1/e)\text{OPT} - \epsilon]$ solution while using $\mathcal{O}(1/\epsilon^2)$ stochastic gradients and $\mathcal{O}(1/\epsilon)$ calls to the linear optimization oracle. We further provide an information-theoretic lower bound to showcase the necessity of $\Omega(1/\epsilon^2)$ stochastic oracle queries in order to achieve a $[(1-1/e)\text{OPT} - \epsilon]$ solution for monotone DR-submodular functions. This result shows that our proposed SCG++ enjoys optimality in terms of both the approximation guarantee, i.e., the $(1-1/e)$ approximation factor, and the number of stochastic gradient evaluations, i.e., $\mathcal{O}(1/\epsilon^2)$ calls to the stochastic oracle. By using stochastic continuous optimization as an interface, we also show that it is possible to obtain the tight $[(1-1/e)\text{OPT} - \epsilon]$ approximation guarantee for maximizing a monotone but stochastic submodular set function subject to a general matroid constraint after at most $\mathcal{O}(n^2/\epsilon^2)$ calls to the stochastic function value oracle, where $n$ is the number of elements in the ground set.
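To make the algorithmic template concrete, the following is a minimal, illustrative Python sketch of a stochastic continuous greedy (conditional-gradient) loop for monotone DR-submodular maximization over a convex set. It is not the paper's SCG++: it uses simple momentum averaging of stochastic gradients rather than the variance-reduced estimator that underlies the $\mathcal{O}(1/\epsilon^2)$ gradient complexity, and the oracle names, step schedule, and toy usage below are assumptions made for illustration only.

```python
import numpy as np

def stochastic_continuous_greedy(stoch_grad, lmo, dim, T=100, rho=0.5):
    """Generic stochastic continuous greedy / conditional-gradient loop (sketch).

    stoch_grad(x) -> unbiased stochastic gradient of the objective at x (assumed oracle)
    lmo(g)        -> argmax_{v in feasible set} <g, v> (linear optimization oracle)

    This is the classic SCG template, not the paper's SCG++ estimator:
    noisy gradients are smoothed by momentum averaging, and the iterate moves
    a 1/T fraction along the LMO direction at each of the T steps.
    """
    x = np.zeros(dim)  # start at the origin (assumed feasible)
    d = np.zeros(dim)  # running (averaged) gradient estimate
    for t in range(1, T + 1):
        rho_t = rho / t ** (2.0 / 3.0)               # decaying averaging weight
        d = (1 - rho_t) * d + rho_t * stoch_grad(x)  # smooth the stochastic gradient
        v = lmo(d)                                   # ascent vertex from the LMO
        x = x + v / T                                # Frank-Wolfe style 1/T step
    return x

# Toy usage (illustrative only): a noisy linear objective over {x in [0,1]^n}
# with an LMO that selects the k coordinates with the largest gradient estimate,
# mimicking a simple cardinality (uniform matroid) constraint.
rng = np.random.default_rng(0)
w = rng.random(10)
noisy_grad = lambda x: w + 0.1 * rng.standard_normal(10)

def top_k_lmo(g, k=3):
    v = np.zeros_like(g)
    v[np.argsort(g)[-k:]] = 1.0
    return v

x_hat = stochastic_continuous_greedy(noisy_grad, top_k_lmo, dim=10)
```

For the discrete setting mentioned in the abstract, the same continuous loop would be run on a stochastic estimate of the multilinear extension of the set function over the matroid polytope, followed by a rounding step; that pipeline is sketched here only at the level of the continuous optimization interface.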