Submodular Maximization in Clean Linear Time

Neural Information Processing Systems 

In this paper, we provide the first deterministic algorithm that achieves a $1/2$-approximation for monotone submodular maximization subject to a knapsack constraint, while making a number of queries that scales only linearly with the size of the ground set $n$. Moreover, our result automatically paves the way for developing a linear-time deterministic algorithm that achieves the tight $1-1/e$ approximation guarantee for monotone submodular maximization under a cardinality (size) constraint. To complement our positive results, we also show strong information-theoretic lower bounds. More specifically, we show that when the maximum cardinality allowed for a solution is constant, no deterministic or randomized algorithm making a sub-linear number of function evaluations can guarantee any constant approximation ratio. Furthermore, when the constraint allows the selection of a constant fraction of the ground set, we show that any algorithm making fewer than $\Omega(n/\log(n))$ function evaluations cannot perform better than an algorithm that simply outputs a uniformly random subset of the ground set of the right size.
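Since the abstract only states the guarantees, the following minimal Python sketch (our own illustration, not the paper's algorithm) makes the problem setting concrete: a monotone submodular objective, here a hypothetical coverage function, maximized under a knapsack (budget) constraint by the classical cost-benefit greedy baseline. The names `coverage` and `cost_benefit_greedy`, the toy instance, and the costs and budget are all assumptions for illustration; note that this baseline issues on the order of $nk$ value-oracle queries ($k$ being the number of selected elements), in contrast with the linear query complexity claimed above.

```python
def coverage(S, sets_by_element):
    """Monotone submodular objective: number of items covered by the chosen elements."""
    covered = set()
    for e in S:
        covered |= sets_by_element[e]
    return len(covered)


def cost_benefit_greedy(ground_set, f, cost, budget):
    """Classical baseline (not the paper's linear-time algorithm): repeatedly add
    the budget-feasible element with the best marginal-gain-to-cost ratio."""
    S, spent = [], 0.0
    remaining = set(ground_set)
    while True:
        base = f(S)
        best, best_ratio = None, 0.0
        for e in remaining:
            if spent + cost[e] > budget:
                continue  # element does not fit in the remaining budget
            gain = f(S + [e]) - base
            ratio = gain / cost[e]
            if ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None:
            break  # no feasible element has positive marginal gain
        S.append(best)
        spent += cost[best]
        remaining.remove(best)
    return S


# Hypothetical toy instance: 4 elements, each covering a few items, with costs
# and a budget of 2.
sets_by_element = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1}}
cost = {0: 1.0, 1: 0.5, 2: 1.0, 3: 0.5}
f = lambda S: coverage(S, sets_by_element)
print(cost_benefit_greedy(range(4), f, cost, budget=2.0))
```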