submodularity


Appendix A: Missing Proofs (A.1: Proof of Lemma 1)

Neural Information Processing Systems

For clarity, we decompose Lemma 2 into three lemmas (Lemmas 4-6) and prove each of them; likewise, we decompose Lemma 3 into three lemmas (Lemmas 7-9) and prove each of them. The proof of Lemma 10 is similar to that of Lemma 1. From the above process, it can easily be seen that Conditions 1-3 of the lemma are satisfied. Theorem 3 can also be used to prove Theorem 2, simply by setting p = 1. By reasoning similar to the proof of Theorem 1 (Appendix A.3), we obtain the analogous bound. It can easily be verified that the social network monitoring problem considered in Section 5 is a non-monotone submodular maximization problem subject to a partition matroid constraint.
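For readers unfamiliar with the constraint class mentioned above, here is a minimal Python sketch of a partition matroid independence check: the ground set is split into disjoint parts, and a set is feasible iff it takes at most a fixed number of elements from each part. All names (is_independent, part_of, capacity) are our own illustrative choices, not code from the paper.

```python
from collections import Counter

def is_independent(selected, part_of, capacity):
    """Partition matroid oracle: `selected` is independent iff it
    contains at most capacity[p] elements from each part p.

    part_of:  dict mapping each ground-set element to its part.
    capacity: dict mapping each part to its per-part limit.
    """
    counts = Counter(part_of[e] for e in selected)
    return all(counts[p] <= capacity[p] for p in counts)

# Toy usage: monitors grouped by region, at most 2 from region r1.
part_of = {"a": "r1", "b": "r1", "c": "r1", "d": "r2"}
capacity = {"r1": 2, "r2": 1}
assert is_independent({"a", "b", "d"}, part_of, capacity)
assert not is_independent({"a", "b", "c"}, part_of, capacity)
```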


Fast Parallel Algorithms for Statistical Subset Selection Problems

Neural Information Processing Systems

In this paper, we propose a new framework for designing fast parallel algorithms for fundamental statistical subset selection tasks, including feature selection and experimental design. Such tasks are known to be weakly submodular and are amenable to optimization via the standard greedy algorithm. Despite its desirable approximation guarantees, however, the greedy algorithm is inherently sequential and, in the worst case, its parallel runtime is linear in the size of the data. Recently, there has been a surge of interest in a parallel optimization technique called adaptive sampling, which produces solutions with desirable approximation guarantees for submodular maximization in exponentially faster parallel runtime. Unfortunately, we show that for general weakly submodular functions such accelerations are impossible. The major contribution of this paper is a novel relaxation of submodularity which we call differential submodularity. We first prove that differential submodularity characterizes objectives like feature selection and experimental design. We then design an adaptive sampling algorithm for differentially submodular functions whose parallel runtime is logarithmic in the size of the data and which achieves strong approximation guarantees. Through experiments, we show that the algorithm's performance is competitive with state-of-the-art methods and that it obtains dramatic speedups for feature selection and experimental design problems.
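To make the sequential baseline concrete, the sketch below shows the standard greedy algorithm the abstract contrasts with adaptive sampling; `f` is any set-function oracle. The function names and toy objective are our own assumptions, not the paper's code. Note that the k rounds must run one after another, which is exactly the sequential bottleneck the paper's parallel approach avoids.

```python
def greedy(f, ground_set, k):
    """Standard greedy for maximizing a set function f under a
    cardinality constraint |S| <= k.

    Each of the k rounds scans all remaining elements and adds the
    one with the largest marginal gain f(S + e) - f(S), so the
    parallel depth is O(k): inherently sequential.
    """
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground_set - S:
            gain = f(S | {e}) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element improves the objective
            break
        S.add(best)
    return S

# Toy usage: a coverage-style submodular objective.
sets = {1: {"x"}, 2: {"x", "y"}, 3: {"z"}}
f = lambda S: len(set().union(*(sets[e] for e in S)))
print(greedy(f, set(sets), 2))  # {2, 3}, covering x, y, z
```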


Budgeted stream-based active learning via adaptive submodular maximization

Neural Information Processing Systems

Active learning enables us to reduce annotation cost by adaptively selecting unlabeled instances to be labeled. For pool-based active learning, several effective methods with theoretical guarantees have been developed by maximizing utility functions satisfying adaptive submodularity. In contrast, there have been few methods for stream-based active learning based on adaptive submodularity. In this paper, we propose a new class of utility functions, policy-adaptive submodular functions, and prove that this class includes many existing adaptive submodular functions appearing in real-world problems. We provide a general framework based on policy-adaptive submodularity that makes it possible to convert existing pool-based methods into stream-based methods, and we give theoretical guarantees on their performance. In addition, we empirically demonstrate their effectiveness in comparison with existing heuristics on common benchmark datasets.
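To illustrate what distinguishes the stream-based setting, here is a minimal sketch of the generic selection loop: items arrive one at a time and an irrevocable query/skip decision is made under a labeling budget. The threshold rule used here is a common streaming heuristic, not the paper's policy-adaptive algorithm, and all names are hypothetical.

```python
def stream_select(stream, utility, budget, threshold):
    """Generic stream-based selection loop (illustrative only).

    Items arrive one by one; each is either queried immediately or
    discarded forever. We query an item when its marginal utility
    over the already-labeled set reaches `threshold`, until the
    labeling `budget` is exhausted.
    """
    labeled = []
    for x in stream:
        if len(labeled) >= budget:
            break
        gain = utility(labeled + [x]) - utility(labeled)
        if gain >= threshold:
            labeled.append(x)  # irrevocable decision to query x
    return labeled

# Toy usage: utility = number of distinct clusters covered so far.
cluster = {"a": 0, "b": 0, "c": 1, "d": 2}
u = lambda items: len({cluster[i] for i in items})
print(stream_select("abcd", u, budget=2, threshold=1))  # ['a', 'c']
```

The key contrast with the pool-based setting is irrevocability: a pool-based method may revisit any unlabeled instance, whereas the stream loop above must commit to each item as it passes.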


Discretely Beyond 1/e: Guided Combinatorial Algorithms for Submodular Maximization

Neural Information Processing Systems

These improved approximation ratios are achieved by guiding the randomized greedy algorithm with a fast local search algorithm. Further, we develop deterministic versions of these algorithms that maintain the same ratios and asymptotic time complexity.
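For context, the randomized greedy being guided is the classical random greedy for non-monotone submodular maximization (Buchbinder et al.): in each of k rounds it samples uniformly from the k elements of largest marginal gain. Below is a minimal sketch with our own naming, shown without the local-search guidance the paper adds on top.

```python
import heapq
import random

def random_greedy(f, ground_set, k, seed=0):
    """Classical randomized greedy for non-monotone submodular
    maximization under |S| <= k, WITHOUT the guiding local-search
    phase the paper introduces.

    Each round samples uniformly from the (up to) k elements of
    largest marginal gain; this uniform choice is what yields the
    1/e-in-expectation guarantee (dummy-element padding omitted
    for brevity).
    """
    rng = random.Random(seed)
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in ground_set - S}
        if not gains:
            break
        top = heapq.nlargest(k, gains, key=gains.get)
        S.add(rng.choice(top))
    return S
```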