Goto
Budgeted stream-based active learning via adaptive submodular maximization

Neural Information Processing Systems

Active learning enables us to reduce annotation cost by adaptively selecting which unlabeled instances to label. For pool-based active learning, several effective methods with theoretical guarantees have been developed by maximizing a utility function that satisfies adaptive submodularity. In contrast, few stream-based active learning methods build on adaptive submodularity. In this paper, we propose a new class of utility functions, policy-adaptive submodular functions, and prove that this class includes many existing adaptive submodular functions arising in real-world problems. We provide a general framework based on policy-adaptive submodularity that makes it possible to convert existing pool-based methods into stream-based methods, and we give theoretical guarantees on their performance. In addition, we empirically demonstrate their effectiveness by comparing them with existing heuristics on common benchmark datasets.
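To make the pool-based primitive concrete, here is a minimal sketch of greedy maximization of a monotone submodular utility (set coverage), the kind of subroutine such frameworks convert into stream-based selection rules. The coverage utility, the toy data, and all function names are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch (not the paper's code): greedy maximization of a
# monotone submodular set function -- here, plain set coverage.

def coverage(selected, sets):
    """Submodular utility: number of distinct elements covered."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy(sets, budget):
    """Pick up to `budget` sets, each time taking the largest marginal gain."""
    selected = []
    for _ in range(budget):
        gains = {i: coverage(selected + [i], sets) - coverage(selected, sets)
                 for i in range(len(sets)) if i not in selected}
        best = max(gains, key=gains.get)
        if gains[best] == 0:
            break  # no remaining set adds anything new
        selected.append(best)
    return selected

# Toy pool of labelable "regions"; greedy picks the two most complementary.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy(sets, 2))  # -> [0, 2], covering all six elements
```

For monotone submodular objectives this greedy rule enjoys the classical (1 - 1/e) approximation guarantee; the stream-based setting is harder because each instance must be accepted or discarded on arrival.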

Fast Adaptive Non-Monotone Submodular Maximization Subject to a Knapsack Constraint Supplementary Material

Neural Information Processing Systems

In this appendix, we include all the material missing from the main paper. Moreover, we restate a key result that connects random sampling and submodular maximization; the original version of the theorem is due to Feige et al. In what follows we exclusively use S and O for their final versions. Before stating the next lemma, let us introduce some notation for the sake of readability.
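As background for the knapsack-constrained setting, here is a minimal sketch of the classical cost-benefit greedy baseline: repeatedly add the affordable item with the best marginal-gain-per-cost ratio. The coverage utility, item costs, and names below are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's algorithm): cost-benefit greedy
# for submodular maximization under a knapsack (budget) constraint.

def utility(selected, sets):
    """Submodular utility: number of distinct elements covered."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def cost_benefit_greedy(sets, costs, budget):
    """Add the feasible item maximizing marginal gain per unit cost."""
    selected, spent = [], 0.0
    while True:
        best, best_ratio = None, 0.0
        for i in range(len(sets)):
            if i in selected or spent + costs[i] > budget:
                continue  # already chosen or would exceed the budget
            gain = utility(selected + [i], sets) - utility(selected, sets)
            ratio = gain / costs[i]
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:
            return selected  # nothing affordable improves the utility
        selected.append(best)
        spent += costs[best]

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
costs = [2.0, 1.0, 2.0, 1.0]
print(cost_benefit_greedy(sets, costs, 3.0))  # -> [1, 3]
```

On its own this ratio rule has no constant-factor guarantee (it is usually combined with the best single feasible item); fast adaptive algorithms aim for comparable quality with far fewer utility evaluations, e.g. via the random-sampling argument restated above.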