A New Approach to Drifting Games, Based on Asymptotically Optimal Potentials
Wang, Zhilei, Kohn, Robert V.
This paper develops a fresh approach to the analysis of some drifting games. Our focus is on identifying asymptotically optimal potential-based strategies for several versions of this repeated two-person game. Our approach involves (a) guessing an asymptotically optimal potential by solving an associated PDE (which is in general highly nonlinear), then (b) justifying the guess by proving upper and lower bounds on the final-time loss whose difference scales like a negative power of the number of time steps. The upper bounds are based on potential-based strategies for the player, and the lower bounds are similarly based on strategies for the adversary. The proofs are rather elementary, using Taylor expansion and the explicit character of the potential. Most previous work on asymptotically optimal strategies has used potentials obtained by solving a discrete dynamic programming principle, which is complicated and sometimes intractable.
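As a schematic of this two-step argument (the specific PDE, loss, and exponent below are placeholders, not the paper's actual equations), the potential u is chosen to solve a terminal-value problem of the generic form

    \[ \partial_t u + F(\nabla u, \nabla^2 u) = 0, \qquad u(T, x) = \ell(x), \]

where \ell is the final-time loss. The player follows the potential-based strategy determined by u along the game's state, and Taylor expansion of the explicit solution yields two-sided bounds

    \[ u(0, x_0) - C n^{-\alpha} \;\le\; \text{final-time loss} \;\le\; u(0, x_0) + C n^{-\alpha} \]

for some constants C, \alpha > 0, so the gap between the upper bound (from the player's strategy) and the lower bound (from the adversary's strategy) vanishes as the number of time steps n grows.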
Achieving Minimax Rates in Pool-Based Batch Active Learning
Gentile, Claudio, Wang, Zhilei, Zhang, Tong
We consider a batch active learning scenario where the learner adaptively issues batches of points to a labeling oracle. Sampling labels in batches is highly desirable in practice because it reduces the number of interactive rounds with the labeling oracle (often human beings). However, batch active learning typically pays the price of reduced adaptivity, leading to suboptimal results. In this paper we propose a solution that requires a careful trade-off between the informativeness of the queried points and their diversity. We theoretically investigate batch active learning in the practically relevant scenario where the unlabeled pool of data is available beforehand (pool-based active learning). We analyze a novel stage-wise greedy algorithm and show that, as a function of the label complexity, the excess risk of this algorithm in the realizable setting matches the known minimax rates of standard statistical learning settings. Our results also exhibit only a mild dependence on the batch size. These are the first theoretical results that employ a careful trade-off between informativeness and diversity to rigorously quantify the statistical performance of batch active learning in the pool-based scenario.
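The stage-wise algorithm itself is more involved than an abstract can convey; as a minimal sketch of the informativeness/diversity trade-off it describes (an illustrative placeholder, not the authors' actual procedure; select_batch, lam, and the uncertainty scores are hypothetical names), a greedy batch selector might score each candidate by its informativeness plus a penalty rewarding distance from points already chosen for the batch:

    import numpy as np

    def select_batch(pool, uncertainty, batch_size, lam=1.0):
        """Greedily pick a batch trading off informativeness and diversity.

        pool        : (n, d) array of unlabeled feature vectors
        uncertainty : (n,) informativeness score per point (e.g. margin-based)
        lam         : hypothetical knob weighting the diversity term
        """
        selected = []
        for _ in range(batch_size):
            if selected:
                # Diversity: distance from each candidate to its nearest
                # already-selected point (larger = more diverse).
                chosen = pool[selected]                                  # (k, d)
                dists = np.linalg.norm(pool[:, None, :] - chosen[None, :, :], axis=2)
                diversity = dists.min(axis=1)                            # (n,)
            else:
                diversity = np.zeros(len(pool))
            score = uncertainty + lam * diversity
            score[selected] = -np.inf        # never re-pick a selected point
            selected.append(int(np.argmax(score)))
        return selected

    # Tiny usage example on random data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    u = rng.uniform(size=100)
    print(select_batch(X, u, batch_size=3))

Setting lam = 0 recovers pure uncertainty sampling, which can waste a batch on near-duplicate points; increasing lam spreads the batch across the pool, mirroring the trade-off the abstract formalizes.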