
Neural Information Processing Systems

Diplomacy is a complex environment, where training requires significant time. The action is an allocation of the player's coins across the fields: the player decides how many of its $c$ coins to put in each of the fields, choosing $c_1, c_2, \ldots, c_f$ where $\sum_{i=1}^{f} c_i = c$. Finally, Blotto is a single-turn (i.e.
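The Blotto action space described above (nonnegative integer allocations $c_1, \ldots, c_f$ summing to $c$) can be sampled uniformly via the stars-and-bars bijection. A minimal sketch; the function name and parameters are illustrative, not from the paper:

```python
import random

def random_blotto_action(c, f, rng=random):
    """Sample a uniformly random allocation of c coins across f fields.

    Stars-and-bars bijection: choose f-1 distinct cut points among
    c + f - 1 slots, then read off the gaps as per-field coin counts.
    """
    cuts = sorted(rng.sample(range(c + f - 1), f - 1))
    bounds = [-1] + cuts + [c + f - 1]
    return [bounds[i + 1] - bounds[i] - 1 for i in range(f)]

# Example: allocate 10 coins over 3 fields.
action = random_blotto_action(c=10, f=3)
```

Every allocation returned is nonnegative and sums to exactly `c`, and each of the $\binom{c+f-1}{f-1}$ allocations is equally likely.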




Prior-independent Dynamic Auctions for a Value-maximizing Buyer

Neural Information Processing Systems

Automatic bidding has become one of the main options for advertisers to buy advertisement opportunities in the online advertising market [Dolan, 2020]. The prevalence of automatic bidding is partly driven by the fact that it significantly simplifies the interaction between the advertisers and the advertising platform.


Supplementary Materials: A Proof of Theorem 2: Asymptotic Convergence of Robust Q-Learning

Neural Information Processing Systems

From [Borkar and Meyn, 2000], we know that the stochastic approximation (18) converges to the fixed point of $T$, i.e., $Q^*$. Finally, to show Theorem 3, we only need to show that each term in (56) is smaller than $\epsilon$. In this section we develop the finite-time analysis of the robust TDC algorithm. We note that there have recently been several works [Srikant and Ying, 2019, Xu and Liang, 2021, Kaledin et al., 2020] on finite-time analysis of RL algorithms that do not need the projection. Specifically, the problem in [Srikant and Ying, 2019] is a one-time-scale linear stochastic approximation.
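The fixed-point argument above is the stochastic-approximation analogue of a simple fact: the Bellman optimality operator $T$ is a $\gamma$-contraction, so iterating it converges to $Q^*$. A deterministic (non-robust) illustration on a hypothetical two-state, two-action MDP; the transition and reward numbers are made up for illustration:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustrative; not the paper's robust setting).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])  # P[s, a, s']
R = np.array([[1.0, 0.0], [0.0, 2.0]])    # R[s, a]
gamma = 0.9

def T(Q):
    """Bellman optimality operator: (TQ)(s, a) = R(s, a) + gamma * E[max_a' Q(s', a')]."""
    return R + gamma * P @ Q.max(axis=1)

Q = np.zeros((2, 2))
for _ in range(500):
    Q = T(Q)  # T is a gamma-contraction, so the iterates converge to its fixed point Q*
```

After enough iterations the Bellman residual $\|Q - TQ\|_\infty$ is negligible, which is exactly the fixed-point property the convergence argument targets.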


FullyUnconstrainedOnlineLearning

Neural Information Processing Systems

We provide a technique for online convex optimization that obtains regret $G\|w\|\sqrt{T\log(\|w\|G\sqrt{T})} + \|w\|^2 + G^2$ on $G$-Lipschitz losses for any comparison point $w$, without knowing either $G$ or $\|w\|$.
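A classical building block for such parameter-free guarantees is coin betting. The Krichevsky-Trofimov bettor below handles the one-dimensional case when a bound $|g_t| \le 1$ is known in advance, which is precisely the prior knowledge this paper's technique removes. An illustrative sketch, not the paper's algorithm:

```python
def kt_bettor(grads, wealth0=1.0):
    """Krichevsky-Trofimov coin betting for 1-D online linear optimization.

    Assumes |g_t| <= 1 for all t. Yields the iterate w_t before g_t is
    revealed; wealth is then updated with the realized linear gain/loss.
    """
    wealth, s = wealth0, 0.0        # s accumulates -(g_1 + ... + g_{t-1})
    for t, g in enumerate(grads, start=1):
        beta = s / t                # KT betting fraction
        w = beta * wealth
        yield w
        wealth -= g * w             # update wealth after observing g_t
        s -= g
```

When the gradients are consistently negative (the comparator wants a large positive $w$), the bettor's wealth compounds and its iterates grow automatically, with no learning rate to tune.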


Adaptive Federated Optimization

Reddi, Sashank, Charles, Zachary, Zaheer, Manzil, Garrett, Zachary, Rush, Keith, Konečný, Jakub, Kumar, Sanjiv, McMahan, H. Brendan

arXiv.org Machine Learning

Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data. Due to the heterogeneity of the client datasets, standard federated optimization methods such as Federated Averaging (FedAvg) are often difficult to tune and exhibit unfavorable convergence behavior. In non-federated settings, adaptive optimization methods have had notable success in combating such issues. In this work, we propose federated versions of adaptive optimizers, including Adagrad, Adam, and Yogi, and analyze their convergence in the presence of heterogeneous data for general nonconvex settings. Our results highlight the interplay between client heterogeneity and communication efficiency. We also perform extensive experiments on these methods and show that the use of adaptive optimizers can significantly improve the performance of federated learning.
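The adaptive federated optimizers described above share a common pattern: clients run a few steps of local SGD, and the server treats the averaged model delta as a pseudo-gradient fed into an adaptive (here Adam-style) update. A simplified one-round sketch under illustrative defaults, not the paper's exact algorithm or hyperparameters:

```python
import numpy as np

def fedadam_round(w, client_grad_fn, clients, m, v, lr=0.05, beta1=0.9,
                  beta2=0.99, tau=1e-3, local_steps=5, local_lr=0.01):
    """One FedAdam-style round (simplified sketch).

    Each client runs local SGD from the current server model; the server
    averages the resulting deltas and applies an Adam-style update to them.
    """
    deltas = []
    for c in clients:
        w_c = w.copy()
        for _ in range(local_steps):
            w_c -= local_lr * client_grad_fn(c, w_c)
        deltas.append(w_c - w)
    d = np.mean(deltas, axis=0)                # averaged delta = pseudo-gradient
    m = beta1 * m + (1 - beta1) * d            # first moment
    v = beta2 * v + (1 - beta2) * d ** 2       # second moment
    w = w + lr * m / (np.sqrt(v) + tau)        # adaptive server step
    return w, m, v

# Usage: two hypothetical clients with quadratic losses centered at 1 and 3,
# so the global optimum is at 2 (heterogeneous data in miniature).
centers = {0: 1.0, 1: 3.0}
grad = lambda c, w: w - centers[c]
w, m, v = np.array([0.0]), np.array([0.0]), np.array([0.0])
for _ in range(300):
    w, m, v = fedadam_round(w, grad, [0, 1], m, v)
```

The adaptivity parameter `tau` plays the role of the degree-of-adaptivity control highlighted in the paper's analysis: larger values make the server step closer to plain averaged SGD.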


Nonmyopic Gaussian Process Optimization with Macro-Actions

Kharkovskii, Dmitrii, Ling, Chun Kai, Low, Kian Hsiang

arXiv.org Machine Learning

This paper presents a multi-staged approach to nonmyopic adaptive Gaussian process optimization (GPO) for Bayesian optimization (BO) of unknown, highly complex objective functions that, in contrast to existing nonmyopic adaptive BO algorithms, exploits the notion of macro-actions for scaling up to a further lookahead to match up to a larger available budget. To achieve this, we generalize GP upper confidence bound to a new acquisition function defined w.r.t. a nonmyopic adaptive macro-action policy, which is intractable to optimize exactly due to an uncountable set of candidate outputs. The contribution of our work here is thus to derive a nonmyopic adaptive epsilon-Bayes-optimal macro-action GPO (epsilon-Macro-GPO) policy. To perform nonmyopic adaptive BO in real time, we then propose an asymptotically optimal anytime variant of our epsilon-Macro-GPO policy with a performance guarantee. We empirically evaluate the performance of our epsilon-Macro-GPO policy and its anytime variant in BO with synthetic and real-world datasets.
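For context, the myopic GP-UCB acquisition that the paper generalizes scores each candidate by its posterior mean plus a scaled posterior standard deviation. A NumPy sketch with an RBF kernel; the kernel length-scale, `beta`, and noise level are illustrative choices, not values from the paper:

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel between row-stacked points A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_ucb(X_obs, y_obs, X_cand, beta=2.0, noise=1e-6):
    """Myopic GP-UCB acquisition: mu(x) + sqrt(beta) * sigma(x)."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf(X_cand, X_obs)
    Kss = rbf(X_cand, X_cand)
    mu = Ks @ np.linalg.solve(K, y_obs)                  # posterior mean
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)            # posterior covariance
    sigma = np.sqrt(np.clip(np.diag(cov), 0.0, None))    # posterior std dev
    return mu + np.sqrt(beta) * sigma

# Pick the next query point on a 1-D grid given two observations.
X_obs = np.array([[0.2], [0.8]]); y_obs = np.array([0.5, 1.0])
X_cand = np.linspace(0, 1, 101)[:, None]
x_next = X_cand[np.argmax(gp_ucb(X_obs, y_obs, X_cand))]
```

The paper's acquisition replaces the single-point argmax with a value defined over a nonmyopic adaptive macro-action policy, which is what makes exact optimization intractable.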


On the Convergence of SARAH and Beyond

Li, Bingcong, Ma, Meng, Giannakis, Georgios B.

arXiv.org Machine Learning

The main theme of this work is a unifying algorithm, abbreviated as L2S, that can deal with (strongly) convex and nonconvex empirical risk minimization (ERM) problems. It broadens a recently developed variance reduction method known as SARAH. L2S enjoys a linear convergence rate for strongly convex problems, which also implies the last iteration of SARAH's inner loop converges linearly. For convex problems, different from SARAH, L2S can afford step and mini-batch sizes not dependent on the data size $n$, and the complexity needed to guarantee $\mathbb{E}[\|\nabla F(\mathbf{x}) \|^2] \leq \epsilon$ is ${\cal O}(n+ n/\epsilon)$. For nonconvex problems on the other hand, the complexity is ${\cal O}(n+ \sqrt{n}/\epsilon)$. Parallel to L2S, there are a few side results. Leveraging an aggressive step size, D2S is proposed, which provides a more efficient alternative to L2S and SARAH-like algorithms. Specifically, D2S requires a reduced IFO complexity of ${\cal O}\big( (n+ \bar{\kappa}) \ln (1/\epsilon) \big)$ for strongly convex problems. Moreover, to avoid the tedious selection of the optimal step size, an automatic tuning scheme is developed, which obtains empirical performance comparable to SARAH with a judiciously tuned step size.
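The SARAH estimator that L2S builds on maintains a recursive gradient estimate $v_t = \nabla f_{i_t}(\mathbf{x}_t) - \nabla f_{i_t}(\mathbf{x}_{t-1}) + v_{t-1}$, refreshed by one full gradient at the start of each outer loop. A minimal sketch on a least-squares ERM problem; the step size, loop lengths, and problem data are illustrative, not the tuned values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Least-squares ERM: F(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2
n, d = 50, 5
A = rng.normal(size=(n, d)); b = rng.normal(size=n)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]     # single-sample gradient
full_grad = lambda x: A.T @ (A @ x - b) / n        # full gradient

def sarah_epoch(x, lr=0.01, m=100):
    """One SARAH outer loop: full gradient, then m recursive inner updates."""
    v = full_grad(x)
    x_prev, x = x, x - lr * v
    for _ in range(m):
        i = rng.integers(n)
        v = grad_i(x, i) - grad_i(x_prev, i) + v   # recursive estimator
        x_prev, x = x, x - lr * v
    return x

x = np.zeros(d)
for _ in range(30):
    x = sarah_epoch(x)
```

Unlike SVRG, the estimator drifts recursively from the last full gradient rather than being re-anchored at a snapshot every inner step, which is what the linear convergence claim for the inner loop's last iterate refers to.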