

A Near-Optimal Best-of-Both-Worlds Algorithm for Online Learning with Feedback Graphs

Neural Information Processing Systems

We present a computationally efficient algorithm for learning in this framework that simultaneously achieves near-optimal regret bounds in both stochastic and adversarial environments. The bound against oblivious adversaries is O(√(αT)), where T is the time horizon and α is the independence number of the feedback graph.
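The independence number α in the bound above is the size of the largest set of vertices in the feedback graph with no edges between them. As a minimal illustration of this quantity (not of the paper's algorithm), the sketch below computes α by brute force for a small graph; the function name and graph encoding are our own choices:

```python
from itertools import combinations

def independence_number(n, edges):
    """Brute-force independence number of an undirected graph on
    vertices 0..n-1: the size of the largest vertex set containing
    no edge.  Exponential time, so only for small illustrative graphs."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):  # try the largest candidate sets first
        for subset in combinations(range(n), size):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                return size
    return 0

# A 5-cycle: every 3-vertex subset contains an adjacent pair,
# so the largest independent set has size 2.
alpha = independence_number(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
```

In the bandit-feedback extreme the graph has no edges and α equals the number of arms; in the full-information extreme the graph is complete and α = 1, which matches the intuition that more feedback edges make the problem easier.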


f3d9de86462c28781cbe5c47ef22c3e5-Supplemental.pdf

Neural Information Processing Systems

The algorithm in [62] considers Algorithm 2 for the stochastic generalized linear bandit problem. Assume that θ is the true parameter of the reward model. Then we consider the lower bounds. For f_j(A) = ⟨(1/2)(e_{j1} e_{j2}^T + e_{j2} e_{j1}^T), A⟩ with j1 ≠ j2, f_j(A_i) is 1 only when i = j and 0 otherwise. With Claim D.12 and Claim D.11 we get that g C q. To get 1), we write V_l = [v_1, …, v_l] ∈ R^{d×l} and V̄_l = [v_{l+1}, …, v_k].


69469da823348084ca8933368ecbf676-Supplemental-Conference.pdf

Neural Information Processing Systems

In this section, we examine three algorithms via four numerical examples. The first algorithm is the Sliding-Window UCB (SW-UCB) algorithm presented in our paper. The second is the naive UCB algorithm without any sliding window (Agrawal and Devanur, 2014). The third is LagrangeBwK, presented in (Immorlica et al., 2019), which was originally proposed for the adversarial BwK problem. Note that LagrangeBwK requires an approximation of the static best-distribution benchmark; for simplicity, we feed the exact value of the benchmark to the algorithm. All regret results are averaged over 100 simulation trials.
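The core difference between the first two algorithms is that SW-UCB computes its index from only the most recent observations, so the estimate can track a drifting environment. The paper's exact variant is not reproduced in the excerpt; the sketch below is a generic sliding-window UCB index in the spirit of that family, with the window length, exploration constant `c`, and function name all assumptions of ours:

```python
import math

def sw_ucb_indices(history, n_arms, window, c=2.0):
    """Sliding-window UCB: compute one index per arm from only the
    last `window` (arm, reward) pairs.  Arms unseen inside the window
    get an infinite index so they are explored first.  Setting
    `window` >= len(history) recovers a plain UCB index."""
    recent = list(history)[-window:]
    t = max(len(recent), 1)
    indices = []
    for arm in range(n_arms):
        rewards = [r for (a, r) in recent if a == arm]
        if not rewards:
            indices.append(float("inf"))
            continue
        mean = sum(rewards) / len(rewards)
        bonus = math.sqrt(c * math.log(t) / len(rewards))  # exploration bonus
        indices.append(mean + bonus)
    return indices
```

At each round one would play `argmax` of these indices; the naive UCB baseline corresponds to never discarding old observations.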


Regression Oracles and Exploration Strategies for Short-Horizon Multi-Armed Bandits

Gray, Robert C., Zhu, Jichen, Ontañón, Santiago

arXiv.org Artificial Intelligence

This paper explores multi-armed bandit (MAB) strategies in very short horizon scenarios, i.e., when the bandit strategy is only allowed very few interactions with the environment. This is an understudied setting in the MAB literature with many applications in the context of games, such as player modeling. Specifically, we pursue three different ideas. First, we explore the use of regression oracles, which replace the simple average used in strategies such as epsilon-greedy with linear regression models. Second, we examine different exploration patterns such as forced exploration phases. Finally, we introduce a new variant of the UCB1 strategy called UCBT that has interesting properties and no tunable parameters. We present experimental results in a domain motivated by exergames, where the goal is to maximize a player's daily steps. Our results show that the combination of epsilon-greedy or epsilon-decreasing with regression oracles outperforms all other tested strategies in the short horizon setting.
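To make the first idea concrete: a regression oracle replaces each arm's sample average with the prediction of a fitted model, here a per-arm least-squares line of reward against time, queried at the current step. The paper's exact oracle and the UCBT formula are not given in the abstract, so the sketch below is an illustrative epsilon-greedy with a hypothetical regression oracle; all names and the data layout are our assumptions:

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares fit y ~ a*x + b for 1-D data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    if sxx == 0:  # all observations at the same time step
        return 0.0, my
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx

def eps_greedy_regression(pulls, n_arms, t, eps=0.1, rng=random):
    """Epsilon-greedy where each arm's value estimate is the prediction
    of a per-arm linear regression (reward vs. time step) evaluated at
    the current step t, rather than a plain sample average.
    `pulls` maps arm -> list of (time_step, reward) observations."""
    if rng.random() < eps:
        return rng.randrange(n_arms)  # explore uniformly at random
    best_arm, best_pred = 0, float("-inf")
    for arm in range(n_arms):
        obs = pulls.get(arm, [])
        if not obs:
            return arm  # pull every arm at least once
        xs, ys = zip(*obs)
        slope, intercept = fit_line(xs, ys)
        pred = slope * t + intercept  # oracle's reward prediction now
        if pred > best_pred:
            best_arm, best_pred = arm, pred
    return best_arm
```

In a very short horizon this kind of oracle can matter because a trending arm (rising rewards) is preferred over one whose average is momentarily higher but falling, which is consistent with the paper's motivation for replacing the simple average.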