Combinatorial Cascading Bandits
Kveton, Branislav; Wen, Zheng; Ashkan, Azin; Szepesvari, Csaba
We propose combinatorial cascading bandits, a class of partial monitoring problems where at each step a learning agent chooses a tuple of ground items subject to constraints and receives a reward if and only if the weights of all chosen items are one. The weights of the items are binary, stochastic, and drawn independently of each other. The agent observes the index of the first chosen item whose weight is zero. This observation model arises in network routing, for instance, where the learning agent may only observe the first link in the routing path that is down and therefore blocks the path. We propose a UCB-like algorithm for solving our problems, CombCascade, and prove gap-dependent and gap-free upper bounds on its $n$-step regret. Our proofs build on recent work in stochastic combinatorial semi-bandits but also address two novel challenges of our setting: a non-linear reward function and partial observability. We evaluate CombCascade on two real-world problems and show that it performs well even when our modeling assumptions are violated. We also demonstrate that our setting requires a new learning algorithm.
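The cascade observation model described above can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the function name `cascade_feedback` and the list-based weight representation are assumptions for the example. The agent sees the chosen items in order, up to and including the first one whose weight is zero; the reward is 1 only if no chosen item fails.

```python
def cascade_feedback(chosen, weights):
    """Simulate cascading feedback for one step.

    chosen  -- ordered list of chosen item indices
    weights -- weights[i] is the realized binary weight of item i

    Returns (reward, num_observed): reward is 1 iff every chosen item
    has weight 1; the agent observes items up to and including the
    first item whose weight is 0.
    """
    for position, item in enumerate(chosen):
        if weights[item] == 0:
            # First failing item: reward 0, observe the prefix ending here.
            return 0, position + 1
    # No item failed: reward 1, all chosen items observed.
    return 1, len(chosen)
```

For example, in a routing path where the third chosen link is the first one that is down, the agent learns nothing about the links after it.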
Tight Regret Bounds for Stochastic Combinatorial Semi-Bandits
Kveton, Branislav; Wen, Zheng; Ashkan, Azin; Szepesvari, Csaba
A stochastic combinatorial semi-bandit is an online learning problem where at each step a learning agent chooses a subset of ground items subject to constraints, and then observes stochastic weights of these items and receives their sum as a payoff. In this paper, we close the problem of computationally and sample efficient learning in stochastic combinatorial semi-bandits. In particular, we analyze a UCB-like algorithm for solving the problem, which is known to be computationally efficient, and prove $O(K L (1 / \Delta) \log n)$ and $O(\sqrt{K L n \log n})$ upper bounds on its $n$-step regret, where $L$ is the number of ground items, $K$ is the maximum number of chosen items, and $\Delta$ is the gap between the expected returns of the optimal and best suboptimal solutions. The gap-dependent bound is tight up to a constant factor and the gap-free bound is tight up to a polylogarithmic factor.
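The UCB-like semi-bandit learner analyzed above can be sketched for the simplest constraint, choosing the top-$K$ of $L$ items. This is a hedged sketch under assumptions, not the paper's implementation: the function name `semi_bandit_ucb` and the confidence radius $\sqrt{1.5 \log t / T_i}$ are illustrative choices for a UCB-style index. Each step picks the $K$ items with the highest index, observes the weight of every chosen item (semi-bandit feedback), and updates per-item statistics.

```python
import math
import random

def semi_bandit_ucb(K, L, sample_weight, n_steps, seed=0):
    """Run a UCB-style learner on a top-K stochastic semi-bandit.

    K             -- number of items chosen per step
    L             -- number of ground items
    sample_weight -- sample_weight(i, rng) draws item i's stochastic weight
    n_steps       -- horizon n

    Returns (total_payoff, empirical_means).
    """
    rng = random.Random(seed)
    counts = [0] * L      # T_i: number of observations of item i
    means = [0.0] * L     # empirical mean weight of item i
    total = 0.0
    for t in range(1, n_steps + 1):
        # UCB index: empirical mean plus a confidence radius;
        # unobserved items get an infinite index so each is tried once.
        ucb = [means[i] + math.sqrt(1.5 * math.log(t) / counts[i])
               if counts[i] > 0 else float("inf")
               for i in range(L)]
        chosen = sorted(range(L), key=lambda i: ucb[i], reverse=True)[:K]
        # Semi-bandit feedback: the weight of every chosen item is observed.
        for i in chosen:
            w = sample_weight(i, rng)
            counts[i] += 1
            means[i] += (w - means[i]) / counts[i]  # running mean update
            total += w
    return total, means
```

With Bernoulli weights, the empirical means of the frequently chosen (high-weight) items concentrate around their true values, while clearly suboptimal items are chosen only $O(\log n)$ times, which is the mechanism behind the gap-dependent bound.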