Bandit Learning with Positive Externalities
Virag Shah, Jose Blanchet, Ramesh Johari
Many platforms are characterized by the fact that future user arrivals are likely to have preferences similar to those of users who were satisfied in the past. In other words, arrivals exhibit positive externalities. We study multiarmed bandit (MAB) problems with positive externalities. Our model has a finite number of arms, and users are distinguished by the arm(s) they prefer. We model positive externalities by assuming that the preferred arms of future arrivals are self-reinforcing based on the experiences of past users. We show that classical algorithms such as UCB, which are optimal in the standard MAB setting, may exhibit linear regret in the presence of positive externalities. We show that there is a fundamental tradeoff: on the one hand, the positive externality allows an algorithm to quickly converge to the "right" population; on the other hand, the same effect amplifies the consequences of any early mistakes. This tradeoff calls for a novel algorithmic approach relative to benchmarks such as UCB and random-explore-then-exploit, which are not optimal in this setting. We derive explicit lower bounds on the achievable regret, whose structure is quite different from that of the standard MAB setting. We show that this lower bound is tight by developing an algorithm that achieves optimal regret.
Apr-21-2018
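The sketch below is a minimal, hypothetical simulation of the dynamics the abstract describes; it is not the paper's exact model or proposed algorithm. The arm means `mu`, the initial preference weights `theta`, the reinforcement rate `alpha`, and the rule that a pull pays off only when it matches the arriving user's preferred arm are all illustrative assumptions chosen to mimic self-reinforcing arrivals. It compares standard UCB1 against an oracle that always pulls the better arm.

```python
import math
import random

def simulate(policy, horizon=5000, mu=(0.7, 0.4), alpha=1.0, seed=0):
    """One trajectory of a 2-arm bandit with self-reinforcing arrivals.

    An arriving user prefers arm a with probability proportional to
    theta[a] + alpha * (cumulative rewards earned on arm a so far);
    a pull yields Bernoulli(mu[a]) reward only if it matches the
    user's preferred arm. All parameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    theta = [1.0, 1.0]          # assumed initial preference weights
    wins = [0, 0]               # cumulative reward per arm
    pulls = [0, 0]
    total_reward = 0
    for t in range(horizon):
        # Self-reinforcing arrival: preference tilts toward arms that
        # have generated more rewards in the past.
        w = [theta[a] + alpha * wins[a] for a in range(2)]
        pref = 0 if rng.random() < w[0] / (w[0] + w[1]) else 1
        arm = policy(t, pulls, wins)
        # Reward only if the pulled arm matches the user's preference.
        r = 1 if (arm == pref and rng.random() < mu[arm]) else 0
        pulls[arm] += 1
        wins[arm] += r
        total_reward += r
    return total_reward

def ucb(t, pulls, wins):
    """Standard UCB1 index policy over empirical means."""
    for a in range(2):
        if pulls[a] == 0:
            return a
    return max(range(2), key=lambda a: wins[a] / pulls[a]
               + math.sqrt(2 * math.log(t + 1) / pulls[a]))

def oracle(t, pulls, wins):
    """Benchmark that always pulls arm 0, the higher-mean arm here."""
    return 0

if __name__ == "__main__":
    for name, pol in [("UCB1", ucb), ("oracle", oracle)]:
        rewards = [simulate(pol, seed=s) for s in range(20)]
        print(name, "mean total reward:", sum(rewards) / len(rewards))
```

Because the arrival distribution itself depends on past rewards, any rounds UCB1 spends generating rewards on the inferior arm shift future users toward that arm, which is precisely the amplification of early mistakes that the abstract identifies.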