On Explore-Then-Commit Strategies

Neural Information Processing Systems 

We study the problem of minimising regret in two-armed bandit problems with Gaussian rewards. Our objective is to use this simple setting to illustrate that strategies based on an exploration phase (up to a stopping time) followed by exploitation are necessarily suboptimal. The results hold regardless of whether or not the difference in means between the two arms is known.
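The strategy class discussed above can be made concrete with a small simulation. The sketch below is illustrative only and not the paper's construction: it implements the simplest fixed-design variant (explore each arm a fixed number of times rather than up to a data-dependent stopping time) on a two-armed unit-variance Gaussian bandit, and reports the realised pseudo-regret; all function and parameter names are my own.

```python
import random

def etc_regret(mu, n_explore, horizon, seed=0):
    """Simulate a fixed-design explore-then-commit strategy on a
    two-armed Gaussian bandit with unit variance, and return the
    pseudo-regret incurred over the horizon. Illustrative sketch,
    not the paper's algorithm."""
    rng = random.Random(seed)
    pulls = [0, 0]
    means = [0.0, 0.0]
    # Exploration phase: pull each arm n_explore times,
    # maintaining a running empirical mean per arm.
    for arm in (0, 1):
        for _ in range(n_explore):
            reward = rng.gauss(mu[arm], 1.0)
            pulls[arm] += 1
            means[arm] += (reward - means[arm]) / pulls[arm]
    # Exploitation phase: commit to the empirically better arm
    # for all remaining rounds.
    best = 0 if means[0] >= means[1] else 1
    pulls[best] += horizon - 2 * n_explore
    # Pseudo-regret = gap * (number of pulls of the worse arm).
    gap = abs(mu[0] - mu[1])
    worse = 0 if mu[0] < mu[1] else 1
    return gap * pulls[worse]

print(etc_regret(mu=(0.0, 0.5), n_explore=50, horizon=1000))
```

Even in this toy form, the tension the paper studies is visible: a small `n_explore` risks committing to the wrong arm (regret linear in the horizon), while a large one wastes pulls on the suboptimal arm during exploration.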
