Introduction to Multi-Armed Bandits
arXiv.org Artificial Intelligence
Multi-armed bandits are a simple but very powerful framework for algorithms that make decisions over time under uncertainty. An enormous body of work has accumulated over the years, covered in several books and surveys. This book provides a more introductory, textbook-like treatment of the subject. Each chapter tackles a particular line of work, providing a self-contained, teachable technical introduction and a review of the more advanced results. The chapters are as follows: Stochastic Bandits; Lower Bounds; Bayesian Bandits and Thompson Sampling; Lipschitz Bandits; Full Feedback and Adversarial Costs; Adversarial Bandits; Linear Costs and Semi-bandits; Contextual Bandits; Bandits and Zero-Sum Games; Bandits with Knapsacks; Incentivized Exploration and Connections to Mechanism Design.
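The stochastic bandit setting in the first chapter can be illustrated with a minimal epsilon-greedy sketch. This is not an algorithm from the book's text, just a common baseline; the Bernoulli arm means, horizon, and exploration rate below are illustrative assumptions.

```python
import random

def epsilon_greedy(true_means, T=10000, eps=0.1, seed=0):
    """Epsilon-greedy on a stochastic Bernoulli bandit (illustrative sketch).

    true_means: per-arm success probabilities, unknown to the learner.
    Returns (empirical mean estimates per arm, total reward collected).
    """
    rng = random.Random(seed)
    K = len(true_means)
    counts = [0] * K        # number of pulls of each arm
    estimates = [0.0] * K   # empirical mean reward of each arm
    total = 0
    for _ in range(T):
        if rng.random() < eps:
            arm = rng.randrange(K)  # explore: pick a uniformly random arm
        else:
            arm = max(range(K), key=lambda a: estimates[a])  # exploit best estimate
        reward = 1 if rng.random() < true_means[arm] else 0  # Bernoulli feedback
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total += reward
    return estimates, total
```

With arms of mean 0.2, 0.5, and 0.8, the learner's estimate for the best arm concentrates near 0.8 and most pulls go to that arm; fancier strategies covered in the book (UCB-style indices, Thompson sampling) replace the fixed exploration rate with adaptive exploration.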
Apr-29-2019
- Country:
- North America > United States > Maryland (0.13)
- Genre:
- Instructional Material > Course Syllabus & Notes (0.92)
- Research Report (1.00)
- Summary/Review (1.00)
- Industry:
- Education (1.00)
- Information Technology > Services (0.67)
- Leisure & Entertainment > Games (0.67)
- Marketing (0.67)
- Technology:
- Information Technology
- Artificial Intelligence
- Machine Learning > Statistical Learning (1.00)
- Representation & Reasoning
- Optimization (0.92)
- Uncertainty (0.87)
- Communications (1.00)
- Data Science > Data Mining
- Big Data (1.00)
- Game Theory (1.00)
- Information Management > Search (1.00)