Doubly-Optimistic Play for Safe Linear Bandits
Chen, Tianrui, Gangrade, Aditya, Saligrama, Venkatesh
The safe linear bandit problem (SLB) is an online approach to linear programming with an unknown objective and unknown round-wise constraints, under stochastic bandit feedback of the rewards and safety risks of actions. We study aggressive \emph{doubly-optimistic play} in SLBs, and its role in avoiding the strong assumptions and poor efficacy associated with extant pessimistic-optimistic solutions. We first elucidate an inherent hardness in SLBs due to the lack of knowledge of constraints: there exist `easy' instances, for which suboptimal extreme points have large `gaps', but on which SLB methods must still incur $\Omega(\sqrt{T})$ regret and safety violations, owing to an inability to refine the location of optimal actions to arbitrary precision. In a positive direction, we propose and analyse a doubly-optimistic confidence-bound-based strategy for the safe linear bandit problem, DOSLB, which exploits supreme optimism by using optimistic estimates of both reward and safety risks to select actions. Using a novel dual analysis, we show that despite the lack of knowledge of constraints, DOSLB rarely takes overly risky actions, and obtains tight instance-dependent $O(\log^2 T)$ bounds on both efficacy regret and net safety violations up to any finite precision, thus yielding large efficacy gains at a small safety cost and without strong assumptions. Concretely, we argue that the algorithm activates noisy versions of an `optimal' set of constraints at each round, and that activation of suboptimal sets of constraints is limited by the larger of the safety and efficacy gaps we define.
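The doubly-optimistic idea in the abstract can be sketched in a few lines: at each round, fit ridge-regression estimates of the reward and safety-risk vectors, then pick the action whose *optimistic* (upper-confidence) reward is largest among actions whose *optimistic* (lower-confidence) risk still satisfies the constraint. The instance below (reward vector, single constraint, finite action set, noise levels, and the confidence scaling `beta`) is entirely hypothetical and chosen for illustration; it is not the paper's construction or its calibrated confidence radii.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D SLB instance (illustrative, not from the paper):
# unknown reward vector theta, one unknown safety constraint a^T x <= b.
theta_true = np.array([1.0, 0.5])
a_true = np.array([0.8, 0.6])
b = 1.0
actions = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])  # finite set for simplicity

d, lam, beta, T = 2, 1.0, 0.5, 500
V = lam * np.eye(d)        # shared regularized design (Gram) matrix
s_r = np.zeros(d)          # sufficient statistic for reward regression
s_a = np.zeros(d)          # sufficient statistic for safety regression
counts = [0, 0, 0]

for t in range(T):
    Vinv = np.linalg.inv(V)
    theta_hat, a_hat = Vinv @ s_r, Vinv @ s_a   # ridge estimates
    ucbs = []
    for x in actions:
        w = beta * np.sqrt(x @ Vinv @ x)        # confidence width at x
        ucb_reward = theta_hat @ x + w          # optimism on efficacy
        lcb_risk = a_hat @ x - w                # optimism on safety as well
        ucbs.append(ucb_reward if lcb_risk <= b else -np.inf)
    i = int(np.argmax(ucbs))
    counts[i] += 1
    x = actions[i]
    # Bandit feedback: noisy reward and noisy safety risk of the played action.
    V += np.outer(x, x)
    s_r += x * (theta_true @ x + 0.1 * rng.standard_normal())
    s_a += x * (a_true @ x + 0.1 * rng.standard_normal())
```

On this toy instance all three actions happen to be truly safe, and the optimal action is `[0.7, 0.7]` (reward 1.05), so over 500 rounds the play concentrates on it while the estimates of both unknown vectors sharpen.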
Predicting Optimal Solution Cost with Bidirectional Stratified Sampling
Lelis, Levi (University of Alberta) | Stern, Roni (Ben Gurion University) | Felner, Ariel (Ben Gurion University) | Zilles, Sandra (University of Regina) | Holte, Robert C. (University of Alberta)
Optimal planning and heuristic search systems solve state-space search problems by finding a least-cost path from start to goal. As a byproduct of having an optimal path, they also determine the optimal solution cost. In this paper we focus on the problem of determining the optimal solution cost for a state-space search problem directly, i.e., without actually finding a solution path of that cost. We present an efficient algorithm, BiSS, based on ideas of bidirectional search and stratified sampling, that produces accurate estimates of the optimal solution cost. Our method is guaranteed to return the optimal solution cost in the limit as the sample size goes to infinity. We show empirically that our method makes accurate predictions in several domains. In addition, we show that our method scales to state spaces much larger than can be solved optimally. In particular, we estimate the average solution cost for the 6x6, 7x7, and 8x8 Sliding-Tile Puzzle and provide indirect evidence that these estimates are accurate.
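The stratified-sampling building block behind BiSS can be illustrated on a toy tree: expand only one representative node per "type" at each level, carrying a weight that estimates how many real nodes the representative stands for. The sketch below is a minimal, assumption-laden illustration of that idea only (the type function, the complete binary tree, and the function name are all invented here); it does not implement BiSS's bidirectional matching of forward and backward samples.

```python
import random

def stratified_sample_tree(branching, depth, rng):
    """Estimate node counts per level, expanding one representative per type.

    Here a node's type is simply its depth, and the tree is a complete
    b-ary tree, so the weighted estimates are exact.
    """
    counts = []
    frontier = {0: ("root", 1.0)}   # type -> (representative node, weight)
    for d in range(depth + 1):
        node, w = frontier[d]
        counts.append(w)
        if d < depth:
            children = [f"{node}/{i}" for i in range(branching)]
            # All children share type d + 1: keep one at random, and scale
            # its weight by the number of children it represents.
            rep = rng.choice(children)
            frontier[d + 1] = (rep, w * len(children))
    return counts

rng = random.Random(0)
est = stratified_sample_tree(2, 4, rng)
# For a complete binary tree the per-level estimates are exact: 1, 2, 4, 8, 16.
```

On irregular trees the estimates become random but remain unbiased in expectation, which is what lets sampling-based predictors of this kind scale to state spaces far larger than can be searched exhaustively.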