Preference-based Pure Exploration
Neural Information Processing Systems
We study the preference-based pure exploration problem for bandits with vector-valued rewards. The rewards are ordered using a (given) preference cone C, and our goal is to identify the set of Pareto optimal arms. First, to quantify the impact of preferences, we derive a novel lower bound on the sample complexity of identifying the most preferred policy with confidence level 1 − δ. Our lower bound reveals the role played by the geometry of the preference cone and highlights the difference in hardness compared to existing best-arm identification variants of the problem. We further explicate this geometry when the rewards follow Gaussian distributions. We then provide a convex relaxation of the lower bound and leverage it to design the Preference-based Track and Stop (PreTS) algorithm that identifies the most preferred policy. Finally, we show that the sample complexity of PreTS is asymptotically tight by deriving a new concentration inequality for vector-valued rewards.
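To make the cone-based ordering concrete, the sketch below (not the PreTS algorithm from the paper) computes the set of Pareto optimal arms from known mean reward vectors when the preference cone C is polyhedral and given by its generating directions. The function names `cone_dominates` and `pareto_set`, and the choice of the nonnegative orthant as the cone in the example, are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls


def cone_dominates(x, y, cone_dirs, tol=1e-8):
    """Return True if x dominates y w.r.t. the cone generated by `cone_dirs`,
    i.e. x - y is a nonzero element of the cone."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    if np.allclose(diff, 0.0, atol=tol):
        return False
    # diff lies in the cone iff it is a nonnegative combination of the
    # generators; check this with a nonnegative least-squares fit.
    _, residual = nnls(np.asarray(cone_dirs, dtype=float).T, diff)
    return residual < tol


def pareto_set(means, cone_dirs):
    """Indices of arms whose mean vectors are not cone-dominated by any other arm."""
    means = np.asarray(means, dtype=float)
    optimal = []
    for i, m_i in enumerate(means):
        dominated = any(
            cone_dominates(m_j, m_i, cone_dirs)
            for j, m_j in enumerate(means) if j != i
        )
        if not dominated:
            optimal.append(i)
    return optimal


# Example: 2-D rewards with the nonnegative orthant as the preference cone C,
# which recovers the usual component-wise Pareto order.
arm_means = [[0.9, 0.1], [0.5, 0.5], [0.2, 0.8], [0.4, 0.4]]
orthant = [[1.0, 0.0], [0.0, 1.0]]
print(pareto_set(arm_means, orthant))  # [0, 1, 2]; arm 3 is dominated by arm 1
```

A narrower cone keeps fewer direction trade-offs acceptable and can enlarge the Pareto set, while a wider cone shrinks it; the paper's lower bound quantifies how this geometry affects the sample complexity when the means must be estimated from noisy draws.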