Multi-Armed Bandits with Minimum Aggregated Revenue Constraints
Ahmed Ben Yahmed, Hafedh El Ferchichi, Marc Abeille, Vianney Perchet
arXiv.org Artificial Intelligence
We examine a multi-armed bandit problem with contextual information, where the objective is to ensure that each arm receives a minimum aggregated reward across contexts while simultaneously maximizing the total cumulative reward. This framework captures a broad class of real-world applications where fair revenue allocation is critical and contextual variation is inherent. The cross-context aggregation of minimum reward constraints, while enabling better performance and easier feasibility, introduces significant technical challenges -- particularly the absence of closed-form optimal allocations typically available in standard MAB settings. We design and analyze algorithms that either optimistically prioritize performance or pessimistically enforce constraint satisfaction. For each algorithm, we derive problem-dependent upper bounds on both regret and constraint violations. Furthermore, we establish a lower bound demonstrating that the dependence on the time horizon in our results is optimal in general and revealing fundamental limitations of the free exploration principle leveraged in prior work.
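To make the setting concrete, here is a minimal toy sketch of a constrained bandit loop. It is not the paper's algorithm: the linear constraint schedule, the UCB1 index, the arm means, and the per-arm reward targets are all illustrative assumptions. The "pessimistic" branch forces an arm whenever its accumulated reward lags a linear schedule toward its minimum aggregated-reward target; otherwise the "optimistic" branch plays the standard UCB1 index.

```python
import numpy as np

rng = np.random.default_rng(0)

K, T = 3, 5000                             # arms, horizon (toy values)
mu = np.array([0.8, 0.5, 0.3])             # assumed true mean rewards (contexts elided)
min_reward = np.array([40.0, 30.0, 20.0])  # assumed per-arm minimum aggregated-reward targets

pulls = np.zeros(K)
rew_sum = np.zeros(K)

for t in range(1, T + 1):
    # Pessimistic branch: if an arm's accumulated reward is behind a linear
    # schedule toward its target, play the most-lagging arm to enforce the constraint.
    deficit = min_reward * (t / T) - rew_sum
    if (deficit > 0).any():
        a = int(np.argmax(deficit))
    else:
        # Optimistic branch: standard UCB1 index (unpulled arms get priority).
        n = np.maximum(pulls, 1)
        ucb = rew_sum / n + np.sqrt(2 * np.log(t) / n)
        ucb[pulls == 0] = np.inf
        a = int(np.argmax(ucb))
    r = rng.binomial(1, mu[a])             # Bernoulli reward
    pulls[a] += 1
    rew_sum[a] += r

print(rew_sum >= min_reward)  # whether each arm's aggregated-reward target was met
```

In this sketch the constraint enforcement trades off directly against reward maximization: forced pulls of low-mean arms are exactly the source of the regret that the paper's bounds quantify.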
Oct-15-2025