A Single Recipe for Online Submodular Maximization with Adversarial or Stochastic Constraints
In this paper, we consider an online optimization problem in which the reward functions are DR-submodular and, in addition to maximizing the total reward, the sequence of decisions must satisfy some convex constraints on average. Specifically, at each round $t\in\{1,\dots,T\}$, upon committing to an action $x_t$, a DR-submodular utility function $f_t(\cdot)$ and a convex constraint function $g_t(\cdot)$ are revealed, and the goal is to maximize the overall utility while ensuring that the average of the constraint functions, $\frac{1}{T}\sum_{t=1}^T g_t(x_t)$, is non-positive. Such cumulative constraints arise naturally in applications where the average resource consumption is required to remain below a prespecified threshold. We study this problem under an adversarial model and a stochastic model for the convex constraints, where the functions $g_t$ can vary arbitrarily or according to an i.i.d. process. We propose a single algorithm that achieves sub-linear (with respect to $T$) regret as well as sub-linear constraint violation bounds in both settings, without prior knowledge of the regime.
Review for NeurIPS paper: A Single Recipe for Online Submodular Maximization with Adversarial or Stochastic Constraints
Summary and Contributions: The paper considers the problem of maximizing a general monotone DR-submodular function subject to a general convex constraint (general up to some natural assumptions) in the online regret-minimization setting. The paper presents two algorithms for this problem and proves bounds on their regret (with respect to the 1-1/e offline approximation) as well as on the extent to which they violate the constraint on average. In that respect, the paper considers three kinds of regret:
- The traditional adversarial static regret, in which the input is selected by an adversary and the algorithm competes with the best single solution in hindsight.

For some of these benchmarks there are previous results for the special case in which the constraints are linear. The current paper improves over them both in the generality of the constraint and in the quality of the guarantees.