Ene, Alina
Online and Streaming Algorithms for Constrained $k$-Submodular Maximization
Spaeh, Fabian, Ene, Alina, Nguyen, Huy L.
Constrained $k$-submodular maximization is a general framework that captures many discrete optimization problems such as ad allocation, influence maximization, personalized recommendation, and many others. In many of these applications, datasets are large or decisions need to be made in an online manner, which motivates the development of efficient streaming and online algorithms. In this work, we develop single-pass streaming and online algorithms for constrained $k$-submodular maximization with both monotone and general (possibly non-monotone) objectives subject to cardinality and knapsack constraints. Our algorithms achieve provable constant-factor approximation guarantees that improve upon the state of the art in almost all settings. Moreover, they are combinatorial and very efficient, and have optimal space and running time. We experimentally evaluate our algorithms on instances of ad allocation and other applications, where we observe that our algorithms are efficient and scalable, and construct solutions that are comparable in value to offline greedy algorithms.
Online Ad Allocation with Predictions
Spaeh, Fabian, Ene, Alina
Display Ads and the generalized assignment problem are two well-studied online packing problems with important applications in ad allocation and other areas. In both problems, ad impressions arrive online and have to be allocated immediately to budget-constrained advertisers. Worst-case algorithms that achieve the ideal competitive ratio are known, but they may act overly conservatively given the predictable and usually tame nature of real-world input. Given this discrepancy, we develop an algorithm for both problems that incorporates machine-learned predictions and can thus improve performance beyond the worst case. Our algorithm is based on the work of Feldman et al. (2009) and is similar in nature to Mahdian et al. (2007), who were the first to develop a learning-augmented algorithm for the related, but more structured, AdWords problem. We use a novel analysis to show that our algorithm is able to capitalize on a good prediction while being robust against poor predictions. We experimentally evaluate our algorithm on synthetic and real-world data over a wide range of predictions. Our algorithm consistently outperforms the worst-case algorithm without predictions.
High Probability Convergence of Stochastic Gradient Methods
Liu, Zijian, Nguyen, Ta Duy, Nguyen, Thien Hang, Ene, Alina, Nguyen, Huy Lê
Stochastic optimization is a fundamental area with extensive applications in many domains, ranging from machine learning to algorithm design and beyond. The design and analysis of iterative methods for stochastic optimization has been the focus of a long line of work, leading to a rich understanding of the convergence of paradigmatic iterative methods such as stochastic gradient descent, mirror descent, and accelerated methods for both convex and non-convex optimization. However, most of these works establish convergence guarantees that hold only in expectation. Although very meaningful, such results do not fully capture the convergence behavior of an algorithm when we perform only a small number of runs, as is typical in modern machine learning applications where there are significant computational and statistical costs associated with performing multiple runs of the algorithm (Harvey et al., 2019; Madden et al., 2020; Davis et al., 2021). Thus, an important direction is to establish convergence guarantees for a single run of the algorithm that hold not only in expectation but also with high probability. Compared to guarantees that hold in expectation, high probability guarantees are significantly harder to obtain, and they hold in more limited settings with stronger assumptions on the problem and the stochastic noise distribution. Most existing works that establish high probability guarantees focus on the setting where the length of the stochastic noise follows a light-tailed (sub-Gaussian) distribution (Juditsky et al., 2011; Lan, 2012, 2020; Li and Orabona, 2020; Madden et al., 2020; Kavis et al., 2021). Recent works also study the more challenging heavy-tailed setting, notably under a bounded variance (Nazin et al., 2019; Gorbunov et al., 2020; Cutkosky and Mehta, 2021) or bounded $p$-th moment assumption (Cutkosky and Mehta, 2021) on the length of the stochastic noise. Both settings are highly relevant in practice: Zhang et al. (2020) empirically studied the noise distribution for two common tasks, training a
Adaptive Gradient Methods for Constrained Convex Optimization
Ene, Alina, Nguyen, Huy L., Vladu, Adrian
Gradient methods are a fundamental building block of modern machine learning. Their scalability and small memory footprint make them exceptionally well suited to the massive volumes of data used for present-day learning tasks. While such optimization methods perform very well in practice, one of their major limitations is their inability to converge faster by taking advantage of specific features of the input data. For example, the training data used for classification tasks may exhibit a few very informative features, while all the others have only marginal relevance. Having access to this information a priori would enable practitioners to appropriately tune first-order optimization methods, thus allowing them to train much faster. Lacking this knowledge, one may attempt to reach a similar performance by very carefully tuning hyper-parameters, which are all specific to the learning model and input data. This limitation has motivated the development of adaptive methods, which, in the absence of prior knowledge concerning the importance of various features in the data, adapt their learning rates based on the information acquired in previous iterations. The most notable example is AdaGrad [13], which adaptively modifies the learning rate corresponding to each coordinate in the vector of weights. Following its success, a host of new adaptive methods appeared, including Adam [17], AmsGrad [27], and Shampoo [14], which attained optimal rates for generic online learning tasks.
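To illustrate the per-coordinate adaptivity described in the abstract, here is a minimal textbook AdaGrad sketch (a generic diagonal variant for unconstrained problems; the function name and toy objective are illustrative, not taken from the paper):

```python
import math

def adagrad(grad, x0, lr=0.5, steps=500, eps=1e-8):
    """Diagonal AdaGrad: each coordinate's step size is scaled by the
    inverse square root of its accumulated squared gradients, so
    frequently large (informative) coordinates get smaller steps."""
    x = [float(v) for v in x0]
    g2 = [0.0] * len(x)  # running sum of squared gradients per coordinate
    for _ in range(steps):
        g = grad(x)
        for i in range(len(x)):
            g2[i] += g[i] * g[i]
            x[i] -= lr * g[i] / (math.sqrt(g2[i]) + eps)
    return x

# toy quadratic whose coordinates have very different curvature:
# plain gradient descent would need a tiny global step size, while
# AdaGrad's per-coordinate scaling makes both coordinates progress
x = adagrad(lambda x: [20 * x[0], 0.2 * x[1]], [1.0, 1.0])
```

Note how the first AdaGrad step is invariant to the scale of each coordinate's gradient, which is the behavior that removes the need to hand-tune a per-feature learning rate.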
A Reduction for Optimizing Lattice Submodular Functions with Diminishing Returns
Ene, Alina, Nguyen, Huy L.
A function $f: \mathbb{Z}_+^E \rightarrow \mathbb{R}_+$ is DR-submodular if it satisfies $f({\bf x} + \chi_i) - f({\bf x}) \ge f({\bf y} + \chi_i) - f({\bf y})$ for all ${\bf x} \le {\bf y}, i \in E$. Recently, the problem of maximizing a DR-submodular function $f: \mathbb{Z}_+^E \rightarrow \mathbb{R}_+$ subject to a budget constraint $\|{\bf x}\|_1 \leq B$ as well as additional constraints has received significant attention \cite{SKIK14,SY15,MYK15,SY16}. In this note, we give a generic reduction from the DR-submodular setting to the submodular setting. The running time of the reduction and the size of the resulting submodular instance depends only \emph{logarithmically} on $B$. Using this reduction, one can translate the results for unconstrained and constrained submodular maximization to the DR-submodular setting for many types of constraints in a unified manner.
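The diminishing-returns condition in the definition can be checked by brute force on a small lattice. The sketch below uses a separable concave function, a standard example of a DR-submodular function on $\mathbb{Z}_+^E$ (the function and bounds are illustrative, not from the paper):

```python
import itertools
import math

def f(x):
    # separable concave function on Z_+^2: coordinatewise square root;
    # concavity in each coordinate yields the DR property
    return sum(math.sqrt(v) for v in x)

def marginal(x, i):
    """Gain f(x + chi_i) - f(x) from one extra unit of coordinate i."""
    y = list(x)
    y[i] += 1
    return f(y) - f(x)

# DR-submodularity: for all x <= y (componentwise) and all i,
# the marginal gain at x dominates the marginal gain at y
points = list(itertools.product(range(4), repeat=2))
ok = all(
    marginal(x, i) >= marginal(y, i) - 1e-12
    for x in points
    for y in points
    if all(a <= b for a, b in zip(x, y))
    for i in range(2)
)
```

The small tolerance only guards against floating-point rounding; for exact arithmetic the inequality holds as stated.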
Decomposable Submodular Function Minimization: Discrete and Continuous
Ene, Alina, Nguyen, Huy, Végh, László A.
This paper investigates connections between discrete and continuous approaches for decomposable submodular function minimization. We provide improved running time estimates for the state-of-the-art continuous algorithms for the problem using combinatorial arguments. We also provide a systematic experimental comparison of the two types of methods, based on a clear distinction between level-0 and level-1 algorithms.
The Power of Randomization: Distributed Submodular Maximization on Massive Datasets
Barbosa, Rafael da Ponte, Ene, Alina, Nguyen, Huy L., Ward, Justin
A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems. Unfortunately, the resulting submodular optimization problems are often too large to be solved on a single machine. We develop a simple distributed algorithm that is embarrassingly parallel and achieves provable, constant-factor, worst-case approximation guarantees. In our experiments, we demonstrate its efficiency on large problems with different kinds of constraints, with objective values always close to what is achievable in the centralized setting.
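The partition-and-merge template underlying such distributed algorithms can be sketched as follows. This is a generic two-round illustration on a toy coverage objective (function names, the random partition, and the toy data are assumptions for the sketch, not the paper's exact algorithm or analysis):

```python
import random

def greedy(ground, f, k):
    """Standard greedy for monotone submodular maximization under |S| <= k."""
    S = []
    for _ in range(k):
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S + [e]) - f(S), default=None)
        if best is None:
            break
        S.append(best)
    return S

def distributed_greedy(ground, f, k, machines=4, seed=0):
    """Two rounds: randomly partition the data, run greedy on each machine
    in parallel, then run greedy once more over the pooled solutions."""
    rng = random.Random(seed)
    parts = [[] for _ in range(machines)]
    for e in ground:
        parts[rng.randrange(machines)].append(e)
    local = [greedy(p, f, k) for p in parts]          # embarrassingly parallel
    pooled = [e for sol in local for e in sol]        # second-round ground set
    candidate = greedy(pooled, f, k)
    best_local = max(local, key=f)                    # keep best machine solution
    return max([candidate, best_local], key=f)

# toy coverage objective: value of S = number of distinct items covered
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}, 4: {6}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
sol = distributed_greedy(list(sets), f, 2)
```

Returning the better of the pooled solution and the best single-machine solution mirrors the standard trick used to obtain worst-case guarantees for this template; the random partition is what the abstract's "power of randomization" refers to.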