No-Regret M-Concave Function Maximization: Stochastic Bandit Algorithms and NP-Hardness of Adversarial Full-Information Setting
Taihei Oki, Hokkaido University, Hokkaido, Japan, oki@icredd.hokudai.ac.jp; Shinsaku Sakaue, The University of Tokyo and RIKEN AIP, Tokyo, Japan, sakaue@mist.i.u-tokyo.ac.jp
Generalization Bound and Learning Methods for Data-Driven Projections in Linear Programming
Shinsaku Sakaue, The University of Tokyo and RIKEN AIP, Tokyo, Japan, sakaue@mist.i.u-tokyo.ac.jp; Taihei Oki, Hokkaido University, Hokkaido, Japan, oki@icredd.hokudai.ac.jp
How to solve high-dimensional linear programs (LPs) efficiently is a fundamental question. Recently, there has been a surge of interest in reducing LP sizes using random projections, which can accelerate solving LPs independently of improving LP solvers. This paper explores a new direction of data-driven projections, which use projection matrices learned from data instead of random projection matrices. Given training data of n-dimensional LPs, we learn an n × k projection matrix with n > k. When addressing a future LP instance, we reduce its dimensionality from n to k via the learned projection matrix, solve the resulting LP to obtain a k-dimensional solution, and apply the learned matrix to it to recover an n-dimensional solution. On the theoretical side, a natural question is: how much data is sufficient to ensure the quality of recovered solutions? We address this question based on the framework of data-driven algorithm design, which connects the amount of data sufficient for establishing generalization bounds to the pseudo-dimension of performance metrics.
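The reduce-solve-recover pipeline described in the abstract is easy to prototype. The sketch below is illustrative, not the paper's implementation: it assumes LPs of the form min c^T x s.t. Ax <= b with free variables (any bounds folded into A and b), takes a projection matrix P as given in place of the learning step, and uses scipy.optimize.linprog for the reduced problem; all names are hypothetical.

```python
# A minimal sketch of the data-driven projection pipeline, assuming LPs of
# the form  min c^T x  s.t.  A x <= b  (free variables; bounds folded into
# A, b). P stands in for a projection matrix learned from training data.
import numpy as np
from scipy.optimize import linprog

def solve_projected_lp(c, A, b, P):
    """Reduce an n-dimensional LP to k dimensions via x = P y,
    solve the reduced LP, and lift the solution back to n dimensions."""
    # Substituting x = P y gives: min (P^T c)^T y  s.t.  (A P) y <= b.
    res = linprog(c=P.T @ c, A_ub=A @ P, b_ub=b, bounds=(None, None))
    assert res.status == 0, res.message
    return P @ res.x  # k-dimensional solution lifted to n dimensions

# Toy usage: a random 100-dimensional LP reduced to 10 dimensions.
rng = np.random.default_rng(0)
n, k, m = 100, 10, 50
A = rng.standard_normal((m, n))
b = rng.random(m) + 1.0
# Fold the box constraint -1 <= x <= 1 into (A, b) to keep the LP bounded.
A_full = np.vstack([A, np.eye(n), -np.eye(n)])
b_full = np.concatenate([b, np.ones(n), np.ones(n)])
c = rng.standard_normal(n)
P = rng.standard_normal((n, k))  # stand-in for a learned projection matrix
x_hat = solve_projected_lp(c, A_full, b_full, P)
print(x_hat.shape)  # (100,)
```

Note that the recovered x_hat is feasible whenever the reduced LP is, since every constraint of the reduced problem is the original constraint evaluated at x = P y; the quality question studied in the paper is how close its objective value gets to the true optimum.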
Faster Discrete Convex Function Minimization with Predictions: The M-Convex Case
Taihei Oki, The University of Tokyo, Tokyo, Japan, oki@mist.i.u-tokyo.ac.jp; Shinsaku Sakaue, The University of Tokyo, Tokyo, Japan, sakaue@mist.i.u-tokyo.ac.jp
Recent years have seen a growing interest in accelerating optimization algorithms with machine-learned predictions. Sakaue and Oki (NeurIPS 2022) developed a general framework that warm-starts the L-convex function minimization method with predictions, revealing the idea's usefulness for various discrete optimization problems. In this paper, we present a framework for using predictions to accelerate M-convex function minimization, thus complementing previous research and extending the range of discrete optimization algorithms that can benefit from predictions. Our framework is particularly effective for an important subclass called laminar convex minimization, which appears in many operations research applications. By using predictions, our methods can improve upon the best worst-case time complexity bounds and even have the potential to go beyond a lower-bound result.
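To make the warm-starting idea concrete, here is a hedged sketch, not the paper's algorithm: steepest descent for minimizing an M-convex function over integer vectors, started from a predicted point x0. M-convex steepest descent moves along exchange directions x -> x + e_i - e_j, and its iteration count scales with the distance from the starting point to an optimum, which is exactly where a good prediction pays off. The function f and starting point are assumptions for illustration.

```python
# A hedged sketch of warm-started steepest descent for M-convex function
# minimization; it illustrates the warm-starting idea and is not the
# paper's algorithm. f should return the function value, or float('inf')
# outside its effective domain. Exchange steps x -> x + e_i - e_j preserve
# the coordinate sum, so x0 must share the coordinate sum of the domain
# (e.g., a prediction rounded onto the domain's hyperplane).
import numpy as np

def warm_started_steepest_descent(f, x0):
    """Minimize an M-convex f over Z^n starting from a prediction x0.
    The number of iterations grows with the distance from x0 to an
    optimum, so a better prediction means an earlier stop."""
    x = np.asarray(x0, dtype=int).copy()
    n = len(x)
    while True:
        best_val, best_move = f(x), None
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                y = x.copy()
                y[i] += 1
                y[j] -= 1
                val = f(y)
                if val < best_val:
                    best_val, best_move = val, y
        if best_move is None:  # for M-convex functions, local optimality
            return x           # in the exchange neighborhood is global
        x = best_move

# Toy usage: separable convex f on {x in Z^3 : x >= 0, sum(x) = 6},
# a simple special case of laminar convex minimization.
target = np.array([1, 2, 3])
def f(x):
    if (x < 0).any():
        return float("inf")
    return float(((x - target) ** 2).sum())

print(warm_started_steepest_descent(f, np.array([6, 0, 0])))  # [1 2 3]
```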