PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python
Goujaud, Baptiste, Moucer, Céline, Glineur, François, Hendrickx, Julien, Taylor, Adrien, Dieuleveut, Aymeric
PEPit is a Python package that aims to simplify access to worst-case analyses of a large family of first-order optimization methods, possibly involving gradient, projection, proximal, or linear optimization oracles, along with their approximate or Bregman variants. In short, PEPit is a package enabling computer-assisted worst-case analyses of first-order optimization methods. The key underlying idea is to cast the problem of performing a worst-case analysis, often referred to as a performance estimation problem (PEP), as a semidefinite program (SDP) that can be solved numerically. To this end, users are only required to write first-order methods nearly as they would have implemented them. The package then takes care of the SDP modeling, and the worst-case analysis is performed numerically via a standard solver.
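To make "worst-case analysis" concrete, here is a minimal NumPy sketch (deliberately not the PEPit API, which casts the general problem as an SDP): for one step of gradient descent on the toy class of quadratics f(x) = λx²/2 with μ ≤ λ ≤ L, the per-step contraction factor is |1 − γλ|, and the worst case over the class is attained at an endpoint.

```python
import numpy as np

# Toy worst-case analysis: one gradient step x+ = x - gamma * f'(x) on
# f(x) = lam * x^2 / 2, mu <= lam <= L.  The contraction per function is
# |1 - gamma * lam|; the worst case over the class is at an endpoint.
mu, L, gamma = 0.1, 1.0, 1.0

lams = np.linspace(mu, L, 10001)        # sweep the function class
rates = np.abs(1.0 - gamma * lams)      # contraction for each function
worst = rates.max()                     # numerical worst case

closed_form = max(abs(1 - gamma * mu), abs(1 - gamma * L))
print(worst, closed_form)               # both 0.9 for these parameters
```

PEP frameworks automate this kind of maximization over far richer function classes (smooth convex, strongly convex, ...) where no simple sweep exists, by reformulating it as a semidefinite program.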
Exploiting Agent Symmetries for Performance Analysis of Distributed Optimization Methods
Colla, Sebastien, Hendrickx, Julien M.
We show that, in many settings, the worst-case performance of a distributed optimization algorithm is independent of the number of agents in the system, and can thus be computed in the fundamental case with just two agents. This result relies on a novel approach that systematically exploits symmetries in worst-case performance computation, framed as Semidefinite Programming (SDP) via the Performance Estimation Problem (PEP) framework. Harnessing agent symmetries in the PEP yields compact problems whose size is independent of the number of agents in the system. When all agents are equivalent in the problem, we establish the explicit conditions under which the resulting worst-case performance is independent of the number of agents and is therefore equivalent to the basic case with two agents. Our compact PEP formulation also allows the consideration of multiple equivalence classes of agents, and its size only depends on the number of equivalence classes. This enables practical and automated performance analysis of distributed algorithms in numerous complex and realistic settings, such as the analysis of the worst agent performance. We leverage this new tool to analyze the performance of the EXTRA algorithm in advanced settings and its scalability with the number of agents, providing a tighter analysis and deeper understanding of the algorithm performance.
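The core compactness idea can be illustrated with a small sketch (illustrative only, not the paper's PEP code): a matrix that is invariant under every permutation of the n agents must have the form a·I + b·(J − I), with J the all-ones matrix, so it is described by two scalars regardless of n, and its spectrum collapses to two values.

```python
import numpy as np

# A permutation-invariant (agent-symmetric) matrix has the form
#   a*I + b*(J - I),  J = all-ones matrix,
# so its spectrum is {a - b (multiplicity n-1), a + (n-1)*b}: checking
# positive semidefiniteness reduces to two scalar conditions, and the
# parameterization does not grow with the number of agents n.
def agent_symmetric(a, b, n):
    return a * np.eye(n) + b * (np.ones((n, n)) - np.eye(n))

a, b = 2.0, 0.5
for n in (2, 5, 50):
    eigs = np.linalg.eigvalsh(agent_symmetric(a, b, n))
    print(n, round(eigs.min(), 6), round(eigs.max(), 6))
```

This is the standard symmetry-reduction phenomenon for SDPs that the compact PEP formulation exploits: the Gram matrices appearing in the PEP inherit the agents' symmetry, so their size can be made independent of n.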
Relaxed Gaussian process interpolation: a goal-oriented approach to Bayesian optimization
Petit, Sébastien J, Bect, Julien, Vazquez, Emmanuel
This work presents a new procedure for obtaining predictive distributions in the context of Gaussian process (GP) modeling, with a relaxation of the interpolation constraints outside some ranges of interest: the mean of the predictive distribution no longer necessarily interpolates the observed values when they are outside the ranges of interest, but is simply constrained to remain outside them. This method, called relaxed Gaussian process (reGP) interpolation, provides better predictive distributions in the ranges of interest, especially when a stationarity assumption for the GP model is not appropriate. It can be viewed as a goal-oriented method and becomes particularly interesting in Bayesian optimization, for example for the minimization of an objective function, where good predictive distributions for low function values are important. When the expected improvement criterion and reGP are used to sequentially choose evaluation points, the convergence of the resulting optimization algorithm is theoretically guaranteed (provided that the function to be optimized lies in the reproducing kernel Hilbert space attached to the known covariance of the underlying Gaussian process). Experiments indicate that using reGP instead of stationary GP models in Bayesian optimization is beneficial.
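For context, the expected-improvement (EI) criterion mentioned above has a closed form given a Gaussian predictive distribution; a minimal sketch follows, where `mu`, `sigma`, and `f_min` (posterior mean, posterior standard deviation, best observed value) are illustrative names, not the paper's notation.

```python
import math

# Expected improvement for minimization under a Gaussian predictive
# distribution N(mu, sigma^2) at a candidate point, with f_min the best
# (lowest) value observed so far.  Names here are illustrative.
def expected_improvement(mu, sigma, f_min):
    z = (f_min - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))     # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (f_min - mu) * cdf + sigma * pdf

# high posterior uncertainty at the incumbent value -> positive EI
print(expected_improvement(mu=0.0, sigma=1.0, f_min=0.0))
```

The point of reGP is to supply a better-calibrated (mu, sigma) in the range of interest (low function values), which then feeds directly into this acquisition function.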
How to monitor and mitigate stair-casing in l1 trend filtering
Rojas, Cristian R., Wahlberg, Bo
In this paper we study the estimation of changing trends in time series using $\ell_1$ trend filtering. This method generalizes 1D total variation (TV) denoising, from detecting step changes in means to detecting changes in trends, and it relies on a convex optimization problem for which very efficient numerical algorithms exist. It is known that TV denoising suffers from the so-called stair-case effect, which leads to detecting false change points. The objective of this paper is to show that $\ell_1$ trend filtering also suffers from a certain stair-case problem. The analysis is based on an interpretation of the dual variables of the optimization problem as an integrated random walk. We discuss consistency conditions for $\ell_1$ trend filtering, how to monitor their fulfillment, and how to modify the algorithm to avoid stair-case false detections.
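The building block of $\ell_1$ trend filtering is the second-order difference operator D: the penalty $\|Dx\|_1$ vanishes exactly on piecewise-linear signals, which is why the method recovers changing linear trends. A minimal sketch (illustrative only, not the paper's algorithm):

```python
import numpy as np

# Second-order difference operator D of size (n-2) x n: row i computes
# x[i] - 2*x[i+1] + x[i+2].  The l1 trend filtering penalty is ||D x||_1,
# which is zero iff x is (globally) linear; kinks in a piecewise-linear
# trend show up as isolated nonzero entries of D x.
def second_diff(n):
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    return D

n = 10
D = second_diff(n)

x_lin = 3.0 + 0.5 * np.arange(n)         # one linear trend: zero penalty
print(np.abs(D @ x_lin).sum())           # 0.0

x_kink = np.minimum(np.arange(n), 5.0)   # slope change at t = 5
print(np.count_nonzero(np.abs(D @ x_kink) > 1e-12))  # a single active row
```

The full estimator minimizes $\tfrac12\|y - x\|_2^2 + \lambda \|Dx\|_1$; the paper's stair-case analysis concerns the dual variables of that convex program.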