- Oceania > Australia > New South Wales > Sydney (0.04)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- North America > United States > New Mexico > Los Alamos County > Los Alamos (0.04)
- (3 more...)
Revisiting Extragradient-Type Methods -- Part 1: Generalizations and Sublinear Convergence Rates
Tran-Dinh, Quoc, Nguyen-Trung, Nghia
This paper presents a comprehensive analysis of the well-known extragradient (EG) method for solving both equations and inclusions. First, we unify and generalize EG for [non]linear equations to a wider class of algorithms, encompassing various existing schemes and potentially new variants. We then analyze both sublinear ``best-iterate'' and ``last-iterate'' convergence rates for the entire class of algorithms, and derive new convergence results for two well-known instances. Second, we extend our EG framework to ``monotone'' inclusions, introducing a new class of algorithms and corresponding convergence results. Third, we also unify and generalize Tseng's forward-backward-forward splitting (FBFS) method to a broader class of algorithms for solving [non]linear inclusions when a weak-Minty solution exists, and establish its ``best-iterate'' convergence rate. Fourth, to complete our picture, we investigate sublinear rates of two other common variants of EG using the EG analysis framework developed here: the reflected forward-backward splitting and golden ratio methods. Finally, we conduct extensive numerical experiments to validate our theoretical findings. Our results demonstrate that several new variants of our proposed algorithms outperform existing schemes in the majority of examples.
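For reference, the classical two-step EG update that this class of methods generalizes can be sketched in a few lines. Everything below (the step size `eta`, the stopping rule, and the toy skew-symmetric operator) is an illustrative assumption, not taken from the paper:

```python
import numpy as np

def extragradient(G, x0, eta=0.1, max_iter=1000, tol=1e-8):
    """Classical two-step extragradient (EG) iteration for an operator G.

    A minimal sketch, not the generalized class from the paper:
        y^k     = x^k - eta * G(x^k)   (extrapolation step)
        x^{k+1} = x^k - eta * G(y^k)   (correction step using G at y^k)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = x - eta * G(x)              # look-ahead point
        x = x - eta * G(y)              # update with the operator at y
        if np.linalg.norm(G(x)) < tol:  # stop once the residual is small
            break
    return x

# Toy monotone operator G(x) = A x with A skew-symmetric: plain
# gradient descent-ascent diverges on it, while EG converges to x* = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(extragradient(lambda x: A @ x, np.array([1.0, 1.0])))
```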
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- Europe > Russia (0.04)
- Asia > Russia (0.04)
- (4 more...)
Randomized Block-Coordinate Optimistic Gradient Algorithms for Root-Finding Problems
In this paper, we develop two new randomized block-coordinate optimistic gradient algorithms to approximate a solution of large-scale nonlinear equations, also known as root-finding problems. Our first algorithm is non-accelerated with constant stepsizes, and achieves an $\mathcal{O}(1/k)$ best-iterate convergence rate on $\mathbb{E}[ \Vert Gx^k\Vert^2]$ when the underlying operator $G$ is Lipschitz continuous and satisfies a weak Minty solution condition, where $\mathbb{E}[\cdot]$ is the expectation and $k$ is the iteration counter. Our second method is a new accelerated randomized block-coordinate optimistic gradient algorithm. We establish both $\mathcal{O}(1/k^2)$ and $o(1/k^2)$ last-iterate convergence rates on both $\mathbb{E}[ \Vert Gx^k\Vert^2]$ and $\mathbb{E}[ \Vert x^{k+1} - x^{k}\Vert^2]$ for this algorithm under the co-coerciveness of $G$. In addition, we prove that the iterate sequence $\{x^k\}$ converges to a solution almost surely, and that $\Vert Gx^k\Vert^2$ attains an $o(1/k)$ almost sure convergence rate. We then apply our methods to a class of large-scale finite-sum inclusions, which covers prominent applications in machine learning, statistical learning, and network optimization, especially in federated learning. We obtain two new federated learning-type algorithms and corresponding convergence rate guarantees for solving this problem class.
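A minimal sketch of the non-accelerated idea, combining a Popov-style optimistic gradient step with a uniformly sampled coordinate block, is given below. The step size, block sampling, and stopping rule are assumptions for illustration; the paper's accelerated variant and its stepsize conditions are not reproduced:

```python
import numpy as np

def rbc_optimistic_gradient(G, x0, n_blocks, eta=0.3, max_iter=5000,
                            tol=1e-8, seed=0):
    """Randomized block-coordinate optimistic gradient: a minimal sketch.

    Full-vector optimistic gradient (Popov's scheme) reads
        x^{k+1} = x^k - eta * (2 G(x^k) - G(x^{k-1})),
    and the variant sketched here applies that update to one uniformly
    sampled block per iteration. For clarity the full operator is
    evaluated each round; a genuine large-scale implementation would
    evaluate only the sampled block of G.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)             # copy: x is updated in place
    g_prev = G(x)                             # initialize G(x^{-1}) at x^0
    blocks = np.array_split(np.arange(x.size), n_blocks)
    for _ in range(max_iter):
        g = G(x)
        if np.linalg.norm(g) < tol:           # residual-based stopping rule
            break
        b = blocks[rng.integers(n_blocks)]    # sample one block uniformly
        x[b] -= eta * (2.0 * g[b] - g_prev[b])  # optimistic step on block b
        g_prev = g
    return x

# Toy usage with the co-coercive operator G(x) = x - y, whose root is y:
y = np.arange(4.0)
x_hat = rbc_optimistic_gradient(lambda x: x - y, np.zeros(4), n_blocks=2)
print(np.linalg.norm(x_hat - y))   # small residual
```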
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- North America > United States > North Carolina > Orange County > Chapel Hill (0.04)
- North America > United States > New York (0.04)
- (4 more...)
Sublinear Convergence Rates of Extragradient-Type Methods: A Survey on Classical and Recent Developments
The generalized equation (also called the [non]linear inclusion) provides a unified template to model various problems in computational mathematics and related fields such as the optimality conditions of optimization problems (in both unconstrained and constrained settings), minimax optimization, variational inequalities, complementarity problems, two-person games, and fixed-point problems, see, e.g., [11, 24, 50, 112, 116, 118, 120]. Theory and numerical methods for this equation and its special cases have been extensively studied for many decades, see, e.g., the following monographs and the references quoted therein [11, 50, 94, 119]. At the same time, several applications of this mathematical tool in operations research, economics, uncertainty quantification, and transportation have been investigated [14, 52, 61, 50, 72]. In the last few years, there has been a surge of research in minimax problems due to new applications in machine learning and robust optimization, especially in generative adversarial networks (GANs), adversarial training, and distributionally robust optimization, see, e.g., [4, 14, 55, 76, 84, 114] as a few examples. Minimax problems have also found new applications in online learning and reinforcement learning, among many others, see, e.g., [4, 9, 15, 55, 67, 76, 78, 84, 114, 139]. Such prominent applications have motivated research in minimax optimization and variational inequality problems (VIPs). On the one hand, classical algorithms such as gradient descent-ascent, extragradient, and primal-dual methods have been revisited, improved, and extended. On the other hand, new variants such as accelerated extragradient and accelerated operator splitting schemes have also been developed and equipped with rigorous convergence guarantees and practical performance evaluations. This new development motivates us to write this survey paper, with a focus on sublinear convergence rate analysis.
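The unified template the abstract refers to is conventionally written as the inclusion below; this is a standard formulation from the cited literature, with the special cases the abstract lists noted as comments, and nothing here is specific to this survey:

```latex
% Standard form of the generalized equation (nonlinear inclusion):
% find x* such that
\begin{equation*}
  0 \in G x^{\star} + T x^{\star},
\end{equation*}
% where G is a single-valued (e.g., Lipschitz continuous) operator and T is
% a (possibly multivalued) maximally monotone operator. Special cases:
%   T = 0:          the [non]linear equation  G x^{\star} = 0;
%   T = \partial g: the optimality condition of composite minimization;
%   T = N_C (normal cone of a closed convex set C): the variational
%     inequality  \langle G x^{\star}, x - x^{\star} \rangle \ge 0
%     for all x \in C.
```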
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- Asia > Middle East > Jordan (0.04)
- Europe > Russia (0.04)
- (4 more...)
- Research Report (1.00)
- Overview (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Search (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (0.87)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Gradient Descent (0.34)
Scalable Plug-and-Play ADMM with Convergence Guarantees
Sun, Yu, Wu, Zihui, Wohlberg, Brendt, Kamilov, Ulugbek S.
Plug-and-play priors (PnP) is a broadly applicable methodology for solving inverse problems by exploiting statistical priors specified as denoisers. Recent work has reported the state-of-the-art performance of PnP algorithms using pre-trained deep neural nets as denoisers in a number of imaging applications. However, current PnP algorithms are impractical in large-scale settings due to their heavy computational and memory requirements. This work addresses this issue by proposing an incremental variant of the widely used PnP-ADMM algorithm, making it scalable to large-scale datasets. We theoretically analyze the convergence of the algorithm under a set of explicit assumptions, extending recent theoretical results in the area. Additionally, we show the effectiveness of our algorithm with nonsmooth data-fidelity terms and deep neural net priors, its fast convergence compared to existing PnP algorithms, and its scalability in terms of speed and memory.
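A minimal sketch of the (non-incremental) PnP-ADMM iteration the paper builds on is given below. The `prox_data` and `denoise` callables are hypothetical placeholders, and the incremental, large-scale variant proposed in the paper is not reproduced:

```python
import numpy as np

def pnp_admm(prox_data, denoise, x0, rho=1.0, max_iter=100):
    """Minimal sketch of the plug-and-play ADMM (PnP-ADMM) iteration.

    Classical ADMM alternates a data-fidelity proximal step, a prior
    proximal step, and a dual update; PnP replaces the prior's proximal
    operator with an off-the-shelf denoiser.
    """
    x = np.array(x0, dtype=float)
    z = x.copy()
    u = np.zeros_like(x)                  # scaled dual variable
    for _ in range(max_iter):
        x = prox_data(z - u, rho)         # data-fidelity proximal step
        z = denoise(x + u)                # prior step: denoiser replaces prox
        u = u + x - z                     # dual update (running residual)
    return x

# Toy usage: quadratic fidelity (1/2)||x - y||^2 and a linear "denoiser"
# that shrinks deviations from the mean (an averaged, well-behaved map).
y = np.random.default_rng(0).normal(size=64)
prox = lambda v, rho: (rho * v + y) / (rho + 1.0)  # prox of (1/2)||x - y||^2
denoise = lambda v: 0.7 * v + 0.3 * v.mean()       # pulls x toward its mean
x_hat = pnp_admm(prox, denoise, np.zeros_like(y))
print(np.linalg.norm(x_hat - y))  # moderate: fidelity traded for smoothness
```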
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.14)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- North America > United States > New Mexico > Los Alamos County > Los Alamos (0.04)
- (13 more...)