ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization

Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, David Cox

Neural Information Processing Systems

The adaptive momentum method (AdaMM), which uses past gradients to update descent directions and learning rates simultaneously, has become one of the most popular first-order optimization methods for solving machine learning problems. However, AdaMM is not suited for black-box optimization problems, where explicit gradient forms are difficult or infeasible to obtain. In this paper, we propose a zeroth-order AdaMM (ZO-AdaMM) algorithm that generalizes AdaMM to the gradient-free regime. We show that the convergence rate of ZO-AdaMM for both convex and nonconvex optimization is roughly a factor of $O(\sqrt{d})$ worse than that of the first-order AdaMM algorithm, where $d$ is the problem size. In particular, we provide a deep understanding of why the Mahalanobis distance matters in the convergence of ZO-AdaMM and other AdaMM-type methods. As a byproduct, our analysis takes a first step toward understanding adaptive learning rate methods for nonconvex constrained optimization. Furthermore, we demonstrate two applications: designing per-image and universal adversarial attacks on black-box neural networks. We perform extensive experiments on ImageNet and empirically show that ZO-AdaMM converges much faster to a solution of high accuracy compared with $6$ state-of-the-art ZO optimization methods.
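To make the mechanism concrete, here is a minimal, self-contained sketch of the two ingredients the abstract describes: a two-point zeroth-order gradient estimate along a random unit direction, fed into AdaMM-style moment updates. This is an illustration written for this page, not the authors' reference implementation; the function names, constants, and the toy objective are our own assumptions.

    import numpy as np

    def zo_adamm(f, x0, steps=2000, lr=0.01, mu=1e-3,
                 beta1=0.9, beta2=0.999, eps=1e-8, seed=0):
        # Sketch of ZO-AdaMM: AdaMM driven by a gradient *estimate*
        # built from two function queries (no explicit gradients).
        rng = np.random.default_rng(seed)
        d = x0.size
        x = x0.astype(float).copy()
        m = np.zeros(d)       # first moment (momentum direction)
        v = np.zeros(d)       # second moment (per-coordinate scaling)
        v_hat = np.zeros(d)   # running max of v, keeping effective steps non-increasing
        for _ in range(steps):
            u = rng.standard_normal(d)
            u /= np.linalg.norm(u)                    # random unit direction
            g = d * (f(x + mu * u) - f(x)) / mu * u   # forward-difference ZO estimate
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g**2
            v_hat = np.maximum(v_hat, v)
            x -= lr * m / (np.sqrt(v_hat) + eps)
        return x

    # Toy usage: minimize a quadratic that we can only query, never differentiate.
    f = lambda x: float(np.sum((x - 1.0) ** 2))
    print(zo_adamm(f, np.zeros(10)))   # converges toward the all-ones vector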


Additional Experiments

Neural Information Processing Systems

More detailed comparisons with DFO and ZOO methods will be added in the revision. It is indeed valuable to enrich our related work on DFO, and we will compare our method with existing DFO solvers; a preliminary comparison with COBYLA was illustrated in AE. Many benchmark black-box attack methods were built on ZO optimization, e.g., ZO-SignSGD; thus, we focus on the application in adversarial learning. It is shown from Eq. (19) that [...] should be added at the end. Yes, the stepsize interval should be reversed.


Refining Adaptive Zeroth-Order Optimization at Ease

Shu, Yao, Zhang, Qixin, He, Kun, Dai, Zhongxiang

arXiv.org Artificial Intelligence

Recently, zeroth-order (ZO) optimization has come to play an essential role in scenarios where gradient information is inaccessible or unaffordable, such as black-box systems and resource-constrained environments. While existing adaptive methods such as ZO-AdaMM have shown promise, they are fundamentally limited by their underutilization of moment information during optimization, usually resulting in suboptimal convergence. To overcome these limitations, this paper introduces Refined Adaptive Zeroth-Order Optimization (R-AdaZO). Specifically, we first show the untapped variance-reduction effect of the first moment estimate on ZO gradient estimation, which improves the accuracy and stability of ZO updates. We then refine the second moment estimate based on these variance-reduced gradient estimates to better capture the geometry of the optimization landscape, enabling more effective scaling of ZO updates. We present rigorous theoretical analysis showing (I) the first analysis of the variance reduction afforded by the first moment estimate in ZO optimization, (II) improved second moment estimates that more accurately approximate their variance-free ideal, (III) the first variance-aware convergence framework for adaptive ZO methods, which may be of independent interest, and (IV) faster convergence of R-AdaZO than existing baselines such as ZO-AdaMM. Our extensive experiments, covering synthetic problems, black-box adversarial attack, and memory-efficient fine-tuning of large language models (LLMs), further verify the superior convergence of R-AdaZO, indicating that R-AdaZO offers an improved solution for real-world ZO optimization challenges.
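Reading the abstract operationally, the refinement can be sketched as a one-line change to the ZO-AdaMM update: accumulate the second moment from the (variance-reduced) first moment rather than from the raw, noisy ZO estimate. This sketch is our interpretation of the abstract, not the authors' code; all names and constants are assumptions.

    import numpy as np

    def r_adazo_step(f, x, m, v, rng, lr=0.01, mu=1e-3,
                     beta1=0.9, beta2=0.999, eps=1e-8):
        # One step under our reading of R-AdaZO: the first moment m averages
        # away ZO estimation noise, and the second moment v is built from m
        # (the variance-reduced signal) instead of the raw estimate g.
        d = x.size
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g = d * (f(x + mu * u) - f(x)) / mu * u   # noisy two-point ZO estimate
        m = beta1 * m + (1 - beta1) * g           # variance-reduced gradient surrogate
        v = beta2 * v + (1 - beta2) * m ** 2      # refined second moment: uses m, not g
        x = x - lr * m / (np.sqrt(v) + eps)
        return x, m, v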


Reviews: ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization

Neural Information Processing Systems

This paper proposes a zeroth-order adaptive momentum method for black-box optimization that approximates the stochastic gradient by the forward difference of two function values along a random unit direction. The paper also provides a convergence analysis in terms of the Mahalanobis distance for both unconstrained and constrained nonconvex optimization with ZO-AdaMM, yielding sublinear convergence rates that are roughly a factor of the square root of the dimension worse than those of first-order AdaMM, as well as for constrained convex optimization. The proposed scheme is quite interesting: it approaches (non)convex optimization from a new perspective and provides new insight into adaptive momentum methods. In particular, the paper formally concludes that Euclidean projection may result in non-convergence in stochastic optimization. The paper also demonstrates applications to black-box adversarial attack problems and validates the method by comparing it with other ZO methods.
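The non-convergence point about Euclidean projection can be made tangible with a small numerical check (our illustration, not taken from the paper or the review): AdaMM-type methods should project under the Mahalanobis distance $(x-y)^\top \mathrm{diag}(\sqrt{\hat v})\,(x-y)$ induced by the second-moment estimate, and on a non-box constraint set this weighted projection lands at a different point than the Euclidean one.

    import numpy as np
    from scipy.optimize import minimize

    def weighted_proj(x, w, radius=1.0):
        # Mahalanobis projection onto the ball {||y|| <= radius}:
        # argmin_y sum_i w_i * (x_i - y_i)^2, with w playing the role of sqrt(v_hat).
        res = minimize(
            lambda y: np.dot(w, (x - y) ** 2),
            x0=np.zeros_like(x),
            constraints=[{"type": "ineq",
                          "fun": lambda y: radius ** 2 - np.dot(y, y)}],
        )
        return res.x

    x = np.array([2.0, 2.0])
    w = np.array([1.0, 10.0])                  # uneven per-coordinate scaling
    print(x / np.linalg.norm(x))               # Euclidean projection: [0.707, 0.707]
    print(weighted_proj(x, w))                 # weighted projection: roughly [0.18, 0.98]

Mixing Euclidean projection with adaptive per-coordinate steps is exactly the mismatch the paper's analysis flags; matching the projection metric to the second-moment weighting removes it.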

