Reviews: Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization
–Neural Information Processing Systems
In this paper, the authors propose a novel variance-reduced zeroth-order method for nonconvex optimization, prove theoretical results for three different gradient estimates, and demonstrate the performance of the method on two machine learning tasks. The theoretical results highlight the differences and trade-offs between the gradient estimates, and the numerical results show that these trade-offs (estimate accuracy, convergence rate, iterations, and function queries) are actually realized in practice. Overall, the paper is well structured and thought out (both the theoretical and empirical portions), and the results are interesting in my opinion (for both the ML and Optimization communities); as such, I recommend this paper for publication at NIPS.
- The paper is very well written and motivated, and is very easy to read.
- The authors should clearly state the differences, both algorithmic and theoretical, between the gradient estimates. Is it fair to say that the observed trade-offs are due to the errors in the gradient estimates?
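For context on the gradient estimates discussed above: zeroth-order methods approximate gradients purely from function evaluations, typically via random-direction finite differences. The sketch below is a generic two-point random-direction estimator, not the paper's specific estimators; the function `f`, the smoothing radius `mu`, and the number of directions `q` are illustrative assumptions.

```python
import numpy as np

def zo_gradient_estimate(f, x, mu=1e-4, q=5000, rng=None):
    """Average of q two-point random-direction estimates of grad f at x.

    Each sample uses a direction u drawn uniformly from the unit sphere:
        (d / mu) * (f(x + mu * u) - f(x)) * u
    which is an unbiased estimate of the gradient of a smoothed version of f.
    Larger q reduces variance at the cost of more function queries.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    d = x.size
    g = np.zeros(d)
    for _ in range(q):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # project onto the unit sphere
        g += (d / mu) * (f(x + mu * u) - f(x)) * u
    return g / q

# Example: f(x) = ||x||^2, whose true gradient is 2x.
f = lambda x: float(x @ x)
x = np.array([1.0, -2.0, 0.5])
g = zo_gradient_estimate(f, x)
```

The estimate error shrinks as `q` grows but each sample costs two function queries, which is precisely the accuracy-vs-query trade-off the review refers to.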