A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization
We analyze stochastic gradient algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the objective function is the sum of a differentiable (possibly nonconvex) component and a possibly non-differentiable but convex component. We propose a proximal stochastic gradient algorithm based on variance reduction, called ProxSVRG+. Our main contribution lies in the analysis of ProxSVRG+: it recovers several existing convergence results and improves or generalizes them in terms of the number of stochastic gradient oracle calls and proximal oracle calls. In particular, ProxSVRG+ generalizes the best results given by the SCSG algorithm, recently proposed by [Lei et al., NIPS'17] for the smooth nonconvex case. ProxSVRG+ is also more straightforward than SCSG and admits a simpler analysis. Moreover, ProxSVRG+ outperforms deterministic proximal gradient descent (ProxGD) for a wide range of minibatch sizes, which partially answers an open question posed in [Reddi et al., NIPS'16].
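The abstract describes a proximal stochastic gradient method with SVRG-style variance reduction: an outer loop takes a (mini)batch gradient at a snapshot point, and an inner loop applies variance-reduced minibatch updates followed by a proximal step for the convex nonsmooth term. The following is a minimal illustrative sketch, not the authors' implementation: the problem (one-dimensional least squares plus an l1 term), the helper names `prox_l1`, `grad_i`, `prox_svrg_plus`, and all parameter values are assumptions made for the example.

```python
import random

def prox_l1(z, t):
    # Soft-thresholding: the proximal operator of t * |.|
    return max(abs(z) - t, 0.0) * (1.0 if z > 0 else -1.0)

def grad_i(x, a, b, i):
    # Gradient of the i-th smooth component f_i(x) = 0.5 * (a_i * x - b_i)^2
    return a[i] * (a[i] * x - b[i])

def prox_svrg_plus(a, b, lam=0.1, eta=0.1, epochs=20, B=None, mb=2, seed=0):
    """Illustrative ProxSVRG+-style loop for min (1/n) sum f_i(x) + lam*|x|.

    B is the outer (snapshot) batch size, mb the inner minibatch size.
    """
    rng = random.Random(seed)
    n = len(a)
    B = B or n  # default: full batch at the snapshot
    x = 0.0
    for _ in range(epochs):
        snap = x
        # Batch gradient estimate at the snapshot point
        batch = rng.sample(range(n), B)
        g_snap = sum(grad_i(snap, a, b, i) for i in batch) / B
        for _ in range(n // mb):
            # Variance-reduced minibatch gradient estimator
            I = rng.sample(range(n), mb)
            v = sum(grad_i(x, a, b, i) - grad_i(snap, a, b, i)
                    for i in I) / mb + g_snap
            # Proximal step handles the nonsmooth convex term lam*|x|
            x = prox_l1(x - eta * v, eta * lam)
    return x
```

For instance, with all `a_i = 1`, `b_i = 2`, and `lam = 0.1`, the minimizer of `0.5*(x - 2)**2 + 0.1*|x|` is `x = 1.9`, and the iterates converge toward it.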
Reviews: Projected Stein Variational Newton: A Fast and Scalable Bayesian Inference Method in High Dimensions
NeurIPS 2019, Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center. Paper 8670: "Projected Stein Variational Newton: A Fast and Scalable Bayesian Inference Method in High Dimensions". The reviewers agree that this submission represents an important contribution to the field. Please be sure to carefully review and address the concerns of all reviewers in the revision.
Reviews: Finite-Time Performance Bounds and Adaptive Learning Rate Selection for Two Time-Scale Reinforcement Learning
Paper 2626: "Finite-Time Performance Bounds and Adaptive Learning Rate Selection for Two Time-Scale Reinforcement Learning". The reviewers unanimously support acceptance. We encourage the authors to strongly consider the suggestions provided by the reviewers when preparing the camera-ready version.
Reviews: A Primal Dual Formulation For Deep Learning With Constraints
Paper 6594: "A Primal Dual Formulation For Deep Learning With Constraints". All reviewers were positive about the contributions of the paper, so I recommend acceptance. Please take into account all the reviewers' comments when preparing the final version of the paper.