Review: ADMM without a Fixed Penalty Parameter: Faster Convergence with New Adaptive Penalization

Neural Information Processing Systems 

Summary: This paper shows that the O(1/eps) iteration complexity of ADMM can be improved to O(1/eps^(1-theta)), where theta is a parameter characterizing how sharply the objective function increases with the distance to the optimal solution. The improvement is established for a locally adaptive variant of ADMM in which the penalty parameter is increased after every t iterations. The method is extended to stochastic ADMM, whose O(1/eps^2) iteration complexity is shown to improve similarly. The results are supported by experiments on generalized Lasso problems. Overall, the paper is well written and makes an important contribution toward improving the analysis of ADMM under adaptive penalty parameters.
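To make the reviewed scheme concrete, the following is a minimal sketch (not the authors' exact algorithm) of ADMM for a generalized Lasso problem, min_x 0.5*||Ax - b||^2 + lam*||Dx||_1, where the penalty parameter rho is multiplied by a factor gamma after every `period` iterations. The parameter names (`rho`, `gamma`, `period`) and the geometric increase schedule are illustrative assumptions; the paper's actual update rule and constants may differ.

```python
import numpy as np

def soft_threshold(v, k):
    """Elementwise soft-thresholding, the prox operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def adaptive_admm_genlasso(A, b, D, lam, rho=1.0, gamma=2.0, period=10, iters=100):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||Dx||_1 with z = Dx.

    Illustrative adaptive-penalty variant: rho is multiplied by gamma
    every `period` iterations (a stand-in for the paper's schedule).
    """
    m = D.shape[0]
    z = np.zeros(m)
    u = np.zeros(m)  # scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    for k in range(1, iters + 1):
        # x-update: solve the regularized least-squares subproblem
        x = np.linalg.solve(AtA + rho * (D.T @ D), Atb + rho * D.T @ (z - u))
        Dx = D @ x
        # z-update: prox of the l1 term
        z = soft_threshold(Dx + u, lam / rho)
        # dual update
        u = u + Dx - z
        if k % period == 0:
            rho *= gamma   # periodic penalty increase
            u /= gamma     # rescale the scaled dual so rho*u stays consistent
    return x
```

With D = I this reduces to the ordinary Lasso; rescaling the scaled dual `u` when `rho` changes keeps the unscaled multiplier rho*u unchanged across the penalty update.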