Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems 

The scheme finds a target point for each block in a chosen subset of blocks, in parallel, by minimizing the sum of a strongly convex approximation to the smooth part on that block (with matching gradients) and the non-smooth part. Each block in the subset is then updated (in parallel) as a convex combination of its previous value and the target point. A parallel proximal gradient scheme is obtained as a special case, though taking a convex combination of the iterates yields a slightly different scheme than previous work. The suggested algorithm is very similar to [9], except that [9] chose the subset using a greedy scheme (which can be expensive), whereas this submission explores both randomized and cyclic schemes. For these, the authors prove asymptotic convergence of the algorithm to a stationary point under standard Lipschitz gradient conditions.
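To make the reviewed update concrete, here is a minimal sketch of one iteration for a composite objective f(x) + sum_i g_i(x_i), where f is smooth and each g_i is an L1 penalty so its proximal step is soft-thresholding. The quadratic model, the block Lipschitz constants `L`, the step `alpha`, and the choice of `g_i` are illustrative assumptions on my part, not details taken from the submission.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (assumed non-smooth part g_i)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def parallel_block_prox_step(x, grad_f, blocks, L, lam, alpha, subset):
    """One iteration of the sketched scheme: for each block in `subset`,
    minimize a block-wise quadratic (strongly convex) model of f, with
    gradient matching that of f at x, plus the non-smooth part; then move
    the block by a convex combination toward the resulting target point."""
    g = grad_f(x)
    x_new = x.copy()
    for i in subset:  # these updates are independent, hence parallelizable
        idx = blocks[i]
        # Target point: prox step on the block-wise quadratic model
        target = soft_threshold(x[idx] - g[idx] / L[i], lam / L[i])
        # Convex combination of previous value and target point
        x_new[idx] = (1 - alpha) * x[idx] + alpha * target
    return x_new

# Tiny usage example: f(x) = 0.5 * ||x - b||^2, two blocks of size 2.
x0 = np.zeros(4)
b = np.ones(4)
x1 = parallel_block_prox_step(
    x0, lambda x: x - b, blocks=[slice(0, 2), slice(2, 4)],
    L=[1.0, 1.0], lam=0.1, alpha=1.0, subset=[0, 1])
```

With `alpha = 1.0` and all blocks selected, this reduces to an ordinary parallel proximal gradient step, matching the special case the review mentions; choosing `subset` at random or cyclically gives the randomized and cyclic variants.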