Reviews: Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent
–Neural Information Processing Systems
The main contribution of this work is to show that, in the context of matrix completion, the sequence of solutions produced by a widely used (but previously lacking theoretical guarantees) SGD algorithm converges linearly to the true matrix when equipped with a good initialization. The paper reads well and most of the theoretical results look sound, but I have the following concerns.

First, is it possible to access fewer samples, e.g., O(mu * d * k * log(d)), while still ensuring global convergence?

Second, Theorem 3.1 says that for any fixed T >= 1, with probability at least 1 - T/d^10, we have linear convergence; presumably the factor of T comes from a union bound over the T iterations, each failing with probability at most 1/d^10. That means that if we run the algorithm (i.e., repeatedly reveal samples) for a longer time, we have a LOWER confidence of obtaining the true matrix.
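For context, a minimal sketch of the kind of online SGD iteration under discussion, assuming the standard factorized objective in which the target matrix M is approximated by U V^T and one entry M[i, j] is revealed per step. The function name, step size eta, and the random fallback initialization are illustrative assumptions, not taken from the paper; the theory reviewed above requires a good initialization (U0, V0) for linear convergence.

```python
import numpy as np

def online_sgd_matrix_completion(samples, d1, d2, k, eta=0.05, U0=None, V0=None):
    """One pass of online SGD for rank-k matrix completion (illustrative sketch).

    samples: iterable of (i, j, value) observations revealed one at a time.
    U0, V0: a good initialization, which the analyzed guarantee assumes;
            the random fallback below carries no such guarantee.
    """
    rng = np.random.default_rng(0)
    U = U0 if U0 is not None else rng.standard_normal((d1, k)) / np.sqrt(k)
    V = V0 if V0 is not None else rng.standard_normal((d2, k)) / np.sqrt(k)
    for i, j, m_ij in samples:
        r = U[i] @ V[j] - m_ij   # residual on the single revealed entry
        grad_u = r * V[j]        # gradient of (1/2) * r^2 w.r.t. row U[i]
        grad_v = r * U[i]        # gradient of (1/2) * r^2 w.r.t. row V[j]
        U[i] -= eta * grad_u
        V[j] -= eta * grad_v
    return U, V
```

Each update touches only the two rows U[i] and V[j], so a revealed entry costs O(k) work; the sample-complexity question above asks how many such entries must be revealed in total for global convergence.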