

Random Reshuffling: Simple Analysis with Vast Improvements

Neural Information Processing Systems

Random Reshuffling (RR) is an algorithm for minimizing finite-sum functions that utilizes iterative gradient descent steps in conjunction with data reshuffling. Often contrasted with its sibling Stochastic Gradient Descent (SGD), RR is usually faster in practice and enjoys significant popularity in convex and non-convex optimization. The convergence rate of RR has attracted substantial attention recently and, for strongly convex and smooth functions, it was shown to converge faster than SGD if 1) the stepsize is small, 2) the gradients are bounded, and 3) the number of epochs is large. We remove these 3 assumptions, improve the dependence on the condition number from $\kappa^2$ to $\kappa$ (resp.\ We argue through theory and experiments that the new variance type gives an additional justification of the superior performance of RR.
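The core difference between the two methods is the sampling scheme: RR processes the $n$ component gradients in a fresh random permutation each epoch (sampling without replacement), while SGD draws an independent index at every step (with replacement). A minimal Python sketch of this contrast, not the paper's exact algorithm statement (the function names and the quadratic example below are illustrative assumptions):

```python
import numpy as np


def random_reshuffling(grad, x0, n, stepsize, epochs, seed=None):
    """Random Reshuffling: each epoch visits all n components once,
    in a freshly sampled random order (sampling WITHOUT replacement).

    grad(i, x) returns the gradient of the i-th component f_i at x.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        for i in rng.permutation(n):
            x = x - stepsize * grad(i, x)
    return x


def sgd(grad, x0, n, stepsize, iters, seed=None):
    """SGD: each step draws an independent index (sampling WITH replacement)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        i = rng.integers(n)
        x = x - stepsize * grad(i, x)
    return x


# Toy finite sum: f(x) = (1/n) * sum_i (x - a_i)^2 / 2, minimized at mean(a).
a = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda i, x: x - a[i]

x_rr = random_reshuffling(grad, 0.0, len(a), stepsize=0.05, epochs=500, seed=0)
x_sgd = sgd(grad, 0.0, len(a), stepsize=0.05, iters=2000, seed=0)
```

On this toy quadratic, both iterates settle near the minimizer `a.mean() = 2.5`, with RR's end-of-epoch iterates typically fluctuating less than SGD's, consistent with the variance argument in the abstract.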


Review for NeurIPS paper: Random Reshuffling: Simple Analysis with Vast Improvements

Neural Information Processing Systems

The abstract claims to remove the small-stepsize requirement of prior work. However, to attain a good convergence rate (Corollary 1), the main theorems (Theorems 1 and 2) still need a small stepsize, similar to previous works. In fact, Safran and Shamir (2020) show that convergence is only possible for stepsizes of order $O(1/n)$. Please modify the claims accordingly. Moreover, the dependence on $\mu$ has worsened.



Examining the arc of 100,000 stories: a tidy analysis

@machinelearnbot

I recently came across a great natural language dataset from Mark Riedl: 112,000 plots of stories downloaded from English-language Wikipedia. This includes books, movies, TV episodes, video games: anything that has a Plot section on a Wikipedia page. This offers a great opportunity to analyze story structure quantitatively. In this post I'll do a simple analysis, examining what words tend to occur at particular points within a story, including words that characterize the beginning, middle, or end. As I usually do for text analysis, I'll be using the tidytext package Julia Silge and I developed last year.
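The post's actual analysis uses R with tidytext on the full WikiPlots data; as a rough Python analogue of the idea, one can record each word's relative position within a story (0 = beginning, 1 = end) and compare typical positions across words. All names and the toy stories below are invented for illustration:

```python
from collections import defaultdict


def word_positions(stories):
    """Map each word to a list of its relative positions (0 = start of a
    story, 1 = end) across a collection of plot texts."""
    positions = defaultdict(list)
    for story in stories:
        words = story.lower().split()
        n = len(words)
        for idx, word in enumerate(words):
            positions[word].append(idx / max(n - 1, 1))
    return positions


def median_position(positions, word):
    """Median relative position of a word across all its occurrences."""
    vals = sorted(positions[word])
    m = len(vals)
    if m % 2 == 1:
        return vals[m // 2]
    return (vals[m // 2 - 1] + vals[m // 2]) / 2


stories = [
    "once upon a time the hero fought and lived happily ever after",
    "once there was a king whose tale reached its end",
]
positions = word_positions(stories)
```

Words with a median position near 0 characterize beginnings ("once"), while those near 1 characterize endings ("after", "end"), which is the kind of pattern the post surfaces at scale.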