First provide a summary of the paper, and then address the following criteria: quality, clarity, originality, and significance. This paper describes an approach for computing the L1-regularized Gaussian maximum likelihood estimator for the sparse inverse covariance estimation problem. The focus is on scaling the earlier QUIC algorithm to problems involving millions of variables. The authors describe three innovations in the new approach: inexact Hessians, faster computation of the log-determinant function, and careful selection of the blocks updated in their block coordinate scheme via a smart clustering step. The numerical experiments compare the new method against the earlier QUIC algorithm, GLASSO, and ALM, showing improved performance on a few selected problems.
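For reference, the objective being minimized in the sparse inverse covariance (graphical lasso) problem summarized above can be sketched as follows; this is a minimal evaluation of the L1-regularized negative Gaussian log-likelihood, not the paper's block coordinate solver, and the function name and penalty convention (L1 over all entries) are assumptions for illustration.

```python
import numpy as np

def glasso_objective(Theta, S, lam):
    """L1-regularized negative Gaussian log-likelihood.

    Theta : candidate inverse covariance (precision) matrix
    S     : empirical covariance matrix
    lam   : L1 regularization weight (here applied to all entries,
            a simplifying assumption for this sketch)
    """
    sign, logdet = np.linalg.slogdet(Theta)  # stable log-determinant
    return -logdet + np.trace(S @ Theta) + lam * np.abs(Theta).sum()
```

For example, with `Theta = S = I` (2x2) and `lam = 0`, the value is `-log det(I) + tr(I) = 2`.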
Topmoumoute Online Natural Gradient Algorithm
Roux, Nicolas L., Manzagol, Pierre-antoine, Bengio, Yoshua
Guided by the goal of obtaining an optimization algorithm that is both fast and yields good generalization, we study the descent direction that maximizes the decrease in generalization error, or the probability of not increasing it. The surprising result is that, from both the Bayesian and frequentist perspectives, this can yield the natural gradient direction. Although that direction can be very expensive to compute, we develop an efficient, general, online approximation to natural gradient descent that is suited to large-scale problems. We report experimental results showing much faster convergence, both in computation time and in number of iterations, with TONGA (Topmoumoute Online Natural Gradient Algorithm) than with stochastic gradient descent, even on very large datasets.
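The natural gradient direction mentioned in the abstract can be sketched as preconditioning the mean gradient by the (empirical, centered) covariance of per-example gradients. This dense batch version is only an illustration of the direction itself, assuming a gradient-covariance approximation to the Fisher matrix; TONGA's actual contribution is an efficient low-rank online estimate, which is not reproduced here. The function name and damping parameter are assumptions.

```python
import numpy as np

def natural_gradient_direction(per_example_grads, damping=1e-3):
    """Compute C^{-1} g, where g is the mean gradient and C is the
    damped empirical covariance of per-example gradients.

    per_example_grads : array of shape (n_examples, n_params)
    damping           : small ridge term keeping C invertible
    """
    G = np.asarray(per_example_grads, dtype=float)
    g = G.mean(axis=0)                                   # mean gradient
    C = np.cov(G, rowvar=False) + damping * np.eye(G.shape[1])
    return np.linalg.solve(C, g)                         # natural direction
```

Directions of low gradient variance are amplified relative to plain gradient descent, which is one intuition for the faster convergence reported in the paper.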