Reviews: SLANG: Fast Structured Covariance Approximations for Bayesian Deep Learning with Natural Gradient

Neural Information Processing Systems 

UPDATE: Thank you for your rebuttal.

The traditional approach to variational inference in deep networks is to use diagonal Gaussian posterior approximations. This paper proposes Gaussian posterior approximations whose covariance matrix is the sum of a diagonal matrix and a low-rank matrix. The paper then outlines an efficient algorithm, with complexity linear in the number of dimensions, that depends solely on gradients of the log-likelihood. This is made possible by an approximation of the Hessian that uses gradients instead of second derivatives of the log-likelihood.
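The diagonal-plus-low-rank covariance structure described above can be illustrated with a minimal sketch (this is an illustration of the general structure, not the authors' SLANG implementation; all variable names here are hypothetical). The key point is that one can sample from N(mu, U U^T + D) in time linear in the dimension d, without ever forming a d-by-d matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 5  # parameter dimension, rank of the low-rank part

mu = np.zeros(d)                         # posterior mean
diag = np.full(d, 0.1)                   # diagonal part D (positive entries)
U = 0.01 * rng.standard_normal((d, k))   # low-rank factor, covariance += U U^T

def sample(n):
    # x = mu + U z + sqrt(D) * eps has covariance U U^T + D,
    # computed in O(d k) per sample -- no d x d matrix is formed.
    z = rng.standard_normal((n, k))
    eps = rng.standard_normal((n, d))
    return mu + z @ U.T + np.sqrt(diag) * eps

samples = sample(100_000)
emp_cov = np.cov(samples, rowvar=False)
true_cov = U @ U.T + np.diag(diag)  # formed here only to check the sampler
```

The empirical covariance of the samples matches U U^T + D up to Monte Carlo error, which is what makes this parameterization attractive: storage and sampling cost O(d k) rather than O(d^2).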