Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian

Neural Information Processing Systems

Over the last decade, a single algorithm has changed many facets of our lives - Stochastic Gradient Descent (SGD). In the era of ever decreasing loss functions, SGD and its various offspring have become the go-to optimization tool in machine learning and are a key component of the success of deep neural networks (DNNs). While SGD is guaranteed to converge to a local optimum (under loose assumptions), in some cases it may matter which local optimum is found, and this is often context-dependent. Examples frequently arise in machine learning, from shape-versus-texture-features to ensemble methods and zero-shot coordination. In these settings, there are desired solutions which SGD on 'standard' loss functions will not find, since it instead converges to the 'easy' solutions. In this paper, we present a different approach. Rather than following the gradient, which corresponds to a locally greedy direction, we instead follow the eigenvectors of the Hessian. By iteratively following and branching amongst the ridges, we effectively span the loss surface to find qualitatively different solutions. We show both theoretically and experimentally that our method, called Ridge Rider (RR), offers a promising direction for a variety of challenging problems.
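The branching idea the abstract describes can be sketched on a toy two-dimensional loss with a saddle point between two distinct minima: eigendecompose the Hessian at the starting point, step off along each negative-curvature eigenvector (in both directions), and descend from there. This is a simplified illustration, not the paper's implementation; the toy loss and the `ridge_rider` helper are hypothetical.

```python
import numpy as np

def loss(w):
    # Toy loss with a saddle at the origin and two minima at w = (+-1, 0).
    x, y = w
    return (1.0 - x**2) ** 2 + y**2

def grad(w):
    x, y = w
    return np.array([-4.0 * x * (1.0 - x**2), 2.0 * y])

def hessian(w):
    # Analytic Hessian of the toy loss (diagonal for this example).
    x, y = w
    return np.array([[12.0 * x**2 - 4.0, 0.0],
                     [0.0, 2.0]])

def ridge_rider(w0, lr=0.05, steps=200):
    """Branch on each negative-curvature eigenvector of the Hessian at w0,
    then descend each branch with plain gradient steps (simplified sketch)."""
    evals, evecs = np.linalg.eigh(hessian(w0))
    solutions = []
    for i, lam in enumerate(evals):
        if lam >= 0:                 # only branch along descending directions
            continue
        for sign in (+1.0, -1.0):    # each eigenvector gives two branches
            w = w0 + lr * sign * evecs[:, i]   # initial step off the saddle
            for _ in range(steps):
                w = w - lr * grad(w)           # gradient descent along the branch
            solutions.append(w)
    return solutions

sols = ridge_rider(np.array([0.0, 0.0]))
# The two branches of the single negative-curvature direction reach the
# two distinct minima near (+1, 0) and (-1, 0); plain SGD from a generic
# initialization would find only one of them.
```

Starting from the saddle, the one negative eigenvalue yields two branches, and each branch converges to a different minimum, which is the "qualitatively different solutions" behavior the abstract claims.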


Review for NeurIPS paper: Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian

Neural Information Processing Systems

The problem itself is of extremely wide interest across all areas of ML/AI. The outcome and the insights from this work can be applied to any problem in AI where non-convex optimization is involved. For instance, Theorem 1 provides a very important theoretical result that is essential towards realizing why the approach might work. The approximate version of the RR algorithm is also a very nice addition: for larger parameter spaces, full eigendecomposition of the Hessian can be prohibitive, so performing updates using Hessian-eigenvector products is a very nice touch.
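The Hessian-eigenvector products the review praises can be obtained without ever materializing the Hessian. A minimal sketch, assuming finite-difference Hessian-vector products and shifted power iteration (the `hvp` and `smallest_eigvec` helpers are illustrative names, not the paper's code):

```python
import numpy as np

def hvp(grad_fn, w, v, eps=1e-5):
    """Hessian-vector product via central finite differences of the gradient:
    H v ~= (grad(w + eps*v) - grad(w - eps*v)) / (2*eps).
    Costs two gradient evaluations; the full Hessian is never formed."""
    return (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2.0 * eps)

def smallest_eigvec(grad_fn, w, dim, shift=10.0, iters=500, seed=0):
    """Approximate the most negative eigenpair of H by running power
    iteration on (shift*I - H), using only Hessian-vector products.
    Assumes shift exceeds the largest eigenvalue of H."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = shift * v - hvp(grad_fn, w, v)
        v /= np.linalg.norm(v)
    lam = v @ hvp(grad_fn, w, v)   # Rayleigh quotient estimate of the eigenvalue
    return lam, v
```

On a quadratic loss with gradient `A @ w` and `A = diag([-4, 2])`, this recovers the eigenvalue -4 and the corresponding axis direction, which is exactly the descending direction RR would branch on.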


Review for NeurIPS paper: Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian

Neural Information Processing Systems

This paper proposes Ridge Rider, a novel method for exploring local minima using eigenvectors of the Hessian. The novelty of the method was appreciated, and it was noted that the method connects to classic techniques such as branch and bound. There was also appreciation of the empirical results. However, the lack of mathematical insight into the diversity of solutions --a key claim of the paper-- was noted as a shortcoming. Even absent such analysis, a comparison against a random orthonormal set of descent directions would have been illustrative of the method's power to produce diverse directions.
