Learning Supervised PageRank with Gradient-Based and Gradient-Free Optimization Methods
Lev Bogolubsky, Pavel Dvurechenskii, Alexander Gasnikov, Gleb Gusev, Yurii Nesterov, Andrei M. Raigorodskii, Aleksey Tikhonov, Maksim Zhukovskii

In this paper, we consider a non-convex loss-minimization problem of learning Supervised PageRank models, which can account for features of nodes and edges. We propose gradient-based and random gradient-free methods to solve this problem. Our algorithms are based on the concept of an inexact oracle, and, unlike the state-of-the-art gradient-based method, we provide theoretical convergence rate guarantees for both of them. Finally, we compare the performance of the proposed optimization methods with the state of the art on a ranking task.
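To make the "random gradient-free" idea concrete, below is a minimal sketch of a smoothing-style zeroth-order step that uses only (possibly inexact) function values, in the spirit of the inexact-oracle setting the abstract describes. The names gradient_free_step, mu, and step_size are illustrative assumptions; this is a generic sketch, not the authors' algorithm.

import numpy as np

def gradient_free_step(f, x, step_size, mu, rng):
    # Estimate the directional derivative of f along a random unit
    # direction e by a finite difference (an inexact oracle for the
    # gradient), then take a step against that estimate.
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)                 # random unit direction
    g = (f(x + mu * e) - f(x)) / mu        # inexact directional derivative
    return x - step_size * g * e

# Usage: minimize a simple smooth, mildly non-convex test function.
rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2) + 0.1 * np.sin(5.0 * x).sum()
x = rng.standard_normal(3)
for _ in range(2000):
    x = gradient_free_step(f, x, step_size=0.05, mu=1e-4, rng=rng)
print(f(x))  # should be close to a local minimum value

Such a step makes progress only in expectation over the random direction, which is why zeroth-order methods typically need more iterations than gradient-based ones.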
Reviews: Learning Supervised PageRank with Gradient-Based and Gradient-Free Optimization Methods
This paper analyzes the convergence of a non-convex loss-minimization problem for learning the parameters of a general graph-based ranking model. The model is defined by a random walk whose node and edge weights are, in turn, parameterized by the nodes' and edges' features. The optimization problem cannot be solved by existing optimization methods, which require exact values of the objective function. The proposed approach therefore operates on two levels. At the first level, a linearly convergent method is used to estimate an approximation to the stationary distribution of the Markov random walk. Among other results, the authors show that the value of the loss function can be approximated to any given precision.
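The "first level" the review mentions approximates the stationary distribution by a linearly convergent iteration. As a rough illustration of why such an approximation converges linearly, here is a power-iteration sketch for a PageRank-style chain with restart probability alpha; the name approximate_stationary and the uniform restart vector are assumptions for illustration, not the paper's exact lower-level method.

import numpy as np

def approximate_stationary(P, alpha=0.15, eps=1e-8, max_iter=1000):
    # Approximate the stationary distribution pi of the chain
    #   pi = alpha * u + (1 - alpha) * pi @ P
    # by fixed-point iteration. The restart probability alpha makes
    # the map a contraction with factor (1 - alpha) in the l1 norm,
    # hence linear convergence to any given precision eps.
    n = P.shape[0]
    u = np.full(n, 1.0 / n)                   # uniform restart distribution
    pi = u.copy()
    for _ in range(max_iter):
        pi_next = alpha * u + (1 - alpha) * pi @ P
        if np.abs(pi_next - pi).sum() < eps:  # l1 stopping criterion
            return pi_next
        pi = pi_next
    return pi

# Usage: a toy 3-node row-stochastic chain.
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
pi = approximate_stationary(P)
print(pi, pi.sum())  # pi sums to 1

Because each iteration shrinks the error by the fixed factor (1 - alpha), reaching precision eps takes on the order of log(1/eps) iterations, which is what lets the loss be approximated to any given precision.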