A Preconditioned Interior Point Method for Support Vector Machines Using an ANOVA-Decomposition and NFFT-Based Matrix-Vector Products

Wagner, Theresa, Pearson, John W., Stoll, Martin

arXiv.org Artificial Intelligence 

The training of support vector machines (SVMs) leads to large-scale quadratic programs (QPs) [1]. An efficient way to solve these optimization problems is the sequential minimal optimization (SMO) algorithm introduced in [2]. The main motivation for the design of the SMO algorithm is that standard quadratic programming approaches cannot handle the large-scale dense kernel matrix efficiently. SMO builds on the result of [3], which showed that the optimization problem can be decomposed into a sequence of smaller subproblems, thereby avoiding the large-scale dense matrices. When tackling the training as a QP, the use of interior point methods (IPMs) has also been studied, notably in the seminal paper of Fine and Scheinberg [4]: the authors approximate the kernel matrix by a low-rank factor computed via a pivoted low-rank Cholesky decomposition. A similar matrix approximation was also proposed in [5].
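To make the underlying QP concrete, the following is a minimal sketch (not the authors' method, and without the decomposition or low-rank techniques discussed above) of the SVM dual problem solved directly with a general-purpose solver on a toy dataset; the Gaussian kernel, the bandwidth `gamma`, and the box parameter `C` are illustrative choices. The dense n-by-n kernel matrix built here is exactly the object whose cost motivates SMO-type decomposition and low-rank IPM approaches.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy two-class data (illustrative only)
n = 20
X = np.vstack([rng.normal(-1.0, 0.5, (n // 2, 2)),
               rng.normal(+1.0, 0.5, (n // 2, 2))])
y = np.concatenate([-np.ones(n // 2), np.ones(n // 2)])

# Dense Gaussian (RBF) kernel matrix: the O(n^2)-storage object
# that makes naive QP solvers impractical at large scale
gamma = 1.0
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-gamma * sq_dists)
Q = (y[:, None] * y[None, :]) * K

C = 1.0  # box-constraint parameter of the soft-margin SVM

# Dual SVM QP: minimize 1/2 a^T Q a - sum(a)
# subject to 0 <= a_i <= C and y^T a = 0
fun = lambda a: 0.5 * a @ Q @ a - a.sum()
jac = lambda a: Q @ a - np.ones(n)
cons = [{"type": "eq", "fun": lambda a: y @ a, "jac": lambda a: y}]
res = minimize(fun, np.zeros(n), jac=jac, method="SLSQP",
               bounds=[(0.0, C)] * n, constraints=cons)
alpha = res.x  # dual variables; support vectors have alpha_i > 0
```

For the small n used here a generic solver succeeds, but both the kernel matrix and the solver's internal linear algebra scale poorly, which is precisely the gap that SMO-style subproblem decomposition and low-rank kernel approximations inside an IPM are designed to close.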