Parallelizing Support Vector Machines on Distributed Computers

Zhu, Kaihua, Wang, Hao, Bai, Hongjie, Li, Jian, Qiu, Zhihuan, Cui, Hang, Chang, Edward Y.

Neural Information Processing Systems

Support Vector Machines (SVMs) suffer from a widely recognized scalability problem in both memory use and computational time. To improve scalability, we have developed a parallel SVM algorithm (PSVM), which reduces memory use by performing a row-based, approximate matrix factorization, and which loads only essential data onto each machine to perform parallel computation. Let $n$ denote the number of training instances, $p$ the reduced matrix dimension after factorization ($p$ is significantly smaller than $n$), and $m$ the number of machines. PSVM reduces the memory requirement from $\mathcal{O}(n^2)$ to $\mathcal{O}(np/m)$, and improves computation time to $\mathcal{O}(np^2/m)$. Empirical studies on up to $500$ computers show PSVM to be effective.
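
The row-based approximate factorization at the heart of PSVM replaces the $n \times n$ kernel matrix $K$ with a tall, thin factor $H$ of shape $n \times p$ such that $K \approx HH^\top$; distributing the rows of $H$ across $m$ machines yields the $\mathcal{O}(np/m)$ per-machine memory bound. The sketch below is a minimal, serial illustration of such a factorization (a greedy pivoted incomplete Cholesky); the names `icf`, `kernel`, and `tol` are illustrative assumptions, and PSVM itself performs this step in parallel across machines rather than on a single node.

```python
import numpy as np

def icf(X, kernel, p, tol=1e-8):
    """Greedy pivoted (incomplete) Cholesky: K ~= H @ H.T with H of shape
    (n, p), built from at most p kernel columns -- the full n x n kernel
    matrix is never materialized. Serial sketch, not the parallel PSVM step."""
    n = len(X)
    H = np.zeros((n, p))
    # Residual diagonal: K[i, i] minus what the columns computed so far explain.
    d = np.array([kernel(X[i], X[i]) for i in range(n)])
    for k in range(p):
        j = int(np.argmax(d))           # pivot on the largest residual entry
        if d[j] <= tol:                 # remaining residual negligible: stop early
            return H[:, :k]
        col = np.array([kernel(X[i], X[j]) for i in range(n)])  # kernel column K[:, j]
        # Subtract the part already explained, then normalize by sqrt(d[j]).
        H[:, k] = (col - H[:, :k] @ H[j, :k]) / np.sqrt(d[j])
        d -= H[:, k] ** 2
    return H

# Usage: RBF kernel on toy data; the approximation error shrinks as p grows.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
rbf = lambda a, b: np.exp(-0.1 * np.sum((a - b) ** 2))
H = icf(X, rbf, p=30)
K = np.array([[rbf(a, b) for b in X] for a in X])
print(np.linalg.norm(K - H @ H.T))
```

In a distributed setting, each of the $m$ machines would hold only its own $n/m$ rows of $H$ (about $np/m$ entries), which is where the stated memory reduction from $\mathcal{O}(n^2)$ to $\mathcal{O}(np/m)$ comes from.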