Gear Training: A new way to implement high-performance model-parallel training

Dong, Hao, Li, Shuai, Xu, Dongchang, Ren, Yi, Zhang, Di

arXiv.org Machine Learning 

Training Deep Neural Networks usually requires tremendous computing resources, so many deep models are trained on large clusters instead of a single machine or GPU. While most current research runs the whole model on all machines using asynchronous stochastic gradient descent (ASGD), we present a new approach to training deep models in parallel: split the model and then separately train its different parts at different speeds.
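The abstract only sketches the idea, so the following is a minimal illustrative sketch (not the paper's actual Gear Training algorithm): a model is split into two parts, and each part's parameters are updated at a different frequency, standing in for "different parts trained at different speeds". The layer sizes, the `slow_every` ratio, and the single-process setup are all assumptions for illustration.

```python
# Illustrative sketch only: two parts ("gears") of one model, updated at
# different speeds. This is NOT the paper's algorithm, just the general idea
# of training split parts of a model at different rates.
import torch
import torch.nn as nn

slow_part = nn.Linear(32, 64)   # updated only every `slow_every` steps
fast_part = nn.Linear(64, 10)   # updated every step

slow_opt = torch.optim.SGD(slow_part.parameters(), lr=0.01)
fast_opt = torch.optim.SGD(fast_part.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
slow_every = 4  # hypothetical ratio between the two training speeds

for step in range(100):
    x = torch.randn(16, 32)               # dummy batch
    y = torch.randint(0, 10, (16,))

    logits = fast_part(torch.relu(slow_part(x)))
    loss = loss_fn(logits, y)

    slow_opt.zero_grad()
    fast_opt.zero_grad()
    loss.backward()

    fast_opt.step()                       # fast gear advances every step
    if (step + 1) % slow_every == 0:
        slow_opt.step()                   # slow gear advances less often
```

In a real model-parallel setting the two parts would live on different devices or machines; here they share one process purely to keep the sketch self-contained.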
