Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training

Neural Information Processing Systems

Distributed training of deep nets is an important technique for addressing present-day computing challenges such as memory consumption and computational demands. Classical distributed approaches, synchronous or asynchronous, are based on the parameter-server architecture: worker nodes compute gradients and communicate them to the parameter server, which returns updated parameters. Recently, distributed training with AllReduce operations has gained popularity as well. While many of these approaches seem appealing, little is reported about wall-clock training-time improvements. In this paper, we carefully analyze the AllReduce-based setup, propose timing models that account for network latency, bandwidth, cluster size, and compute time, and demonstrate that pipelined training with a pipeline width of two combines the best of synchronous and asynchronous training. Specifically, on a four-node GPU cluster we show wall-clock training-time improvements of up to 5.4x compared to conventional approaches.
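
As a rough illustration of the kind of timing model the abstract refers to, the sketch below compares the per-iteration time of synchronous AllReduce training (compute, then communicate) with a pipeline of width two (compute and communication overlapped). The ring-AllReduce cost uses the standard 2(n-1) latency-plus-bandwidth estimate; the parameter names and all concrete numbers are illustrative assumptions, not figures from the paper.

```python
# Rough per-iteration timing estimates (seconds). Ring-AllReduce cost uses the
# standard 2(n-1) latency + 2(n-1)/n bandwidth terms; all numbers below are
# illustrative assumptions, not measurements from the paper.

def allreduce_time(msg_bytes, bandwidth, latency, n_workers):
    """Estimated time of one ring AllReduce over n_workers nodes."""
    return (2 * (n_workers - 1) * latency
            + 2 * (n_workers - 1) / n_workers * msg_bytes / bandwidth)

def sync_iteration(t_compute, msg_bytes, bandwidth, latency, n_workers):
    """Synchronous training: gradient compute and AllReduce are serialized."""
    return t_compute + allreduce_time(msg_bytes, bandwidth, latency, n_workers)

def pipelined_iteration(t_compute, msg_bytes, bandwidth, latency, n_workers):
    """Pipeline of width two: the AllReduce of the previous iteration overlaps
    the compute of the current one, so steady-state time is the max of the two."""
    return max(t_compute, allreduce_time(msg_bytes, bandwidth, latency, n_workers))

if __name__ == "__main__":
    # Hypothetical setup: 4 workers, 250 MB of gradients, 10 GB/s links,
    # 5 microseconds of per-hop latency, 60 ms of compute per iteration.
    cfg = dict(t_compute=0.060, msg_bytes=250e6, bandwidth=10e9,
               latency=5e-6, n_workers=4)
    print("synchronous :", sync_iteration(**cfg))
    print("pipelined   :", pipelined_iteration(**cfg))
```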




Reviews: Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training

Neural Information Processing Systems

This paper proposes a pipelined training setup for neural nets. The pipeline structure allows computation and communication to be carried out concurrently within each worker. Experiments show speed improvements over existing methods. Overall, I think the method and the evaluations are convincing. However, I believe the NIPS conference is not the appropriate venue for this paper, for the following reasons.
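
For context, a minimal sketch of what the per-worker overlap of computation and communication mentioned above could look like is given below, using a background thread around a generic blocking collective. The callables `compute_grad`, `allreduce`, and `apply_update` are placeholders, not the authors' implementation.

```python
# Minimal sketch of a per-worker loop with a pipeline of width two:
# the AllReduce of iteration t-1 runs concurrently with the gradient
# computation of iteration t. `compute_grad`, `allreduce`, and
# `apply_update` are placeholder callables, not the authors' code.
import threading

def pipelined_worker(model, batches, compute_grad, allreduce, apply_update):
    pending = None  # (thread, result holder) for the in-flight AllReduce
    for batch in batches:
        grad = compute_grad(model, batch)      # overlaps previous AllReduce

        if pending is not None:
            thread, result = pending
            thread.join()                      # previous gradients now reduced
            apply_update(model, result[0])     # apply one-iteration-stale update

        result = [None]
        def communicate(g=grad, out=result):
            out[0] = allreduce(g)              # blocking collective, off the critical path
        thread = threading.Thread(target=communicate)
        thread.start()
        pending = (thread, result)

    # Drain the last in-flight AllReduce before returning.
    if pending is not None:
        thread, result = pending
        thread.join()
        apply_update(model, result[0])
```

In steady state, each iteration costs roughly the slower of compute and communication, at the price of applying gradients that are one iteration stale.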


Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training

Li, Youjie, Yu, Mingchao, Li, Songze, Avestimehr, Salman, Kim, Nam Sung, Schwing, Alexander

arXiv.org Machine Learning
