PipeLearn: Pipeline Parallelism for Collaborative Machine Learning
Zihan Zhang, Philip Rodgers, Peter Kilpatrick, Ivor Spence, Blesson Varghese
arXiv.org Artificial Intelligence
Collaborative machine learning (CML) techniques, such as federated learning, were proposed to collaboratively train deep learning models using multiple end-user devices and a server. CML techniques preserve the privacy of end-users since they do not require user data to be transferred to the server; instead, local models are trained and shared with the server. However, the low resource utilisation of CML techniques makes the training process inefficient, thereby limiting their use in the real world. Resources idling on both the server and the devices, caused by sequential computation and communication, are the principal reason for low resource utilisation. A novel framework, PipeLearn, which leverages pipeline parallelism for CML techniques, is developed to substantially improve resource utilisation. A new training pipeline is designed to parallelise computations on different hardware resources and communication over different bandwidth resources, thereby accelerating the training process in CML. The pipeline is further optimised to ensure maximum utilisation of the available resources. The experimental results confirm the validity of the underlying approach of PipeLearn and highlight that, when compared to federated learning: (i) the idle time of the server can be reduced by 2.2x - 28.5x, (ii) the network throughput can be increased by 56.6x - 321.3x, and (iii) the overall training time can be accelerated by 1.5x - 21.6x under varying network conditions, for two popular convolutional models, without sacrificing accuracy. PipeLearn is available for public download from https://github.com/blessonvar/PipeLearn.
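The core idea, overlapping device-side computation with communication to the server so that neither resource sits idle, can be illustrated with a minimal sketch. The sketch below is not the PipeLearn implementation: the function names (train_micro_batch, send_to_server), the simulated timings, and the single background uploader thread are assumptions used only to show how pipelining micro-batches hides communication time behind computation time.

```python
# Illustrative sketch only (not PipeLearn itself): overlap per-micro-batch
# computation with communication so neither the device nor the link idles.
# All names and timings below are hypothetical.

import time
from concurrent.futures import ThreadPoolExecutor

NUM_MICRO_BATCHES = 8

def train_micro_batch(i: int) -> bytes:
    """Stand-in for the forward/backward pass on one micro-batch (device-side compute)."""
    time.sleep(0.05)  # simulated computation
    return f"activations/gradients for micro-batch {i}".encode()

def send_to_server(payload: bytes) -> None:
    """Stand-in for uploading intermediate results to the server (communication)."""
    time.sleep(0.05)  # simulated network transfer

def sequential_round() -> float:
    """Compute, then communicate: the link idles during compute and vice versa."""
    start = time.time()
    for i in range(NUM_MICro_BATCHES := NUM_MICRO_BATCHES):
        send_to_server(train_micro_batch(i))
    return time.time() - start

def pipelined_round() -> float:
    """Upload micro-batch i-1 in the background while micro-batch i is being computed."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=1) as uploader:
        pending = None
        for i in range(NUM_MICRO_BATCHES):
            payload = train_micro_batch(i)          # compute micro-batch i
            if pending is not None:
                pending.result()                    # ensure micro-batch i-1 finished uploading
            pending = uploader.submit(send_to_server, payload)
        if pending is not None:
            pending.result()
    return time.time() - start

if __name__ == "__main__":
    print(f"sequential: {sequential_round():.2f}s, pipelined: {pipelined_round():.2f}s")
```

Under these simulated, equal compute and transfer times, the pipelined round finishes in roughly half the time of the sequential round, which mirrors the abstract's point that idle resources, rather than extra computation, are the main source of inefficiency.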
Dec-1-2022
- Country:
- North America (0.28)
- Genre:
- Research Report (0.84)
- Industry:
- Education (0.46)
- Information Technology > Security & Privacy (0.48)
- Technology: