Zero redundancy distributed learning with differential privacy

Bu, Zhiqi, Chiu, Justin, Liu, Ruixuan, Zha, Sheng, Karypis, George

arXiv.org Artificial Intelligence 

Deep learning with large models has achieved great success in a wide range of domains. However, training models with billions of parameters is very challenging in terms of training speed, memory cost, and communication efficiency, especially under the privacy-preserving regime of differential privacy (DP). On the one hand, DP optimization has efficiency comparable to standard non-private optimization on a single GPU, but on multiple GPUs, existing DP distributed learning (such as pipeline parallelism) suffers from significantly worse efficiency. On the other hand, the Zero Redundancy Optimizer (ZeRO) is a state-of-the-art solution for standard distributed learning, exhibiting excellent training efficiency on large models, but making it compatible with DP is technically complicated. In this work, we develop a new systematic solution, DP-ZeRO, (I) to scale up the trainable DP model size, e.g. to GPT-100B, (II) to obtain the same computation and communication efficiency as the standard ZeRO, and (III) to enable mixed-precision DP training. Our DP-ZeRO, like the standard ZeRO, has the potential to train models of arbitrary size and is evaluated on the world's largest DP models in terms of the number of trainable parameters.

Recent advances in differentially private (DP) deep learning have witnessed the power of large pre-trained models, achieving accuracy comparable to state-of-the-art (SOTA) non-private models across computer vision De et al. (2022); Bu et al. (2022a); Mehta et al. (2022); Xie et al. (2018), natural language processing Yu et al. (2021); Li et al. (2021); Bu et al. (2023a), and many other tasks. Similar to their non-DP counterparts, it has been observed that larger DP models tend to have better performance.
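The DP optimization referred to above typically follows the DP-SGD recipe: clip each per-sample gradient to a fixed norm, sum, and add Gaussian noise before the weight update. The following is a minimal NumPy sketch of that step under stated assumptions; the function name, default values, and interface are illustrative only, not the paper's DP-ZeRO implementation (which distributes this computation across GPUs).

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD update direction (illustrative sketch, not DP-ZeRO itself).

    per_sample_grads: array of shape (batch_size, *param_shape), one gradient
    per example. Each gradient is clipped to L2 norm <= clip_norm, the clipped
    gradients are summed, Gaussian noise with std noise_multiplier * clip_norm
    is added, and the result is averaged over the batch.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    batch_size = per_sample_grads.shape[0]
    # Per-sample L2 norms and clipping factors (each factor is <= 1).
    norms = np.linalg.norm(per_sample_grads.reshape(batch_size, -1), axis=1)
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped_sum = (per_sample_grads * factors[:, None]).sum(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped_sum.shape)
    return (clipped_sum + noise) / batch_size
```

Clipping bounds each example's influence on the update, which is what makes the added noise yield a formal DP guarantee; on a single GPU this costs roughly as much as a standard step, and the paper's contribution is retaining that efficiency under ZeRO-style sharding.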