Parallel Training of Deep Networks with Local Updates

Laskin, Michael, Metz, Luke, Nabarro, Seth, Saroufim, Mark, Noune, Badreddine, Luschi, Carlo, Sohl-Dickstein, Jascha, Abbeel, Pieter

arXiv.org Artificial Intelligence 

Deep learning models trained on large data sets have been widely successful in both vision and language domains. As state-of-the-art deep learning architectures have continued to grow in parameter count, so have the compute budgets and times required to train them, increasing the need for compute-efficient methods that parallelize training. Two common approaches to parallelizing the training of deep networks have been data and model parallelism. While useful, data and model parallelism suffer from diminishing returns in terms of compute efficiency for large batch sizes. In this paper, we investigate how to continue scaling compute efficiently beyond the point of diminishing returns for large batches through local parallelism, a framework which parallelizes training of individual layers in deep networks by replacing global backpropagation with truncated layer-wise backpropagation. Local parallelism enables fully asynchronous layer-wise parallelism with a low memory footprint, and requires little communication overhead compared with model parallelism. We show results in both vision and language domains across a diverse set of architectures, and find that local parallelism is particularly effective in the high-compute regime.

Backpropagation (Rumelhart et al., 1985) is by far the most common method used to train neural networks. Alternatives to backpropagation are typically used only when backpropagation is impractical due to a non-differentiable loss (Schulman et al., 2015), a non-smooth loss landscape (Metz et al., 2019), or memory and/or compute requirements (Ororbia et al., 2020). This raises the question of whether there are more efficient training strategies, even for models and losses that are considered well matched to training by backpropagation. Much of the work on training large-scale models focuses on designing compute infrastructure that makes backpropagation more efficient despite growing model size (Dean et al., 2012b; Chen et al., 2015; Sergeev & Balso, 2018). One of the most common ways to achieve efficient training of deep neural networks with backpropagation is to scale using data parallelism (Zhang et al., 1989; Chen et al., 2016), training on bigger batch sizes spread across multiple devices. Training based on pipeline parallelism has also been introduced, but it still requires large batches for efficient training (Petrowski et al., 1993; Ben-Nun & Hoefler, 2018; Huang et al., 2019).

[Figure: While data, model, and pipeline parallelism are existing paradigms for parallelizing learning, we investigate another way of parallelizing learning through local layer-wise training, shown in panel (d).]
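To make the idea of truncated layer-wise backpropagation more concrete, below is a minimal sketch in JAX, not the paper's implementation: each block is updated from its own local objective (here, a hypothetical auxiliary classifier head), and jax.lax.stop_gradient prevents gradients from flowing between blocks, so per-block updates could in principle run in parallel on separate devices. All names and hyperparameters (init_block, local_loss, local_update, w_aux, lr, the toy shapes) are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of local (layer-wise) parallelism with auxiliary local losses.
import jax
import jax.numpy as jnp

def init_block(key, d_in, d_hidden, n_classes):
    k1, k2 = jax.random.split(key)
    return {
        "w": jax.random.normal(k1, (d_in, d_hidden)) * 0.01,          # block weights
        "w_aux": jax.random.normal(k2, (d_hidden, n_classes)) * 0.01,  # auxiliary head
    }

def block_forward(params, x):
    return jax.nn.relu(x @ params["w"])

def local_loss(params, x, y):
    # Cross-entropy of an auxiliary classifier attached to this block only.
    h = block_forward(params, x)
    logp = jax.nn.log_softmax(h @ params["w_aux"])
    return -jnp.mean(jnp.take_along_axis(logp, y[:, None], axis=1))

@jax.jit
def local_update(blocks, x, y, lr=1e-2):
    new_blocks = []
    for params in blocks:
        # Gradient of the *local* loss only; no global backward pass.
        g = jax.grad(local_loss)(params, x, y)
        new_blocks.append(jax.tree_util.tree_map(lambda p, gp: p - lr * gp, params, g))
        # Activations are passed forward, but gradients are stopped between blocks.
        x = jax.lax.stop_gradient(block_forward(params, x))
    return new_blocks

# Example usage with toy shapes and dummy labels.
k0, k1, k2, kx = jax.random.split(jax.random.PRNGKey(0), 4)
blocks = [init_block(k0, 32, 64, 10), init_block(k1, 64, 64, 10), init_block(k2, 64, 64, 10)]
x = jax.random.normal(kx, (8, 32))
y = jnp.zeros(8, dtype=jnp.int32)
blocks = local_update(blocks, x, y)
```

The point of the sketch is only that each block's update depends on its own activations and local loss rather than on a global backward pass, which is what allows the asynchronous, low-communication training described in the abstract; the choice of local objective shown here is one possibility among several.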
