Lowering PyTorch's Memory Consumption for Selective Differentiation
Samarth Bhatia, Felix Dangel
Memory is a limiting resource for many deep learning tasks. Besides the neural network weights, one main memory consumer is the computation graph built up by automatic differentiation (AD) for backpropagation. We observe that PyTorch's current AD implementation neglects information about parameter differentiability when storing the computation graph. However, this information is useful for reducing memory whenever gradients are requested only for a subset of parameters, as is the case in many modern fine-tuning tasks. Specifically, the inputs to layers that act linearly in their parameters (dense, convolution, or normalization layers) can be discarded whenever those parameters are marked as non-differentiable. We provide a drop-in, differentiability-agnostic implementation of such layers and demonstrate its ability to reduce memory without affecting run time.
arXiv.org Artificial Intelligence
Apr-15-2024
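
The core idea can be sketched for a dense layer: its input activation is stored only to compute the weight gradient, while the input gradient needs only the weight itself, so a frozen weight means the activation can be dropped from the graph. Below is a minimal, hedged sketch of that idea using a custom torch.autograd.Function; the names MemSaveLinearFn and MemSaveLinear are illustrative (the paper ships its own drop-in implementation), and the sketch assumes 2D inputs of shape (batch, in_features).

```python
import torch
from torch import nn


class MemSaveLinearFn(torch.autograd.Function):
    """Linear transform that stores its input only when the weight is differentiable."""

    @staticmethod
    def forward(ctx, x, weight, bias):
        out = x @ weight.T
        if bias is not None:
            out = out + bias
        # Record differentiability so backward knows which tensors were saved.
        ctx.weight_needs_grad = weight.requires_grad
        ctx.bias_needs_grad = bias is not None and bias.requires_grad
        if ctx.weight_needs_grad:
            ctx.save_for_backward(x, weight)  # input kept: the weight gradient needs it
        else:
            ctx.save_for_backward(weight)  # input discarded: this is the memory saving
        return out

    @staticmethod
    def backward(ctx, grad_out):
        if ctx.weight_needs_grad:
            x, weight = ctx.saved_tensors
        else:
            (weight,) = ctx.saved_tensors
            x = None
        # The input gradient only requires the weight, not the stored input.
        grad_x = grad_out @ weight if ctx.needs_input_grad[0] else None
        grad_w = grad_out.T @ x if ctx.weight_needs_grad else None
        grad_b = grad_out.sum(0) if ctx.bias_needs_grad else None
        return grad_x, grad_w, grad_b


class MemSaveLinear(nn.Linear):
    """Drop-in nn.Linear replacement with a differentiability-aware backward."""

    def forward(self, x):
        return MemSaveLinearFn.apply(x, self.weight, self.bias)


# Usage: freeze the parameters, as in fine-tuning, and the forward pass
# no longer stores the input activation in the computation graph.
layer = MemSaveLinear(512, 512)
layer.weight.requires_grad_(False)
layer.bias.requires_grad_(False)
x = torch.randn(32, 512, requires_grad=True)
y = layer(x)  # only the weight is saved for backward, not x
```

The forward result is identical to nn.Linear; only what the graph retains changes, which is why such a layer can be differentiability-agnostic and still reduce memory without affecting run time.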