The Fine-Grained Complexity of Gradient Computation for Training Large Language Models

Neural Information Processing Systems 

Large language models (LLMs) have made fundamental contributions over the last few years. To train an LLM, one needs to alternately run forward computations and backward computations. The forward computation can be viewed as attention function evaluation, and the backward computation can be viewed as a gradient computation. In previous work by [Alman and Song, NeurIPS 2023], it was proved that the forward step can be performed in almost-linear time in certain parameter regimes, but that there is no truly sub-quadratic time algorithm in the remaining parameter regimes unless the popular hypothesis $\mathsf{SETH}$ is false. In this work, we show nearly identical results for the harder-seeming problem of computing the gradient of the loss function of a one-layer attention network, and thus for the entire process of LLM training.
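As an illustrative sketch (following the standard softmax-attention formulation used in [Alman and Song, NeurIPS 2023]; the exact parameterization studied in the body of the paper may differ), the forward computation on matrices $Q, K, V \in \mathbb{R}^{n \times d}$ can be written as
$$\mathsf{Att}(Q,K,V) := D^{-1} A V, \qquad A := \exp(QK^\top) \text{ (entrywise)}, \qquad D := \mathrm{diag}(A \mathbf{1}_n),$$
and the backward computation corresponds to evaluating the gradient of a training loss, e.g. a Frobenius-norm regression loss of the form $L = \|\mathsf{Att}(Q,K,V) - B\|_F^2$, with respect to the trainable weight matrices. Evaluated naively, both steps cost $\Omega(n^2)$ time because the attention matrix $A$ has $n^2$ entries; the results above characterize when this quadratic barrier can and cannot be beaten.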