Unit Scaling: Out-of-the-Box Low-Precision Training
Charlie Blake, Douglas Orr, Carlo Luschi
arXiv.org Artificial Intelligence
We present unit scaling, a paradigm for designing deep learning models that simplifies the use of low-precision number formats. Training in FP16 or the recently proposed FP8 formats offers substantial efficiency gains, but can lack sufficient range for out-of-the-box training. Unit scaling addresses this by introducing a principled approach to model numerics: seeking unit variance of all weights, activations and gradients at initialisation. Unlike alternative methods, this approach neither requires multiple training runs to find a suitable scale nor has significant computational overhead. We demonstrate the efficacy of unit scaling across a range of models and optimisers. We further show that existing models can be adapted to be unit-scaled, training BERT-Large in FP16 and then FP8 with no degradation in accuracy.
May 30, 2023
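The abstract's central recipe is to give every op its own fixed scaling factor so that weights, activations, and gradients all start out with roughly unit variance, keeping values inside the narrow range of FP16/FP8. Below is a minimal PyTorch sketch of that idea for a single matmul, using the scales the paper derives for this op (1/√fan_in on the forward pass, 1/√fan_out and 1/√batch on the two backward passes). This is an illustration of the principle, not the authors' own implementation or library API.

```python
import torch


class ScaledMatmul(torch.autograd.Function):
    """Matmul with separate forward/backward scales, so that outputs
    and both gradients have roughly unit variance when the inputs do.
    A sketch of the unit-scaling idea, not the authors' code."""

    @staticmethod
    def forward(ctx, x, w):
        ctx.save_for_backward(x, w)
        fan_in = w.shape[0]
        # Unit-variance x and w give outputs of variance fan_in;
        # dividing by sqrt(fan_in) restores unit variance.
        return (x @ w) / fan_in**0.5

    @staticmethod
    def backward(ctx, grad_y):
        x, w = ctx.saved_tensors
        fan_out = w.shape[1]
        batch = x.shape[0]
        # The gradients get their own scales, chosen independently of
        # the forward scale so that they too come out near unit variance.
        grad_x = (grad_y @ w.T) / fan_out**0.5
        grad_w = (x.T @ grad_y) / batch**0.5
        return grad_x, grad_w


# Illustrative usage: unlike standard schemes (e.g. Xavier/Glorot),
# the weights are initialised with unit variance; the op, not the
# initialiser, supplies the scaling.
x = torch.randn(256, 512, requires_grad=True)
w = torch.randn(512, 1024, requires_grad=True)
y = ScaledMatmul.apply(x, w)
print(y.std())  # ~1.0
```

Because the scales are fixed constants determined by tensor shapes, this adds essentially no computational overhead and needs no trial runs to tune a loss-scaling factor, which is the contrast with dynamic or searched loss scaling that the abstract draws.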