Stable and low-precision training for large-scale vision-language models
Neural Information Processing Systems
We introduce new methods for 1) accelerating and 2) stabilizing training for large language-vision models. 1) For acceleration, we introduce SwitchBack, a linear layer for int8 quantized training which provides a speed-up of 13-25% while matching the performance of bfloat16 training within 0.1 percentage points for the 1B parameter CLIP ViT-Huge, the largest int8 training to date. Our main focus is int8, as GPU support for float8 is rare, though we also analyze float8 training through simulation. While SwitchBack proves effective for float8, we show that standard techniques are also successful if the network is trained and initialized so that large feature magnitudes are discouraged, which we accomplish via layer-scale initialized with zeros. 2) For stability, we analyze loss spikes and find that they consistently occur 1-2 iterations after the squared gradients become under-estimated by their AdamW second-moment estimator. As a result, we recommend an AdamW-Adafactor hybrid which avoids loss spikes when training a CLIP ViT-Huge model and outperforms gradient clipping at the scales we test.
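To make the zero-initialized layer-scale idea concrete, here is a minimal PyTorch sketch (our own illustration under stated assumptions, not the authors' code; the module names `LayerScale` and `Block` are hypothetical). A learnable per-channel scale on each residual branch starts at zero, so every branch begins as the identity and large feature magnitudes are discouraged early in training.

```python
import torch
import torch.nn as nn

class LayerScale(nn.Module):
    """Per-channel learnable scale on a residual branch (illustrative sketch).

    With init_value=0.0 the branch contributes nothing at initialization,
    which keeps early feature magnitudes small -- the property the abstract
    relies on for stable low-precision training.
    """
    def __init__(self, dim: int, init_value: float = 0.0):
        super().__init__()
        self.gamma = nn.Parameter(init_value * torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.gamma * x

class Block(nn.Module):
    """Hypothetical pre-norm MLP residual block with zero-init layer-scale."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )
        self.ls = LayerScale(dim, init_value=0.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual branch is scaled by gamma, which is zero at step 0.
        return x + self.ls(self.mlp(self.norm(x)))

# At initialization the block is exactly the identity map.
blk = Block(dim=64, hidden=256)
x = torch.randn(2, 16, 64)
assert torch.allclose(blk(x), x)
```

The design choice is that gradients still flow into `gamma`, so each branch is gradually "switched on" during training rather than contributing large activations from the start.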
Oct-10-2024, 09:41:43 GMT
- Technology:
- Information Technology > Artificial Intelligence > Vision (0.74)