No Train No Gain: Revisiting Efficient Training Algorithms For Transformer-based Language Models
Neural Information Processing Systems
The computation necessary for training Transformer-based language models has skyrocketed in recent years. This trend has motivated research on efficient training algorithms designed to improve training, validation, and downstream performance faster than standard training. In this work, we revisit three categories of such algorithms: dynamic architectures (layer stacking, layer dropping), batch selection (selective backprop, RHO loss), and efficient optimizers (Lion, Sophia).
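To make one of these categories concrete, here is a minimal PyTorch sketch of selective backprop, assuming a classification-style setup: a candidate batch is scored by per-example loss, and only the highest-loss fraction is used for the backward pass. The function name, the `keep_fraction` parameter, and the model interface are illustrative assumptions, not the paper's released code.

```python
# Hedged sketch of selective backprop: backpropagate only the hardest
# examples in each candidate batch. All names here are illustrative.
import torch
import torch.nn.functional as F

def selective_backprop_step(model, optimizer, inputs, labels, keep_fraction=0.25):
    """One training step that backpropagates only the highest-loss examples."""
    # Cheap forward pass on the full candidate batch, without building a
    # computation graph, to obtain per-example losses for scoring.
    with torch.no_grad():
        logits = model(inputs)
        losses = F.cross_entropy(logits, labels, reduction="none")

    # Keep the top-k examples with the largest loss.
    k = max(1, int(keep_fraction * inputs.size(0)))
    top_idx = losses.topk(k).indices

    # Standard forward/backward pass on the selected subset only.
    optimizer.zero_grad()
    sub_logits = model(inputs[top_idx])
    loss = F.cross_entropy(sub_logits, labels[top_idx])
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design trade-off this illustrates: the extra no-grad forward pass adds compute, in exchange for skipping the more expensive backward pass on low-loss ("easy") examples.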