SpacTor-T5: Pre-training T5 Models with Span Corruption and Replaced Token Detection
Ke Ye, Heinrich Jiang, Afshin Rostamizadeh, Ayan Chakrabarti, Giulia DeSalvo, Jean-François Kagy, Lazaros Karydas, Gui Citovsky, Sanjiv Kumar
arXiv.org Artificial Intelligence
Pre-training large language models is known to be extremely resource intensive and often inefficient, under-utilizing the information encapsulated in the training text sequences. In this paper, we present SpacTor, a new training procedure consisting of (1) a hybrid objective combining span corruption (SC) and replaced token detection (RTD), and (2) a two-stage curriculum that optimizes the hybrid objective over the initial $\tau$ iterations and then transitions to the standard SC loss. We show empirically that the effectiveness of the hybrid objective is tied to the two-stage pre-training schedule, and we provide an extensive analysis of why this is the case. In our experiments with encoder-decoder architectures (T5) on a variety of NLP tasks, SpacTor-T5 yields the same downstream performance as standard SC pre-training while enabling a 50% reduction in pre-training iterations and a 40% reduction in total FLOPs. Alternatively, given the same compute budget, we find that SpacTor results in significantly improved downstream benchmark performance.
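To make the two-stage schedule concrete, here is a minimal, self-contained Python sketch of how the hybrid loss could switch to the standard SC objective at step $\tau$. The function name, the additive combination with an rtd_weight coefficient, and the illustrative value of tau are assumptions for illustration, not the paper's exact implementation.

# Minimal sketch of SpacTor's two-stage loss schedule (illustrative assumptions,
# not the authors' implementation).

def spactor_training_loss(sc_loss: float, rtd_loss: float,
                          step: int, tau: int, rtd_weight: float = 1.0) -> float:
    """Combine span-corruption (SC) and replaced-token-detection (RTD) losses.

    Stage 1 (step < tau): optimize the hybrid objective SC + rtd_weight * RTD,
    where RTD asks the encoder to classify each input token as original vs.
    replaced by a small generator.
    Stage 2 (step >= tau): fall back to the standard SC objective alone.
    """
    if step < tau:
        return sc_loss + rtd_weight * rtd_loss
    return sc_loss


if __name__ == "__main__":
    tau = 120_000  # illustrative stage-1 length in training steps (an assumption)
    for step in (0, 50_000, 119_999, 120_000, 200_000):
        loss = spactor_training_loss(sc_loss=2.3, rtd_loss=0.7, step=step, tau=tau)
        print(f"step={step:>7}: loss={loss:.2f}")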
Jan-23-2024