TESS 2: A Large-Scale Generalist Diffusion Language Model
Jaesung Tae, Hamish Ivison, Sachin Kumar, Arman Cohan
arXiv.org Artificial Intelligence
We introduce TESS 2, a general instruction-following diffusion language model that outperforms contemporary instruction-tuned diffusion models and matches, and sometimes exceeds, strong autoregressive (AR) models. We train TESS 2 by first adapting a strong AR model via continued pretraining with the usual cross-entropy as the diffusion loss, and then performing further instruction tuning. We find that both adaptation training and the choice of base model are crucial for training good instruction-following diffusion models. We further propose reward guidance, a novel and modular inference-time guidance procedure that aligns model outputs without needing to train the underlying model. Finally, we show that TESS 2 further improves with increased inference-time compute, highlighting the utility of diffusion LMs' fine-grained control over the amount of compute used at inference time. Code and models are available at https://github.com/hamishivi/tess-2.
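The adaptation recipe above amounts to training the AR model as a simplex-style diffusion denoiser while keeping an ordinary cross-entropy objective. Below is a minimal sketch of one such training step, assuming a TESS/SSD-LM-style k-scaled one-hot simplex representation of tokens; the noise schedule, the value of `k`, and the `model(...)` call signature are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def simplex_diffusion_loss(model, input_ids, vocab_size, k=5.0, timesteps=1000):
    """One sketch training step: corrupt a k-scaled one-hot simplex with
    Gaussian noise, ask the model to denoise it, and score the predicted
    logits with plain cross-entropy against the clean tokens."""
    # k-scaled one-hot simplex: +k at the true token index, -k elsewhere
    simplex = k * (2 * F.one_hot(input_ids, vocab_size).float() - 1)
    # sample a diffusion timestep per sequence; sqrt schedule is an assumption
    t = torch.randint(1, timesteps + 1, (input_ids.size(0),), device=input_ids.device)
    alpha_bar = 1 - torch.sqrt(t.float() / timesteps)
    # add Gaussian noise to the simplex according to the sampled timestep
    noise = torch.randn_like(simplex)
    noisy = (alpha_bar.sqrt()[:, None, None] * simplex
             + (1 - alpha_bar)[:, None, None].sqrt() * k * noise)
    # assumed interface: the model maps the noisy simplex (plus timestep)
    # back to token logits over the vocabulary
    logits = model(noisy_simplex=noisy, timesteps=t)
    # the "diffusion loss" is just cross-entropy against the clean tokens
    return F.cross_entropy(logits.reshape(-1, vocab_size), input_ids.reshape(-1))
```

Because the objective stays cross-entropy over token logits, the AR model's existing output head and vocabulary can be reused directly during continued pretraining.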
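Reward guidance, as described, steers sampling with an external reward model rather than retraining the diffusion model. The sketch below captures the core idea under two assumptions: that the reward model accepts soft token embeddings built from the predicted distribution, and that a single gradient-ascent step on the logits with a hypothetical step size `gamma` stands in for the full guidance schedule.

```python
import torch

def reward_guided_step(logits, reward_model, embedding_matrix, gamma=0.1):
    """Nudge the denoiser's predicted token logits toward higher reward.
    `reward_model` is assumed to map a batch of (soft) token embeddings
    to a scalar reward per sequence; `embedding_matrix` is (vocab, hidden)."""
    logits = logits.detach().requires_grad_(True)
    # relax discrete tokens into a soft mixture of embeddings so the
    # reward is differentiable with respect to the logits
    probs = torch.softmax(logits, dim=-1)                   # (batch, seq, vocab)
    soft_embeds = probs @ embedding_matrix                  # (batch, seq, hidden)
    reward = reward_model(inputs_embeds=soft_embeds).sum()  # assumed interface
    reward.backward()
    # one gradient-ascent step on the reward; the diffusion model itself
    # is never updated
    return (logits + gamma * logits.grad).detach()
```

Since the update only touches the predicted logits at sampling time, different reward models can be swapped in without touching the diffusion weights, which is what makes the procedure modular.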
Feb-19-2025