BabyHGRN: Exploring RNNs for Sample-Efficient Training of Language Models
Patrick Haller, Jonas Golde, Alan Akbik
arXiv.org Artificial Intelligence
This paper explores the potential of recurrent neural networks (RNNs) and other subquadratic architectures as competitive alternatives to transformer-based models in low-resource language modeling scenarios. We utilize HGRN2 (Qin et al., 2024), a recently proposed RNN-based architecture, and comparatively evaluate its effectiveness against transformer-based baselines and other subquadratic architectures (LSTM, xLSTM, Mamba). Our experimental results show that BabyHGRN, our HGRN2 language model, outperforms transformer-based models in both the 10M-word and 100M-word tracks of the BabyLM challenge, as measured by performance on the BLiMP, EWoK, GLUE, and BEAR benchmarks. We further show that knowledge distillation improves performance. Our findings challenge the prevailing focus on transformer architectures and indicate the viability of RNN-based models, particularly in resource-constrained environments.
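For context, below is a minimal sketch of the kind of gated linear recurrence HGRN2 uses (after Qin et al., 2024), followed by a generic knowledge-distillation loss. Both are illustrative assumptions rather than the authors' code: the actual HGRN2 uses multi-head state expansion, a layer-dependent lower bound on the forget gate, and a chunked parallel scan instead of a Python time loop, and the paper's distillation recipe (teacher model, temperature) is not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HGRN2LayerSketch(nn.Module):
    """Single-head sketch of an HGRN2-style gated linear recurrence."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.f_proj = nn.Linear(d_model, d_model)  # forget-gate pre-activation
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        B, T, D = x.shape
        q = self.q_proj(x)
        f = torch.sigmoid(self.f_proj(x))  # forget gate in (0, 1)
        k = 1.0 - f                        # input gate tied to the forget gate ("key")
        v = self.v_proj(x)
        S = x.new_zeros(B, D, D)           # matrix-valued expanded state
        outs = []
        for t in range(T):                 # sequential form; constant state size per step
            # S_t = S_{t-1} diag(f_t) + v_t k_t^T  (decay old state, write new input)
            S = S * f[:, t].unsqueeze(1) + v[:, t].unsqueeze(-1) * k[:, t].unsqueeze(1)
            outs.append(S @ q[:, t].unsqueeze(-1))  # o_t = S_t q_t
        o = torch.cat(outs, dim=-1).transpose(1, 2)  # (batch, seq_len, d_model)
        return self.out_proj(o)


def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """Standard soft-target distillation: KL between softened distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
```

For example, `HGRN2LayerSketch(64)(torch.randn(2, 16, 64))` yields a `(2, 16, 64)` tensor. Production implementations replace the explicit time loop with a chunkwise parallel scan, which is what makes such RNNs competitive with transformers in training throughput.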
Dec-20-2024