Sample-Efficient Language Modeling with Linear Attention and Lightweight Enhancements
Patrick Haller, Jonas Golde, Alan Akbik
We study architectural and optimization techniques for sample-efficient language modeling under the constraints of the BabyLM 2025 shared task. Our model, BLaLM, replaces self-attention with a linear-time mLSTM token mixer, and we explore lightweight enhancements on top of it, including short convolutions, sliding window attention with dynamic modulation, and Hedgehog feature maps. To support training in low-resource settings, we curate a high-quality corpus emphasizing readability and pedagogical structure. Experiments across both the STRICT and STRICT-SMALL tracks show that (1) linear attention combined with sliding window attention consistently improves zero-shot performance, and (2) the Muon optimizer stabilizes convergence and lowers perplexity relative to AdamW. These results highlight effective strategies for efficient language modeling without relying on scale.
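A minimal sketch of the general recipe named in the abstract: linear attention with a Hedgehog-style learned feature map. This is an illustrative reconstruction, not the paper's BLaLM code; the names HedgehogFeatureMap and causal_linear_attention and the exp(±Wx) parameterization are assumptions. The key idea is that replacing softmax attention with a kernel feature map phi lets causal attention be computed from running sums in O(T) time and memory instead of O(T^2).

```python
import torch
import torch.nn as nn

class HedgehogFeatureMap(nn.Module):
    """Learned feature map phi(x) = [exp(Wx), exp(-Wx)]: an element-wise
    exponential of a linear projection (one common Hedgehog-style
    parameterization; assumed here, not taken from the paper)."""
    def __init__(self, head_dim: int):
        super().__init__()
        self.proj = nn.Linear(head_dim, head_dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.proj(x)
        return torch.cat([torch.exp(z), torch.exp(-z)], dim=-1)

def causal_linear_attention(q, k, v, feature_map):
    """q, k, v: (batch, seq, head_dim). Runs in O(seq) time and memory by
    maintaining running sums instead of a seq x seq attention matrix."""
    q, k = feature_map(q), feature_map(k)
    b, t, f = q.shape
    d_v = v.shape[-1]
    state = q.new_zeros(b, f, d_v)   # running sum of phi(k_s) v_s^T
    norm = q.new_zeros(b, f)         # running sum of phi(k_s)
    out = q.new_empty(b, t, d_v)
    for i in range(t):
        state = state + k[:, i, :, None] * v[:, i, None, :]
        norm = norm + k[:, i]
        num = torch.einsum("bf,bfd->bd", q[:, i], state)
        den = (q[:, i] * norm).sum(-1, keepdim=True).clamp_min(1e-6)
        out[:, i] = num / den
    return out

# Usage: one attention head over a batch of 2 sequences of length 128.
fmap = HedgehogFeatureMap(head_dim=64)
q, k, v = (torch.randn(2, 128, 64) for _ in range(3))
y = causal_linear_attention(q, k, v, fmap)  # shape (2, 128, 64)
```

In practice such recurrences are computed in parallel chunks rather than a Python loop, and an mLSTM token mixer additionally gates the running state; the loop form here only makes the linear-time recurrence explicit.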
arXiv.org Artificial Intelligence
Nov-11-2025