Parity-Aware Byte-Pair Encoding: Improving Cross-lingual Fairness in Tokenization
Negar Foroutan, Clara Meister, Debjit Paul, Joel Niklaus, Sina Ahmadi, Antoine Bosselut, Rico Sennrich
Tokenization is the first -- and often least scrutinized -- step of most NLP pipelines. Standard algorithms for learning tokenizers rely on frequency-based objectives, which favor languages dominant in the training data and consequently leave lower-resource languages with tokenizations that are disproportionately longer, morphologically implausible, or even riddled with placeholders. This phenomenon ultimately amplifies computational and financial inequalities between users from different language backgrounds. To remedy this, we introduce Parity-aware Byte Pair Encoding (BPE), a variant of the widely-used BPE algorithm. At every merge step, Parity-aware BPE maximizes the compression gain of the currently worst-compressed language, trading a small amount of global compression for cross-lingual parity. We find empirically that Parity-aware BPE leads to more equitable token counts across languages, with negligible impact on global compression rate and no substantial effect on language-model performance in downstream tasks.
arXiv.org Artificial Intelligence
Aug-25-2025
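
The abstract states the merge criterion only at a high level: at each merge step, pick the merge that benefits the currently worst-compressed language. Below is a minimal Python sketch of that idea, not the paper's implementation; it assumes tokens-per-byte as the compression measure and the most frequent symbol pair within the worst-compressed language as the proxy for its compression gain. All names (`parity_aware_bpe`, `pair_counts`, etc.) are illustrative.

```python
from collections import Counter

def pair_counts(words):
    """Count adjacent symbol pairs across a language's word list."""
    counts = Counter()
    for word in words:
        for a, b in zip(word, word[1:]):
            counts[(a, b)] += 1
    return counts

def apply_merge(words, pair):
    """Replace every occurrence of `pair` with its concatenation."""
    a, b = pair
    merged = a + b
    out = []
    for word in words:
        new_word, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and word[i] == a and word[i + 1] == b:
                new_word.append(merged)
                i += 2
            else:
                new_word.append(word[i])
                i += 1
        out.append(tuple(new_word))
    return out

def tokens_per_byte(words):
    """Compression measure: higher means worse compression (assumed metric)."""
    n_tokens = sum(len(w) for w in words)
    n_bytes = sum(len("".join(w).encode("utf-8")) for w in words)
    return n_tokens / max(n_bytes, 1)

def parity_aware_bpe(corpora, num_merges):
    """corpora: dict mapping language -> list of words (tuples of symbols)."""
    corpora = {lang: list(words) for lang, words in corpora.items()}
    merges = []
    for _ in range(num_merges):
        # Identify the currently worst-compressed language ...
        worst = max(corpora, key=lambda lang: tokens_per_byte(corpora[lang]))
        counts = pair_counts(corpora[worst])
        if not counts:
            break
        # ... and choose the merge that most benefits that language.
        pair, _ = counts.most_common(1)[0]
        merges.append(pair)
        # The merge is applied to all languages, so the vocabulary stays shared.
        corpora = {lang: apply_merge(words, pair) for lang, words in corpora.items()}
    return merges

# Toy usage: two "languages" with different symbol statistics.
corpora = {
    "en": [tuple("the"), tuple("then"), tuple("they")],
    "de": [tuple("sch"), tuple("schn"), tuple("schl")],
}
print(parity_aware_bpe(corpora, num_merges=3))
```

Scoring candidate merges purely by frequency within the worst-compressed language is one plausible reading of "maximizes the compression gain of the currently worst-compressed language"; the published algorithm may rank candidates differently, but the structure (shared vocabulary, per-step focus on the lagging language) matches the abstract's description.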