Enforcing Orderedness to Improve Feature Consistency
Wang, Sophie L., Quach, Alex, Parsan, Nithin, Yang, John J.
arXiv.org Artificial Intelligence
Sparse autoencoders (SAEs) have been widely used for interpretability of neural networks, but their learned features often vary across seeds and hyperparameter settings. We introduce Ordered Sparse Autoencoders (OSAE), which extend Matryoshka SAEs by (1) establishing a strict ordering of latent features and (2) deterministically using every feature dimension, avoiding the sampling-based approximations of prior nested SAE methods. Theoretically, we show that OSAEs resolve permutation non-identifiability in sparse dictionary learning settings where solutions are unique (up to natural symmetries). Empirically, on Gemma2-2B and Pythia-70M, we show that OSAEs can improve feature consistency compared to Matryoshka baselines.
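The nested objective described above can be illustrated with a minimal sketch: every prefix of the latent code is asked to reconstruct the input, and the per-prefix losses are summed deterministically over all prefix lengths rather than over a sampled subset of nested sizes. All names here (`encode`, `decode_prefix`, `ordered_loss`) and the tiny dimensions are illustrative assumptions, not the paper's actual API or training setup.

```python
import numpy as np

# Toy dimensions for illustration only (assumed, not from the paper).
rng = np.random.default_rng(0)
d_in, d_lat = 8, 4

W_enc = rng.normal(size=(d_in, d_lat)) / np.sqrt(d_in)
W_dec = rng.normal(size=(d_lat, d_in)) / np.sqrt(d_lat)

def encode(x):
    # ReLU encoder producing a nonnegative (sparse-ish) latent code.
    return np.maximum(x @ W_enc, 0.0)

def decode_prefix(z, k):
    # Reconstruct using only the first k latent features; zeroing the
    # rest is what ties feature index to importance (the ordering).
    z_k = z.copy()
    z_k[..., k:] = 0.0
    return z_k @ W_dec

def ordered_loss(x):
    # Sum the reconstruction error over *every* prefix length
    # 1..d_lat — deterministic, with no sampling of nested sizes.
    z = encode(x)
    return sum(np.mean((x - decode_prefix(z, k)) ** 2)
               for k in range(1, d_lat + 1))

x = rng.normal(size=(16, d_in))
loss = ordered_loss(x)
print(float(loss))
```

Because earlier features participate in every prefix's reconstruction while later features participate in fewer, gradient pressure under this objective pushes the most broadly useful features toward the front, which is the intuition behind the strict ordering.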
Dec-3-2025