Sparse Autoencoders for Hypothesis Generation
Rajiv Movva, Kenny Peng, Nikhil Garg, Jon Kleinberg, Emma Pierson
arXiv.org Artificial Intelligence
We describe HypotheSAEs, a general method to hypothesize interpretable relationships between text data (e.g., headlines) and a target variable (e.g., clicks). HypotheSAEs has three steps: (1) train a sparse autoencoder on text embeddings to produce interpretable features describing the data distribution, (2) select features that predict the target variable, and (3) generate a natural language interpretation of each feature (e.g., "mentions being surprised or shocked") using an LLM. Each interpretation serves as a hypothesis about what predicts the target variable. Compared to baselines, our method better identifies reference hypotheses on synthetic datasets (at least +0.06 in F1) and produces more predictive hypotheses on real datasets (~twice as many significant findings), despite requiring 1-2 orders of magnitude less compute than recent LLM-based methods. HypotheSAEs also produces novel discoveries on two well-studied tasks: explaining partisan differences in Congressional speeches and identifying drivers of engagement with online headlines.
Feb-5-2025
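The abstract's three-step pipeline can be illustrated with a short sketch: fit a sparse autoencoder on precomputed text embeddings, rank the learned features by how well they predict the target, and prompt an LLM with top-activating texts to name each feature. The architecture, sparsity mechanism, hyperparameters, and prompt below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the three-step HypotheSAEs pipeline described in the abstract.
# Model sizes, the top-k sparsity mechanism, and the prompt are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegressionCV


class TopKSparseAutoencoder(nn.Module):
    """Step 1: learn an overcomplete dictionary of sparse features over text embeddings."""

    def __init__(self, embed_dim: int, n_features: int, k: int):
        super().__init__()
        self.encoder = nn.Linear(embed_dim, n_features)
        self.decoder = nn.Linear(n_features, embed_dim)
        self.k = k  # number of active features per example (assumed sparsity mechanism)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        acts = torch.relu(self.encoder(x))
        # Keep only the top-k activations per example; zero out the rest.
        topk = torch.topk(acts, self.k, dim=-1)
        return torch.zeros_like(acts).scatter_(-1, topk.indices, topk.values)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encode(x))


def train_sae(embeddings: np.ndarray, n_features: int = 256, k: int = 8,
              epochs: int = 50, lr: float = 1e-3) -> TopKSparseAutoencoder:
    x = torch.tensor(embeddings, dtype=torch.float32)
    sae = TopKSparseAutoencoder(x.shape[1], n_features, k)
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(sae(x), x)  # reconstruction objective
        loss.backward()
        opt.step()
    return sae


def select_predictive_features(sae, embeddings, targets, n_select: int = 10):
    """Step 2: find SAE features whose activations predict the target (e.g., clicks)."""
    with torch.no_grad():
        acts = sae.encode(torch.tensor(embeddings, dtype=torch.float32)).numpy()
    clf = LogisticRegressionCV(penalty="l1", solver="saga", max_iter=5000).fit(acts, targets)
    ranked = np.argsort(-np.abs(clf.coef_[0]))  # rank features by coefficient magnitude
    return ranked[:n_select], acts


def interpretation_prompt(feature_id: int, acts: np.ndarray, texts: list[str],
                          n_examples: int = 10) -> str:
    """Step 3: build an LLM prompt from top-activating texts; the LLM's one-phrase
    description of the feature serves as a hypothesis about what predicts the target."""
    top_idx = np.argsort(-acts[:, feature_id])[:n_examples]
    examples = "\n".join(f"- {texts[i]}" for i in top_idx)
    return ("These texts all strongly activate one learned feature.\n"
            f"{examples}\n"
            "Describe, in one short phrase, what these texts have in common.")
```

In this sketch, calling `select_predictive_features` after `train_sae` yields a handful of feature indices, and each resulting prompt would be sent to an LLM to obtain an interpretable hypothesis such as "mentions being surprised or shocked".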