Random Rule Forest (RRF): Interpretable Ensembles of LLM-Generated Questions for Predicting Startup Success
Ben Griffin, Diego Vidaurre, Ugur Koyluoglu, Joseph Ternasky, Fuat Alican, Yigit Ihlamur
arXiv.org Artificial Intelligence
Predicting rare outcomes such as startup success is central to venture capital, demanding models that are both accurate and interpretable. We introduce Random Rule Forest (RRF), a lightweight ensemble method that uses a large language model (LLM) to generate simple YES/NO questions in natural language. Each question functions as a weak learner, and their responses are combined using a threshold-based voting rule to form a strong, interpretable predictor. Applied to a dataset of 9,892 founders, RRF achieves a 6.9x improvement over a random baseline on held-out data; adding expert-crafted questions lifts this to 8x and highlights the value of human-LLM collaboration. Compared with zero- and few-shot baselines across three LLM architectures, RRF attains an F0.5 of 0.121, versus 0.086 for the best baseline (+0.035 absolute, +41% relative). By combining the creativity of LLMs with the rigor of ensemble learning, RRF delivers interpretable, high-precision predictions suitable for decision-making in high-stakes domains.
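The core prediction step, combining YES/NO answers via a threshold-based vote, can be sketched as below. This is a minimal illustration assuming the LLM-answering step has already produced boolean answers per founder; the question texts, threshold, and helper names are hypothetical, not taken from the paper.

```python
# Minimal sketch of RRF's threshold-voting rule (illustrative, not the
# authors' implementation). Each LLM-generated YES/NO question acts as a
# weak learner; a founder is predicted successful when the number of
# YES answers meets a tuned threshold.

def rrf_predict(answers: list[bool], threshold: int) -> bool:
    """Predict success if at least `threshold` questions answered YES."""
    return sum(answers) >= threshold

# Hypothetical example: one founder's answers to five generated questions,
# e.g. "Has the founder previously exited a company?" -> True
founder_answers = [True, True, False, True, False]  # 3 YES votes
print(rrf_predict(founder_answers, threshold=3))    # prints True
```

In practice the threshold would be selected on a validation set to trade precision against recall, which is how the method targets the high-precision (F0.5) regime reported in the abstract.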
Sep-17-2025