FutureX: An Advanced Live Benchmark for LLM Agents in Future Prediction
Zhiyuan Zeng, Jiashuo Liu, Siyuan Chen, Tianci He, Yali Liao, Yixiao Tian, Jinpeng Wang, Zaiyuan Wang, Yang Yang, Lingyue Yin, Mingren Yin, Zhenwei Zhu, Tianle Cai, Zehui Chen, Jiecao Chen, Yantao Du, Xiang Gao, Jiacheng Guo, Liang Hu, Jianpeng Jiao, Xiangsheng Li, Jingkai Liu, Shuang Ni, Zhoufutu Wen, Ge Zhang, Kaiyuan Zhang, Xin Zhou, Jose Blanchet, Xipeng Qiu, Mengdi Wang, Wenhao Huang
arXiv.org Artificial Intelligence
Future prediction is a complex task for LLM agents, requiring a high level of analytical thinking, information gathering, contextual understanding, and decision-making under uncertainty. Agents must not only gather and interpret vast amounts of dynamic information but also integrate diverse data sources, weigh uncertainties, and adapt predictions to emerging trends, just as human experts do in fields like politics, economics, and finance. Despite its importance, no large-scale benchmark exists for evaluating agents on future prediction, largely due to the challenges of handling real-time updates and retrieving timely, accurate answers. To address this, we introduce $\textbf{FutureX}$, a dynamic, live evaluation benchmark specifically designed for LLM agents performing future prediction tasks. FutureX is the largest and most diverse live benchmark for future prediction, supporting real-time daily updates and eliminating data contamination through an automated pipeline for question gathering and answer collection. We evaluate 25 LLM/agent models, including those with reasoning, search capabilities, and integration of external tools such as the open-source Deep Research Agent and closed-source Deep Research models. This comprehensive evaluation assesses agents' adaptive reasoning and performance in dynamic environments. Additionally, we provide in-depth analyses of agents' failure modes and performance pitfalls in future-oriented tasks, including vulnerability to fake web pages and temporal validity. Our goal is to establish a dynamic, contamination-free evaluation standard that drives the development of LLM agents capable of performing at the level of professional human analysts in complex reasoning and predictive thinking.
Sep-8-2025