Profit Mirage: Revisiting Information Leakage in LLM-based Financial Agents
Xiangyu Li, Yawen Zeng, Xiaofen Xing, Jin Xu, Xiangmin Xu
arXiv.org Artificial Intelligence
LLM-based financial agents have attracted widespread excitement for their ability to trade like human experts. However, most systems exhibit a "profit mirage": dazzling back-tested returns evaporate once the model's knowledge window ends, because of the inherent information leakage in LLMs. In this paper, we systematically quantify this leakage issue across four dimensions and release FinLake-Bench, a leakage-robust evaluation benchmark. Furthermore, to mitigate this issue, we introduce FactFin, a framework that applies counterfactual perturbations to compel LLM-based agents to learn causal drivers instead of memorized outcomes. FactFin integrates four core components: Strategy Code Generator, Retrieval-Augmented Generation, Monte Carlo Tree Search, and Counterfactual Simulator. Extensive experiments show that our method surpasses all baselines in out-of-sample generalization, delivering superior risk-adjusted performance.
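The abstract's core idea, stress-testing an agent with counterfactual perturbations so that memorized outcomes stop paying off, can be illustrated with a minimal sketch. The function names, the multiplicative return-noise model, and the toy momentum rule below are all illustrative assumptions, not the paper's actual Counterfactual Simulator:

```python
import random

def counterfactual_perturb(prices, noise=0.01, seed=0):
    """Generate a counterfactual price path by adding Gaussian noise
    to each daily return (hypothetical sketch, not the paper's method)."""
    rng = random.Random(seed)
    out = [prices[0]]
    for prev, cur in zip(prices, prices[1:]):
        r = cur / prev - 1.0               # realized daily return
        r_cf = r + rng.gauss(0.0, noise)   # perturbed return
        out.append(out[-1] * (1.0 + r_cf))
    return out

def momentum_signal(prices, lookback=3):
    """A toy 'causal' rule: go long when the recent trend is up."""
    return [
        1 if prices[t] > prices[t - lookback] else 0
        for t in range(lookback, len(prices))
    ]

# A strategy that depends on genuine price dynamics produces similar
# signals on the original and counterfactual paths; a strategy that
# memorized specific dates or outcomes would not.
prices = [100 * (1.02 ** t) for t in range(10)]   # steady uptrend
cf = counterfactual_perturb(prices, noise=0.001, seed=42)
print(momentum_signal(prices))
print(momentum_signal(cf))
```

The intuition is that back-tested profit which survives many such perturbed worlds reflects a causal driver, while profit that vanishes under perturbation was likely leaked from the LLM's training data.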
Oct-10-2025