Enhancing Agentic Autonomous Scientific Discovery with Vision-Language Model Capabilities
Kahaan Gandhi, Boris Bolliet, Inigo Zubeldia
arXiv.org Artificial Intelligence
We show that multi-agent systems guided by vision-language models (VLMs) improve end-to-end autonomous scientific discovery. By treating plots as verifiable checkpoints, a VLM-as-a-judge evaluates figures against dynamically generated domain-specific rubrics, enabling agents to correct their own errors and steer exploratory data analysis in real time. Case studies in cosmology and astrochemistry demonstrate recovery from faulty reasoning paths and adaptation to new datasets without human intervention. On a 10-task benchmark for data-driven discovery, VLM-augmented systems achieve pass@1 scores of 0.7-0.8, compared to 0.2-0.3 for code-only and 0.4-0.5 for code-and-text baselines, while also providing auditable reasoning traces that improve interpretability. Code available at: https://github.com/CMBAgents/cmbagent
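To make the VLM-as-a-judge pattern concrete, below is a minimal sketch of the core loop: render a plot, send it with a rubric to a vision-language model, and let the agent retry when the judge rejects the figure. This assumes an OpenAI-style chat API with image inputs; the function name `judge_plot`, the rubric text, and the PASS/FAIL protocol are illustrative assumptions, not cmbagent's actual interface.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_plot(image_path: str, rubric: str, model: str = "gpt-4o") -> str:
    """Ask a VLM to grade a figure against a domain-specific rubric.

    Returns the raw verdict text; the calling agent can parse it to
    decide whether to accept the plot or retry the analysis step.
    (Hypothetical helper for illustration, not the cmbagent API.)
    """
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Evaluate this plot against the rubric below. "
                          "Reply PASS or FAIL, then list any violations.\n\n"
                          f"Rubric:\n{rubric}")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Example: treat the plot as a verifiable checkpoint and loop on failure.
rubric = "1. Axes labeled with units. 2. CMB power spectrum peak near l~200."
verdict = judge_plot("cmb_spectrum.png", rubric)
if verdict.strip().upper().startswith("FAIL"):
    print("Judge rejected figure; feed violations back to the coder agent.")
```

Because the rubric is generated per task, the same judging loop transfers across domains (e.g., cosmology vs. astrochemistry) without changing the agent code, and the stored verdicts form the auditable reasoning trace described in the abstract.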
Nov-19-2025