ChartGen: Scaling Chart Understanding Via Code-Guided Synthetic Chart Generation
Kondic, Jovana, Li, Pengyuan, Joshi, Dhiraj, He, Zexue, Abedin, Shafiq, Sun, Jennifer, Wiesel, Ben, Schwartz, Eli, Nassar, Ahmed, Wu, Bo, Arbelle, Assaf, Oliva, Aude, Gutfreund, Dan, Karlinsky, Leonid, Feris, Rogerio
arXiv.org Artificial Intelligence
Chart-to-code reconstruction -- the task of recovering executable plotting scripts from chart images -- provides important insights into a model's ability to ground data visualizations in precise, machine-readable form. Yet existing multimodal benchmarks largely focus on answering questions about charts or summarizing them. To bridge this gap, we present ChartGen, a fully automated pipeline for code-guided synthetic chart generation. Starting from seed chart images, ChartGen (i) prompts a vision-language model (VLM) to reconstruct each image into a Python script, and (ii) iteratively augments that script with a code-oriented large language model (LLM). Using ChartGen, we create 222.5K unique chart image-code pairs from 13K seed chart images, and present an open-source synthetic chart dataset covering 27 chart types, 11 plotting libraries, and multiple data modalities (image, code, text, CSV, DocTags). From this corpus, we curate a held-out chart-to-code evaluation subset of 4.3K chart image-code pairs and evaluate six open-weight VLMs (3B - 26B parameters), highlighting substantial room for progress. We release the pipeline, prompts, and dataset to help accelerate efforts toward robust chart understanding and vision-conditioned code generation: https://github.com/SD122025/ChartGen/
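The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `reconstruct_chart` and `augment_script` are hypothetical stand-ins for the VLM and code-LLM calls, and the expansion loop simply shows how each seed image fans out into multiple image-code pairs.

```python
def reconstruct_chart(image_path):
    """Stage (i): stand-in for the VLM call that turns a chart
    image into a Python plotting script (hypothetical)."""
    return (f"# reconstructed from {image_path}\n"
            "import matplotlib.pyplot as plt\n"
            "plt.bar(['a', 'b'], [1, 2])\n")

def augment_script(script, variant):
    """Stage (ii): stand-in for the code-LLM call that edits the
    script, e.g. swapping chart type, library, or data."""
    return script + f"# variant {variant}: augmented by code LLM\n"

def chartgen(seed_images, n_variants=3):
    """Expand each seed image into n_variants augmented scripts,
    yielding (seed_image, script) pairs for the dataset."""
    dataset = []
    for image in seed_images:
        script = reconstruct_chart(image)
        for v in range(n_variants):
            script = augment_script(script, v)
            dataset.append((image, script))
    return dataset

pairs = chartgen(["seed_chart.png"], n_variants=3)
print(len(pairs))  # 3 image-code pairs from one seed image
```

In the actual pipeline each generated script is executed to render a new chart image, which is what allows 13K seeds to grow into 222.5K unique pairs spanning 27 chart types and 11 libraries.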
Jul-29-2025