MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering
Jun Shern Chan, Neil Chowdhury, Oliver Jaffe, James Aung, Dane Sherburn, Evan Mays, Giulio Starace, Kevin Liu, Leon Maksin, Tejal Patwardhan, Lilian Weng, Aleksander Mądry
We introduce MLE-bench, a benchmark for measuring how well AI agents perform at machine learning engineering. To this end, we curate 75 ML engineering-related competitions from Kaggle, creating a diverse set of challenging tasks that test real-world ML engineering skills such as training models, preparing datasets, and running experiments. We establish human baselines for each competition using Kaggle's publicly available leaderboards. We use open-source agent scaffolds to evaluate several frontier language models on our benchmark, finding that the best-performing setup, OpenAI's o1-preview with AIDE scaffolding, achieves at least the level of a Kaggle bronze medal in 16.9% of competitions. In addition to our main results, we investigate various forms of resource scaling for AI agents and the impact of contamination from pre-training. We open-source our benchmark code (github.com/openai/mle-bench/) to facilitate future research in understanding the ML engineering capabilities of AI agents.
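The headline number quoted above is an "any medal" rate: the share of competitions in which the agent's best submission clears at least the bronze threshold on that competition's leaderboard. The sketch below shows how such a rate could be aggregated from per-competition outcomes; the `CompetitionResult` record, the `any_medal_rate` helper, and the demo competition IDs are illustrative assumptions for this note, not the actual mle-bench grading API or report schema.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class CompetitionResult:
    """Hypothetical per-competition outcome (not the mle-bench report format)."""
    competition_id: str
    medal: Optional[str]  # "gold", "silver", "bronze", or None


def any_medal_rate(results: List[CompetitionResult]) -> float:
    """Fraction of competitions where the agent earned at least a bronze medal.

    This mirrors the kind of headline metric quoted in the abstract (e.g. 16.9%),
    assuming one graded outcome per competition per run.
    """
    if not results:
        return 0.0
    medalled = sum(1 for r in results if r.medal in {"gold", "silver", "bronze"})
    return medalled / len(results)


if __name__ == "__main__":
    # Illustrative outcomes only; competition IDs are placeholders.
    demo = [
        CompetitionResult("competition-a", "bronze"),
        CompetitionResult("competition-b", None),
        CompetitionResult("competition-c", "silver"),
        CompetitionResult("competition-d", None),
    ]
    print(f"Any-medal rate: {any_medal_rate(demo):.1%}")  # -> 50.0%
```

Reporting the metric this way (per-competition, medal-or-not) keeps the score comparable across competitions with very different leaderboard sizes and metrics, which is presumably why the abstract summarizes performance as a percentage of competitions rather than an average leaderboard rank.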
arXiv.org Artificial Intelligence
Dec-20-2024