Competitive Programming with Large Reasoning Models
OpenAI: Ahmed El-Kishky, Alexander Wei, Andre Saraiva, Borys Minaev, Daniel Selsam, David Dohan, Francis Song, Hunter Lightman, Ignasi Clavera, Jakub Pachocki, Jerry Tworek, Lorenz Kuhn, Lukasz Kaiser, Mark Chen, Max Schwarzer, Mostafa Rohaninejad, Nat McAleese, o3 contributors, Oleg Mürk, Rhythm Garg, Rui Shu, Szymon Sidor, Vineet Kosaraju, Wenda Zhou
We show that reinforcement learning applied to large language models (LLMs) significantly boosts performance on complex coding and reasoning tasks. Additionally, we compare two general-purpose reasoning models - OpenAI o1 and an early checkpoint of o3 - with a domain-specific system, o1-ioi, that uses hand-engineered inference strategies designed for competing in the 2024 International Olympiad in Informatics (IOI). We competed live at IOI 2024 with o1-ioi and, using hand-crafted test-time strategies, placed in the 49th percentile. Under relaxed competition constraints, o1-ioi achieved a gold medal. However, when evaluating later models such as o3, we find that o3 achieves gold without hand-crafted domain-specific strategies or relaxed constraints. Our findings show that although specialized pipelines such as o1-ioi yield solid improvements, the scaled-up, general-purpose o3 model surpasses those results without relying on hand-crafted inference heuristics. Notably, o3 achieves a gold medal at the 2024 IOI and obtains a Codeforces rating on par with elite human competitors. Overall, these results indicate that scaling general-purpose reinforcement learning, rather than relying on domain-specific techniques, offers a robust path toward state-of-the-art AI in reasoning domains such as competitive programming.