
AI could be about to completely change the way we do mathematics

New Scientist

Is an artificial intelligence revolution about to transform mathematics? Some prominent mathematicians think so, as automated proof-writing tools suddenly show impressive leaps in capability, with the potential to change the way maths research is done. Around 100 of the world's top mathematicians gathered at the University of Cambridge in June for a conference on whether computers might help resolve a long-standing problem: how to check that proofs are correct. This process, known as formalisation, doesn't necessarily involve artificial intelligence, and a similar meeting held at Cambridge in 2017 made no mention of AI. But eight years later, AI has come on by leaps and bounds, most notably with the success of the large language models powering tools like ChatGPT.


Google DeepMind takes step closer to cracking top-level maths

The Guardian

Even though computers were made to do maths faster than any human could manage, the top level of formal mathematics remains an exclusively human domain. But a breakthrough by researchers at Google DeepMind has brought AI systems closer than ever to beating the best human mathematicians at their own game. A pair of new systems, called AlphaProof and AlphaGeometry 2, worked together to tackle questions from the International Mathematical Olympiad, a global maths competition for secondary-school students that has been running since 1959. The Olympiad takes the form of six mind-bogglingly hard questions each year, covering fields including algebra, geometry and number theory. The combined efforts of DeepMind's two systems weren't quite in that league.


DeepMind AI gets silver medal at International Mathematical Olympiad

New Scientist

DeepMind's AlphaProof AI can tackle a range of mathematical problems.

An AI from Google DeepMind has achieved a silver-medal score at this year's International Mathematical Olympiad (IMO), the first time any AI has made it to the podium. The IMO is considered the world's most prestigious competition for young mathematicians, and correctly answering its test questions requires mathematical ability that AI systems typically lack. In January, Google DeepMind demonstrated AlphaGeometry, an AI system that could answer some IMO geometry questions as well as humans can. However, this was not in a live competition, and the system couldn't answer questions from other mathematical disciplines, such as number theory, algebra and combinatorics, an ability necessary to win an IMO medal.


Google DeepMind's AI systems can now solve complex math problems

MIT Technology Review

"It is often easier to train a model for mathematics if you have a way to check its answers (e.g., in a formal language), but there is comparatively less formal mathematics data online compared to free-form natural language (informal language)," says Katie Collins, a researcher at the University of Cambridge who specializes in math and AI but was not involved in the project. Bridging this gap was Google DeepMind's goal in creating AlphaProof, a reinforcement-learning-based system that trains itself to prove mathematical statements in the formal programming language Lean. The key is a version of DeepMind's Gemini AI that's fine-tuned to automatically translate math problems phrased in natural, informal language into formal statements, which are easier for the AI to process. This created a large library of formal math problems with varying degrees of difficulty. Automating the process of translating data into formal language is a big step forward for the math community, says Wenda Li, a lecturer in hybrid AI at the University of Edinburgh, who peer-reviewed the research but was not involved in the project.
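To make the idea of a formal statement concrete, here is a minimal, illustrative Lean 4 sketch (assuming Mathlib is available; this example is not taken from AlphaProof itself) of how an informal claim such as "the sum of two even integers is even" looks once formalised. A proof checker can verify this mechanically, which is exactly the property that makes formal mathematics useful for training and checking an AI system's answers:

```lean
import Mathlib.Algebra.Group.Even

-- Informal statement: "the sum of two even integers is even."
-- In Mathlib, `Even a` unfolds to `∃ r, a = r + r`.
theorem even_add_even {a b : ℤ} (ha : Even a) (hb : Even b) :
    Even (a + b) := by
  obtain ⟨m, hm⟩ := ha   -- a = m + m
  obtain ⟨n, hn⟩ := hb   -- b = n + n
  exact ⟨m + n, by rw [hm, hn]; ring⟩
```

If the proof were wrong anywhere, Lean would reject it, giving an unambiguous correctness signal of the kind described above.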