O-Forge: An LLM + Computer Algebra Framework for Asymptotic Analysis
Large language models have recently demonstrated advanced capabilities in solving IMO and Putnam problems, yet their role in research mathematics has remained fairly limited. The key difficulty is verification: suggested proofs may look plausible but cannot be trusted without rigorous checking. We present a framework, called LLM+CAS, and an associated tool, O-Forge, that couples frontier LLMs with a computer algebra system (CAS) in an In-Context Symbolic Feedback loop to produce proofs that are both creative and symbolically verified. Our focus is on asymptotic inequalities, a topic that often involves difficult proofs and an appropriate decomposition of the domain into the "right" subdomains. Many mathematicians, including Terry Tao, have suggested that using AI tools to find such decompositions can be very useful for research-level asymptotic analysis. In this paper, we show that our framework LLM+CAS is remarkably effective at proposing such decompositions via a combination of a frontier LLM and a CAS. More precisely, we use an LLM to suggest a domain decomposition and a CAS (such as Mathematica) to verify each piece axiomatically. Using this loop, we answer a question posed by Terence Tao: whether LLMs coupled with a verifier can help prove intricate asymptotic inequalities. More broadly, we show how AI can move beyond contest math towards research-level tools for professional mathematicians.
Quantitative convergence of trained single layer neural networks to Gaussian processes
Eloy Mosig, Andrea Agazzi, Dario Trevisan
In this paper, we study the quantitative convergence of shallow neural networks trained via gradient descent to their associated Gaussian processes in the infinite-width limit. While previous work has established qualitative convergence under broad settings, precise, finite-width estimates remain limited, particularly during training. We provide explicit upper bounds on the quadratic Wasserstein distance between the network output and its Gaussian approximation at any training time $t \ge 0$, demonstrating polynomial decay with network width. Our results quantify how architectural parameters, such as width and input dimension, influence convergence, and how training dynamics affect the approximation error.
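The quantity being bounded can be illustrated numerically at initialization (the $t = 0$ case of the paper's $t \ge 0$ results). The sketch below is an assumption-laden toy, not the paper's construction: it uses a width-$n$ ReLU network with standard Gaussian weights, for which the infinite-width output at a unit-norm input is $\mathcal{N}(0, 1/2)$, and it exploits the closed form for the quadratic Wasserstein distance between one-dimensional Gaussians, $W_2^2 = (m_1 - m_2)^2 + (s_1 - s_2)^2$.

```python
import math
import random

def shallow_net_output(x, n, rng):
    # f(x) = (1/sqrt(n)) * sum_i a_i * relu(w_i . x),
    # with a_i and the entries of w_i drawn i.i.d. from N(0, 1).
    total = 0.0
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)
        pre = sum(rng.gauss(0.0, 1.0) * xj for xj in x)
        total += a * max(pre, 0.0)
    return total / math.sqrt(n)

def w2_gaussian_1d(m1, s1, m2, s2):
    # Closed-form quadratic Wasserstein distance between 1-D Gaussians.
    return math.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

rng = random.Random(0)
x = [0.6, 0.8]  # unit-norm input => limiting output law is N(0, 1/2)
samples = [shallow_net_output(x, 256, rng) for _ in range(1000)]
m = sum(samples) / len(samples)
s = math.sqrt(sum((v - m) ** 2 for v in samples) / len(samples))

# Empirical distance to the infinite-width Gaussian; shrinks with width n.
d = w2_gaussian_1d(m, s, 0.0, math.sqrt(0.5))
```

The paper's contribution is to make the decay of this distance explicit in the width (polynomially) and to track it along the entire gradient-descent trajectory, which the snapshot above does not capture.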