Maryna Viazovska's proofs of sphere packing formalized with AI

AIHub

The proofs that earned EPFL professor Maryna Viazovska the Fields Medal in 2022 have reached a new milestone: their complete formalization by computer, achieved through a collaboration between mathematicians and artificial intelligence tools. In 2016, Maryna Viazovska solved the sphere packing problem in dimension 8, proving that the E8 lattice constitutes the densest possible arrangement. Shortly after, together with collaborators, she established an analogous result in dimension 24 using the Leech lattice. Her method provided an elegant solution to a problem studied for centuries, with close ties to applied fields such as error-correcting codes. For this major contribution, Viazovska was awarded the Fields Medal in 2022, the highest distinction in mathematics.






Active Labeling: Streaming Stochastic Gradients

Neural Information Processing Systems

The workhorse of machine learning is stochastic gradient descent. To access stochastic gradients, it is common to iterate over input/output pairs of a training dataset. Interestingly, it appears that one does not need full supervision to access stochastic gradients, which is the main motivation of this paper. After formalizing the "active labeling" problem, which focuses on active learning with partial supervision, we provide a streaming technique that provably minimizes the ratio of generalization error over the number of samples.
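To ground the terminology, the loop the abstract refers to can be sketched as follows. This is a generic, minimal SGD sketch on a one-parameter linear model with a streaming data source; the function names and the toy stream are illustrative assumptions, not taken from the paper.

```python
import random

def sgd_linear(stream, lr=0.1, steps=200):
    """Minimal SGD on squared loss for a 1-D linear model y = w * x.

    `stream` yields (x, y) pairs one at a time, mirroring the streaming
    setting: each pair is consumed once to form a stochastic gradient.
    """
    w = 0.0
    for _ in range(steps):
        x, y = next(stream)
        grad = 2 * (w * x - y) * x  # gradient of (w*x - y)^2 w.r.t. w
        w -= lr * grad
    return w

def make_stream(seed=0):
    """Toy data stream: y = 3x plus small Gaussian noise."""
    rng = random.Random(seed)
    while True:
        x = rng.uniform(-1, 1)
        yield x, 3 * x + 0.01 * rng.gauss(0, 1)

w_hat = sgd_linear(make_stream())  # converges near the true slope 3
```

The paper's contribution concerns obtaining such stochastic gradients from only partial labels; the sketch above shows the fully supervised baseline it starts from.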




PutnamBench: Evaluating Neural Theorem-Provers on the Putnam Mathematical Competition

Neural Information Processing Systems

We present PutnamBench, a new multi-language benchmark for evaluating the ability of neural theorem-provers to solve competition mathematics problems. PutnamBench consists of 1692 hand-constructed formalizations of 640 theorems sourced from the William Lowell Putnam Mathematical Competition, the premier undergraduate-level mathematics competition in North America. All the problems have formalizations in Lean 4 and Isabelle; a substantial subset also has Coq formalizations. PutnamBench requires significant problem-solving ability and proficiency in a broad range of topics taught in undergraduate mathematics courses. We use PutnamBench to evaluate several established neural and symbolic theorem-provers. These approaches can only solve a handful of the PutnamBench problems, establishing the benchmark as a difficult open challenge for research on neural theorem-proving.
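To illustrate what a formalized competition problem looks like in Lean 4, here is a hypothetical, deliberately trivial stand-in; the theorem name and statement are invented for illustration and are not drawn from the benchmark.

```lean
-- Hypothetical example, not an actual PutnamBench entry. Each benchmark
-- problem pairs a competition statement with a formal theorem that a
-- neural theorem-prover must close with a machine-checked proof.
theorem putnam_style_example (n : ℕ) : n ∣ n ^ 2 :=
  dvd_pow_self n two_ne_zero
```

Real PutnamBench statements are far harder: the prover must produce a complete proof term or tactic script that the Lean, Isabelle, or Coq kernel accepts.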


Taking the neural sampling code very seriously: A data-driven approach for evaluating generative models of the visual system

Neural Information Processing Systems

Prevailing theories of perception hypothesize that the brain implements perception via Bayesian inference in a generative model of the world. One prominent theory, the Neural Sampling Code (NSC), posits that neuronal responses to a stimulus represent samples from the posterior distribution over the latent world-state variables that cause the stimulus. Although theoretically elegant, NSC does not specify the exact form of the generative model or prescribe how to link the theory to recorded neuronal activity. Previous works assume simple generative models and test their qualitative agreement with neurophysiological data. Currently, there is no precise alignment of the normative theory with neuronal recordings, especially in response to natural stimuli, and a quantitative, experimental evaluation of models under NSC has been lacking. Here, we propose a novel formalization of NSC that allows us to (a) directly fit NSC generative models to recorded neuronal activity in response to natural images, (b) formulate richer and more flexible generative models, and (c) employ standard metrics to quantitatively evaluate different generative models under NSC. Furthermore, using our formalization, we derive a stimulus-conditioned predictive model of neuronal responses from the trained generative model, which we compare to neural system identification models. We demonstrate our approach by fitting and comparing classical and flexible deep-learning-based generative models on population recordings from macaque primary visual cortex (V1) in response to natural images, and show that the flexible models outperform classical models in both their generative- and predictive-model performance. Overall, our work is an important step towards a quantitative evaluation of NSC. It provides a framework that lets us learn the generative model directly from neuronal population recordings, paving the way for an experimentally informed understanding of the probabilistic computational principles underlying perception and behavior.
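The posterior-sampling idea at the core of NSC can be illustrated with a deliberately simple conjugate-Gaussian model. Everything here (the one-dimensional latent, the Gaussian noise model, all variable names) is an assumption for illustration, not the paper's actual generative model.

```python
import random

def posterior_samples(stimulus, n_samples=1000, prior_var=1.0,
                      noise_var=0.5, seed=0):
    """Toy NSC-style illustration: assume stimulus = latent + noise,
    with a zero-mean Gaussian prior on the latent. Under NSC, 'neuronal
    responses' are modeled as samples from the posterior over the latent
    given the stimulus. With conjugate Gaussians the posterior is
    Gaussian with a closed-form mean and variance.
    """
    rng = random.Random(seed)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    post_mean = post_var * (stimulus / noise_var)
    return [rng.gauss(post_mean, post_var ** 0.5) for _ in range(n_samples)]

# A stimulus of 1.5 with noise_var=0.5 and prior_var=1.0 gives a
# posterior mean of 1.0; the sample average should land near it.
samples = posterior_samples(stimulus=1.5)
```

The paper's contribution is to replace such hand-picked generative models with ones fit directly to V1 population recordings, and to score them with standard quantitative metrics.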